CN113740837B - Obstacle tracking method, device, equipment and storage medium - Google Patents

Obstacle tracking method, device, equipment and storage medium

Info

Publication number
CN113740837B
CN113740837B (application CN202111018932.5A)
Authority
CN
China
Prior art keywords
feature
elements
semantic
network
preset
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202111018932.5A
Other languages
Chinese (zh)
Other versions
CN113740837A (en)
Inventor
蒋楠
葛琦
韩旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Weride Technology Co Ltd
Original Assignee
Guangzhou Weride Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Weride Technology Co Ltd filed Critical Guangzhou Weride Technology Co Ltd
Priority to CN202111018932.5A priority Critical patent/CN113740837B/en
Publication of CN113740837A publication Critical patent/CN113740837A/en
Application granted granted Critical
Publication of CN113740837B publication Critical patent/CN113740837B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G01S 13/726: Radar-tracking systems for two-dimensional tracking by using numerical data; multiple target tracking
    • G01S 13/865: Combination of radar systems with lidar systems
    • G01S 13/867: Combination of radar systems with cameras
    • G01S 13/931: Radar or analogous systems specially adapted for anti-collision purposes of land vehicles
    • G01S 17/66: Tracking systems using electromagnetic waves other than radio waves, e.g. lidar
    • G01S 17/86: Combinations of lidar systems with systems other than lidar, radar or sonar, e.g. with direction finders
    • G01S 17/931: Lidar systems specially adapted for anti-collision purposes of land vehicles

Abstract

The invention discloses an obstacle tracking method, device, equipment and storage medium. The method comprises the following steps: determining obstacle elements and semantic elements in a vehicle driving environment; performing feature enhancement on the obstacle elements and the semantic elements to obtain embedded features of the obstacles; performing feature association between the embedded features and a historical track of the vehicle to obtain target features; and inputting the target features into a preset association network and a preset trajectory prediction network respectively, to obtain a matching result between the obstacles and the historical track output by the association network and multiple groups of predicted trajectories of the obstacles output by the trajectory prediction network. The method can predict the obstacle state over a longer time horizon, effectively predict multiple future behaviors of an obstacle, reduce the computational load on the decision-making system, and correct occasional short-term erroneous decision instructions in time.

Description

Obstacle tracking method, device, equipment and storage medium
Technical Field
The present invention relates to a target tracking technology, and in particular, to a method, an apparatus, a device, and a storage medium for tracking an obstacle.
Background
In the field of automatic driving, the automatic driving system deployed in a vehicle generally includes a sensing system and a decision-making system. As the upstream component, the sensing system provides the downstream decision-making system with various kinds of sensing information about the vehicle driving environment, including the perception of obstacles. The sensing system can track the state of obstacles in the driving environment and provide their motion states to the decision-making system, helping it make correct decisions.
However, existing obstacle tracking methods designed for the sensing system can only provide the motion state of an obstacle over a short time, cannot establish the correlation between the obstacle and the vehicle driving state, and cannot predict the various possible motion states of the obstacle over a future time period. This tends to increase the computational load on the downstream decision-making system and may also cause erroneous decision instructions.
Disclosure of Invention
The invention provides an obstacle tracking method, apparatus, device and storage medium, which solve the technical problems that the prior art can only provide the motion state of an obstacle over a short time, cannot establish the correlation between the obstacle and the vehicle driving state, and cannot predict the various possible motion states of the obstacle over a future time period.
In a first aspect, an embodiment of the present invention provides an obstacle tracking method, where the method includes:
determining obstacle elements and semantic elements in a vehicle driving environment;
performing feature enhancement on the obstacle elements and the semantic elements to obtain embedded features of the obstacles;
performing feature association between the embedded features and a historical track of the vehicle to obtain target features;
and inputting the target features into a preset association network and a preset trajectory prediction network respectively, to obtain a matching result between the obstacles and the historical track output by the association network and multiple groups of predicted trajectories of the obstacles output by the trajectory prediction network.
In a second aspect, an embodiment of the present invention further provides an obstacle tracking apparatus, where the apparatus includes:
an element determination module, configured to determine obstacle elements and semantic elements in the vehicle driving environment;
a feature enhancement module, configured to perform feature enhancement on the obstacle elements and the semantic elements to obtain embedded features of the obstacles;
a feature association module, configured to perform feature association between the embedded features and the historical track of the vehicle to obtain target features;
and a result output module, configured to input the target features into a preset association network and a preset trajectory prediction network respectively, to obtain a matching result between the obstacles and the historical track output by the association network and multiple groups of predicted trajectories of the obstacles output by the trajectory prediction network.
In a third aspect, an embodiment of the present invention further provides a computer device, where the computer device includes:
one or more processors;
a memory for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the obstacle tracking method according to the first aspect.
In a fourth aspect, the present invention further provides a computer-readable storage medium, on which a computer program is stored, which when executed by a processor implements the obstacle tracking method according to the first aspect.
According to the method, obstacle elements and semantic elements in the vehicle driving environment are determined, where the obstacle elements contain various features of the obstacles in that environment. Feature enhancement is performed on the obstacle elements and the semantic elements to obtain embedded features of the obstacles; because the embedded features fuse the obstacles' own features with semantic information from the environment, they provide prior information about the geographic scene for the subsequent prediction of obstacle states and strengthen the robustness of the obstacle tracking method in complex terrain scenes. The embedded features are then associated with the historical track of the vehicle, establishing the relationship between the obstacles and the vehicle driving state and yielding target features; that is, the target features represent the relationship between the embedded features and the vehicle's historical position information, the embedded features are enhanced again using the historical frame positions in the historical track, and the target features therefore fuse the vehicle driving state information. Finally, the target features are input into a preset association network and a preset trajectory prediction network respectively, to obtain the matching result between the obstacles and the historical track output by the association network and the multiple groups of predicted trajectories of the obstacles output by the trajectory prediction network. This makes it possible to predict the obstacle state over a longer time horizon, to effectively predict multiple future behaviors of an obstacle, to further reduce the computational load on the decision-making system, and to correct occasional short-term erroneous decision instructions in time.
Drawings
Fig. 1 is a flowchart of an obstacle tracking method according to an embodiment of the present invention;
fig. 2 is a flowchart of an obstacle tracking method according to a second embodiment of the present invention;
FIG. 3 is a flow chart of feature enhancement based on a graph neural network according to a second embodiment of the present invention;
FIG. 4 is a flowchart of feature association based on a graph neural network according to a second embodiment of the present invention;
fig. 5 is a schematic structural diagram of an obstacle tracking device according to a third embodiment of the present invention;
fig. 6 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention.
Detailed Description
The present invention will be described in further detail with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the invention and are not limiting of the invention. It should be further noted that, for the convenience of description, only some of the structures related to the present invention are shown in the drawings, not all of the structures.
Example one
Fig. 1 is a flowchart of an obstacle tracking method according to an embodiment of the present invention. This embodiment is applicable to the situation where a vehicle tracks the motion of obstacles in its driving environment and predicts their future states. The method may be executed by an obstacle tracking apparatus, which may be implemented in software and/or hardware and configured in a computer device, for example an unmanned device such as an unmanned vehicle, a robot or an unmanned aerial vehicle, or a computing device such as a server or a personal computer. The method specifically includes the following steps:
s101, determining barrier elements and semantic elements in the vehicle running environment.
In this embodiment, an obstacle element may be understood as an element composed of various features of an obstacle present in the vehicle driving environment, including its appearance features and state features (for example, the speed and acceleration of a moving obstacle). An obstacle may be stationary, such as a building, a bush, a fence or a stone pier, or moving, such as a vehicle or a pedestrian. A semantic element may be understood as an element composed of the semantic information of an object present in the vehicle driving environment; such objects may be traffic lights, zone information, stop lines, lane markers and the like. These objects carrying semantic information may be referred to as semantic objects, with one semantic element corresponding to one semantic object. There may be a plurality of obstacle elements and a plurality of semantic elements; this embodiment does not limit their numbers.
In a specific implementation, if the vehicle is provided with a sensing sensor (e.g., a laser radar, a camera or a millimeter-wave radar), obstacle information in the driving environment may be acquired by the sensing sensor, and obstacle elements including features such as the appearance, speed and acceleration of each obstacle may be obtained by analyzing that information. The semantic elements may be obtained by analyzing a semantic map representing the vehicle driving environment.
In a preferred implementation, point cloud data containing obstacles in the driving environment may be collected by a laser radar sensor installed on the vehicle; the point cloud data is input into a preset target detection network to obtain the obstacle elements output by that network; and a semantic map matched with the point cloud data is acquired. The semantic map may be updated in real time according to road conditions in the driving environment. For example, if the laser radar acquires the t-th frame of point cloud data at time t, a map annotated with semantic information and representing the driving environment at time t needs to be acquired as the semantic map matched with the current t-th frame; the semantic map may be a three-dimensional point cloud map annotated with semantic information. After the corresponding semantic map is determined, in order to acquire semantic information that may be related to the obstacle elements, the semantic elements within a preset range of each obstacle's position may be selected from the semantic map according to the obstacle's position in the map.
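As a concrete illustration of this last selection step only, the following is a minimal sketch (not part of the patent text) that filters semantic objects by Euclidean distance to an obstacle position; the data layout and the radius value are assumptions made purely for illustration.

    import numpy as np

    def select_semantic_elements(obstacle_pos, semantic_positions, semantic_elements, radius=30.0):
        """Return the semantic elements lying within `radius` metres of an obstacle.

        obstacle_pos: (3,) array, obstacle centre in the semantic-map frame.
        semantic_positions: (N, 3) array, one position per semantic object.
        semantic_elements: list of N semantic-element records (layout assumed).
        radius: preset range around the obstacle (value chosen for illustration).
        """
        dists = np.linalg.norm(semantic_positions - obstacle_pos, axis=1)
        keep = dists < radius
        return [elem for elem, k in zip(semantic_elements, keep) if k]

    # Usage: semantic objects within 30 m of an obstacle at (10, 5, 0)
    elements = [{"type": "stop_line"}, {"type": "traffic_light"}]
    positions = np.array([[12.0, 6.0, 0.0], [80.0, 40.0, 0.0]])
    print(select_semantic_elements(np.array([10.0, 5.0, 0.0]), positions, elements))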
The preset target detection network may be a neural network trained on point cloud data to implement the obstacle detection function. Specifically, it may be trained using point cloud data containing obstacles as training samples, with the training target of making the network output approach the obstacles labeled in those samples. The neural network may be a region-based convolutional neural network (R-CNN), Fast R-CNN, YOLO, SSD (Single Shot MultiBox Detector) or the like, which is not limited in this embodiment.
In another implementation, image data of obstacles in the vehicle driving environment may be acquired by a camera mounted on the vehicle, and target detection and target recognition may be performed on the image data to obtain the obstacle elements; the semantic elements may likewise be extracted, via semantic segmentation, from image data that is annotated with semantic information and represents the panorama of the driving environment. In other implementations, other modules of the sensing system deployed in the vehicle, such as a lane line detection module, may also be used to obtain the obstacle elements and semantic elements. It should be noted that this embodiment does not limit the specific manner of determining the obstacle elements and semantic elements in the vehicle driving environment.
S102, performing feature enhancement on the obstacle elements and the semantic elements to obtain the embedded features of the obstacles.
In this embodiment, the embedded features of the obstacles may be obtained by separately extracting features from the obstacle elements and the semantic elements, and then enhancing and fusing the two sets of extracted features by a feature enhancement method. This embodiment does not limit the specific feature enhancement manner.
In one implementation, features are extracted from the obstacle elements and the semantic elements respectively to obtain obstacle features and semantic features; the distributions of the obstacle features and the semantic features in feature space are computed respectively to obtain an obstacle feature distribution and a semantic feature distribution; association features between the obstacle elements and the semantic elements are determined based on these two distributions; and the association features are fused with the obstacle features to obtain the embedded features of the obstacles.
In another implementation, feature enhancement may be performed on the obstacle elements and the semantic elements by constructing a graph. Specifically, the obstacle elements and the semantic elements are taken as nodes and connected by edges; features are extracted from the obstacle elements and the semantic elements respectively to obtain obstacle features and semantic features, which are normalized to the same feature dimension; the normalized features determine the weights of the edges incident to each node; a graph is built from the nodes, the edges between them and the edge weights; and the edge weights of the target nodes are enhanced based on the relationships between nodes in the graph, thereby enhancing the obstacle features and obtaining embedded features of the obstacles that fuse the feature information of the semantic elements.
It can be understood that there may be a plurality of obstacles in this embodiment, and their number is not limited; each obstacle corresponds to one obstacle element, each obstacle element corresponds to one embedded feature, and the number of embedded features is likewise not limited.
S103, performing feature association between the embedded features and the historical track of the vehicle to obtain target features.
In this embodiment, once the embedded features carrying the enhanced feature information are obtained, they may be screened, and the embedded features with high confidence may be selected for feature association with the historical track of the vehicle, yielding target features in which the historical track information and the obstacle information are fused once more. It can be understood that the target features contain historical track information, obstacle information and semantic information: they are features that are enhanced again on the basis of the embedded features and fused with the historical track information. Accordingly, the target features may also be obtained from the embedded features and the historical track of the vehicle by way of feature enhancement; this embodiment does not limit the specific feature association and feature enhancement manner.
The historical track of the vehicle in this embodiment consists of the vehicle pose information of historical frames, where the historical frames may be understood as the previous times t-1, t-2, t-3, …, t-n (n <= t) relative to the current time t. The vehicle pose information may comprise the position and attitude of the vehicle: the position may be expressed as three-dimensional Euclidean coordinates (x, y, z), and the attitude as the commonly used attitude angles, namely pitch, yaw and roll.
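For concreteness, a pose record of one historical frame could be represented as below; this is only an illustrative sketch of the data structure just described, and the field names are assumptions rather than part of the patent.

    from dataclasses import dataclass
    from typing import List

    @dataclass
    class VehiclePose:
        """Vehicle pose of one historical frame: position plus attitude angles."""
        x: float      # three-dimensional Euclidean coordinates
        y: float
        z: float
        pitch: float  # attitude angles
        yaw: float
        roll: float

    # A historical track is then simply the poses of frames t-n, ..., t-1
    HistoricalTrack = List[VehiclePose]
    track: HistoricalTrack = [VehiclePose(0.0, 0.0, 0.0, 0.0, 0.10, 0.0),
                              VehiclePose(1.2, 0.1, 0.0, 0.0, 0.12, 0.0)]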
S104, inputting the target features into a preset association network and a preset trajectory prediction network respectively, to obtain the matching result between the obstacles and the historical track output by the association network and multiple groups of predicted trajectories of the obstacles output by the trajectory prediction network.
In this embodiment, the preset association network may be a neural network trained on obstacle feature information and vehicle track information. The association network matches obstacles in the driving environment against the track of the vehicle by measuring the feature similarity between each obstacle and the historical track. Since the target features are obtained by feature association and feature enhancement between the embedded features of the obstacles and the historical track of the vehicle, inputting the target features into the trained association network yields the matching result between the obstacles and the historical track. The association network is trained in the same way as an ordinary neural network: a sample data set consisting of obstacle feature data and vehicle historical track data is obtained; during forward propagation the loss between the network's predictions and the ground-truth labels in the samples is computed; and during backward propagation that loss is used to optimize the network parameters, until the loss falls below a preset threshold or another preset training condition is met (for example, the parameters converge, or the total number of training iterations reaches the maximum), at which point training of the association network is deemed complete.
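The generic training procedure described above can be sketched as follows. The network architecture, loss choice and threshold values here are illustrative assumptions; the patent only specifies the loop of forward loss computation, backward optimization and a stopping condition.

    import torch
    import torch.nn as nn

    def train_association_network(model: nn.Module, loader, epochs=100, loss_threshold=1e-3):
        """Generic training loop: stop when the average loss drops below a preset
        threshold or the maximum number of epochs is reached (values assumed)."""
        criterion = nn.BCEWithLogitsLoss()           # assumed loss for match / no-match labels
        optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
        for epoch in range(epochs):
            epoch_loss, num_batches = 0.0, 0
            for target_features, labels in loader:   # labels: 1.0 if obstacle matches a track
                optimizer.zero_grad()
                logits = model(target_features)      # forward propagation
                loss = criterion(logits, labels)
                loss.backward()                      # backward propagation
                optimizer.step()
                epoch_loss += loss.item()
                num_batches += 1
            if epoch_loss / max(num_batches, 1) < loss_threshold:
                break                                # preset training condition met
        return model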
The preset trajectory prediction network may likewise be a neural network trained on obstacle feature information and vehicle track information; it is used to predict the state of an obstacle, including its various possible future behaviors. Its training process is the same as that of an ordinary neural network and can refer to the brief description of the association network training above, which is not repeated here. Inputting the target features into the preset trajectory prediction network yields multiple groups of predicted trajectories of the obstacles, where each group is a position distribution predicted from the track position of an obstacle and may comprise several self-defined control parameters. In this embodiment each group of predicted trajectories may carry a confidence and a variance, from which the most likely behavior of the obstacle over a future period can be determined.
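As an illustration of how the confidence and variance of each predicted group might be used, the following sketch picks the most likely trajectory; the exact selection rule (confidence discounted by variance) is an assumption, since the patent only states that the most likely behavior can be determined from these two quantities.

    import numpy as np

    def most_likely_trajectory(trajectories, confidences, variances):
        """trajectories: list of (T, 2) arrays of future positions, one per group.
        confidences / variances: one scalar per group, as output by the network.
        Returns the group whose confidence, discounted by its variance, is highest
        (a simple assumed scoring rule)."""
        scores = np.asarray(confidences) / (1.0 + np.asarray(variances))
        return trajectories[int(np.argmax(scores))]

    groups = [np.array([[0, 0], [1, 0], [2, 0]]), np.array([[0, 0], [1, 1], [2, 2]])]
    print(most_likely_trajectory(groups, confidences=[0.8, 0.6], variances=[0.2, 0.1]))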
In a specific implementation, because the historical track of the vehicle, the obstacle elements and the semantic elements are acquired over a limited time window, and in order to allow the association network and the trajectory prediction network to output more accurate predictions for obstacles at different times, a recurrent neural network (RNN) may be used, after the target features are obtained, to model the obstacles over the time sequence and thereby optimize the target features. The optimized target features are then input into the preset association network and the preset trajectory prediction network respectively, to obtain the matching result between the obstacles and the historical track output by the association network and the multiple groups of predicted trajectories of the obstacles output by the trajectory prediction network.
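A minimal sketch of this temporal modeling step is shown below, using a GRU as the recurrent unit; the choice of GRU, the feature dimension and the sequence layout are assumptions, since the patent only calls for an RNN over the time sequence of target features.

    import torch
    import torch.nn as nn

    class TemporalRefiner(nn.Module):
        """Optimize per-obstacle target features over time with a recurrent layer."""
        def __init__(self, feature_dim: int = 128):
            super().__init__()
            self.rnn = nn.GRU(feature_dim, feature_dim, batch_first=True)

        def forward(self, target_feature_sequence: torch.Tensor) -> torch.Tensor:
            # target_feature_sequence: (num_obstacles, num_frames, feature_dim)
            refined, _ = self.rnn(target_feature_sequence)
            return refined[:, -1, :]   # optimized target feature of the latest frame

    refiner = TemporalRefiner(feature_dim=128)
    optimized = refiner(torch.randn(5, 4, 128))   # 5 obstacles over 4 frames
    print(optimized.shape)                        # torch.Size([5, 128])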
In this embodiment, obstacle elements and semantic elements in the vehicle driving environment are determined, where the obstacle elements contain various features of the obstacles. Feature enhancement is performed on the obstacle elements and the semantic elements to obtain embedded features of the obstacles; because the embedded features fuse the obstacles' own features with semantic information from the environment, they provide prior information about the geographic scene for the subsequent prediction of obstacle states and strengthen the robustness of the obstacle tracking method in complex terrain scenes. The embedded features are associated with the historical track of the vehicle, establishing the relationship between the obstacles and the vehicle driving state and yielding the target features; that is, the target features represent the relationship between the embedded features and the vehicle's historical position information, the embedded features are enhanced again using the historical frame positions in the historical track, and the target features therefore fuse the vehicle driving state information. The target features are input into a preset association network and a preset trajectory prediction network respectively, to obtain the matching result between the obstacles and the historical track output by the association network and the multiple groups of predicted trajectories of the obstacles output by the trajectory prediction network. This makes it possible to predict the obstacle state over a longer time horizon, to effectively predict multiple future behaviors of an obstacle, to further reduce the computational load on the decision-making system, and to correct occasional short-term erroneous decision instructions in time.
Example two
Fig. 2 is a flowchart of an obstacle tracking method according to a second embodiment of the present invention. This embodiment further refines the obstacle tracking method of the foregoing embodiment, and specifically includes the following steps:
s201, determining barrier elements and semantic elements in the vehicle running environment.
S202, extracting first initial features with a unified dimension from the obstacle elements and the semantic elements through a preset first feature extraction network.
The first feature extraction network is a neural network for extracting features from the obstacle elements and the semantic elements. It may be obtained by training a common neural network, or by modifying a common neural network and training the modified network, which is not limited in this embodiment. For example, the first feature extraction network may be obtained by adapting and training any one of a convolutional neural network (CNN), a feature pyramid network (FPN), a recurrent neural network (RNN), a residual neural network (ResNet) and the like.
In this embodiment, the obstacle elements include various features of the obstacles, such as appearance features and state features, while the semantic elements include various kinds of semantic information, such as traffic light, area identification, lane line and stop line semantics. The obstacle elements and the semantic elements are input into the trained first feature extraction network, which extracts features from them, unifies the features of all elements into the same dimension F, and outputs first initial features of that unified dimension, so that these features can subsequently be enhanced and fused into the embedded features of the obstacles.
S203, constructing a first feature matrix based on the first initial features.
In this embodiment, the product of the total number of obstacle elements and semantic elements and the dimension of the first initial features may be taken as the dimension of the first feature matrix. If there are M obstacle elements and N semantic elements, and the first initial feature of each element has dimension F, the first feature matrix has dimension (M + N) × F, and each value in it corresponds to feature information expressed by the first initial features.
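Steps S202 and S203 can be sketched together as follows: each element's raw feature vector is projected to the common dimension F and the results are stacked into an (M + N) × F matrix. The per-type linear projections and the dimensions used are assumptions for illustration only.

    import torch
    import torch.nn as nn

    class FirstFeatureExtractor(nn.Module):
        """Project obstacle and semantic element features to a unified dimension F
        and stack them into the first feature matrix of shape (M + N, F)."""
        def __init__(self, obstacle_dim: int, semantic_dim: int, unified_dim: int = 64):
            super().__init__()
            self.obstacle_proj = nn.Linear(obstacle_dim, unified_dim)
            self.semantic_proj = nn.Linear(semantic_dim, unified_dim)

        def forward(self, obstacle_elems: torch.Tensor, semantic_elems: torch.Tensor):
            # obstacle_elems: (M, obstacle_dim); semantic_elems: (N, semantic_dim)
            obstacle_feats = torch.relu(self.obstacle_proj(obstacle_elems))
            semantic_feats = torch.relu(self.semantic_proj(semantic_elems))
            return torch.cat([obstacle_feats, semantic_feats], dim=0)  # (M + N, F)

    extractor = FirstFeatureExtractor(obstacle_dim=10, semantic_dim=6, unified_dim=64)
    feature_matrix = extractor(torch.randn(3, 10), torch.randn(4, 6))
    print(feature_matrix.shape)   # torch.Size([7, 64])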
S204, establishing a first adjacency matrix based on the position information of the obstacle elements and the semantic elements.
In a specific implementation of this embodiment, S204 may include the following specific steps:
s2041, an element set composed of a plurality of barrier elements and a plurality of semantic elements is determined.
S2042, calculating the distance between every two elements in the element set.
In this embodiment, the elements in the element set are the obstacle elements and the semantic elements, and calculating the distance between every two elements means calculating the distance between two obstacle elements, between an obstacle element and a semantic element, and between two semantic elements. More specifically, in one example, calculating the distance between an obstacle element and a semantic element includes: unifying the coordinate systems of the position of the obstacle corresponding to the obstacle element and the position of the semantic object corresponding to the semantic element; determining, in the unified coordinate system, the obstacle coordinates and the semantic coordinates; and calculating the Euclidean distance between them as the distance between the obstacle element and the semantic element. In another example, calculating the distance between two obstacle elements includes: determining, in the same coordinate system, the coordinates of obstacle A corresponding to obstacle element A1 and the coordinates of obstacle B corresponding to obstacle element B1, and calculating the Euclidean distance between them as the distance between obstacle elements A1 and B1. In yet another example, calculating the distance between two semantic elements includes: determining, in the same coordinate system, the coordinates of semantic object C corresponding to semantic element C1 and the coordinates of semantic object D corresponding to semantic element D1, and calculating the Euclidean distance between them as the distance between semantic elements C1 and D1. Various formulas can be used to calculate the distance between two elements; this embodiment does not limit the specific distance calculation method.
S2043, comparing the distance with a preset distance threshold value, and establishing a first adjacency matrix based on the comparison result.
In one specific example, S2043 may include:
for each element in the element set, if the distance between the element and another element is smaller than a preset distance threshold, setting the value at the position corresponding to that distance in the first adjacency matrix to 1;
and if the distance between the element and another element is greater than the distance threshold, setting the value at the position corresponding to that distance in the first adjacency matrix to 0.
The dimension of the first adjacency matrix can be determined by the total number of obstacle elements and semantic elements. For example, if there are M obstacle elements and N semantic elements, the dimension of the first adjacency matrix is (M + N) × (M + N); that is, the combined total of obstacle elements and semantic elements determines both the number of rows and the number of columns of the first adjacency matrix.
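A compact sketch of S2041 to S2043, shown below, computes pairwise Euclidean distances between all element positions and thresholds them into the binary (M + N) × (M + N) adjacency matrix; the threshold value is an assumption for illustration.

    import numpy as np

    def build_adjacency(positions: np.ndarray, distance_threshold: float = 20.0) -> np.ndarray:
        """positions: ((M + N), 3) array of obstacle and semantic element positions
        in a unified coordinate system. Returns the binary first adjacency matrix:
        1 where the pairwise distance is below the preset threshold."""
        diffs = positions[:, None, :] - positions[None, :, :]      # pairwise differences
        dists = np.linalg.norm(diffs, axis=-1)                     # (M+N, M+N) distances
        adjacency = (dists < distance_threshold).astype(np.float64)
        np.fill_diagonal(adjacency, 0.0)   # self-connections are added later as self-loops
        return adjacency

    positions = np.array([[0.0, 0.0, 0.0], [5.0, 0.0, 0.0], [50.0, 0.0, 0.0]])
    print(build_adjacency(positions))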
In a specific implementation of this embodiment, S204 may further include: optimizing the first adjacency matrix by adding self-loops and applying normalization, to obtain the optimized first adjacency matrix.
For example, an identity matrix of the same dimension as the first adjacency matrix may be added to it, yielding the first adjacency matrix with self-loops; a degree matrix of the obstacle elements and the semantic elements is determined; and the first adjacency matrix with self-loops is multiplied by the inverse of the degree matrix to obtain the optimized first adjacency matrix.
It should be noted that the degree matrix is a common concept in graphs: taking the obstacle elements and the semantic elements as the nodes of the graph, the degree (out-degree and in-degree) of each node is defined by the edges connecting it to other nodes. The degree matrix in this embodiment has the same dimension as the first adjacency matrix and is a diagonal matrix; the number of values on the diagonal equals the total number of obstacle elements and semantic elements, and each diagonal value is the degree of the corresponding node.
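The optimization just described amounts to computing D^{-1}(A + I), i.e. adding self-loops and normalizing by node degree. A short sketch follows; whether the degrees are computed on the self-looped matrix is an assumption, as the patent does not pin this down.

    import numpy as np

    def optimize_adjacency(adjacency: np.ndarray) -> np.ndarray:
        """Add self-loops and normalize by the degree matrix: A_opt = D^{-1} (A + I)."""
        a_with_loops = adjacency + np.eye(adjacency.shape[0])   # identity of the same dimension
        degrees = a_with_loops.sum(axis=1)                      # degrees on the self-looped matrix (assumed)
        d_inverse = np.diag(1.0 / degrees)                      # inverse of the degree matrix
        return d_inverse @ a_with_loops

    adjacency = np.array([[0.0, 1.0, 0.0],
                          [1.0, 0.0, 1.0],
                          [0.0, 1.0, 0.0]])
    print(optimize_adjacency(adjacency))   # each row sums to 1 after normalization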
S205, inputting the first feature matrix and the first adjacency matrix into a preset first graph neural network to obtain the embedded features of the obstacles output by the first graph neural network.
The first graph neural network used in this embodiment may be any graph neural network capable of establishing graph relationships, such as a graph convolutional network (GCN), a graph attention network (GAT), a graph autoencoder, a graph generative network or a graph spatial-temporal network, which is not limited in this embodiment.
In a specific implementation of this embodiment, inputting the first feature matrix and the first adjacency matrix into the preset first graph neural network can be understood as taking the obstacle elements and the semantic elements as the nodes of a graph: the first feature matrix determines the initial weights of the edges between nodes, and matrix multiplication with the first adjacency matrix (for example, left-multiplying the first feature matrix by the first adjacency matrix) continuously updates the edge weights, thereby updating the feature information of the obstacle elements and the semantic elements, that is, continuously updating the first initial features, and finally producing the embedded features of the obstacles output by the first graph neural network.
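A single propagation step of this kind, in the style of a graph convolutional layer, is sketched below; the learnable weight matrix and the ReLU nonlinearity are standard GCN ingredients assumed here, not details stated in the patent.

    import numpy as np

    def graph_propagation(feature_matrix: np.ndarray,
                          optimized_adjacency: np.ndarray,
                          weight: np.ndarray) -> np.ndarray:
        """One graph update: H' = ReLU(A_opt @ H @ W).
        feature_matrix H: (M + N, F); optimized_adjacency A_opt: (M + N, M + N);
        weight W: (F, F), a learnable parameter (assumed)."""
        propagated = optimized_adjacency @ feature_matrix @ weight
        return np.maximum(propagated, 0.0)   # ReLU

    rng = np.random.default_rng(0)
    h = rng.normal(size=(7, 64))             # first feature matrix: 7 nodes, F = 64
    a = np.full((7, 7), 1.0 / 7.0)           # toy normalized adjacency
    embedded = graph_propagation(h, a, rng.normal(size=(64, 64)))
    print(embedded.shape)                    # (7, 64): updated features per node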
S206, extracting second initial features with a unified dimension from the embedded features and the historical track of the vehicle through a preset second feature extraction network.
In one example, in order to save computing resources and avoid the vanishing gradients caused by an overly deep second feature extraction network, the embedded features of the obstacles output by the first graph neural network may be screened: the embedded features of obstacles whose confidence is higher than a preset confidence threshold, together with the embedded features of low-confidence obstacles, are determined as the target obstacle features, and the target obstacle features and the historical track of the vehicle are input into the preset second feature extraction network to obtain the second initial features that it outputs.
The second feature extraction network is a neural network for extracting features from the embedded features and the historical track of the vehicle. It may likewise be obtained by training a common neural network, or by modifying a common neural network and training the modified network, for example any one of a convolutional neural network (CNN), a feature pyramid network (FPN), a recurrent neural network (RNN), a residual neural network (ResNet) and the like, which is not limited in this embodiment.
In this embodiment, the embedded features comprise the enhanced features of a plurality of obstacles, and the historical track of the vehicle comprises the position information of a plurality of historical frames (the position and attitude angles of the vehicle at a given time count as one piece of position information). The embedded features and the historical track are input into the trained second feature extraction network, which extracts features from both kinds of object, unifies their output features into the same dimension W, and outputs second initial features of that unified dimension, so that these features can subsequently be enhanced and fused into the target features.
S207, constructing a second feature matrix based on the second initial features.
In this embodiment, the product of the total number of objects (pieces of position information in the historical track plus embedded features) and the dimension of the second initial features may be taken as the dimension of the second feature matrix. If it is determined that there are K vehicle historical track entries and P embedded features of obstacles, and the second initial feature of each object has dimension W, the second feature matrix has dimension (K + P) × W, and each value in it is closely related to the feature information expressed by the second initial features. In one example, if the embedded features fed into the second feature extraction network have been screened, i.e. they consist of the embedded features of obstacles with confidence higher than the preset confidence threshold and the embedded features of low-confidence obstacles, then with P1 high-confidence embedded features, P2 low-confidence embedded features, K vehicle historical track entries and a second initial feature dimension of W per object, the dimension of the second feature matrix is (K + P1 + P2) × W.
S208, establishing a second adjacency matrix based on the position information of the embedded features and of the historical track of the vehicle.
In this embodiment, the second adjacency matrix may be constructed in the same way as the first adjacency matrix: an object set consisting of the embedded features and the vehicle historical track entries is determined, the distance between every two objects in the set is calculated, each distance is compared with a preset distance threshold, and the second adjacency matrix is established based on the comparison results. Specifically, for each object in the object set, if the distance between the object and another object is smaller than the preset distance threshold, the value at the position corresponding to that distance in the second adjacency matrix is set to 1; if the distance is greater than the distance threshold, the value at that position is set to 0.
The dimension of the second adjacency matrix can be determined by the total number of embedded features and vehicle historical track entries. For example, if there are P embedded features of obstacles and K vehicle historical track entries, the dimension of the second adjacency matrix is (P + K) × (P + K); that is, this combined total determines both the number of rows and the number of columns, and every distance computed between objects in the object set has a corresponding matrix position determined by a row and a column. The specific establishment and optimization of the second adjacency matrix are not described again in this embodiment.
S209, inputting the second feature matrix and the second adjacency matrix into a preset second graph neural network to obtain the target features output by the second graph neural network.
The second graph neural network used in this embodiment may likewise be any graph neural network capable of establishing graph relationships, such as a graph convolutional network (GCN), a graph attention network (GAT), a graph autoencoder, a graph generative network or a graph spatial-temporal network, which is not limited in this embodiment.
In a specific implementation of this embodiment, inputting the second feature matrix and the second adjacency matrix into the preset second graph neural network can be understood as taking the embedded features of the obstacles and the historical track of the vehicle as the nodes of a graph: the second feature matrix determines the initial weights of the edges between nodes, and matrix multiplication with the second adjacency matrix (for example, left-multiplying the second feature matrix by the second adjacency matrix) continuously updates the edge weights, thereby updating the feature information of the embedded features and the historical track, that is, continuously updating the second initial features, and finally producing the target features output by the second graph neural network.
S210, inputting the target features into a preset association network and a preset trajectory prediction network respectively, to obtain the matching result between the obstacles and the historical track output by the association network and the multiple groups of predicted trajectories of the obstacles output by the trajectory prediction network.
In this embodiment, the preset association network may be a neural network trained on samples containing feature information similar to the target features, for example a multilayer perceptron (MLP); the preset trajectory prediction network may likewise be a neural network trained on such samples, for example a convolutional neural network (CNN). This embodiment does not specifically limit the preset association network or the preset trajectory prediction network.
Referring to fig. 3 and fig. 4, the initially obtained obstacle elements, which contain various kinds of obstacle feature information, and the semantic elements, which contain rich geographic-environment semantics, are input into the first feature extraction network to extract the first initial features and unify their dimensions. The first initial features are input into the first graph neural network, where they are enhanced through the graph relationships to yield the embedded features of the obstacles. The embedded features and the historical track of the vehicle are input into the second feature extraction network to obtain the second initial features, which unify the obstacle feature information and the vehicle driving-state information in the same feature dimension. The second initial features are then input into the second graph neural network, where the graph relationships are used once more to associate the vehicle driving state with the obstacle states in the driving environment, yielding target features that fuse the feature information and carry strong correlations. Finally, the target features are input into the preset association network and the preset trajectory prediction network respectively, which output the matching result between the obstacles and the historical track and the multiple groups of predicted trajectories of the obstacles. In this way, the technical problems that the prior art can only provide the motion state of an obstacle over a short time, cannot establish the correlation between the obstacle and the vehicle driving state, and cannot predict the various possible motion states of the obstacle over a future time period are solved.
Example three
Fig. 5 is a block diagram of an obstacle tracking device according to a third embodiment of the present invention, where the obstacle tracking device may be implemented by software and/or hardware. The device includes: an element determination module 501, a feature enhancement module 502, a feature association module 503, and a result output module 504, wherein,
an element determination module 501, configured to determine an obstacle element and a semantic element in a vehicle driving environment;
a feature enhancement module 502, configured to perform feature enhancement on the obstacle element and the semantic element to obtain an embedded feature of the obstacle;
the feature association module 503 is configured to perform feature association on the embedded features and the historical track of the vehicle to obtain target features;
a result output module 504, configured to input the target feature into a preset association network and a preset trajectory prediction network, respectively, to obtain a matching result between the obstacle and the historical trajectory output by the association network, and multiple groups of predicted trajectories of the obstacle output by the trajectory prediction network.
In one embodiment of the present invention, the element determination module 501 includes:
the point cloud data acquisition sub-module is used for acquiring point cloud data containing obstacles in the vehicle driving environment;
the obstacle element determining submodule is used for inputting the point cloud data into a preset target detection network to obtain obstacle elements output by the target detection network;
the semantic map acquisition sub-module is used for acquiring a semantic map matched with the point cloud data;
and the semantic element determining submodule is used for selecting semantic elements in a preset range of the position of the obstacle from the semantic map.
In one embodiment of the present invention, the feature enhancement module 502 comprises:
the first initial feature determination submodule is used for extracting first initial features with a unified dimension from the obstacle elements and the semantic elements through a preset first feature extraction network;
a first feature matrix determination submodule for constructing a first feature matrix based on the first initial feature;
a first adjacency matrix determination submodule for establishing a first adjacency matrix based on the position information of the obstacle elements and the semantic elements;
and the embedded feature determination submodule is used for inputting the first feature matrix and the first adjacency matrix into a preset first graph neural network to obtain the embedded features of the obstacles output by the first graph neural network.
In one embodiment of the invention, the first adjacency matrix determination sub-module includes:
an element set determination unit configured to determine an element set composed of the plurality of obstacle elements and the plurality of semantic elements;
the distance calculation unit is used for calculating the distance between every two elements in the element set;
and the distance comparison unit is used for comparing the distance with a preset distance threshold value and establishing a first adjacency matrix based on the comparison result.
In one embodiment of the present invention, the distance comparing unit includes:
a first value determination subunit, configured to, for each element in the element set, set the value at the position in the first adjacency matrix corresponding to the distance between the element and another element to 1 if that distance is smaller than a preset distance threshold;
and a second value determination subunit, configured to set the value at the position in the first adjacency matrix corresponding to the distance to 0 if the distance between the element and another element is greater than the distance threshold.
In one embodiment of the present invention, the first adjacency matrix determination sub-module further includes:
and the adjacency matrix optimization unit is used for optimizing the first adjacency matrix by adding self-loops and applying normalization, to obtain the optimized first adjacency matrix.
In one embodiment of the present invention, the adjacency matrix optimization unit includes:
the self-loop adding subunit is used for adding an identity matrix to the first adjacency matrix to obtain the first adjacency matrix with self-loops;
a degree matrix determination subunit, configured to determine a degree matrix of the obstacle elements and the semantic elements;
and the optimization determination subunit is used for multiplying the first adjacency matrix with self-loops by the inverse of the degree matrix to obtain the optimized first adjacency matrix.
In one embodiment of the present invention, the feature association module 503 includes:
the second initial feature determining submodule is used for extracting second initial features with unified dimensions from the embedded features and the historical track of the vehicle through a preset second feature extraction network;
a second feature matrix determination submodule, configured to construct a second feature matrix based on the second initial feature;
a second adjacency matrix determination submodule for establishing a second adjacency matrix based on the embedded features and the position information of the historical track of the vehicle;
and the target feature determination submodule is used for inputting the second feature matrix and the second adjacency matrix into a preset second graph neural network to obtain the target features output by the second graph neural network.
In one embodiment of the present invention, the second initial characteristic determination submodule includes:
the target obstacle feature determination unit is used for determining, as target obstacle features, the embedded features of the obstacles whose confidence is higher than a preset confidence threshold together with the embedded features of the low-confidence obstacles;
and the initial feature output unit is used for inputting the target obstacle features and the historical track of the vehicle into a preset second feature extraction network to obtain second initial features output by the second feature extraction network.
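The second feature extraction network is characterized only as mapping the embedded features and the historical tracks to second initial features with uniform dimensions. The sketch below shows one way such a mapping could look, using two separate linear projections followed by ReLU; the dimensions, the random weights, and the omission of the confidence-based selection step are all assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Assumed dimensions: obstacle embedded features of size 32, flattened
# historical tracks of size 20, shared (uniform) output dimension of 64.
emb_dim, trk_dim, out_dim = 32, 20, 64
W_emb = 0.1 * rng.standard_normal((emb_dim, out_dim))
W_trk = 0.1 * rng.standard_normal((trk_dim, out_dim))

obstacle_embeddings = rng.standard_normal((5, emb_dim))  # 5 obstacles
track_histories = rng.standard_normal((3, trk_dim))      # 3 historical tracks

# Project both kinds of input to the same dimension and stack them, giving
# the rows of the second feature matrix (one row per node of the second graph).
second_initial_features = np.vstack([
    np.maximum(obstacle_embeddings @ W_emb, 0.0),
    np.maximum(track_histories @ W_trk, 0.0),
])
print(second_initial_features.shape)   # (8, 64)
```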
The obstacle tracking device provided by the embodiment of the invention can execute the obstacle tracking method provided by any embodiment of the invention, and has functional modules corresponding to the executed method and the corresponding beneficial effects.
Example four
Fig. 6 is a schematic structural diagram of a computer device according to a fourth embodiment of the present invention. As shown in fig. 6, the computer device includes a processor 600, a memory 601, a communication module 602, an input device 603, and an output device 604. The number of processors 600 in the computer device may be one or more; one processor 600 is taken as an example in fig. 6. The processor 600, the memory 601, the communication module 602, the input device 603, and the output device 604 in the computer device may be connected by a bus or in other ways; connection by a bus is taken as an example in fig. 6.
The memory 601, which is a computer-readable storage medium, may be used to store software programs, computer-executable programs, and modules, such as program instructions/modules corresponding to the obstacle tracking method in the embodiment of the present invention (for example, the element determination module 501, the feature enhancement module 502, the feature association module 503, and the result output module 504 in the obstacle tracking apparatus). The processor 600 executes various functional applications and data processing of the computer device by executing software programs, instructions and modules stored in the memory 601, that is, implements the obstacle tracking method described above.
The memory 601 may mainly include a storage program area and a storage data area, wherein the storage program area may store an operating system, an application program required for at least one function; the storage data area may store data created according to the use of the terminal, and the like. Further, the memory 601 may include high speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid state storage device. In some examples, memory 601 may further include memory located remotely from processor 600, which may be connected to a computer device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The communication module 602 is configured to establish a connection with a display screen and implement data interaction with the display screen.
The input device 603 may be used to receive input numeric or character information and generate key signal inputs related to user settings and function control of the computer device.
The output device 604 may include a display device such as a display screen.
It should be noted that the specific composition of the input device 603 and the output device 604 can be set according to actual situations.
The computer device provided by the embodiment of the invention can execute the obstacle tracking method provided by any embodiment of the invention, and has corresponding functions and beneficial effects.
EXAMPLE five
An embodiment of the present invention further provides a storage medium containing computer-executable instructions, which when executed by a computer processor, perform a method for obstacle tracking, the method including:
determining barrier elements and semantic elements in a vehicle driving environment;
performing feature enhancement on the barrier elements and the semantic elements to obtain embedded features of the obstacles;
performing feature association on the embedded features and a historical track of the vehicle to obtain target features;
and inputting the target features into a preset association network and a preset track prediction network respectively, so as to obtain a matching result, output by the association network, between the obstacles and the historical track, and a plurality of groups of predicted tracks of the obstacles output by the track prediction network.
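Read end to end, the method is a four-stage pipeline: element determination, feature enhancement, feature association, and the two output heads. The skeleton below restates those stages as placeholder callables so that the control flow executes; every function name, signature, and dummy value is invented for illustration and is not an API defined by the patent.

```python
def track_obstacles(point_cloud, semantic_map, historical_tracks, nets):
    """Illustrative pipeline skeleton; `nets` bundles placeholder callables."""
    # 1. Determine barrier elements and semantic elements in the driving environment.
    barriers, semantics = nets["determine_elements"](point_cloud, semantic_map)
    # 2. Feature enhancement -> embedded features of the obstacles.
    embedded = nets["enhance_features"](barriers, semantics)
    # 3. Feature association with the historical tracks -> target features.
    target = nets["associate_features"](embedded, historical_tracks)
    # 4. The association network and the track prediction network both consume
    #    the same target features.
    return nets["association_network"](target), nets["track_prediction_network"](target)


if __name__ == "__main__":
    # Dummy stand-ins so the skeleton runs; real networks would replace these.
    dummy = {
        "determine_elements": lambda pc, sm: (["barrier_0"], ["lane_0"]),
        "enhance_features": lambda b, s: [0.0] * 4,
        "associate_features": lambda e, h: [0.0] * 4,
        "association_network": lambda t: {"obstacle_0": "track_3"},
        "track_prediction_network": lambda t: [[(0.0, 0.0), (1.0, 0.5)]],
    }
    print(track_obstacles(None, None, [], dummy))
```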
Of course, in the storage medium containing computer-executable instructions provided by the embodiments of the present invention, the computer-executable instructions are not limited to the method operations described above, and may also be used to perform related operations in the obstacle tracking method provided by any embodiment of the present invention.
From the above description of the embodiments, it is obvious for those skilled in the art that the present invention can be implemented by software and necessary general hardware, and certainly can be implemented by hardware, but the former is a better embodiment in many cases. Based on such understanding, the technical solutions of the present invention may be embodied in the form of a software product, which can be stored in a computer-readable storage medium, such as a floppy disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a FLASH Memory (FLASH), a hard disk or an optical disk of a computer, and includes several instructions for enabling a computer device (which may be a personal computer, a server, or a network device) to execute the methods according to the embodiments of the present invention.
It should be noted that, in the embodiment of the obstacle tracking apparatus, the included units and modules are merely divided according to the functional logic, but are not limited to the above division, as long as the corresponding functions can be implemented; in addition, specific names of the functional units are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present invention.
It is to be noted that the foregoing is only illustrative of the preferred embodiments of the present invention and the technical principles employed. It will be understood by those skilled in the art that the present invention is not limited to the particular embodiments described herein, but is capable of various obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the invention. Therefore, although the present invention has been described in greater detail by the above embodiments, the present invention is not limited to the above embodiments, and may include other equivalent embodiments without departing from the spirit of the present invention, and the scope of the present invention is determined by the scope of the appended claims.

Claims (11)

1. An obstacle tracking method, comprising:
determining barrier elements and semantic elements in a vehicle driving environment;
performing feature enhancement on the barrier elements and the semantic elements to obtain embedded features of the obstacles;
extracting second initial features with uniform dimensions from the embedded features and the historical track of the vehicle through a preset second feature extraction network;
constructing a second feature matrix based on the second initial features;
establishing a second adjacency matrix based on the embedded features and position information of the historical track of the vehicle;
inputting the second feature matrix and the second adjacency matrix into a preset second graph neural network to obtain target features output by the second graph neural network;
and inputting the target features into a preset association network and a preset track prediction network respectively, so as to obtain a matching result, output by the association network, between the obstacles and the historical track, and a plurality of groups of predicted tracks of the obstacles output by the track prediction network.
2. The method of claim 1, wherein the determining the barrier elements and semantic elements in the vehicle driving environment comprises:
acquiring point cloud data containing obstacles in a vehicle driving environment;
inputting the point cloud data into a preset target detection network to obtain barrier elements output by the target detection network;
acquiring a semantic map matched with the point cloud data;
and selecting, from the semantic map, semantic elements within a preset range of the position of the obstacle.
3. The method according to claim 1, wherein the performing feature enhancement on the barrier elements and the semantic elements to obtain the embedded features of the obstacles comprises:
extracting first initial features with uniform dimensions from the barrier elements and the semantic elements through a preset first feature extraction network;
constructing a first feature matrix based on the first initial features;
establishing a first adjacency matrix based on the position information of the barrier elements and the semantic elements;
and inputting the first feature matrix and the first adjacency matrix into a preset first graph neural network to obtain the embedded features of the obstacles output by the first graph neural network.
4. The method of claim 3, wherein the establishing a first adjacency matrix based on the position information of the barrier elements and the semantic elements comprises:
determining an element set consisting of a plurality of the barrier elements and a plurality of the semantic elements;
calculating the distance between every two elements in the element set;
and comparing the distance with a preset distance threshold value, and establishing a first adjacency matrix based on the comparison result.
5. The method of claim 4, wherein comparing the distance with a preset distance threshold and establishing a first adjacency matrix based on the comparison result comprises:
for each element in the element set, if the distance between the element and another element is smaller than a preset distance threshold, setting the value at the position corresponding to the distance in the first adjacency matrix to 1;
and if the distance between the element and the other element is greater than the distance threshold, setting the value at the position corresponding to the distance in the first adjacency matrix to 0.
6. The method according to claim 4 or 5, wherein the establishing a first adjacency matrix based on the position information of the barrier elements and the semantic elements further comprises:
and optimizing the first adjacency matrix by adding self-loops and performing normalization to obtain an optimized first adjacency matrix.
7. The method of claim 6, wherein the optimizing the first adjacency matrix by adding self-loops and performing normalization to obtain an optimized first adjacency matrix comprises:
adding an identity matrix to the first adjacency matrix to obtain a first adjacency matrix with self-loops added;
determining a degree matrix of the barrier elements and the semantic elements;
and multiplying the first adjacency matrix with self-loops added by the inverse of the degree matrix to obtain the optimized first adjacency matrix.
8. The method according to claim 1, wherein the extracting, through a preset second feature extraction network, second initial features with uniform dimensions from the embedded features and the historical track of the vehicle comprises:
determining, as target obstacle features, the embedded features of the obstacles whose confidence is higher than a preset confidence threshold together with the embedded features of the low-confidence obstacles;
and inputting the target obstacle features and the historical track of the vehicle into a preset second feature extraction network to obtain second initial features output by the second feature extraction network.
9. An obstacle tracking device, comprising:
an element determination module, used for determining barrier elements and semantic elements in a vehicle driving environment;
a feature enhancement module, used for performing feature enhancement on the barrier elements and the semantic elements to obtain embedded features of the obstacles;
a feature association module, used for performing feature association on the embedded features and the historical track of the vehicle to obtain target features;
and a result output module, used for respectively inputting the target features into a preset association network and a preset track prediction network to obtain a matching result, output by the association network, between the obstacles and the historical tracks, and a plurality of groups of predicted tracks of the obstacles output by the track prediction network;
wherein the feature association module comprises:
the second initial feature determination submodule is used for extracting second initial features with uniform dimensions from the embedded features and the historical track of the vehicle through a preset second feature extraction network;
a second feature matrix determination submodule, configured to construct a second feature matrix based on the second initial feature;
a second adjacency matrix determination submodule for establishing a second adjacency matrix based on the embedded features and the position information of the historical track of the vehicle;
and the target feature determination submodule is used for inputting the second feature matrix and the second adjacency matrix into a preset second graph neural network to obtain the target features output by the second graph neural network.
10. A computer device, characterized in that the computer device comprises:
one or more processors;
a storage device for storing one or more programs,
wherein the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the obstacle tracking method of any one of claims 1-8.
11. A computer-readable storage medium, on which a computer program is stored, wherein the program, when executed by a processor, implements the obstacle tracking method according to any one of claims 1-8.
CN202111018932.5A 2021-09-01 2021-09-01 Obstacle tracking method, device, equipment and storage medium Active CN113740837B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202111018932.5A CN113740837B (en) 2021-09-01 2021-09-01 Obstacle tracking method, device, equipment and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202111018932.5A CN113740837B (en) 2021-09-01 2021-09-01 Obstacle tracking method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN113740837A CN113740837A (en) 2021-12-03
CN113740837B (en) 2022-06-24

Family

ID=78734738

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202111018932.5A Active CN113740837B (en) 2021-09-01 2021-09-01 Obstacle tracking method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN113740837B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114610020B (en) * 2022-01-28 2023-05-23 广州文远知行科技有限公司 Obstacle movement track prediction method, device, equipment and storage medium
CN114419605B (en) * 2022-03-29 2022-07-19 之江实验室 Visual enhancement method and system based on multi-network vehicle-connected space alignment feature fusion
CN116152782A (en) * 2023-04-18 2023-05-23 苏州魔视智能科技有限公司 Obstacle track prediction method, device, equipment and storage medium
CN117557977A (en) * 2023-12-28 2024-02-13 安徽蔚来智驾科技有限公司 Environment perception information acquisition method, readable storage medium and intelligent device

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112078592A (en) * 2019-06-13 2020-12-15 初速度(苏州)科技有限公司 Method and device for predicting vehicle behavior and/or vehicle track
CN112651990A (en) * 2020-12-25 2021-04-13 际络科技(上海)有限公司 Motion trajectory prediction method and system, electronic device and readable storage medium
CN112651557A (en) * 2020-12-25 2021-04-13 际络科技(上海)有限公司 Trajectory prediction system and method, electronic device and readable storage medium

Family Cites Families (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US10656657B2 (en) * 2017-08-08 2020-05-19 Uatc, Llc Object motion prediction and autonomous vehicle control
US11104334B2 (en) * 2018-05-31 2021-08-31 Tusimple, Inc. System and method for proximate vehicle intention prediction for autonomous vehicles
US20190367019A1 (en) * 2018-05-31 2019-12-05 TuSimple System and method for proximate vehicle intention prediction for autonomous vehicles
CN111222438A (en) * 2019-12-31 2020-06-02 的卢技术有限公司 Pedestrian trajectory prediction method and system based on deep learning
EP3855120A1 (en) * 2020-01-23 2021-07-28 Robert Bosch GmbH Method for long-term trajectory prediction of traffic participants
KR102192348B1 (en) * 2020-02-24 2020-12-17 한국과학기술원 Electronic device for integrated trajectory prediction for unspecified number of surrounding vehicles and operating method thereof
CN111079721B (en) * 2020-03-23 2020-07-03 北京三快在线科技有限公司 Method and device for predicting track of obstacle
CN113033364A (en) * 2021-03-15 2021-06-25 商汤集团有限公司 Trajectory prediction method, trajectory prediction device, travel control method, travel control device, electronic device, and storage medium
CN113033899B (en) * 2021-03-29 2023-03-17 同济大学 Unmanned adjacent vehicle track prediction method
CN112766468B (en) * 2021-04-08 2021-07-30 北京三快在线科技有限公司 Trajectory prediction method and device, storage medium and electronic equipment
CN113291321A (en) * 2021-06-16 2021-08-24 苏州智加科技有限公司 Vehicle track prediction method, device, equipment and storage medium

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN112078592A (en) * 2019-06-13 2020-12-15 初速度(苏州)科技有限公司 Method and device for predicting vehicle behavior and/or vehicle track
CN112651990A (en) * 2020-12-25 2021-04-13 际络科技(上海)有限公司 Motion trajectory prediction method and system, electronic device and readable storage medium
CN112651557A (en) * 2020-12-25 2021-04-13 际络科技(上海)有限公司 Trajectory prediction system and method, electronic device and readable storage medium

Also Published As

Publication number Publication date
CN113740837A (en) 2021-12-03

Similar Documents

Publication Publication Date Title
CN113740837B (en) Obstacle tracking method, device, equipment and storage medium
US11860629B2 (en) Sparse convolutional neural networks
US11017550B2 (en) End-to-end tracking of objects
US20230228880A1 (en) Method for creating occupancy grid map and processing apparatus
CN111127513B (en) Multi-target tracking method
CN113264066B (en) Obstacle track prediction method and device, automatic driving vehicle and road side equipment
CN113506317B (en) Multi-target tracking method based on Mask R-CNN and apparent feature fusion
CN112288770A (en) Video real-time multi-target detection and tracking method and device based on deep learning
WO2018081036A1 (en) Dynamic scene prediction with multiple interacting agents
Akan et al. Stretchbev: Stretching future instance prediction spatially and temporally
US11755917B2 (en) Generating depth from camera images and known depth data using neural networks
JP2022117464A (en) Training method and multi-target tracking method for multi-target tracking model
CN105809718A (en) Object tracking method with minimum trajectory entropy
CN114998276A (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
KR102143034B1 (en) Method and system for tracking object in video through prediction of future motion of object
CN114820765A (en) Image recognition method and device, electronic equipment and computer readable storage medium
KR102628598B1 (en) Multi-object tracking apparatus and method using graph convolution neural network
CN113139696B (en) Trajectory prediction model construction method and trajectory prediction method and device
CN111652181B (en) Target tracking method and device and electronic equipment
CN116324902A (en) Detecting objects and determining the behavior of objects
Al Hakim 3d yolo: End-to-end 3d object detection using point clouds
Long et al. The geometric attention-aware network for lane detection in complex road scenes
CN116203971A (en) Unmanned obstacle avoidance method for generating countering network collaborative prediction
Djenouri et al. Hybrid RESNET and regional convolution neural network framework for accident estimation in smart roads
Messoussi et al. Vehicle detection and tracking from surveillance cameras in urban scenes

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant