CN114997297A - Target movement intention reasoning method and system based on multistage regional division - Google Patents


Info

Publication number
CN114997297A
Authority
CN
China
Prior art keywords
target
intention
motion
region
moving target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202210582031.7A
Other languages
Chinese (zh)
Other versions
CN114997297B (en)
Inventor
白成超
颜鹏
郭继峰
郑红星
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Harbin Institute of Technology
Original Assignee
Harbin Institute of Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Harbin Institute of Technology filed Critical Harbin Institute of Technology
Priority to CN202210582031.7A priority Critical patent/CN114997297B/en
Publication of CN114997297A publication Critical patent/CN114997297A/en
Application granted granted Critical
Publication of CN114997297B publication Critical patent/CN114997297B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06N: COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N 3/00: Computing arrangements based on biological models
    • G06N 3/02: Neural networks
    • G06N 3/08: Learning methods
    • Y: GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02: TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A: TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 30/00: Adapting or protecting infrastructure or their operation
    • Y02A 30/60: Planning or developing urban green infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Health & Medical Sciences (AREA)
  • Computing Systems (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Evolutionary Computation (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Molecular Biology (AREA)
  • Artificial Intelligence (AREA)
  • General Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Health & Medical Sciences (AREA)
  • Feedback Control In General (AREA)
  • Image Analysis (AREA)

Abstract

A target movement intention reasoning method and system based on multi-level region division relate to the technical field of moving-target movement intention reasoning and address the problem that prior-art methods cannot infer the target movement intention when prior knowledge of the target's movement intentions is lacking. The urban environment is divided into multiple levels and multiple regions, whose divided sub-regions form the motion intention set of a moving target; the acquired city motion trajectories of a number of moving targets are labeled in the motion intention set to construct a training data set; the training data set is discretized to construct feature map matrices; the feature map matrices are input into a multi-level target movement intention inference model based on a convolutional neural network for training; and the city motion trajectory of the moving target to be inferred is labeled in the motion intention set, discretized, and input into the trained inference model to obtain the probability that the moving target heads to each sub-region in each level of region. The invention can be applied to inference problems in which the destination of a moving target is unknown.

Description

Target movement intention reasoning method and system based on multi-stage region division
Technical Field
The invention relates to the technical field of target movement intention reasoning, and in particular to a target movement intention reasoning method and system based on multi-level region division.
Background
Target motion intention inference techniques mainly comprise intention inference methods based on generative models and methods based on discriminative models. Typical generative algorithms include intention inference based on Bayesian theory and intention inference based on hidden Markov models; typical discriminative algorithms include intention inference based on support vector machines and intention inference based on deep neural networks. In the Bayesian intention inference method, a likelihood probability model between the target's motion behavior and its motion intention is established first, and the motion intention is then inferred iteratively from the observed target motion state. Although this iterative mode has a clear inference architecture, it cannot make full use of the complete target motion trajectory. In the hidden-Markov-model intention inference method, the motion behavior of the target is modeled by two random processes, one hidden and unobservable, the other observable. In the support-vector-machine intention inference method, target motion intention inference is treated as a classification problem: the observed target motion states are separated by searching for the maximum-margin hyperplane, and the motion intention label corresponding to the observed state is taken as the inferred intention. Both the hidden-Markov-model and support-vector-machine methods have difficulty handling high-dimensional states, and therefore struggle to process complex, high-dimensional urban environment information.
The deep-neural-network-based intention inference method establishes an end-to-end intention inference network that directly processes the target motion trajectory and the raw urban environment information; through the layer-by-layer encoding of the deep neural network, a mapping between the target motion state and the target motion intention is built, so that the trained inference network obtains the target motion intention directly from the target motion state. Provided that training data are sufficient, the deep-neural-network-based method offers better intention inference performance than the other approaches.
In each of the above types of intent inference methods, the set of target movement intents is assumed to be known, i.e., the set of possible destinations for the target is assumed to be known. However, for moving objects in urban environments, their set of possible motion intentions is generally not available in advance.
Disclosure of Invention
To solve the problem that existing methods cannot infer the target movement intention when prior knowledge of the target movement intention set is lacking, the invention provides a target movement intention inference method and system based on multi-level region division.
According to an aspect of the present invention, there is provided a target motion intention inference method based on multi-level region division, the method including the steps of:
carrying out multi-stage multi-region division on the urban environment, wherein each divided sub-region forms a motion intention set of the moving target;
obtaining a plurality of moving target city motion tracks, and marking the tracks in the motion intention set to construct a training data set;
discretizing the training data set to construct a feature map matrix; the characteristic map matrix is used for representing the motion state characteristics of the moving target related to the urban environment;
inputting the characteristic map matrix into a multi-stage target movement intention inference model based on a convolutional neural network for training to obtain a trained multi-stage target movement intention inference model;
and labeling the city motion trajectory of the moving target to be inferred in the motion intention set, discretizing it, and inputting it into the trained multi-level target movement intention inference model to obtain the probability that the moving target heads to each sub-region in each level of region.
Further, the specific process of discretizing the training data set to construct feature map matrices is: converting the motion intention set labeled with motion trajectories into a grid map; in the grid map, assigning grid cells whose attribute is accessible building the value N1, assigning grid cells whose attribute is inaccessible building the value N2, and assigning grid cells containing the position points of each moving-target city motion trajectory in the training data set the value N3, with 0 < N1 < 1, 0 < N2 < 1, 0 < N3 < 1 and N1, N2, N3 pairwise distinct; a plurality of feature map matrices is thereby obtained.
Furthermore, the feature map matrices correspond to the assigned grid maps at a sequence of times, with N1 set to 0.2, N2 set to 0.6 and N3 set to 0.4. Let $M_t^f$ denote the feature map matrix at time t, defined as:

$$M_t^f(k,l)=\begin{cases}0.4, & c_{kl}=c(p_t)\\ 0.2, & c_{kl}\in C(B_{acc})\\ 0.6, & c_{kl}\in C(B_{inacc})\\ 0, & \text{otherwise}\end{cases}\qquad t=0,\;T_{inf},\;2T_{inf},\;\dots$$

where $M_t^f(k,l)$ is the element in the k-th row and l-th column of the matrix at time t; $c_{kl}$ is the grid cell located in the k-th row and l-th column; $C(B_{acc})$ is the set of grid cells occupied by all accessible building areas; $C(B_{inacc})$ is the set of grid cells occupied by all inaccessible building areas; $c(p_t)$ is the grid cell occupied by the target position $p_t$ at time t; and $T_{inf}$ is the period of inference of the target movement intention, i.e. every period $T_{inf}$ the movement intention of the target is inferred anew from the change in its motion state.
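The grid assignment described above can be sketched in plain Python. The function and argument names below are illustrative assumptions, not part of the patent; the values follow the text (N1 = 0.2, N2 = 0.6, N3 = 0.4).

```python
# Sketch of the feature map matrix assignment (assumed helper names;
# cell values N1=0.2, N2=0.6, N3=0.4 as specified in the text).

def build_feature_map(rows, cols, accessible, inaccessible, trajectory):
    """Return a rows x cols feature map matrix.

    accessible / inaccessible: sets of (row, col) grid cells occupied by
    buildings the target can / cannot enter.
    trajectory: grid cells occupied by the observed target positions.
    """
    m = [[0.0] * cols for _ in range(rows)]
    for (k, l) in accessible:
        m[k][l] = 0.2          # N1: accessible building
    for (k, l) in inaccessible:
        m[k][l] = 0.6          # N2: inaccessible building
    for (k, l) in trajectory:  # trajectory points assigned last
        m[k][l] = 0.4          # N3: observed target position
    return m

fmap = build_feature_map(4, 4,
                         accessible={(0, 0)},
                         inaccessible={(3, 3)},
                         trajectory=[(1, 1), (2, 1)])
```

All cells not covered by a building or a trajectory point keep the value 0, matching the "otherwise" case of the definition above.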
Further, the convolutional neural network-based multi-level target movement intention inference model is established as follows: for each level of region $Q_i$ representing a target movement intention, a corresponding level-i target movement intention inference network $f_{Q_i}$ based on a convolutional neural network is established. Its input is the feature matrix corresponding to region $Q_i$ in the feature map matrix, and its output is the movement intention of the moving target within region $Q_i$, i.e. the probability that the moving target heads to each sub-region $Q_i^k$ of $Q_i$, expressed as:

$$P(Q_i)=f_{Q_i}\left(M_{Q_i}^f;\;\theta_{Q_i}\right)$$

where $P(Q_i)$ denotes the probabilities that the moving target heads to each sub-region $Q_i^k$ of region $Q_i$; $M_{Q_i}^f$ is the feature map matrix corresponding to region $Q_i$; and $\theta_{Q_i}$ are the parameters of the level-i target movement intention inference network $f_{Q_i}$.
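The patent specifies a convolutional neural network for $f_{Q_i}$; as a minimal stand-in for the mapping $P(Q_i) = f_{Q_i}(M_{Q_i}^f;\,\theta_{Q_i})$, the sketch below scores each sub-region with a linear layer over the flattened feature map and normalizes with a softmax. All names and the linear scorer are illustrative assumptions, not the patented architecture.

```python
import math

def softmax(scores):
    """Normalize raw scores into a probability distribution."""
    mx = max(scores)
    exps = [math.exp(s - mx) for s in scores]
    total = sum(exps)
    return [e / total for e in exps]

def infer_intention(feature_map, weights):
    """Map a feature map matrix to P(Q_i): one probability per sub-region.

    weights: one weight vector per sub-region (a stand-in for the CNN
    parameters theta); each scores the flattened feature map.
    """
    flat = [v for row in feature_map for v in row]
    scores = [sum(w * x for w, x in zip(wk, flat)) for wk in weights]
    return softmax(scores)

# Toy example: 2x2 feature map, 4 sub-regions with dummy identity weights.
fm = [[0.4, 0.0], [0.2, 0.6]]
W = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0], [0, 0, 0, 1]]
p = infer_intention(fm, W)
```

In the actual model the linear scorer is replaced by convolutional layers, but the interface stays the same: a feature map matrix in, a probability per sub-region out.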
Further, during training the convolutional neural network-based multi-level target movement intention inference model determines the parameters $\theta_{Q_i}$ of the level-i target movement intention inference network $f_{Q_i}$ by optimizing the following loss function:

$$L(\theta_{Q_i})=-\frac{1}{N_D}\sum_{m=1}^{N_D}\sum_{j=1}^{M_m}\sum_{k=1}^{m_i}Y_i(m,k)\,\log\hat{P}_i^k(m,j)+\lambda\,\lVert\theta_{Q_i}\rVert^2$$

where $N_D$ is the number of moving-target city motion trajectories in the training data set; $M_m$ is the number of position points of the m-th trajectory; $Y_i(m,k)$ is a flag bit indicating whether the last position point of the m-th trajectory before it leaves the level-i region $Q_i$ lies in sub-region $Q_i^k$ ($Y_i(m,k)=1$ if so, otherwise $Y_i(m,k)=0$); $\hat{P}_i^k(m,j)$ is the probability, inferred with the level-i network $f_{Q_i}$ at the j-th position point of the m-th trajectory, that the target heads to sub-region $Q_i^k$; and $\lambda$ is a positive coefficient.
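Under the definitions above, the loss combines a cross-entropy over trajectory position points with an L2 penalty on the parameters. A minimal sketch, with the network output replaced by precomputed probabilities and all names assumed:

```python
import math

def intention_loss(Y, P_hat, theta, lam):
    """Cross-entropy + L2 loss for one level-i inference network.

    Y[m][k]       : 1 if trajectory m ends (leaves Q_i) in sub-region k.
    P_hat[m][j][k]: inferred probability that trajectory m heads to
                    sub-region k at its j-th position point.
    theta         : flattened network parameters; lam: positive coefficient.
    """
    n_traj = len(Y)
    ce = 0.0
    for m in range(n_traj):
        for j in range(len(P_hat[m])):
            for k in range(len(Y[m])):
                if Y[m][k] == 1:
                    ce -= math.log(P_hat[m][j][k])
    return ce / n_traj + lam * sum(t * t for t in theta)

# One trajectory, two position points, true sub-region k=0.
loss = intention_loss(Y=[[1, 0]],
                      P_hat=[[[0.8, 0.2], [0.9, 0.1]]],
                      theta=[0.5, -0.5], lam=0.01)
```

Because $Y_i(m,k)$ is the same label at every position point, the loss pushes the network toward the trajectory's final sub-region at every step along the observed trajectory.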
Further, the concrete process of discretizing the moving target city motion trail to be inferred comprises the following steps: in the grid map, assigning the grid unit with the attribute of being capable of entering the building as N1, assigning the grid unit with the attribute of being incapable of entering the building as N2, acquiring each position point of the moving target city motion trail to be inferred in real time, and assigning the grid unit with each position point as N3, so that the assigned grid map corresponding to different moments is updated in real time to serve as a feature map matrix.
According to another aspect of the present invention, there is provided a target motion intention inference system based on multi-level region division, the system including:
the movement intention set acquisition module, configured to perform multi-level multi-region division of the urban environment, each level of divided sub-regions forming the movement intention set of the moving target;
a training data acquisition module configured to acquire a plurality of moving target city motion tracks and label the tracks in the motion intention set to construct a training data set;
a feature map acquisition module configured to discretize the training data set to construct a feature map matrix; the characteristic map matrix is used for representing the motion state characteristics of the moving target related to the urban environment;
the intention reasoning model training module is configured to input the feature map matrix into a multi-stage target movement intention reasoning model based on a convolutional neural network for training to obtain a trained multi-stage target movement intention reasoning model;
and the movement intention inference module, configured to label the city motion trajectory of the moving target to be inferred in the motion intention set, discretize it, and input it into the trained multi-level target movement intention inference model to obtain the probability that the moving target heads to each sub-region in each level of region.
Further, the specific process by which the feature map acquisition module discretizes the training data set to construct feature map matrices is: converting the motion intention set labeled with motion trajectories into a grid map; in the grid map, assigning grid cells whose attribute is accessible building the value N1, assigning grid cells whose attribute is inaccessible building the value N2, and assigning grid cells containing the position points of each moving-target city motion trajectory in the training data set the value N3, with 0 < N1 < 1, 0 < N2 < 1, 0 < N3 < 1 and N1, N2, N3 pairwise distinct; a plurality of feature map matrices is thereby obtained.
Further, the intention inference model training module establishes the convolutional neural network-based multi-level target movement intention inference model as follows: for each level of region $Q_i$ representing a target movement intention, a corresponding level-i target movement intention inference network $f_{Q_i}$ based on a convolutional neural network is established. Its input is the feature matrix corresponding to region $Q_i$ in the feature map matrix, and its output is the movement intention of the moving target within region $Q_i$, i.e. the probability that the moving target heads to each sub-region $Q_i^k$ of $Q_i$, expressed as:

$$P(Q_i)=f_{Q_i}\left(M_{Q_i}^f;\;\theta_{Q_i}\right)$$

where $P(Q_i)$ denotes the probabilities that the moving target heads to each sub-region $Q_i^k$ of region $Q_i$; $M_{Q_i}^f$ is the feature map matrix corresponding to region $Q_i$; and $\theta_{Q_i}$ are the parameters of the level-i target movement intention inference network $f_{Q_i}$.
Further, during training the convolutional neural network-based multi-level target movement intention inference model in the intention inference model training module determines the parameters $\theta_{Q_i}$ of the level-i target movement intention inference network $f_{Q_i}$ by optimizing the following loss function:

$$L(\theta_{Q_i})=-\frac{1}{N_D}\sum_{m=1}^{N_D}\sum_{j=1}^{M_m}\sum_{k=1}^{m_i}Y_i(m,k)\,\log\hat{P}_i^k(m,j)+\lambda\,\lVert\theta_{Q_i}\rVert^2$$

where $N_D$ is the number of moving-target city motion trajectories in the training data set; $M_m$ is the number of position points of the m-th trajectory; $Y_i(m,k)$ is a flag bit indicating whether the last position point of the m-th trajectory before it leaves the level-i region $Q_i$ lies in sub-region $Q_i^k$ ($Y_i(m,k)=1$ if so, otherwise $Y_i(m,k)=0$); $\hat{P}_i^k(m,j)$ is the probability, inferred with the level-i network $f_{Q_i}$ at the j-th position point of the m-th trajectory, that the target heads to sub-region $Q_i^k$; and $\lambda$ is a positive coefficient.
The beneficial technical effects of the invention are as follows:
the method can represent the movement intention of the target through the established multi-level target movement area when the target movement intention set is unknown, and can realize the reasoning on the target movement intention under the condition of lacking the prior knowledge of the target movement intention set; a multilevel target motion intention reasoning network established based on a convolutional neural network can represent the mapping relation between a target motion state and a target motion intention, effective characteristic information is extracted from a fused target motion track and urban environment information, and the motion intention of a target can be accurately deduced in time through an observed target motion state after the intention reasoning network is trained.
Drawings
The present invention may be better understood by reference to the following description taken in conjunction with the accompanying drawings, which are incorporated in and form a part of this specification, and which are used to further illustrate preferred embodiments of the present invention and to explain the principles and advantages of the present invention.
Fig. 1 is a flowchart of a target movement intention inference method based on multi-level region division according to an embodiment of the present invention.
Fig. 2 is an exemplary diagram of multi-level and multi-region division of a city environment in the embodiment of the present invention.
Fig. 3 is an exemplary diagram of discretization of a target motion state and an urban environment in the embodiment of the present invention.
Fig. 4 is an exemplary diagram of a two-stage target motion intention inference network established based on a convolutional neural network in the embodiment of the present invention.
FIG. 5 is a diagram illustrating a loss value variation curve in a two-stage object motion intention inference network training process in an embodiment of the present invention.
FIG. 6 is a diagram illustrating an example of a process of reasoning two-stage motion intention of a moving object in a city according to an embodiment of the present invention.
Fig. 7 is a schematic structural diagram of a target movement intention inference system based on multi-level region division according to an embodiment of the present invention.
Detailed Description
In order that those skilled in the art will better understand the disclosure, exemplary embodiments or examples of the disclosure are described below with reference to the accompanying drawings. It is obvious that the described embodiments or examples are only some, but not all embodiments or examples of the invention. All other embodiments or examples obtained by a person of ordinary skill in the art based on the embodiments or examples of the present invention without any creative effort shall fall within the protection scope of the present invention.
The invention provides a target movement intention reasoning method and a target movement intention reasoning system based on multi-stage area division, wherein the target movement intention refers to the destination position of a target movement track. Firstly, carrying out multi-stage multi-region division on the urban environment where a target is located according to the urban environment characteristics of the target and the position of the target, and representing a possible movement intention set of the target by using each stage of divided regions; then, discretizing the observed target motion trail and the urban environment to obtain a characteristic state after the target motion trail and the urban environment are fused; and finally, establishing a multi-stage target motion intention inference network based on a convolutional neural network, training parameters of the multi-stage target motion intention inference network by using the collected target motion trajectory data set, and inferring the intention of the target to each stage of sub-regions by the trained intention inference network according to the observed target motion trajectory and the urban environment information.
The embodiment of the invention provides a target movement intention inference method based on multi-stage area division, which comprises the following steps of:
carrying out multi-stage multi-region division on the urban environment, wherein each divided sub-region forms a motion intention set of the moving target;
obtaining a plurality of moving target city motion tracks, and marking the tracks in a motion intention set to construct a training data set;
discretizing the training data set to construct a feature map matrix; the characteristic map matrix is used for representing the motion state characteristics of the moving target related to the urban environment;
inputting the characteristic map matrix into a multi-stage target movement intention inference model based on a convolutional neural network for training to obtain a trained multi-stage target movement intention inference model;
and labeling the city motion trajectory of the moving target to be inferred in the motion intention set, discretizing it, and inputting it into the trained multi-level target movement intention inference model to obtain the probability that the moving target heads to each sub-region in each level of region.
In this embodiment, optionally, discretizing the training data set to construct feature map matrices comprises: converting the motion intention set labeled with motion trajectories into a grid map; in the grid map, assigning grid cells whose attribute is accessible building the value N1, assigning grid cells whose attribute is inaccessible building the value N2, and assigning grid cells containing the position points of each moving-target city motion trajectory in the training data set the value N3, with 0 < N1 < 1, 0 < N2 < 1, 0 < N3 < 1 and N1, N2, N3 pairwise distinct; a plurality of feature map matrices is thereby obtained.
In this embodiment, optionally, the plurality of feature map matrices correspond to the assigned grid maps at a plurality of times, where N1 is set to 0.2, N2 to 0.6 and N3 to 0.4. Let $M_t^f$ denote the feature map matrix at time t, defined as:

$$M_t^f(k,l)=\begin{cases}0.4, & c_{kl}=c(p_t)\\ 0.2, & c_{kl}\in C(B_{acc})\\ 0.6, & c_{kl}\in C(B_{inacc})\\ 0, & \text{otherwise}\end{cases}\qquad t=0,\;T_{inf},\;2T_{inf},\;\dots$$

where $M_t^f(k,l)$ is the element in the k-th row and l-th column of the matrix at time t; $c_{kl}$ is the grid cell located in the k-th row and l-th column; $C(B_{acc})$ is the set of grid cells occupied by all accessible building areas; $C(B_{inacc})$ is the set of grid cells occupied by all inaccessible building areas; $c(p_t)$ is the grid cell occupied by the target position $p_t$ at time t; and $T_{inf}$ is the period of inference of the target movement intention, i.e. every period $T_{inf}$ the movement intention of the target is inferred anew from the change in its motion state.
In this embodiment, optionally, the multi-level target movement intention inference model based on the convolutional neural network is established as follows: for each level of region $Q_i$ representing a target movement intention, a corresponding level-i target movement intention inference network $f_{Q_i}$ based on a convolutional neural network is established. Its input is the feature matrix corresponding to region $Q_i$ in the feature map matrix, and its output is the movement intention of the moving target within region $Q_i$, i.e. the probability that the moving target heads to each sub-region $Q_i^k$ of $Q_i$, expressed as:

$$P(Q_i)=f_{Q_i}\left(M_{Q_i}^f;\;\theta_{Q_i}\right)$$

where $P(Q_i)$ denotes the probabilities that the moving target heads to each sub-region $Q_i^k$ of region $Q_i$; $M_{Q_i}^f$ is the feature map matrix corresponding to region $Q_i$; and $\theta_{Q_i}$ are the parameters of the level-i target movement intention inference network $f_{Q_i}$.
In this embodiment, optionally, during training the convolutional neural network-based multi-level target movement intention inference model determines the parameters $\theta_{Q_i}$ of the level-i target movement intention inference network $f_{Q_i}$ by optimizing the following loss function:

$$L(\theta_{Q_i})=-\frac{1}{N_D}\sum_{m=1}^{N_D}\sum_{j=1}^{M_m}\sum_{k=1}^{m_i}Y_i(m,k)\,\log\hat{P}_i^k(m,j)+\lambda\,\lVert\theta_{Q_i}\rVert^2$$

where $N_D$ is the number of moving-target city motion trajectories in the training data set; $M_m$ is the number of position points of the m-th trajectory; $Y_i(m,k)$ is a flag bit indicating whether the last position point of the m-th trajectory before it leaves the level-i region $Q_i$ lies in sub-region $Q_i^k$ ($Y_i(m,k)=1$ if so, otherwise $Y_i(m,k)=0$); $\hat{P}_i^k(m,j)$ is the probability, inferred with the level-i network $f_{Q_i}$ at the j-th position point of the m-th trajectory, that the target heads to sub-region $Q_i^k$; and $\lambda$ is a positive coefficient.
In this embodiment, optionally, the specific process of performing discretization processing on the moving target city motion trajectory to be inferred includes: in the grid map, a grid unit with an attribute of being capable of entering a building is assigned as N1, a grid unit with an attribute of being incapable of entering the building is assigned as N2, each position point of a moving target city motion trail to be inferred is obtained in real time, the grid unit with each position point is assigned as N3, and therefore the assigned grid map corresponding to different moments is updated in real time to serve as a feature map matrix.
Another embodiment of the present invention provides a target movement intention inference method based on multi-level region division, including the steps of:
the method comprises the following steps: according to the urban environment where the target is located and the current position of the target, carrying out multi-stage and multi-region division on the urban environment, wherein each stage of divided sub-regions form a movement intention set of the target;
according to the embodiment of the invention, for the urban environment omega where the moving target is located, the position of the target relative to the urban environment is determined according to the position of the target
Figure BDA0003664372900000082
And the self structural distribution characteristics of the urban environment divide the urban environment omega into n-level regions Q with progressively reduced occupied region areas 1 ,Q 2 ,…,Q n (ii) a Each level region Q to be divided simultaneously i (i-1, 2, …, n) into m i A plurality of non-overlapping sub-regions
Figure BDA0003664372900000083
In the multi-stage region after the urban environment Ω is divided, the relationship between the stages of regions may be expressed as follows:
Figure BDA0003664372900000084
in the formula:
Figure BDA0003664372900000085
representing the jth sub-zone of the partitioned level i-1 environment zones, i.e.
Figure BDA0003664372900000086
Through the above gradual multi-region division of the urban environment where the target is located, the motion intention set Q of the target in the urban environment can be represented as the following form:
Figure BDA0003664372900000087
in the formula: q represents a constructed multi-stage multi-region city moving target movement intention set, wherein each stage of region Q i (i-1, 2, …, n) to which different subregions belong
Figure BDA0003664372900000088
Representing all the movement intentions of the object at the current level.
The multi-stage and multi-region division of the urban environment where the target is located converts the inference of the movement intention of the target into the inference of the intention of the target going to each level of sub-regions of the division, namely the inference of the movement intention of the multi-stage target means that the target goes to each level of sub-regions according to the observation state theta
Figure BDA0003664372900000089
Intention of exercise of
Figure BDA00036643729000000810
Fig. 2 shows a dividing process of the urban environment Ω of the target, which divides each level of area into 4 sub-areas with the same size. It should be noted that the sub-regions may also have different sizes.
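The equal-size quadrant scheme of Fig. 2 can be sketched as a recursive split in which each level divides the sub-region containing the target again. The quadrant split and all names here are illustrative assumptions (the patent also allows sub-regions of different sizes):

```python
def split_quadrants(region):
    """Split an axis-aligned region (xmin, ymin, xmax, ymax) into 4 equal sub-regions."""
    xmin, ymin, xmax, ymax = region
    xm, ym = (xmin + xmax) / 2.0, (ymin + ymax) / 2.0
    return [(xmin, ymin, xm, ym), (xm, ymin, xmax, ym),
            (xmin, ym, xm, ymax), (xm, ym, xmax, ymax)]

def build_levels(env, target, n_levels):
    """Return levels Q_1..Q_n: each level lists the sub-regions of the
    region containing the target at the level above."""
    def contains(r, p):
        return r[0] <= p[0] < r[2] and r[1] <= p[1] < r[3]
    levels, region = [], env
    for _ in range(n_levels):
        subs = split_quadrants(region)
        levels.append(subs)
        region = next(r for r in subs if contains(r, target))
    return levels

# Two-level division of a 600 x 500 environment, target at (100, 100).
levels = build_levels((0, 0, 600, 500), target=(100, 100), n_levels=2)
```

Flattening all the sub-regions across levels yields the movement intention set Q described above.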
Step two: discretizing the observed target motion track and the urban environment, fusing the two kinds of information, and constructing a characteristic state for reasoning the target motion intention;
According to the embodiment of the invention, first the urban environment Ω where the moving target is located is discretized into C_X × C_Y grid cells of equal area, where C_X and C_Y denote the numbers of grid cells in the X-axis and Y-axis directions, respectively.

Then a feature map matrix M_t fusing the target motion trajectory with the urban environment is defined to represent the motion state features of the target associated with the urban environment. M_t is defined as follows:

M_t(k, l) =
  0.2, if c_kl ∈ C(B_acc)
  0.6, if c_kl ∈ C(B_inacc)
  0.4, if c_kl ∈ {c(p_{t−j·T_inf}), j = 0, 1, 2, …}
  0,   otherwise

where M_t(k, l) denotes the element in the kth row and lth column of the feature matrix M_t representing the motion state of the target at time t; c_kl denotes the grid cell located in the kth row and lth column of the discretized urban environment; C(B_acc) denotes the set of grid cells occupied by all accessible building areas; C(B_inacc) denotes the set of grid cells occupied by all inaccessible building areas; c(p_{t−j·T_inf}) denotes the grid cell occupied by the target position at time t−j·T_inf; and T_inf denotes the period of inference over the constructed multi-level target motion intentions, i.e. every period T_inf the target's motion intention is inferred once according to the change of the target's motion state.
Fig. 3 shows the process of discretizing the target motion trajectory and the urban environment. An urban environment area with an actual size of 600 m × 500 m is discretized into a map of 60 × 50 grid cells, i.e. each grid cell covers 10 m × 10 m, with C_X = 60 and C_Y = 50. The time interval between adjacent target-intention inference positions shown in the figure is 40 s, i.e. the target motion intention inference period T_inf = 40 s. As can be seen from the figure, after the target moves for 40 s its motion state changes noticeably, i.e. its position changes noticeably, so the motion intention of the target can be inferred from the changing trend of its motion state.
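A minimal sketch of the feature-map construction described above, using the cell values 0.2 / 0.6 / 0.4 given in the embodiment (the function and variable names are illustrative, not from the patent):

```python
import numpy as np

def build_feature_map(c_x, c_y, accessible_cells, inaccessible_cells, trajectory_cells):
    """Build the feature map matrix M_t fusing environment and trajectory.

    The three cell lists are iterables of (row, col) grid indices;
    trajectory_cells holds the cells occupied by the target at the
    inference times t, t - T_inf, t - 2*T_inf, ...
    """
    m = np.zeros((c_x, c_y))
    for k, l in accessible_cells:
        m[k, l] = 0.2   # cell occupied by an accessible building
    for k, l in inaccessible_cells:
        m[k, l] = 0.6   # cell occupied by an inaccessible building
    for k, l in trajectory_cells:
        m[k, l] = 0.4   # cell on the observed target trajectory
    return m

# 60 x 50 map as in Fig. 3, with a toy environment and a 3-point trajectory
M_t = build_feature_map(60, 50, [(0, 0), (0, 1)], [(10, 10)], [(5, 5), (9, 8), (12, 12)])
```

Trajectory cells are assigned last, so a trajectory point inside a building cell keeps the trajectory value; this ordering is a design assumption, since the patent does not specify overlap behaviour.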
Step three: for the multi-level target motion intention set constructed in step one, establish a target motion intention inference network corresponding to each level region;

According to the embodiment of the invention, for each level region Q_i (Q_i ∈ Q, i = 1, 2, …, n) established in step one to represent the motion intentions of the target, a corresponding target motion intention inference network f_i based on a convolutional neural network is respectively established. For the level-i region Q_i, the input feature state of the intention inference network f_i is the feature matrix M_t^{Q_i}, i.e. the part of the feature matrix M_t corresponding to the region Q_i, and its output is the motion intention of the target in the level-i region Q_i, i.e. the probability of the target heading for each sub-region q_i^1, q_i^2, …, q_i^{m_i} of Q_i, which can be expressed as:

P(Q_i) = f_i(M_t^{Q_i}; ω_i)

where P(Q_i) denotes the probability of the target heading for each sub-region q_i^j of region Q_i, and ω_i denotes the parameters of the level-i target motion intention inference network f_i.
FIG. 4 shows the two-level target motion intention inference network established for the first-level and second-level regions divided in FIG. 2. In the figure, the first-level and second-level target motion intention inference networks have similar structures; the difference is that the input of the first-level network f_1 is the feature matrix M_t^{Q_1} corresponding to the first-level region Q_1, while the input of the second-level network f_2 is the feature matrix M_t^{Q_2} corresponding to the second-level region Q_2.

The structure of the networks is illustrated taking the first-level target motion intention inference network f_1 as an example. It consists of 5 neural network layers. The first two layers are two-dimensional convolutional neural networks that extract features from the feature matrix M_t^{Q_1} by convolution operations: the first convolutional layer has 4 two-dimensional convolution kernels of size (2, 2) with a convolution sliding stride of 1, and the second convolutional layer also has 4 two-dimensional convolution kernels of size (2, 2), with a convolution sliding stride of 2. The last three layers are fully connected networks with 100, 100 and 4 neurons respectively, which further process the feature information extracted by the first two convolutional layers and finally output the probability of the target heading for each sub-region q_1^1, q_1^2, q_1^3, q_1^4.

In the established two-level target motion intention inference network structure, the activation functions of the convolutional layers and the first two fully connected layers are both ReLU, and the activation function of the output layer is Softmax, which restricts the network's output values to the range (0, 1) so that they conform to the value range of probabilities.
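The layer sizing above can be checked with the standard no-padding convolution output-size formula ⌊(n − k)/s⌋ + 1. A small sketch; the 60 × 50 input size is an assumption carried over from the Fig. 3 discretization, since the input size for Fig. 4 is not stated here:

```python
def conv_out(n, k, s):
    """Output length of a valid (no-padding) convolution along one axis."""
    return (n - k) // s + 1

h, w = 60, 50  # assumed input grid size (Fig. 3 discretization)

# layer 1: 4 kernels of size (2, 2), stride 1
h1, w1 = conv_out(h, 2, 1), conv_out(w, 2, 1)
# layer 2: 4 kernels of size (2, 2), stride 2
h2, w2 = conv_out(h1, 2, 2), conv_out(w1, 2, 2)

flat = 4 * h2 * w2  # features fed into the 100-100-4 fully connected stack
```

Under these assumptions the fully connected stack receives 4 × 29 × 24 = 2784 features; a different region size would change only this flattened dimension.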
Step four: determine the parameters ω_1, ω_2, …, ω_n of the multi-level target motion intention inference networks f_1, f_2, …, f_n established in step three;

According to the embodiment of the invention, the parameters ω_i of the level-i network f_i are determined by optimizing the following loss function:

L(ω_i) = −Σ_{m=1}^{N_D} Σ_{k=1}^{M_m} Σ_{j=1}^{m_i} Y_i^j(m) · log P_i^j(m, k; ω_i) + λ‖ω_i‖²

where N_D denotes the number of target motion trajectories in the training data; M_m denotes the number of position points of the mth motion trajectory in the training data; Y_i^j(m) is a flag bit indicating whether the last position point of the mth trajectory in the training data set before leaving the level-i region Q_i lies in the level-i sub-region q_i^j: if so, Y_i^j(m) = 1, otherwise Y_i^j(m) = 0; P_i^j(m, k; ω_i) denotes the probability, inferred with the level-i target motion intention inference network f_i at the kth position point of the mth trajectory in the training data set, that the target heads for the level-i sub-region q_i^j; and λ is a positive coefficient.
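A sketch of the loss above as a cross-entropy over trajectory points, with λ read as the weight of an assumed L2 penalty on the parameters (the array layout and the padding convention are illustrative assumptions):

```python
import numpy as np

def intention_loss(P, Y, theta, lam=1e-4):
    """Cross-entropy intention loss for one level's network (a sketch).

    P:     array (N_D, M, m_i), predicted sub-region probabilities for each
           trajectory m at each position point k (trajectories padded to M).
    Y:     array (N_D, m_i), one-hot label of the sub-region containing the
           last point before the trajectory leaves region Q_i.
    theta: flat parameter vector, penalized with an (assumed) L2 term.
    """
    # broadcast labels over position points: Y[m, j] * log P[m, k, j]
    ce = -np.sum(Y[:, None, :] * np.log(P + 1e-12))
    return ce + lam * np.sum(theta ** 2)

# toy check: 1 trajectory, 2 points, 4 sub-regions, uniform predictions
P = np.full((1, 2, 4), 0.25)
Y = np.array([[1.0, 0.0, 0.0, 0.0]])
loss = intention_loss(P, Y, np.zeros(3), lam=0.0)
```

With uniform predictions over 4 sub-regions and 2 position points, the cross-entropy part equals 2·log 4.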
Step five: infer the motion intention of the target with the multi-level target motion intention inference networks trained in step four, i.e. obtain, from the observed target motion state, the probability of the target heading for each sub-region in each level region, as shown in the following formula:

P(Q_i) = f_i(M_t^{Q_i}; ω_i*),  i = 1, 2, …, n

where ω_i* denotes the trained parameters of the level-i target motion intention inference network f_i.
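The per-period inference of step five can be sketched as a loop over the level networks; the `infer_once` helper and the toy mock networks below are illustrative assumptions, not the patent's models:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - np.max(x))
    return e / e.sum()

def infer_once(feature_maps, networks):
    """Run every level's intention network on its region's feature map and
    return, per level, the probability vector and the most likely sub-region."""
    results = []
    for M_region, f in zip(feature_maps, networks):
        p = f(M_region)                  # P(Q_i), sums to 1 via softmax
        results.append((p, int(np.argmax(p))))
    return results

# two mock level networks, each scoring 4 sub-regions from the map's mean value
mock_net = lambda M: softmax(np.array([M.mean(), 1.0, 0.0, -1.0]))
maps = [np.zeros((60, 50)), np.zeros((30, 25))]  # assumed region sizes
results = infer_once(maps, [mock_net, mock_net])
```

Each call yields one probability vector per level, matching the per-level outputs shown in Fig. 6.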
The following experiments further verify the technical effects of the present invention.

The correctness and rationality of the invention are verified by digital simulation. A virtual urban environment is first constructed in a Python environment, as shown in the first-level region of Fig. 2. The inference of the target motion intention refers to inferring, from the observed target motion trajectory and the urban environment, the sub-regions at every level in which the target's real destination position is located. The simulation software environment is Windows 10 + Python 3.7, and the hardware environment is an AMD Ryzen 5 3550H CPU + 16.0 GB RAM.

The experiment first trains the two-level target motion intention inference network established in Fig. 4; the loss curves during training are shown in Fig. 5. As can be seen from the figure, the training process spans 500 training cycles in total. In the early stage of training, i.e. when the training cycle is below 200, the loss values of the first-level and second-level intention inference networks decrease quickly as the training cycle increases, indicating that the networks are learning parameters rapidly; in the later stage, i.e. when the training cycle exceeds 200, the loss values decrease more slowly, indicating that the training process is gradually converging; at the end of training, i.e. when the training cycle exceeds 400, the loss values are essentially unchanged, indicating that training has essentially converged. This shows that the two-level target motion intention inference network established by the invention can learn stable network parameters from the training data.
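The convergence behaviour described for Fig. 5 can be reproduced in miniature with a stand-in model; the linear-softmax classifier, the random data, and the learning rate below are illustrative assumptions, not the patent's architecture:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy data: 100 feature vectors, labels over 4 sub-regions
X = rng.normal(size=(100, 8))
labels = rng.integers(0, 4, size=100)
Y = np.eye(4)[labels]                    # one-hot, like Y_i^j(m)

W = np.zeros((8, 4))
losses = []
for cycle in range(500):                 # 500 training cycles, as in Fig. 5
    Z = X @ W
    P = np.exp(Z - Z.max(axis=1, keepdims=True))
    P /= P.sum(axis=1, keepdims=True)    # softmax output layer
    losses.append(-np.mean(np.sum(Y * np.log(P + 1e-12), axis=1)))
    W -= 0.1 * (X.T @ (P - Y)) / len(X)  # gradient step on the cross-entropy
```

The initial loss equals log 4 (uniform predictions) and decreases over the cycles, mirroring the fast-early / slow-late pattern of the reported curves.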
Then the effectiveness of the method is verified by one inference run over the two-level target motion intentions represented by the first-level and second-level regions established in Fig. 2; the inference process is shown in Fig. 6. As can be seen from the figure, at t = 100 s the target is moving within one sub-region of the first-level region; the probabilities, inferred from the observed target trajectory, of the target heading for each sub-region of the first-level region and each sub-region of the second-level region are shown in Fig. 6(a), and the inference result identifies the first-level and second-level sub-regions the target is most likely heading for at this moment. As the target continues to move, at t = 200 s it is still within the same first-level sub-region, and the most likely first-level and second-level sub-regions are again inferred from the observed motion trajectory. At t = 300 s the target has just entered a new first-level sub-region, which is then inferred as the most likely first-level sub-region; however, because the target's trajectory inside this region is still short, its motion intention within the region cannot yet be inferred accurately. At t = 400 s the target is moving within this first-level sub-region, and the inferred most likely second-level sub-region is the sub-region containing the target's real destination position; the region where the real destination lies is therefore correctly inferred at this moment. At t = 467 s the target reaches its real destination position, confirming the inference result at t = 400 s.
As the above inference process shows, when the set of target destination positions is unknown, the method can reasonably characterize the target's motion intention set, infer the target's motion intentions at different moments from the observed trajectory, and infer the region containing the target's real destination position before the target reaches it. The method thus realizes motion intention inference for urban moving targets and provides a new technical approach to intention inference when the set of destination positions of the moving target is unknown.
Another embodiment of the present invention provides a target motion intention inference system based on multi-level region division; as shown in Fig. 7, the system includes:

a motion intention set acquisition module 10, configured to perform multi-level multi-region division of the urban environment, where the divided sub-regions at each level form the motion intention set of the moving target;

a training data acquisition module 20, configured to acquire a plurality of urban motion trajectories of moving targets and label the trajectories within the motion intention set to construct a training data set;

a feature map acquisition module 30, configured to discretize the training data set to construct feature map matrices, where a feature map matrix represents the motion state features of the moving target associated with the urban environment;

an intention inference model training module 40, configured to input the feature map matrices into a convolutional-neural-network-based multi-level target motion intention inference model for training, to obtain a trained multi-level target motion intention inference model;

and a motion intention inference module 50, configured to label the urban motion trajectory of the moving target to be inferred within the motion intention set, discretize it, input it into the trained multi-level target motion intention inference model, and obtain the probability of the moving target heading for each sub-region in each level region.
In this embodiment, optionally, the specific process by which the feature map acquisition module 30 discretizes the training data set to construct the feature map matrices is: convert the motion intention set labeled with the motion trajectories into a grid map; in the grid map, assign the value N1 to grid cells whose attribute is an accessible building, the value N2 to grid cells whose attribute is an inaccessible building, and the value N3 to the grid cells containing the position points of each moving target's urban motion trajectory in the training data set, where 0 < N1 < 1, 0 < N2 < 1, 0 < N3 < 1 and N1, N2, N3 are pairwise unequal; a plurality of feature map matrices are thereby obtained.
In this embodiment, optionally, the convolutional-neural-network-based multi-level target motion intention inference model in the intention inference model training module 40 is established as follows:

for each level region Q_i representing the motion intentions of the target, a corresponding level-i target motion intention inference network f_i based on a convolutional neural network is respectively established; its input is the feature matrix M_t^{Q_i} corresponding to region Q_i in the feature map matrix, and its output is the motion intention of the moving target in region Q_i, i.e. the probability of the moving target heading for each sub-region q_i^1, q_i^2, …, q_i^{m_i} of Q_i, expressed as:

P(Q_i) = f_i(M_t^{Q_i}; ω_i)

where P(Q_i) denotes the probability of the moving target heading for each sub-region q_i^j of region Q_i; M_t^{Q_i} denotes the feature map matrix corresponding to region Q_i; and ω_i denotes the parameters of the level-i target motion intention inference network f_i.
In this embodiment, optionally, during training the convolutional-neural-network-based multi-level target motion intention inference model in the intention inference model training module 40 determines the parameters ω_i of the level-i target motion intention inference network f_i by optimizing the following loss function:

L(ω_i) = −Σ_{m=1}^{N_D} Σ_{k=1}^{M_m} Σ_{j=1}^{m_i} Y_i^j(m) · log P_i^j(m, k; ω_i) + λ‖ω_i‖²

where N_D denotes the number of moving-target urban motion trajectories in the training data set; M_m denotes the number of position points of the mth moving target's urban motion trajectory in the training data set; Y_i^j(m) is a flag bit indicating whether the last position point of the mth trajectory in the training data set before leaving the level-i region Q_i lies in the level-i sub-region q_i^j: if so, Y_i^j(m) = 1, otherwise Y_i^j(m) = 0; P_i^j(m, k; ω_i) denotes the probability, inferred with the level-i target motion intention inference network f_i at the kth position point of the mth trajectory in the training data set, that the target heads for the level-i sub-region q_i^j; and λ is a positive coefficient.
The functions of the target motion intention inference system based on multi-level region division in this embodiment can be explained by the aforementioned target motion intention inference method based on multi-level region division; the detailed description of this embodiment is therefore omitted, and reference may be made to the above method embodiments.
While the invention has been described with respect to a limited number of embodiments, those skilled in the art, having benefit of this description, will appreciate that other embodiments can be devised which do not depart from the scope of the invention as described herein. The present invention has been disclosed in an illustrative rather than a restrictive sense, and the scope of the present invention is defined by the appended claims.

Claims (10)

1. A target movement intention inference method based on multi-stage region division is characterized by comprising the following steps:
carrying out multi-stage multi-region division on the urban environment, wherein each divided sub-region forms a motion intention set of the moving target;
obtaining a plurality of moving target city motion tracks, and marking the tracks in the motion intention set to construct a training data set;
discretizing the training data set to construct a feature map matrix; the characteristic map matrix is used for representing the motion state characteristics of the moving target related to the urban environment;
inputting the characteristic map matrix into a multi-stage target movement intention inference model based on a convolutional neural network for training to obtain a trained multi-stage target movement intention inference model;
and marking the city motion track of the moving target to be inferred in the motion intention set, carrying out discretization treatment, inputting the marked city motion track into a trained multi-stage target motion intention inference model, and acquiring the probability that the moving target goes to each subregion in each stage of region.
2. The method for reasoning the intention of the target movement based on the multi-level regional division as recited in claim 1, wherein the training data set is discretized to construct a feature map matrix by a specific process comprising: converting the motion intention set marked with the motion trail into a grid map; in the grid map, assigning a grid unit with an attribute of an accessible building as N1, assigning a grid unit with an attribute of an inaccessible building as N2, and assigning grid units with a plurality of position points of each moving target city motion trail in the training data set as N3; 0< N1<1, 0< N2<1, 0< N3<1, and N1, N2, N3 are all not equal; thereby obtaining a plurality of feature map matrices.
3. The method as claimed in claim 2, wherein the feature map matrices correspond to the assigned grid maps at a plurality of times, with N1 = 0.2, N2 = 0.6 and N3 = 0.4, and the matrix M_t denotes the feature map matrix at time t, defined as follows:

M_t(k, l) =
  0.2, if c_kl ∈ C(B_acc)
  0.6, if c_kl ∈ C(B_inacc)
  0.4, if c_kl ∈ {c(p_{t−j·T_inf}), j = 0, 1, 2, …}
  0,   otherwise

wherein M_t(k, l) denotes the element in the kth row and lth column of the matrix M_t at time t; c_kl denotes the grid cell located in the kth row and lth column; C(B_acc) denotes the set of grid cells occupied by all accessible building areas; C(B_inacc) denotes the set of grid cells occupied by all inaccessible building areas; c(p_{t−j·T_inf}) denotes the grid cell occupied by the target position at time t−j·T_inf; and T_inf denotes the period of inference of the target's motion intention, i.e. every period T_inf the target's motion intention is inferred once according to the change of the target's motion state.
4. The method for target motion intention inference based on multi-level region division according to claim 3, wherein the convolutional-neural-network-based multi-level target motion intention inference model is established as follows:

for each level region Q_i representing the motion intentions of the target, a corresponding level-i target motion intention inference network f_i based on a convolutional neural network is respectively established; its input is the feature matrix M_t^{Q_i} corresponding to region Q_i in the feature map matrix, and its output is the motion intention of the moving target in region Q_i, i.e. the probability of the moving target heading for each sub-region q_i^1, q_i^2, …, q_i^{m_i} of Q_i, expressed as:

P(Q_i) = f_i(M_t^{Q_i}; ω_i)

wherein P(Q_i) denotes the probability of the moving target heading for each sub-region q_i^j of region Q_i; M_t^{Q_i} denotes the feature map matrix corresponding to region Q_i; and ω_i denotes the parameters of the level-i target motion intention inference network f_i.
5. The method as claimed in claim 4, wherein during training the convolutional-neural-network-based multi-level target motion intention inference model determines the parameters ω_i of the level-i target motion intention inference network f_i by optimizing the following loss function:

L(ω_i) = −Σ_{m=1}^{N_D} Σ_{k=1}^{M_m} Σ_{j=1}^{m_i} Y_i^j(m) · log P_i^j(m, k; ω_i) + λ‖ω_i‖²

wherein N_D denotes the number of moving-target urban motion trajectories in the training data set; M_m denotes the number of position points of the mth moving target's urban motion trajectory in the training data set; Y_i^j(m) is a flag bit indicating whether the last position point of the mth trajectory in the training data set before leaving the level-i region Q_i lies in the level-i sub-region q_i^j: if so, Y_i^j(m) = 1, otherwise Y_i^j(m) = 0; P_i^j(m, k; ω_i) denotes the probability, inferred with the level-i target motion intention inference network f_i at the kth position point of the mth trajectory in the training data set, that the target heads for the level-i sub-region q_i^j; and λ is a positive coefficient.
6. The method for target motion intention inference based on multilevel regional division according to any of claims 1-5, characterized in that the specific process of discretizing the moving target city motion trajectory to be inferred comprises: in the grid map, assigning the grid unit with the attribute of being capable of entering the building as N1, assigning the grid unit with the attribute of being incapable of entering the building as N2, acquiring each position point of the moving target city motion trail to be inferred in real time, and assigning the grid unit with each position point as N3, so that the assigned grid map corresponding to different moments is updated in real time to serve as a feature map matrix.
7. A target movement intention inference system based on multistage regional division is characterized by comprising:
the system comprises a motion intention set acquisition module, a motion intention set generation module and a motion intention set generation module, wherein the motion intention set acquisition module is configured to divide the urban environment into multiple levels and multiple regions, and the divided sub-regions at each level form a motion intention set of a moving target;
a training data acquisition module configured to acquire a plurality of moving target city motion tracks and label the tracks in the motion intention set to construct a training data set;
a feature map acquisition module configured to discretize the training data set to construct a feature map matrix; the characteristic map matrix is used for representing the motion state characteristics of the moving target related to the urban environment;
the intention reasoning model training module is configured to input the feature map matrix into a multi-stage target movement intention reasoning model based on a convolutional neural network for training to obtain a trained multi-stage target movement intention reasoning model;
and the movement intention reasoning module is configured to label the city movement track of the moving target to be inferred in the movement intention set, carry out discretization treatment, input the marked city movement track into a trained multi-stage target movement intention reasoning model, and acquire the probability that the moving target goes to each sub-region in each stage of region.
8. The system of claim 7, wherein the feature map obtaining module discretizes the training data set to construct a feature map matrix by a specific process comprising: converting the motion intention set marked with the motion trail into a grid map; in the grid map, assigning a grid unit with an attribute of an accessible building as N1, assigning a grid unit with an attribute of an inaccessible building as N2, and assigning grid units with a plurality of position points of each moving target city motion trail in the training data set as N3; 0< N1<1, 0< N2<1, 0< N3<1, and N1, N2, N3 are all not equal; thereby obtaining a plurality of feature map matrices.
9. The system of claim 8, wherein the intention inference model training module establishes the convolutional-neural-network-based multi-level target motion intention inference model as follows:

for each level region Q_i representing the motion intentions of the target, a corresponding level-i target motion intention inference network f_i based on a convolutional neural network is respectively established; its input is the feature matrix M_t^{Q_i} corresponding to region Q_i in the feature map matrix, and its output is the motion intention of the moving target in region Q_i, i.e. the probability of the moving target heading for each sub-region q_i^1, q_i^2, …, q_i^{m_i} of Q_i, expressed as:

P(Q_i) = f_i(M_t^{Q_i}; ω_i)

wherein P(Q_i) denotes the probability of the moving target heading for each sub-region q_i^j of region Q_i; M_t^{Q_i} denotes the feature map matrix corresponding to region Q_i; and ω_i denotes the parameters of the level-i target motion intention inference network f_i.
10. The system of claim 9, wherein during training the convolutional-neural-network-based multi-level target motion intention inference model in the intention inference model training module determines the parameters ω_i of the level-i target motion intention inference network f_i by optimizing the following loss function:

L(ω_i) = −Σ_{m=1}^{N_D} Σ_{k=1}^{M_m} Σ_{j=1}^{m_i} Y_i^j(m) · log P_i^j(m, k; ω_i) + λ‖ω_i‖²

wherein N_D denotes the number of moving-target urban motion trajectories in the training data set; M_m denotes the number of position points of the mth moving target's urban motion trajectory in the training data set; Y_i^j(m) is a flag bit indicating whether the last position point of the mth trajectory in the training data set before leaving the level-i region Q_i lies in the level-i sub-region q_i^j: if so, Y_i^j(m) = 1, otherwise Y_i^j(m) = 0; P_i^j(m, k; ω_i) denotes the probability, inferred with the level-i target motion intention inference network f_i at the kth position point of the mth trajectory in the training data set, that the target heads for the level-i sub-region q_i^j; and λ is a positive coefficient.
CN202210582031.7A 2022-05-26 2022-05-26 Target movement intention reasoning method and system based on multi-level region division Active CN114997297B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210582031.7A CN114997297B (en) 2022-05-26 2022-05-26 Target movement intention reasoning method and system based on multi-level region division


Publications (2)

Publication Number Publication Date
CN114997297A true CN114997297A (en) 2022-09-02
CN114997297B CN114997297B (en) 2024-05-03


Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109165344A (en) * 2018-08-06 2019-01-08 百度在线网络技术(北京)有限公司 Method and apparatus for pushed information
CN111046919A (en) * 2019-11-21 2020-04-21 南京航空航天大学 Peripheral dynamic vehicle track prediction system and method integrating behavior intents
WO2021007812A1 (en) * 2019-07-17 2021-01-21 深圳大学 Deep neural network hyperparameter optimization method, electronic device and storage medium
CN112797995A (en) * 2020-12-17 2021-05-14 北京工业大学 Vehicle emergency navigation method with space-time characteristic situation information
CN113435644A (en) * 2021-06-25 2021-09-24 天津大学 Emergency prediction method based on deep bidirectional long-short term memory neural network
WO2022022721A1 (en) * 2020-07-31 2022-02-03 商汤集团有限公司 Path prediction method and apparatus, device, storage medium, and program
CN114049602A (en) * 2021-10-29 2022-02-15 哈尔滨工业大学 Escape target tracking method and system based on intention reasoning
CN114067552A (en) * 2021-11-08 2022-02-18 山东高速建设管理集团有限公司 Pedestrian crossing track tracking and predicting method based on roadside laser radar
CN114283576A (en) * 2020-09-28 2022-04-05 华为技术有限公司 Vehicle intention prediction method and related device

Non-Patent Citations (4)

* Cited by examiner, † Cited by third party
Title
LIU Qiuhui et al.: "Target tactical intention recognition method based on OKNN", Modern Defence Technology, vol. 49, no. 03, 31 December 2021 (2021-12-31) *
ZHOU Wangwang; YAO Peiyang; ZHANG Jieyong; WANG Xun; WEI Shuai: "Combat intention recognition of aerial targets based on deep neural network", Acta Aeronautica et Astronautica Sinica, no. 11, 27 August 2018 (2018-08-27) *
HUI Xiaolong: "Research on working-mode recognition and behavioral intention prediction technology for airborne phased-array radar", China Masters' Theses Full-text Database, no. 04, 15 April 2022 (2022-04-15) *
ZHAI Xiangyu: "Research on threat assessment methods for air-combat targets based on fully connected neural networks", China Masters' Theses Full-text Database, no. 11, 15 November 2020 (2020-11-15) *

Also Published As

Publication number Publication date
CN114997297B (en) 2024-05-03

Similar Documents

Publication Publication Date Title
Jin et al. A GAN-based short-term link traffic prediction approach for urban road networks under a parallel learning framework
Lin et al. A self-adaptive neural fuzzy network with group-based symbiotic evolution and its prediction applications
CN108985516B (en) Indoor path planning method based on cellular automaton
CN113825978B (en) Method and device for defining path and storage device
CN114815802A (en) Unmanned overhead traveling crane path planning method and system based on improved ant colony algorithm
CN109840595B (en) Knowledge tracking method based on group learning behavior characteristics
CN112550314A (en) Embedded optimization type control method suitable for unmanned driving, driving control module and automatic driving control system thereof
CN115563674B (en) Initial planar arrangement generating method and device, electronic equipment and storage medium
CN110414718A (en) A kind of distribution network reliability index optimization method under deep learning
CN116401941B (en) Prediction method for evacuation capacity of subway station gate
Abdellah et al. VANET traffic prediction using LSTM with deep neural network learning
Demertzis et al. Geo-AI to aid disaster response by memory-augmented deep reservoir computing
CN110908384B (en) Formation navigation method for distributed multi-robot collaborative unknown random maze
Jiang et al. Bi‐GRCN: A Spatio‐Temporal Traffic Flow Prediction Model Based on Graph Neural Network
CN114371711B (en) Robot formation obstacle avoidance path planning method
CN114707641A (en) Training method, device, equipment and medium for neural network model of double-view diagram
Lee et al. Extendable navigation network based reinforcement learning for indoor robot exploration
CN114997297A (en) Target movement intention reasoning method and system based on multistage regional division
Redlarski et al. Using river formation dynamics algorithm in mobile robot navigation
CN115826591B (en) Multi-target point path planning method based on neural network estimation path cost
CN116667369A (en) Distributed photovoltaic voltage control method based on graph convolution neural network
Malmir et al. Belief tree search for active object recognition
CN112861332B (en) Cluster dynamics prediction method based on graph network
CN114997306A (en) Target intention identification method based on dynamic Bayesian network
Zhang et al. Color clustering using self-organizing maps

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant