CN115222769A - Trajectory prediction method, device and agent

Trajectory prediction method, device and agent

Info

Publication number
CN115222769A
CN115222769A
Authority
CN
China
Prior art keywords
destination
candidate
image
vector
target
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210674327.1A
Other languages
Chinese (zh)
Inventor
杨一博
夏北浩
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Jingdong Technology Information Technology Co Ltd
Original Assignee
Jingdong Technology Information Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Jingdong Technology Information Technology Co Ltd filed Critical Jingdong Technology Information Technology Co Ltd
Priority to CN202210674327.1A priority Critical patent/CN115222769A/en
Publication of CN115222769A publication Critical patent/CN115222769A/en
Pending legal-status Critical Current

Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/246 - Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06N - COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 - Computing arrangements based on biological models
    • G06N3/02 - Neural networks
    • G06N3/08 - Learning methods
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 - Image enhancement or restoration
    • G06T5/50 - Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/10 - Segmentation; Edge detection
    • G06T7/11 - Region-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 - Image analysis
    • G06T7/20 - Analysis of motion
    • G06T7/215 - Motion-based segmentation
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20081 - Training; Learning
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20084 - Artificial neural networks [ANN]
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/20 - Special algorithmic details
    • G06T2207/20212 - Image combination
    • G06T2207/20221 - Image fusion; Image merging
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00 - Indexing scheme for image analysis or image enhancement
    • G06T2207/30 - Subject of image; Context of image processing
    • G06T2207/30241 - Trajectory

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Multimedia (AREA)
  • Biophysics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Health & Medical Sciences (AREA)
  • Computational Linguistics (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

Embodiments of the disclosure disclose a trajectory prediction method, a trajectory prediction apparatus, and an agent. One embodiment of the method comprises: acquiring an image of the area where an object is located, and acquiring the historical motion trajectory of the object moving to its current position; determining candidate destinations of the object from the image, wherein the candidate destinations are located at the boundary of the image; determining a target destination of the object from the candidate destinations according to the historical motion trajectory; and predicting the trajectory along which the object moves from the current position to the target destination according to the historical motion trajectory and the target destination. This embodiment realizes full-scene prediction of the trajectory of the moving object.

Description

Trajectory prediction method, device and agent
Technical Field
Embodiments of the disclosure relate to the field of computer technology, and in particular to a trajectory prediction method, a trajectory prediction apparatus, and an agent.
Background
Trajectory prediction aims to predict the future position of a moving object (such as an agent) according to the position of the moving object in a certain time period and the change of the external physical environment. Trajectory prediction may be widely applied in various fields such as autonomous driving, robot navigation, behavior analysis and understanding, target detection/tracking, and the like.
Currently, research on trajectory prediction focuses on how to model interactions and how to describe or characterize the predicted trajectories. These interactions include the interactions between moving objects and their surrounding physical environment. The essence of modeling the interactions is to obtain a representation of the motion factors that affect the moving object.
Existing trajectory prediction methods generally use a convolutional neural network or the like to extract features from images of the moving object, introduce an attention mechanism or the like to select the surrounding moving objects that most strongly influence the object of interest, encode the motion trajectory of the moving object with a long short-term memory network or the like, and then directly predict the motion trajectory of the moving object with a generative adversarial network, a variational autoencoder, or the like.
Disclosure of Invention
The embodiment of the disclosure provides a track prediction method, a track prediction device and an agent.
In a first aspect, an embodiment of the present disclosure provides a trajectory prediction method, including: acquiring an image of an area where an object is located, and acquiring a historical motion track of the object moving to the current position; determining a candidate destination for the object from the image, wherein the candidate destination is located at a boundary of the image; determining a target destination of the object from the candidate destinations according to the historical motion trail; and predicting the track of the object moving from the current position to the target destination according to the historical motion track and the target destination.
In a second aspect, an embodiment of the present disclosure provides a trajectory prediction apparatus, including: the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is configured to acquire an image of an area where an object is located and acquire a historical motion track of the object moving to the current position; a candidate destination determining unit configured to determine a candidate destination of the object from the image, wherein the candidate destination is located at a boundary of the image; a target destination determination unit configured to determine a target destination of the object from the candidate destinations according to the historical motion trajectory; and the track prediction unit is configured to predict the track of the object moving from the current position to the target destination according to the historical motion track and the target destination.
In a third aspect, an embodiment of the present disclosure provides an electronic device, including: one or more processors; and a storage device for storing one or more programs; the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method as described in any implementation of the first aspect.
In a fourth aspect, embodiments of the present disclosure provide a computer-readable medium on which a computer program is stored, which computer program, when executed by a processor, implements the method as described in any implementation manner of the first aspect.
According to the trajectory prediction method, the trajectory prediction apparatus, and the agent provided by the embodiments of the disclosure, a target destination is first selected from the candidate destinations, and the motion trajectory of the moving object from its current position to that target destination is then predicted, so that full-scene prediction for the moving object can be realized.
Drawings
Other features, objects and advantages of the disclosure will become more apparent upon reading of the following detailed description of non-limiting embodiments thereof, made with reference to the accompanying drawings in which:
FIG. 1 is an exemplary system architecture diagram in which one embodiment of the present disclosure may be applied;
FIG. 2 is a flow diagram of one embodiment of a trajectory prediction method according to the present disclosure;
FIG. 3 is a schematic diagram of an application scenario of a trajectory prediction method according to an embodiment of the present disclosure;
FIG. 4 is a flow diagram of yet another embodiment of a trajectory prediction method according to the present disclosure;
FIG. 5 is a flow chart of yet another embodiment of a trajectory prediction method according to the present disclosure;
FIG. 6 is a schematic block diagram of one embodiment of a trajectory prediction device according to the present disclosure;
FIG. 7 is a schematic block diagram of an electronic device suitable for use in implementing embodiments of the present disclosure.
Detailed Description
The present disclosure is described in further detail below with reference to the accompanying drawings and examples. It is to be understood that the specific embodiments described herein are merely illustrative of the relevant invention and not restrictive of the invention. It should be noted that, for convenience of description, only the portions related to the present invention are shown in the drawings.
It should be noted that, in the present disclosure, the embodiments and the features of the embodiments may be combined with each other without conflict. The present disclosure will be described in detail below with reference to the accompanying drawings in conjunction with embodiments.
Fig. 1 illustrates an exemplary architecture 100 to which embodiments of the trajectory prediction method or apparatus of the present disclosure may be applied.
As shown in fig. 1, the system architecture 100 may include agents 101, 102, a network 103, and a server 104. The network 103 serves as a medium providing communication links between the agents 101, 102 and the server 104. Network 103 may include various types of connections, such as wired or wireless communication links, or fiber optic cables, among others.
Agents 101, 102 interact with server 104 over network 103 to receive or transmit various messages (e.g., images, predicted trajectories, control signals, etc.). Agents 101, 102 may be various agents with motion capabilities. An agent generally has properties such as autonomy, reactivity, proactivity, sociality, and the ability to evolve. For example, agents 101, 102 may be smart cars, smart robots, and so on.
The server 104 may be a server that provides various services to the agents 101, 102, for example, predicting a motion trajectory of the agents 101, 102 to assist the motion of the agents 101, 102.
It should be noted that the trajectory prediction method provided by the embodiment of the present disclosure is generally executed by the server 104, and accordingly, the trajectory prediction apparatus is generally disposed in the server 104.
The server 104 may be hardware or software. When the server 104 is hardware, it may be implemented as a distributed server cluster composed of multiple servers, or may be implemented as a single server. When the server 104 is software, it may be implemented as multiple pieces of software or software modules (e.g., multiple pieces of software or software modules used to provide distributed services), or as a single piece of software or software module. And is not particularly limited herein. It is noted that the server 104 may also be located in the agents 101, 102.
It should be understood that the number of agents, networks, and servers in FIG. 1 is illustrative only. There may be any number of agents, networks, and servers, as desired for an implementation.
With continued reference to FIG. 2, a flow 200 of one embodiment of a trajectory prediction method according to the present disclosure is shown. The trajectory prediction method comprises the following steps:
step 201, acquiring an image of an area where an object is located, and acquiring a historical motion track of the object moving to the current position.
In the present embodiment, the object may refer to various objects having motion capability. For example, the objects may be the agents 101, 102 shown in fig. 1. As another example, the object may be a person. The area where the object is located may refer to a space that contains the position where the object is currently located. The size or range of the area can be set flexibly according to the actual application scenario or application requirements.
The image of the area where the object is located may be obtained in various ways, and the execution subject of the trajectory prediction method (such as the server 104 shown in fig. 1) may acquire it locally or from another device. For example, the image may be captured by an image acquisition device (such as a camera) installed in the area, in which case the image acquisition device may send the captured image to the execution subject. As another example, the image may be captured by an image acquisition device carried by the object, in which case the object may send the captured image to the execution subject.
The motion trajectory of the object may be represented in various ways. For example, the motion trajectory may be expressed using a position coordinate sequence in which position coordinates of the object at respective time instants are formed in chronological order. The historical motion trajectory may refer to a motion trajectory of the object moving to the current location. Specifically, according to an actual application scenario, the historical motion trajectory may refer to a motion trajectory of an object within a preset time period before the object moves to the current position, where the preset time period may be set according to an application requirement.
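As a purely illustrative sketch (the type alias and helper function are assumptions, not part of the disclosure), a trajectory represented as a chronological coordinate sequence could be held as follows:

```python
# A minimal sketch: a motion trajectory as a chronological sequence of (x, y) positions.
from typing import List, Tuple

Trajectory = List[Tuple[float, float]]  # positions ordered by time

def history_up_to_now(trajectory: Trajectory, k: int) -> Trajectory:
    """Return the last k recorded positions, i.e. the historical motion
    trajectory of the object moving to its current position."""
    return trajectory[-k:]

observed: Trajectory = [(0.0, 0.0), (0.4, 0.1), (0.9, 0.3), (1.3, 0.6)]
print(history_up_to_now(observed, 3))  # [(0.4, 0.1), (0.9, 0.3), (1.3, 0.6)]
```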
The execution subject may obtain the historical motion trajectory of the object moving to the current position locally or from another device. For example, the object may record its own motion trajectory while moving; in this case, the object may send the historical motion trajectory of its movement to the current position to the execution subject. As another example, while the object moves, monitoring devices in the areas it passes through may record its motion trajectory; in this case, a monitoring device may send the historical motion trajectory of the object moving to the current position to the execution subject.
At step 202, candidate destinations for the object are determined from the image.
In this embodiment, a candidate destination may refer to a possible exit through which any object in the region presented in the image may leave that region; hence, candidate destinations are located at the boundary of the image. Depending on the scene, the number of candidate destinations determined in the image may be one or at least two.
In particular, various methods may be employed to determine a candidate destination for an object from an image. For example, the image may be image-segmented by a preset region size, and then the segmented image region located at the boundary of the image may be determined as a candidate destination. The size of the preset area can be set according to the size of the object.
Generally, each candidate destination is an image area, and thus, the candidate destination can be represented in various ways. For example, the coordinates of each vertex of the image region corresponding to the candidate destination may be used for the representation.
For another example, a candidate destination may be represented by:
d = (x_min, y_min, x_max, y_max, θ_min, θ_max)
where "d" denotes a candidate destination; "x_min" and "y_min" denote the smallest abscissa and ordinate of the candidate destination, and "x_max" and "y_max" denote its largest abscissa and ordinate; "θ_min" and "θ_max" denote, respectively, the minimum and maximum angle between the position where the object is currently located and the candidate destination, for example θ_min = min arctan((y - y_0)/(x - x_0)) and θ_max = max arctan((y - y_0)/(x - x_0)) taken over the points (x, y) of the candidate destination; and "x_0" and "y_0" denote the abscissa and ordinate of the position where the object is currently located. It should be noted that the coordinate system can be set flexibly according to the actual application scenario.
Step 203, determining the target destination of the object from the candidate destinations according to the historical motion trail.
In the present embodiment, the target destination may refer to the exit through which the above-described object leaves the region presented in the image. Specifically, various methods may be employed to determine the target destination of the object from the candidate destinations according to its historical motion trajectory. For example, according to the motion direction of the historical motion trajectory of the object, a candidate destination lying in that motion direction may be determined from among the candidate destinations as the target destination of the object.
It should be noted that, when there is one candidate destination, the object can only leave the area where the image is presented from the candidate destination, and therefore, it can be directly determined that the candidate destination is the target destination of the object.
And step 204, predicting the track of the object moving from the current position to the target destination according to the historical motion track and the target destination.
In the present embodiment, various prediction methods may be employed to determine the trajectory along which the object moves from the current position to the target destination according to the historical motion trajectory of the object and the target destination. For example, motion trajectories of objects in the region presented in the image, from entering the region to leaving it, may be collected and stored in advance. A stored motion trajectory that contains both the current position of the object and the target destination can then be looked up and taken as the trajectory along which the object moves from the current position to the target destination.
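As an illustration only (the data layout, the tolerance, and the function name are assumptions, not part of the disclosure), such a lookup over pre-stored trajectories might look like:

```python
def lookup_trajectory(stored_trajectories, current_position, target_destination, tol=1.0):
    """Sketch: return a pre-stored trajectory that passes near the current
    position and ends inside the target destination region, given as a
    bounding box (x_min, y_min, x_max, y_max)."""
    x_min, y_min, x_max, y_max = target_destination
    cx, cy = current_position
    for traj in stored_trajectories:
        end_x, end_y = traj[-1]
        ends_in_destination = x_min <= end_x <= x_max and y_min <= end_y <= y_max
        passes_current = any(abs(x - cx) <= tol and abs(y - cy) <= tol for x, y in traj)
        if ends_in_destination and passes_current:
            return traj
    return None

stored = [[(5.0, 5.0), (3.0, 8.0), (0.5, 11.0)], [(5.0, 5.0), (8.0, 3.0)]]
print(lookup_trajectory(stored, current_position=(3.2, 7.8),
                        target_destination=(0.0, 10.0, 1.0, 12.0)))
```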
For another example, a trajectory of the object moving from the current position to the target destination may be predicted according to the historical movement trajectory and the target destination by using a pre-trained trajectory prediction model (e.g., implemented by using an Encoder-Decoder framework). The specific training method can complete the training of the trajectory prediction model by using various existing training methods according to the pre-collected or obtained training samples and the preset loss function. For example, the loss function may represent the difference between the trajectory predicted by the trajectory prediction model and the actual trajectory in the training samples. As an example, the loss function may be expressed as follows:
L = Σ_(t=1..T) ||Ŷ_t - Y_t||
where "L" denotes the loss function, "T" denotes the prediction period, "Ŷ" denotes the trajectory predicted by the trajectory prediction model, and "Y" denotes the actually collected trajectory corresponding to the predicted trajectory.
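A minimal sketch of such a loss is given below, assuming (for illustration only) that the per-step difference is an L2 distance averaged over the prediction period:

```python
import torch

def trajectory_loss(predicted: torch.Tensor, actual: torch.Tensor) -> torch.Tensor:
    """Sketch of a loss measuring the difference between the predicted trajectory
    and the actually collected trajectory; both tensors have shape (T, 2)."""
    return (predicted - actual).norm(dim=-1).mean()

Y_hat = torch.randn(12, 2)            # predicted trajectory over T = 12 steps
Y = Y_hat + 0.1 * torch.randn(12, 2)  # "ground-truth" trajectory for illustration
print(trajectory_loss(Y_hat, Y))
```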
In some optional implementations of this embodiment, the image may be segmented to obtain an image region set, and then it may be determined whether each image region in the image region set is located at a boundary of the image, and the image region located at the boundary of the image may be determined as a candidate destination.
Specifically, the image may be segmented into a set of image regions by using various existing image segmentation methods (such as semantic segmentation), and then various methods may be used to determine whether each segmented image region is located at the boundary of the image (for example, by checking whether the region contains coordinates whose abscissa and/or ordinate is 0). If an image region is located at the boundary of the image, it may be regarded as belonging to the candidate destinations; if it is not located at the boundary of the image, it may be regarded as not belonging to the candidate destinations.
As an example, the candidate destination may be determined by the following formula:
F = Se(I)
D = C(F, l)
where "Se" denotes a pre-trained image segmentation model, "I" denotes the image, "F" denotes the image segmentation result, i.e., the set of image regions, and "D" denotes the candidate destinations. "l" denotes the label of each segmented image region, which indicates whether or not the region is located at the boundary of the image, and "C" denotes a classifier that determines whether an image region belongs to the candidate destinations according to its label.
The image segmentation method can segment image regions quickly and accurately, thereby improving the efficiency and accuracy of the subsequent target-destination determination and trajectory prediction.
With continued reference to fig. 3, fig. 3 is an exemplary application scenario 300 of the trajectory prediction method according to the present embodiment. In the application scenario of fig. 3, an image 302 of the region where the moving object 301 is currently located may be obtained, and a segmentation model 303 may then be used to determine the candidate destinations 304 in the image 302, namely candidate destinations "A", "B", "C", "D", and "E". A target destination determination model 306 may then be used to determine the target destinations 307 of the object 301, namely target destinations "C", "D", and "E", from the candidate destinations 304 according to the historical motion trajectory 305 of the object 301 in a preset period before the current time. Thereafter, the motion trajectories of the object 301 from the current position to target destinations "C", "D", and "E", respectively, may be determined from the historical motion trajectory 305 using the trajectory prediction model 308.
Existing trajectory prediction methods usually perform short-term prediction, for example predicting the motion trajectory of the next 12 frames from the historical motion trajectory of the previous 8 frames, with each frame lasting, say, 0.4 seconds; they do not predict over the whole scene (i.e., over the entire area of the scene where the moving object is located). In addition, the trajectories predicted by conventional methods are weighted equally, that is, every predicted trajectory is considered equally likely, so conventional methods cannot provide trajectory predictions adapted to different scenes, different choices, and the like.
The trajectory prediction method provided by the above embodiment of the present disclosure can be applied to trajectory prediction over the whole scene. The underlying idea is to first determine the target destination of a moving object and then predict the trajectory of the moving object to that target destination. Suppose the possible destinations of a moving object are g_i, i = 1, 2, …, n, where "n" is the number of destinations; then the historical motion trajectory "X", the destination "g", and the predicted trajectory "Y" satisfy:
P(Y, g|X) = P(Y|X, g) P(g|X)
Based on this, P(Y|X) can be rewritten as:
P(Y|X) = Σ_(i=1..n) P(Y|X, g_i) P(g_i|X)
based on the idea, the target destination is screened from the candidate destinations, and then the motion trail of the moving object moving from the current position to the target destination is predicted, so that full-scene prediction of the moving object can be realized, rather than short-time prediction.
With further reference to FIG. 4, a flow 400 of yet another embodiment of a trajectory prediction method is illustrated. The process 400 of the trajectory prediction method includes the following steps:
step 401, acquiring an image of an area where an object is located, and acquiring a historical motion track of the object moving to the current location.
Step 402, determining candidate destinations of the object from the image.
And step 403, obtaining the liveness of each candidate destination.
In the present embodiment, the liveness of the candidate destination may refer to the probability of various moving objects appearing in the candidate destination. Specifically, the liveness of each candidate destination may be determined in a variety of ways. For example, the number of moving objects appearing in each candidate destination may be counted, and the liveness of the candidate destination may be determined according to the number, and generally, the number of moving objects appearing is positively correlated with the liveness.
Alternatively, the liveness of a candidate destination may refer to its liveness within a target period. The target period may be determined according to the actual application scenario. For example, the target period may be a preset period of time immediately before the current time. As another example, a day may be divided into a number of periods in advance (for example, one-hour periods), and the period containing the current time may then be taken as the target period.
Because in some scenarios, each candidate destination is often more active for a period of time and less active for another period of time, differentiating the activity of candidate destinations by period of time may more accurately express the activity of each candidate destination.
As an example, the traffic density of each candidate destination may first be calculated as follows:
f_i = 1 if any object position q_t falls within candidate destination d_i during time period "t", and f_i = 0 otherwise
where "f_i" denotes the traffic density of candidate destination "i", "q_t" denotes the coordinates of the objects appearing in the region presented in the image during time period "t", and "d_i" denotes the position of candidate destination "i". That is, if any object appears in the candidate destination during time period "t", the traffic density of that candidate destination is 1; otherwise it is 0.
Then, the liveness of each candidate destination may be obtained, for example, by normalizing its traffic density over all candidate destinations:
s_i = f_i / Σ_(j=1..n) f_j
where "s_i" denotes the liveness of candidate destination "i" and "n" denotes the number of candidate destinations.
And step 404, determining a target destination of the object from the candidate destinations according to the historical motion trail and the liveness of each candidate destination.
In the present embodiment, a higher liveness of a candidate destination indicates a higher probability that the candidate destination has historically served as a target destination; moreover, since objects often tend to follow the crowd, a higher liveness of a candidate destination may also indicate a higher probability that it is the target destination of the current object.
Based on this, the target destination of the object may be determined by various methods from the historical motion trajectory of the object and the liveness of each candidate destination. For example, the candidate destinations lying in the motion direction of the historical motion trajectory may first be identified, and those whose liveness exceeds a preset threshold may then be selected from them as the target destinations.
And step 405, predicting the track of the object moving from the current position to the target destination according to the historical motion track and the target destination.
In some optional implementations of the present embodiment, the target destination of the object may be determined from the candidate destinations, according to the historical motion trajectory and the liveness of each candidate destination, by the following steps:
step one, determining a position vector of each candidate destination.
In this step, the position vector of a candidate destination may be used to represent the position of the candidate destination in the image. In particular, various vector representation methods may be employed to determine the position vector of each candidate destination. For example, various existing feature extraction models may be used to extract the position feature of each candidate destination in the image, and the extracted feature vector representing the position feature may be used as the position vector of that candidate destination.
Step two, updating the position vector of each candidate destination with the liveness of that candidate destination to obtain an updated position vector.
in this step, various methods may be adopted to update the location vector of each candidate destination with the liveness of the candidate destination to obtain an updated location vector. The updated location vector may fuse the location features and liveness features of the candidate destinations.
For example, the liveness of the candidate destination and its position vector may be concatenated directly to obtain the updated position vector. As another example, the product of the liveness of the candidate destination and the position vector may be computed to obtain the updated position vector.
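Both options can be sketched as follows; the function name and the "mode" switch are illustrative assumptions.

```python
import torch

def update_position_vector(position_vec: torch.Tensor, liveness: float, mode: str = "scale"):
    """Sketch of the two fusion options above: scale the position vector by the
    scalar liveness, or concatenate the liveness onto it."""
    if mode == "scale":
        return liveness * position_vec
    return torch.cat([position_vec, position_vec.new_tensor([liveness])])

vec = torch.tensor([0.2, -1.0, 0.5])
print(update_position_vector(vec, 0.7))                  # scaled position vector
print(update_position_vector(vec, 0.7, mode="concat"))   # position vector with liveness appended
```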
Step three, determining a feature vector of the historical motion trajectory.
in this step, the feature vector of the historical motion trajectory of the object may be determined using various existing feature extraction methods. For example, various feature extraction models may be used to extract features of the historical motion trajectory, thereby obtaining a feature vector representing the features of the historical motion trajectory.
Step four, fusing the updated position vector of each candidate destination with the feature vector to obtain a fusion vector of the candidate destination;
in this step, various vector fusion methods can be adopted to realize fusion between the updated position vector of the candidate destination and the feature vector of the historical motion trajectory of the object according to the actual application requirements.
For example, the sum or product of the updated position vector of the candidate destination and the feature vector of the historical motion trajectory of the object may be computed for fusion. As another example, the updated position vector of the candidate destination may be directly concatenated with the feature vector of the historical motion trajectory of the object.
And step five, determining, from the fusion vector of each candidate destination, the probability that the candidate destination is the target destination of the object, and determining the target destination of the object according to these probabilities.
In this step, since the fusion vector fuses the updated position vector of the destination candidate and the feature vector of the historical motion trajectory of the object, and the updated position vector can represent the position feature and the liveness feature of the destination candidate, the fusion vector contains the position feature and the liveness feature of the destination candidate and the feature of the historical motion trajectory of the object.
From the fusion vector, various methods may be employed to determine the probability that each candidate destination is the target destination of the object. For example, a pre-trained target destination determination model may be utilized to determine a probability that each candidate destination is the target destination of the object based on the fused vector for that candidate destination. The target destination determination model can determine the probability that the candidate destination belongs to the target destination according to the fusion vector, namely according to the position characteristic and the activeness characteristic of the candidate destination and the characteristic of the historical motion trail of the object.
After obtaining the probability that each candidate destination is the target destination of the object, the target destination of the object may be determined according to the probability. For example, a preset number of candidate destinations may be selected as the target destinations in the order of the probability from large to small. The preset number can be preset according to actual application requirements or application scenes.
As an example, the feature vector of the historical motion trajectory of the object may be extracted first by the following formula:
h_t = NN(h_{t-1}, MLP(q_t))
Here NN (neural network) may be any of various neural network models capable of processing sequence data, such as an RNN (Recurrent Neural Network), an LSTM (Long Short-Term Memory) network, or a GRU (Gated Recurrent Unit); the specific model can be chosen flexibly according to the actual application requirements. "q_t" denotes the historical motion trajectory (e.g., a coordinate sequence consisting of the coordinates at the respective time instants in chronological order). The MLP (Multilayer Perceptron) may be used to convert the historical motion trajectory (e.g., the coordinate sequence) into vectors. "h_t" and "h_{t-1}" denote the hidden states at times "t" and "t-1", respectively, where "t" may be the current time.
Then, a fused vector of candidate destinations can be formed by the following formula:
e_i = MLP([h_t, MLP(s_i · d_i)])
where "e_i" denotes the fusion vector of candidate destination "i", "d_i" denotes the position of candidate destination "i", and "s_i" denotes the liveness of candidate destination "i". The MLP may be a single-layer perceptron with a Tanh nonlinear activation. "[ , ]" denotes the concatenation operation.
Then, the probability that the candidate destination is the target destination of the object can be determined according to the fused vector of the candidate destination by the following formula:
r_i = Softmax(MLP(e_i))
where "r_i" denotes the probability that candidate destination "i" is the target destination of the object, the MLP may be any of various multilayer perceptrons, and the Softmax function is used for normalization.
The execution process not specifically described in the above steps may refer to the related description in the corresponding embodiment of fig. 2, and is not repeated herein.
The method provided by the above embodiment of the present disclosure selects the target destination according to the liveness of each candidate destination on the basis of the full scene prediction, that is, determines the moving direction of the moving object, and then predicts the trajectory of the moving object moving to the target destination, so that unnecessary trajectory prediction can be avoided, and the accuracy of trajectory prediction can be improved.
With further reference to FIG. 5, a flow 500 of yet another embodiment of a trajectory prediction method is illustrated. The process 500 of the trajectory prediction method includes the following steps:
step 501, acquiring an image of an area where an object is located, and acquiring a historical motion track of the object moving to the current position.
Step 502, determining candidate destinations of the object from the image.
Step 503, obtaining the liveness of each candidate destination, determining the position vector of each candidate destination, and updating the position vector of each candidate destination with its liveness to obtain an updated position vector.
Step 504, determining the feature vector of the historical motion track, and fusing the updated position vector of each candidate destination with the feature vector to obtain a fusion vector of the candidate destination.
And 505, respectively determining the probability that each candidate destination is the target destination of the object according to the fusion vector of each candidate destination, and determining the target destination of the object according to the probability that each destination is the target destination of the object.
Step 506, determining the weight of each target destination according to the fusion vector of the target destination.
In this embodiment, the weight of each target destination may be used to distinguish the attraction of the different target destinations to the object. Specifically, various methods may be employed to determine the weight of each target destination based on the fusion vector of that target destination. For example, a pre-trained weight determination model may be utilized to determine the weight of each target destination from its fusion vector. The weight determination model may determine the weight of the target destination according to the fusion vector, that is, according to the position feature and the liveness feature of the target destination.
In step 507, the weighted sum of the fusion vectors of the target destinations is determined as a control vector.
In the present embodiment, for each target destination, the weight of the target destination may be taken as the weight of its fusion vector, a weighted sum of the fusion vectors of the respective target destinations may be calculated, and the resulting weighted sum may be used as the control vector.
And step 508, predicting the track of the object moving from the current position to the target destination by using a pre-trained track prediction model according to the historical motion track and the control vector.
In this embodiment, the control vector carries, for each target destination, its position feature and liveness feature and thus information about its attraction to the object. Performing trajectory prediction not only from the historical motion trajectory of the object but also in combination with the control vector therefore helps to improve the accuracy of the trajectory prediction.
Specifically, various methods may be adopted to predict the trajectory of the object moving from the current position to the target destination by using a pre-trained trajectory prediction model according to the historical motion trajectory and the control vector. For example, a trajectory prediction model may be trained in advance, wherein the trajectory prediction model may predict a motion trajectory of the object moving from the current position to each target destination according to a historical motion trajectory of the object and a control vector of the target destination.
When a plurality of target destinations are provided, the control vector may be determined by the above method, and the motion trajectory of the object may be predicted by using the historical motion trajectory of the object and the control vector. When the target destination is one, the motion trail of the object can be directly predicted according to the historical motion trail of the object.
As an example, the encoding of the historical motion trajectory of the object may be implemented by the following formula:
h_t = NN(h_{t-1}, MLP(q_t))
Here NN may be any of various neural network models capable of processing sequence data, such as an RNN, LSTM, or GRU; the specific model can be chosen flexibly according to the actual application requirements. "q_t" denotes the historical motion trajectory (e.g., a coordinate sequence consisting of the coordinates at the respective time instants in chronological order). The MLP (Multilayer Perceptron) may be used to convert the historical motion trajectory (e.g., the coordinate sequence) into vectors. "h_t" and "h_{t-1}" denote the hidden states at times "t" and "t-1", respectively, where "t" may be the current time.
Then, the weight of the target destination can be determined by the following formula:
w_i = Softmax(MLP([e_i, h_{t-1}]))
where "w_i" denotes the weight of target destination "i" and "e_i" denotes the fusion vector of candidate destination "i". The MLP may be a single-layer perceptron with a Tanh nonlinear activation, and the Softmax function is used for normalization.
The control vector may then be determined by the following equation:
C = Σ_i w_i e_i
where "w_i" denotes the weight of target destination "i", "e_i" denotes the fusion vector of candidate destination "i", and "C" denotes the control vector, obtained as the weighted sum of the fusion vectors of the respective target destinations.
Then, trajectory prediction can be performed by a decoding step in which a neural network NN together with an MLP, conditioned on the encoded historical motion trajectory and the control vector, outputs the predicted trajectory "Ŷ". Here NN may again be any of various neural network models capable of processing sequence data, such as an RNN, LSTM, or GRU, chosen flexibly according to the actual application requirements, and the MLP may be any of various multilayer perceptrons.
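Since the exact decoding formula is not reproduced above, the following sketch shows one assumed Encoder-Decoder-style realisation in which a recurrent cell, conditioned on the control vector, emits the predicted positions step by step:

```python
import torch
import torch.nn as nn

class TrajectoryDecoder(nn.Module):
    """Sketch of a decoder that rolls out the predicted trajectory conditioned on
    the encoder hidden state and the control vector C; all sizes are assumptions."""
    def __init__(self, hidden_dim: int = 32, control_dim: int = 32):
        super().__init__()
        self.cell = nn.GRUCell(2 + control_dim, hidden_dim)
        self.out = nn.Linear(hidden_dim, 2)

    def forward(self, h, last_position, control, steps: int):
        h = h.unsqueeze(0)
        position = last_position.unsqueeze(0)
        predictions = []
        for _ in range(steps):
            h = self.cell(torch.cat([position, control.unsqueeze(0)], dim=-1), h)
            position = self.out(h)
            predictions.append(position.squeeze(0))
        return torch.stack(predictions)   # predicted trajectory Y_hat, shape (steps, 2)

decoder = TrajectoryDecoder()
Y_hat = decoder(torch.randn(32), torch.zeros(2), torch.randn(32), steps=12)
print(Y_hat.shape)   # torch.Size([12, 2])
```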
The execution process not specifically described in the above steps may refer to the related description in the corresponding embodiment of fig. 2, and is not repeated herein.
The method provided by the above embodiment of the present disclosure screens target destinations according to the liveness of each candidate destination on the basis of full-scene prediction, determines the weight of the target destination, and then performs trajectory prediction by combining the weights of the target destinations, so that each predicted trajectory may have different weights in different scenes, so as to be suitable for trajectory prediction in various different scenes.
With further reference to fig. 6, as an implementation of the methods shown in the above-mentioned figures, the present disclosure provides an embodiment of a trajectory prediction apparatus, which corresponds to the embodiment of the method shown in fig. 2, and which can be applied in various electronic devices.
As shown in fig. 6, the trajectory prediction apparatus 600 provided by the present embodiment includes an acquisition unit 601, a candidate destination determination unit 602, a target destination determination unit 603, and a trajectory prediction unit 604. The obtaining unit 601 is configured to obtain an image of an area where the object is located, and obtain a historical movement track of the object moving to the current location; the candidate destination determining unit 602 is configured to determine a candidate destination of the object from the image, wherein the candidate destination is located at a boundary of the image; the target destination determining unit 603 is configured to determine a target destination of the object from the candidate destinations according to the historical motion trajectory; the trajectory prediction unit 604 is configured to predict a trajectory of the object moving from the current position to the target destination based on the historical movement trajectory and the target destination.
In the present embodiment, for the detailed processing and technical effects of the obtaining unit 601, the candidate destination determining unit 602, the target destination determining unit 603, and the trajectory prediction unit 604 of the trajectory prediction apparatus 600, reference may be made to the descriptions of step 201, step 202, step 203, and step 204 in the embodiment corresponding to fig. 2, which are not repeated here.
In some optional implementations of the present embodiment, the candidate destination determining unit 602 is further configured to: segmenting an image to obtain an image region set; respectively determining whether each image area in the image area set is positioned at the boundary of the image; an image region located at a boundary of the image is determined as a candidate destination.
In some optional implementations of the present embodiment, the target destination determining unit 603 is further configured to: acquiring the activity of each candidate destination; and determining the target destination of the object from the candidate destinations according to the historical motion trail and the activity of each candidate destination.
In some optional implementations of the present embodiment, the target destination determining unit 603 is further configured to: determining a position vector of each candidate destination, wherein the position vector is used for representing the position of the candidate destination in the image; updating the position vector of each candidate destination by using the activity of the candidate destination to obtain an updated position vector; determining a feature vector of a historical motion track; fusing the updated position vector of each candidate destination with the feature vector to obtain a fusion vector of the candidate destination; and respectively determining the probability that each candidate destination is the target destination of the object according to the fusion vector of each candidate destination, and determining the target destination of the object according to the probability that each destination is the target destination of the object.
In some optional implementations of the present embodiment, the trajectory prediction unit 604 is further configured to: determining the weight of each target destination according to the fusion vector of the target destination; determining a weighted sum of the fusion vectors of the target destinations as a control vector; and predicting the track of the object moving from the current position to the target destination by using a pre-trained track prediction model according to the historical motion track and the control vector.
According to the device provided by the embodiment of the disclosure, the image of the area where the object is located is obtained through the obtaining unit, and the historical motion track of the object moving to the current position is obtained; a candidate destination determining unit determines a candidate destination of the object from the image, wherein the candidate destination is located at a boundary of the image; a target destination determining unit determines a target destination of the object from the candidate destinations according to the historical motion trail; the track prediction unit predicts the track of the moving object from the current position to the target destination according to the historical moving track and the target destination, and can realize the full-scene prediction of the moving object by firstly screening the target destination from the candidate destinations and then predicting the moving track of the moving object from the current position to the target destination.
Referring now to FIG. 7, a block diagram of an electronic device (e.g., the server of FIG. 1) 700 suitable for use in implementing embodiments of the present disclosure is shown. The terminal device/server shown in fig. 7 is only an example, and should not bring any limitation to the functions and the use range of the embodiments of the present disclosure.
As shown in fig. 7, electronic device 700 may include a processing means (e.g., central processing unit, graphics processor, etc.) 701 that may perform various appropriate actions and processes in accordance with a program stored in a read-only memory (ROM) 702 or a program loaded from a storage means 708 into a random access memory (RAM) 703. In the RAM 703, various programs and data necessary for the operation of the electronic device 700 are also stored. The processing device 701, the ROM 702, and the RAM 703 are connected to each other by a bus 704. An input/output (I/O) interface 705 is also connected to bus 704.
Generally, the following devices may be connected to the I/O interface 705: input devices 706 including, for example, a touch screen, touch pad, keyboard, mouse, camera, microphone, accelerometer, gyroscope, etc.; an output device 707 including, for example, a Liquid Crystal Display (LCD), a speaker, a vibrator, and the like; storage 708 including, for example, magnetic tape, hard disk, etc.; and a communication device 709. The communication means 709 may allow the electronic device 700 to communicate wirelessly or by wire with other devices to exchange data. While fig. 7 illustrates an electronic device 700 having various means, it is to be understood that not all illustrated means are required to be implemented or provided. More or fewer devices may alternatively be implemented or provided. Each block shown in fig. 7 may represent one device or may represent multiple devices as desired.
In particular, the processes described above with reference to the flow diagrams may be implemented as computer software programs, according to embodiments of the present disclosure. For example, embodiments of the present disclosure include a computer program product comprising a computer program embodied on a computer-readable medium, the computer program comprising program code for performing the method illustrated by the flow chart. In such embodiments, the computer program may be downloaded and installed from a network via the communication means 709, or may be installed from the storage means 708, or may be installed from the ROM 702. The computer program, when executed by the processing device 701, performs the above-described functions defined in the methods of embodiments of the present disclosure.
It should be noted that the computer readable medium described in the embodiments of the present disclosure may be a computer readable signal medium or a computer readable storage medium or any combination of the two. A computer readable storage medium may be, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any combination of the foregoing. More specific examples of the computer readable storage medium may include, but are not limited to: an electrical connection having one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing. In embodiments of the disclosure, a computer readable storage medium may be any tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. In embodiments of the present disclosure, however, a computer readable signal medium may comprise a propagated data signal with computer readable program code embodied therein, for example, in baseband or as part of a carrier wave. Such a propagated data signal may take many forms, including, but not limited to, electro-magnetic, optical, or any suitable combination thereof. A computer readable signal medium may also be any computer readable medium that is not a computer readable storage medium and that can communicate, propagate, or transport a program for use by or in connection with an instruction execution system, apparatus, or device. Program code embodied on a computer readable medium may be transmitted using any appropriate medium, including but not limited to: electrical wires, optical cables, RF (radio frequency), etc., or any suitable combination of the foregoing.
The agent may include the electronic device to perform trajectory prediction using the electronic device. For example, the agent is an intelligent robot, in which case the intelligent robot may include a mechanical structure and a control unit, and may also include the electronic device to perform trajectory prediction, and the control unit may control the mechanical structure to move according to a result of the trajectory prediction according to the result of the trajectory prediction. For another example, when the intelligent agent is an intelligent vehicle, the vehicle may include basic components of the vehicle structure (such as a vehicle body, a braking device, a running gear, and the like), and may further include the electronic device to perform trajectory prediction, and the vehicle may travel according to the trajectory prediction result.
The computer readable medium may be embodied in the electronic device; or may be separate and not incorporated into the electronic device. The computer readable medium carries one or more programs which, when executed by the electronic device, cause the electronic device to: acquiring an image of an area where an object is located, and acquiring a historical motion track of the object moving to the current position; determining a candidate destination for the object from the image, wherein the candidate destination is located at a boundary of the image; determining the target destination from the candidate destinations according to the historical motion trail; and predicting the track of the object moving from the current position to the target destination according to the historical motion track and the target destination.
Computer program code for carrying out operations of embodiments of the present disclosure may be written in any combination of one or more programming languages, including an object-oriented programming language such as Java, Smalltalk, or C++, as well as conventional procedural programming languages such as the "C" programming language or similar programming languages. The program code may execute entirely on the user's computer, partly on the user's computer, as a stand-alone software package, partly on the user's computer and partly on a remote computer, or entirely on a remote computer or server. In the case of a remote computer, the remote computer may be connected to the user's computer through any type of network, including a local area network (LAN) or a wide area network (WAN), or the connection may be made to an external computer (for example, through the Internet using an Internet service provider).
The flowchart and block diagrams in the figures illustrate the architecture, functionality, and operation of possible implementations of systems, methods and computer program products according to various embodiments of the present disclosure. In this regard, each block in the flowchart or block diagrams may represent a module, segment, or portion of code, which comprises one or more executable instructions for implementing the specified logical function(s). It should also be noted that, in some alternative implementations, the functions noted in the block may occur out of the order noted in the figures. For example, two blocks shown in succession may, in fact, be executed substantially concurrently, or the blocks may sometimes be executed in the reverse order, depending upon the functionality involved. It will also be noted that each block of the block diagrams and/or flowchart illustration, and combinations of blocks in the block diagrams and/or flowchart illustration, can be implemented by special purpose hardware-based systems which perform the specified functions or acts, or combinations of special purpose hardware and computer instructions.
The units described in the embodiments of the present disclosure may be implemented by software or by hardware. The described units may also be provided in a processor, which may, for example, be described as: a processor including an acquisition unit, a candidate destination determination unit, a target destination determination unit, and a trajectory prediction unit. In some cases, the names of these units do not constitute a limitation on the units themselves; for example, the acquisition unit may also be described as "a unit that acquires an image of an area where an object is located and acquires a historical motion track of the object moving to the current position".
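Purely as an illustrative sketch (the class name and the callable fields below are assumptions and do not appear in the disclosure), the four units could be grouped in a single object as follows:

```python
from dataclasses import dataclass
from typing import Callable, Sequence

@dataclass
class TrajectoryPredictionDevice:
    # Each field stands in for one of the units described above.
    acquisition_unit: Callable[[], tuple]               # returns (image, historical_track)
    candidate_destination_unit: Callable[[object], Sequence]
    target_destination_unit: Callable[[Sequence, object], object]
    trajectory_prediction_unit: Callable[[object, object], Sequence]

    def run(self):
        image, history = self.acquisition_unit()
        candidates = self.candidate_destination_unit(image)
        target = self.target_destination_unit(candidates, history)
        return self.trajectory_prediction_unit(history, target)
```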
The foregoing description is merely a description of the preferred embodiments of the present disclosure and of the technical principles employed. It will be appreciated by those skilled in the art that the scope of the invention referred to in the embodiments of the present disclosure is not limited to technical solutions formed by the specific combination of the above technical features, and should also cover other technical solutions formed by any combination of the above technical features or their equivalents without departing from the inventive concept described above, for example, technical solutions formed by replacing the above features with (but not limited to) technical features having similar functions disclosed in the embodiments of the present disclosure.

Claims (13)

1. A trajectory prediction method, comprising:
acquiring an image of an area where an object is located, and acquiring a historical motion track of the object moving to the current position;
determining a candidate destination for the object from the image, wherein the candidate destination is located at a boundary of the image;
determining a target destination of the object from the candidate destinations according to the historical motion track;
and predicting the track of the object moving from the current position to the target destination according to the historical motion track and the target destination.
2. The method of claim 1, wherein the determining a candidate destination for the object from the image comprises:
segmenting the image to obtain an image region set;
determining whether each image region in the set of image regions is located at a boundary of the image, respectively;
and determining an image region located at a boundary of the image as a candidate destination.
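For illustration only, the following is a minimal sketch of the boundary test described in claim 2; the label-map representation of the segmentation result and the function name are assumptions, and handling of a background label is omitted for brevity.

```python
import numpy as np

def candidate_destinations_from_labels(label_map):
    """Return, as boolean masks, the segmented image regions that touch the image boundary."""
    candidates = []
    for label in np.unique(label_map):
        mask = label_map == label
        # A region is a candidate destination if any of its pixels lies on an image edge.
        on_boundary = mask[0, :].any() or mask[-1, :].any() or mask[:, 0].any() or mask[:, -1].any()
        if on_boundary:
            candidates.append(mask)
    return candidates
```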
3. The method of claim 1 or 2, wherein the determining a target destination of the object from the candidate destinations according to the historical motion track comprises:
acquiring the activity of each candidate destination;
and determining the target destination of the object from the candidate destinations according to the historical motion track and the activity of each candidate destination.
4. The method of claim 3, wherein the determining the target destination of the object from the candidate destinations according to the historical motion track and the activity of each candidate destination comprises:
determining a location vector for each candidate destination, wherein the location vector is used for representing the location of the candidate destination in the image;
updating the position vector of each candidate destination by using the activity of the candidate destination to obtain an updated position vector;
determining a feature vector of the historical motion track;
fusing the updated position vector of each candidate destination with the feature vector to obtain a fusion vector of the candidate destination;
and respectively determining the probability that each candidate destination is the target destination of the object according to the fusion vector of each candidate destination, and determining the target destination of the object according to the probability that each candidate destination is the target destination of the object.
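For illustration only, a minimal PyTorch sketch of the scoring described in claim 4; the element-wise activity update, the concatenation-plus-linear fusion, and the softmax over candidates are assumptions made here, since the claim does not fix these choices.

```python
import torch
import torch.nn as nn

class DestinationScorer(nn.Module):
    def __init__(self, pos_dim, hist_dim, hidden_dim):
        super().__init__()
        self.fuse = nn.Linear(pos_dim + hist_dim, hidden_dim)  # fuses position and track features
        self.score = nn.Linear(hidden_dim, 1)                  # one logit per candidate destination

    def forward(self, pos_vectors, activity, hist_vector):
        # pos_vectors: (N, pos_dim)  position vector of each candidate destination in the image
        # activity:    (N, 1)        activity of each candidate destination
        # hist_vector: (hist_dim,)   feature vector of the historical motion track
        updated = pos_vectors * activity                        # update position vectors with activity
        hist = hist_vector.expand(updated.size(0), -1)
        fusion = torch.relu(self.fuse(torch.cat([updated, hist], dim=-1)))
        probs = torch.softmax(self.score(fusion).squeeze(-1), dim=0)
        target_index = int(probs.argmax())                      # candidate with the highest probability
        return fusion, probs, target_index
```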
5. The method of claim 4, wherein the predicting the track of the object moving from the current position to the target destination according to the historical motion track and the target destination comprises:
determining the weight of each target destination according to the fusion vector of the target destination;
determining a weighted sum of the fusion vectors of the target destinations as a control vector;
and predicting the track of the object moving from the current position to the target destination by utilizing a pre-trained track prediction model according to the historical motion track and the control vector.
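Again for illustration only, a sketch of the prediction step in claim 5 under the assumptions of this example (a GRU encoder over the historical track, probability-derived weights, and a single-shot linear decoder are choices made here, not stated by the claim).

```python
import torch
import torch.nn as nn

class TrajectoryDecoder(nn.Module):
    """Predicts future positions from the historical motion track and a control vector."""

    def __init__(self, fusion_dim, hidden_dim, horizon):
        super().__init__()
        self.encoder = nn.GRU(input_size=2, hidden_size=hidden_dim, batch_first=True)
        self.control_proj = nn.Linear(fusion_dim, hidden_dim)
        self.head = nn.Linear(hidden_dim, horizon * 2)
        self.horizon = horizon

    def forward(self, history_xy, fusion_vectors, weights):
        # history_xy:     (1, T, 2)        historical motion track up to the current position
        # fusion_vectors: (K, fusion_dim)  fusion vectors of the target destinations
        # weights:        (K,)             weight of each target destination
        control = (weights.unsqueeze(-1) * fusion_vectors).sum(dim=0)  # weighted sum = control vector
        _, h = self.encoder(history_xy)                                # encode the historical track
        state = h[-1] + self.control_proj(control)                     # condition on the control vector
        return self.head(state).view(self.horizon, 2)                  # future (x, y) points
```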
6. A trajectory prediction device comprising:
an acquisition unit configured to acquire an image of an area where an object is located and acquire a historical motion track of the object moving to the current position;
a candidate destination determination unit configured to determine a candidate destination of the object from the image, wherein the candidate destination is located at a boundary of the image;
a target destination determination unit configured to determine a target destination of the object from the candidate destinations according to the historical motion track;
and the track prediction unit is configured to predict the track of the object moving from the current position to the target destination according to the historical motion track and the target destination.
7. The apparatus of claim 6, wherein the candidate destination determination unit is further configured to:
segmenting the image to obtain an image region set;
determining whether each image region in the set of image regions is located at a boundary of the image, respectively;
and determining an image region located at a boundary of the image as a candidate destination.
8. The apparatus of claim 6 or 7, wherein the target destination determination unit is further configured to:
acquiring the activity of each candidate destination;
and determining the target destination of the object from the candidate destinations according to the historical motion track and the activity of each candidate destination.
9. The apparatus of claim 8, wherein the target destination determination unit is further configured to:
determining a location vector for each candidate destination, wherein the location vector is used for representing the location of the candidate destination in the image;
updating the position vector of each candidate destination by using the activity of the candidate destination to obtain an updated position vector;
determining a feature vector of the historical motion track;
fusing the updated position vector of each candidate destination with the feature vector to obtain a fusion vector of the candidate destination;
and respectively determining the probability that each candidate destination is the target destination of the object according to the fusion vector of each candidate destination, and determining the target destination of the object according to the probability that each candidate destination is the target destination of the object.
10. The apparatus of claim 9, wherein the trajectory prediction unit is further configured to:
determining the weight of each target destination according to the fusion vector of the target destination;
determining a weighted sum of the fusion vectors of the target destinations as a control vector;
and predicting the track of the object moving from the current position to the target destination by utilizing a pre-trained track prediction model according to the historical motion track and the control vector.
11. An electronic device, comprising:
one or more processors;
a storage device having one or more programs stored thereon;
the one or more programs, when executed by the one or more processors, cause the one or more processors to implement the method recited in any of claims 1-5.
12. A computer-readable medium, on which a computer program is stored which, when being executed by a processor, carries out the method according to any one of claims 1-5.
13. An agent comprising the electronic device of claim 11.
CN202210674327.1A 2022-06-14 2022-06-14 Trajectory prediction method, device and agent Pending CN115222769A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210674327.1A CN115222769A (en) 2022-06-14 2022-06-14 Trajectory prediction method, device and agent

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210674327.1A CN115222769A (en) 2022-06-14 2022-06-14 Trajectory prediction method, device and agent

Publications (1)

Publication Number Publication Date
CN115222769A true CN115222769A (en) 2022-10-21

Family

ID=83608777

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210674327.1A Pending CN115222769A (en) 2022-06-14 2022-06-14 Trajectory prediction method, device and agent

Country Status (1)

Country Link
CN (1) CN115222769A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP4362455A1 (en) * 2022-10-27 2024-05-01 Fujitsu Limited Position prediction program, information processing device, and position prediction method

Similar Documents

Publication Publication Date Title
CN112015847B (en) Obstacle trajectory prediction method and device, storage medium and electronic equipment
KR20210006971A (en) System and method for geolocation prediction
CN112419722B (en) Traffic abnormal event detection method, traffic control method, device and medium
CN111523640B (en) Training method and device for neural network model
CN112686281A (en) Vehicle track prediction method based on space-time attention and multi-stage LSTM information expression
US11636348B1 (en) Adaptive training of neural network models at model deployment destinations
CN114519932B (en) Regional traffic condition integrated prediction method based on space-time relation extraction
CN112347691A (en) Artificial intelligence server
CN111597961A (en) Moving target track prediction method, system and device for intelligent driving
CN115511892A (en) Training method of semantic segmentation model, semantic segmentation method and device
CN114758502B (en) Dual-vehicle combined track prediction method and device, electronic equipment and automatic driving vehicle
WO2021006870A1 (en) Vehicular autonomy-level functions
CN114997307A (en) Trajectory prediction method, apparatus, device and storage medium
CN115648204A (en) Training method, device, equipment and storage medium of intelligent decision model
Naveed et al. Deep introspective SLAM: Deep reinforcement learning based approach to avoid tracking failure in visual SLAM
CN115222769A (en) Trajectory prediction method, device and agent
CN113742590A (en) Recommendation method and device, storage medium and electronic equipment
US20200245141A1 (en) Privacy protection of entities in a transportation system
CN116088537A (en) Vehicle obstacle avoidance method, device, electronic equipment and computer readable medium
Ma et al. Vehicle-based machine vision approaches in intelligent connected system
Kusuma et al. Real-Time Object Detection and Tracking Design Using Deep Learning with Spatial–Temporal Mechanism for Video Surveillance Applications
Olier et al. Active estimation of motivational spots for modeling dynamic interactions
CN111797655A (en) User activity identification method and device, storage medium and electronic equipment
US11567494B2 (en) Enhancing performance of local device
WO2023216779A1 (en) Improving computational capability based on vehicle maintenance

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination