CN113129330A - Track prediction method and device for movable equipment - Google Patents

Track prediction method and device for movable equipment

Info

Publication number
CN113129330A
Authority
CN
China
Prior art keywords
feature map
characteristic diagram
movable
determining
target
Prior art date
Legal status
Pending
Application number
CN202010038689.2A
Other languages
Chinese (zh)
Inventor
范坤
陈迈越
Current Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Original Assignee
Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority date
Filing date
Publication date
Application filed by Beijing Horizon Robotics Technology Research and Development Co Ltd filed Critical Beijing Horizon Robotics Technology Research and Development Co Ltd
Priority to CN202010038689.2A
Publication of CN113129330A
Legal status: Pending

Classifications

    • G PHYSICS → G06 COMPUTING; CALCULATING OR COUNTING → G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis → G06T7/20 Analysis of motion → G06T7/207 Analysis of motion for motion estimation over a hierarchy of resolutions
    • G06T7/00 Image analysis → G06T7/20 Analysis of motion → G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T2207/00 Indexing scheme for image analysis or image enhancement → G06T2207/10 Image acquisition modality → G06T2207/10016 Video; Image sequence
    • G06T2207/00 Indexing scheme for image analysis or image enhancement → G06T2207/20 Special algorithmic details → G06T2207/20084 Artificial neural networks [ANN]

Abstract

A trajectory prediction method and apparatus for a movable device are disclosed, comprising: determining a first feature map of a target movable device from an image frame sequence by using a preset network model; acquiring a second feature map of the target movable device from the current frame image in the image frame sequence; acquiring a third feature map corresponding to each movable device to be tested from a subsequent frame image; determining a relationship feature map between the target movable device and each movable device to be tested according to the second feature map and the third feature maps; determining a fourth feature map of the target movable device according to the relationship feature map and the first feature map; and determining a trajectory prediction result of the target movable device from the fourth feature map by using the network model.

Description

Track prediction method and device for movable equipment
Technical Field
The present disclosure relates to the field of image analysis technologies, and in particular, to a trajectory prediction method and apparatus for a movable device.
Background
Trajectory prediction for movable devices is a very important component of automated/assisted driving technology. The principle is to analyze a sequence of consecutive past image frames of a movable device in order to determine its likely future travel trajectory.
Tracking the movable device, that is, identifying the same movable device across consecutive image frames, is the basis of trajectory prediction. However, the accuracy of existing tracking algorithms is not ideal, and they cannot effectively cope with complex situations, which leads to poor trajectory prediction results.
Disclosure of Invention
The present disclosure is proposed to solve the above technical problems. Embodiments of the disclosure provide a trajectory prediction method and apparatus for a movable device, which improve the effect of trajectory prediction by tracking the movable device more accurately.
According to a first aspect of the present disclosure, there is provided a trajectory prediction method for a movable device, including:
determining a first feature map of a target movable device from an image frame sequence by using a preset network model;
acquiring a second feature map of the target movable device from the current frame image in the image frame sequence;
acquiring a third feature map corresponding to each movable device to be tested from a subsequent frame image;
determining a relationship feature map between the target movable device and each movable device to be tested according to the second feature map and the third feature maps;
determining a fourth feature map of the target movable device according to the relationship feature map and the first feature map;
and determining a trajectory prediction result of the target movable device from the fourth feature map by using the network model.
According to a second aspect of the present disclosure, there is provided a trajectory prediction apparatus for a movable device, including:
a first feature map determining module, configured to determine a first feature map of a target movable device from an image frame sequence by using a preset network model;
a second feature map acquiring module, configured to acquire a second feature map of the target movable device from the current frame image in the image frame sequence;
a third feature map acquiring module, configured to acquire a third feature map corresponding to each movable device to be tested from a subsequent frame image;
a relationship feature map determining module, configured to determine a relationship feature map between the target movable device and each movable device to be tested according to the second feature map and the third feature maps;
a fourth feature map determining module, configured to determine a fourth feature map of the target movable device according to the relationship feature map and the first feature map;
and a trajectory prediction module, configured to determine a trajectory prediction result of the target movable device from the fourth feature map by using the network model.
According to a third aspect of the present disclosure, there is provided a computer-readable storage medium storing a computer program for executing the trajectory prediction method of a movable apparatus described in the first aspect above.
According to a fourth aspect of the present disclosure, there is provided an electronic apparatus comprising: a processor; a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the executable instructions to implement the trajectory prediction method of the mobile device in the first aspect.
Compared with the prior art, in the trajectory prediction method and apparatus provided by the present disclosure, the second feature map and the third feature maps are analyzed to determine a relationship feature map that reflects the correspondence between the target movable device and the movable devices to be tested, and the relationship feature map is combined with the first feature map to obtain a fourth feature map. Performing trajectory prediction on the target movable device through the network model according to the fourth feature map therefore yields a prediction result built on more accurate tracking. Moreover, the tracking embodied in the relationship feature map requires neither scene-specific tuning nor manual operation, giving it universality.
Drawings
The above and other objects, features and advantages of the present disclosure will become more apparent by describing in more detail embodiments of the present disclosure with reference to the attached drawings. The accompanying drawings are included to provide a further understanding of the embodiments of the disclosure and are incorporated in and constitute a part of this specification, illustrate embodiments of the disclosure and together with the description serve to explain the principles of the disclosure and not to limit the disclosure. In the drawings, like reference numbers generally represent like parts or steps.
FIG. 1 is a diagram illustrating a prior art trajectory prediction method;
FIG. 2 is a schematic diagram of a trajectory prediction system of a mobile device according to an exemplary embodiment of the disclosure;
FIG. 3 is a diagram illustrating a current frame image and a subsequent frame image in an exemplary embodiment of the disclosure;
FIG. 4 is a flowchart illustrating a trajectory prediction method for a mobile device according to an exemplary embodiment of the disclosure;
FIG. 5 is a flowchart illustrating a trajectory prediction method for a mobile device according to an exemplary embodiment of the disclosure;
FIG. 6 is a diagram illustrating relationships among various feature maps in a trajectory prediction method for a mobile device according to an exemplary embodiment of the disclosure;
FIG. 7 is a schematic structural diagram of a trajectory prediction apparatus of a movable device according to an exemplary embodiment of the present disclosure;
FIG. 8 is a schematic structural diagram of a relationship characteristic diagram determination module in a trajectory prediction apparatus of a movable device according to an exemplary embodiment of the present disclosure;
fig. 9 is a schematic structural diagram of a relationship characteristic map determining unit in a trajectory prediction apparatus of a movable device according to an exemplary embodiment of the present disclosure;
FIG. 10 is a schematic structural diagram of a trajectory prediction module in a trajectory prediction apparatus of a movable device according to an exemplary embodiment of the present disclosure;
fig. 11 is a block diagram of an electronic device according to an exemplary embodiment of the present disclosure.
Detailed Description
Hereinafter, example embodiments according to the present disclosure will be described in detail with reference to the accompanying drawings. It is to be understood that the described embodiments are merely a subset of the embodiments of the present disclosure and not all embodiments of the present disclosure, with the understanding that the present disclosure is not limited to the example embodiments described herein.
It should be noted that: the relative arrangement of the components and steps, the numerical expressions, and numerical values set forth in these embodiments do not limit the scope of the present disclosure unless specifically stated otherwise.
It will be understood by those of skill in the art that the terms "first," "second," and the like in the embodiments of the present disclosure are used merely to distinguish one element from another, and are not intended to imply any particular technical meaning, nor is the necessary logical order between them.
It is also understood that in embodiments of the present disclosure, "a plurality" may refer to two or more and "at least one" may refer to one, two or more.
It is also to be understood that any reference to any component, data, or structure in the embodiments of the disclosure, may be generally understood as one or more, unless explicitly defined otherwise or stated otherwise.
In addition, the term "and/or" in the present disclosure describes an association between objects and indicates that three relationships may exist; for example, "A and/or B" may mean: A alone, both A and B, or B alone. The character "/" in the present disclosure generally indicates an "or" relationship between the associated objects.
It should also be understood that the description of the various embodiments of the present disclosure emphasizes the differences between the various embodiments, and the same or similar parts may be referred to each other, so that the descriptions thereof are omitted for brevity.
Meanwhile, it should be understood that the sizes of the respective portions shown in the drawings are not drawn in an actual proportional relationship for the convenience of description.
The following description of at least one exemplary embodiment is merely illustrative in nature and is in no way intended to limit the disclosure, its application, or uses.
Techniques, methods, and apparatus known to those of ordinary skill in the relevant art may not be discussed in detail but are intended to be part of the specification where appropriate.
It should be noted that: like reference numbers and letters refer to like items in the following figures, and thus, once an item is defined in one figure, further discussion thereof is not required in subsequent figures.
The disclosed embodiments may be applied to electronic devices such as terminal devices, computer systems, servers, etc., which are operational with numerous other general purpose or special purpose computing system environments or configurations. Examples of well-known terminal devices, computing systems, environments, and/or configurations suitable for use with such electronic devices include, but are not limited to: personal computer systems, server computer systems, thin clients, thick clients, hand-held or laptop devices, microprocessor-based systems, set top boxes, programmable consumer electronics, network PCs, minicomputer systems, mainframe computer systems, distributed cloud computing environments that include any of the above systems, and the like.
Electronic devices such as terminal devices, computer systems, servers, etc. may be described in the general context of computer system-executable instructions, such as program modules, being executed by a computer system. Generally, program modules may include routines, programs, objects, components, logic, data structures, etc. that perform particular tasks or implement particular abstract data types. The computer system/server may be practiced in distributed cloud computing environments where tasks are performed by remote processing devices that are linked through a communications network. In a distributed cloud computing environment, program modules may be located in both local and remote computer system storage media including memory storage devices.
Summary of the application
Trajectory prediction for a movable device is based on analyzing several consecutive past image frames of the device in order to determine its likely travel trajectory over a future period. In the prior art, trajectory prediction can be achieved through a network model comprising two parts, an encoding network and a decoding network. As shown in fig. 1, the image frame sequence for the movable device is input into the encoding network, which produces a corresponding feature map; the feature map is then input into the decoding network, which produces the trajectory prediction result. In general, the result is embodied as a predicted trajectory line, i.e., the future travel trajectory of the movable device drawn as a line on the image frame.
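The encode-then-decode pipeline just described can be sketched as follows. This is a hypothetical stand-in, not the patent's actual networks: both functions use placeholder computations (a temporal mean for the encoder, a weighted center of mass for the decoder) purely to make the data flow and tensor shapes concrete.

```python
import numpy as np

def encode(frame_sequence: np.ndarray) -> np.ndarray:
    """Collapse a (T, H, W, C) image frame sequence into one (H, W, C) feature map.

    A real encoding network would be a learned model; averaging over time
    is only a placeholder with the same input/output shapes.
    """
    return frame_sequence.mean(axis=0)

def decode(feature_map: np.ndarray, steps: int = 5) -> np.ndarray:
    """Map a feature map to a predicted trajectory of shape (steps, 2).

    A real decoding network would regress future image coordinates; here we
    emit the feature map's intensity-weighted center repeated per step.
    """
    h, w, _ = feature_map.shape
    weights = feature_map.sum(axis=2)
    weights = weights / weights.sum()
    ys, xs = np.mgrid[0:h, 0:w]
    cy, cx = (weights * ys).sum(), (weights * xs).sum()
    return np.tile([cx, cy], (steps, 1))

frames = np.random.rand(8, 16, 16, 3)   # toy 8-frame sequence
trajectory = decode(encode(frames))      # (5, 2) predicted points
```

The point of the sketch is the interface: the decoder consumes whatever feature map the encoder emits, which is exactly the hand-off the embodiments below modify by swapping in a fused fourth feature map.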
Tracking of movable devices, i.e., identifying the same movable device across the consecutive image frames described above, is a crucial step of trajectory prediction: only once this correspondence is known can the continuous driving state of the device over the past period be analyzed. In particular, in an image frame sequence captured of a road surface, each frame often contains several movable devices, and the frame-to-frame correspondence of each device must be determined accurately.
In relatively simple cases, since the motion of each movable device is continuous, a tracking algorithm in the network model can match devices "nearby" across the image frame sequence based on their positional relationships. However, the matching conditions of such an algorithm must be tuned to the actual scene and cannot be applied universally. For example, on a highway and on a congested urban road section, vehicle speeds differ greatly, so the same matching conditions clearly cannot serve both scenes. In relatively complex scenes, tracking even has to be performed manually, or manually in combination with an algorithm.
Exemplary System
In view of the above problems, the trajectory prediction system for a movable device according to the present disclosure combines image analysis techniques to achieve more accurate tracking of the movable device, thereby ensuring the effect of trajectory prediction. The structure of the system is shown in fig. 2.
Assuming the current time is t0, the acquired image frame sequence may be a video segment ending at time t0, whose last frame is the current frame image corresponding to t0. At a subsequent time t1 after t0, a subsequent frame image corresponding to t1 is acquired. As shown in fig. 3, the current frame image contains movable devices A, B, C and D, and the subsequent frame image contains 4 movable devices. By tracking the movable devices, the system can determine the correspondence of each movable device between the two frames.
As in the prior art, the image frame sequence is input into the encoding network of the network model, which determines the first feature map of each movable device (i.e., movable devices A, B, C, D) in the sequence.
Meanwhile, the current frame image and the subsequent frame image are each analyzed with a preset neural network model, yielding a second feature map for each of the movable devices A, B, C, D in the current frame image and a third feature map for each of the movable devices a, b, c, d in the subsequent frame image. The second and third feature maps are then concatenated pairwise and input into a perception model, producing a relationship feature map for each movable device in the current frame image. That is, each of A, B, C, D has a corresponding relationship feature map containing a probability distribution over its candidate matches a, b, c, d.
That is to say, the relationship characteristic diagram shows the corresponding relationship of each movable device in the two previous and next frames of images, and a more ideal movable device tracking effect can be realized. And further splicing the first characteristic diagram and the relation characteristic diagram corresponding to the movable equipment, and inputting the spliced first characteristic diagram and the relation characteristic diagram into a decoding network of the network model, so that the track prediction is realized. Namely, on the basis of more accurate movable equipment tracking, the track prediction of the movable equipment is realized.
Exemplary method
Fig. 4 is a flowchart illustrating a trajectory prediction method for a mobile device according to an exemplary embodiment of the disclosure. The embodiment can be applied to an electronic device, as shown in fig. 4, and includes the following steps:
step 401, a first feature map of the target mobile device is determined from the image frame sequence by using a preset network model.
The network model involved in this embodiment is similar to the network model applied in the prior art for trajectory prediction. The specific structure of the network model and the internal calculation principle are not limited, and any network model capable of realizing the same or similar functions can be combined in the whole technical scheme. In this step, the image frame sequence is also analyzed by using the coding network in the network model.
The image frame sequence is a series of image frames arranged in time series. Specifically, the image frame sequence may be extracted from a video shot for a road surface or a vehicle. The scene comprised in the sequence of image frames should generally be within a certain spatial range. At least one movable device will be included in the sequence of image frames. And any movable device included in the image frame sequence may be the target movable device. The target mobile device is the object for which trajectory prediction (and corresponding tracking) is performed in this embodiment. The driving state (speed, driving direction, etc.) of the target mobile device within the time range covered by the image frame sequence can be reflected by the image frame sequence.
In this embodiment, the image frame sequence is input into the coding network of the network model, and the coding network can output the first feature map of the target mobile device.
Step 402, a second feature map of the target mobile device is obtained from a current frame image in the image frame sequence.
The current frame image may be the last frame image in the sequence of image frames. The corresponding time is the time when the image frame sequence is cut, or called the current time. The target movable apparatus described above is included in the current frame sequence.
The second feature map of the target mobile device is obtained according to the current frame image, and specifically, the current frame image may be input to a convolutional neural network to perform a convolution operation, so as to obtain the second feature map of the target mobile device. The image analysis by using the convolutional neural network and obtaining the corresponding characteristic diagram belong to the prior art. In this embodiment, details are not described, and the specific network structure and network parameters of the convolutional neural network are not limited. And in other cases, the image analysis can be completed in other modes except the convolutional neural network, and a second characteristic diagram can be obtained. Any means of image analysis that can perform the same or similar functions can be incorporated into the overall scheme of the present embodiment.
And 403, acquiring a third feature map corresponding to each movable device to be tested from the subsequent frame image.
The subsequent frame image is an image frame acquired at a specific subsequent time after the current time. In general, the subsequent frame image should include a spatial extent to which the sequence of image frames corresponds that is equivalent or similar to the scene. The subsequent frame image comprises at least one movable device, and all the movable devices in the subsequent frame image are the movable devices to be tested. In fact, the tracking of the mobile device according to this embodiment is a process of determining which mobile device to be tested the target mobile device matches (i.e. the same mobile device).
In this embodiment, a third feature map is correspondingly obtained for each to-be-tested movable device in the subsequent frame image. The process of obtaining the third feature map is the same as the process of obtaining the second feature map, and the description thereof is not repeated.
And step 404, determining a relation characteristic diagram between the target movable equipment and each movable equipment to be tested according to the second characteristic diagram and the third characteristic diagram.
Based on the principle of the convolutional neural network, it can be understood that the second feature map includes feature information of the target movable device, and the third feature map includes feature information of the corresponding movable device to be tested. And then, the second characteristic diagram and each third characteristic diagram are combined for analysis and judgment, so that the probability of matching between the target movable equipment and each movable equipment to be tested can be obtained through analysis according to the content of the characteristic information. The above-mentioned relationship characteristic diagram shows the probability distribution of the matching probability between the target mobile device and each mobile device to be tested. If there are multiple target mobile devices in the current frame image, each target mobile device has a corresponding relationship feature map.
The relationship characteristic diagram is considered to be that the convolutional neural network is utilized, and the corresponding relationship between the target movable device and the movable device to be detected is expressed from the aspect of characteristic information, that is, the tracking of the movable device is completed to a certain extent.
And step 405, determining a fourth feature map of the target movable equipment according to the relation feature map and the first feature map.
In the prior art, the first feature map is directly input into a decoding network of the network model for trajectory prediction. In order to solve the problems in the prior art, the first feature map is further processed in this embodiment. That is, the relationship feature map corresponding to the target removable device is fused with the first feature map to determine the fourth feature map.
The relation characteristic diagram and the first characteristic diagram belong to a multi-dimensional matrix in nature, so the fusion of the relation characteristic diagram and the first characteristic diagram can be the multi-dimensional matrix splicing. For example, assume that the relational feature map and the first feature map are both multidimensional matrices whose dimensions are (H, W, C). The dimension of the multidimensional matrix obtained after splicing the two is (2H, W, C). And (4) splicing to obtain a new multidimensional matrix, namely a fourth characteristic diagram. It can be seen that the value of one dimension in the fourth feature map is equal to the sum of the values of the dimension of the relational feature map and the first feature map. It can be understood that, because the relationship characteristic diagram is accompanied by the "tracking effect" of the mobile device, that is, the probability that the target mobile device matches with each mobile device to be tested is included, the corresponding relationship between the target mobile device and the mobile device to be tested is embodied. Therefore, this "tracking effect" is also included in the fourth feature map.
And step 406, determining a track prediction result of the target movable equipment according to the fourth feature map by using the network model.
In this embodiment, the fourth feature map is input to the decoding network of the network model instead of the first feature map, that is, the decoding network can further perform trajectory prediction on the target mobile device and determine a trajectory prediction result of the target mobile device while ensuring more accurate tracking.
According to the technical scheme, the beneficial effects of the embodiment are as follows: analyzing the feature information by using the second feature map and the third feature map to determine a relationship feature map capable of embodying the corresponding relationship between the target movable equipment and the movable equipment to be tested, and obtaining a fourth feature map by combining the relationship feature map and the first feature map; the track prediction of the target movable equipment is carried out through the network model according to the fourth characteristic diagram, so that a track prediction result can be obtained under the condition that the tracking is more accurate; the tracking operation of the movable equipment related in the relation characteristic diagram does not need to be combined with scenes for adaptive adjustment, has universality and does not need to be combined with manual operation.
Fig. 4 shows only a basic embodiment of the method of the present disclosure, and based on this, certain optimization and expansion can be performed, and other preferred embodiments of the method can also be obtained.
Fig. 5 is a flowchart illustrating a trajectory prediction method for a mobile device according to another exemplary embodiment of the disclosure. The embodiment can be applied to electronic equipment. This embodiment will be explained in more detail based on the embodiment shown in fig. 4. In this embodiment, the method specifically includes the following steps:
step 501, determining a first feature map of the target mobile device from the image frame sequence by using a preset network model.
In connection with the current frame image shown in fig. 3, it is assumed that the movable device a is the target movable device. In this step, a first characteristic diagram corresponding to the mobile device a is obtained by using the coding network of the network model. In this embodiment, the first characteristic diagram corresponding to the movable device a is referred to as a first characteristic diagram a, as shown in fig. 6.
Step 502, a second feature map of the target mobile device is obtained from a current frame image in the image frame sequence.
Similarly, in the step, the current frame image is analyzed by using the convolutional neural network, so that a second characteristic diagram corresponding to the movable equipment A is obtained. Since the second feature map is obtained by analyzing the convolutional neural network in this embodiment, the second feature map will include color information, size information, and/or shape information of the target mobile device based on the characteristics of the convolutional neural network. In this embodiment, the second characteristic diagram corresponding to the movable device a is referred to as a second characteristic diagram a, as shown in fig. 6.
Step 503, a third feature map corresponding to each movable device to be tested is acquired from the subsequent frame image.
With reference to the subsequent frame image shown in fig. 3, movable devices a, b, c, and d are all movable devices to be tested. The subsequent frame image is analyzed by using the convolutional neural network to obtain a third feature map corresponding to each movable device to be tested; that is, one third feature map is obtained for each of movable devices a, b, c, and d.
In this embodiment, the third feature map is likewise produced by the convolutional neural network, so, based on the characteristics of such networks, each third feature map includes color information, size information, and/or shape information of the corresponding movable device to be tested.
For convenience of description, the third feature map corresponding to movable device a is referred to as third feature map a, that corresponding to movable device b as third feature map b, that corresponding to movable device c as third feature map c, and that corresponding to movable device d as third feature map d.
Step 504, a fifth feature map corresponding to each movable device to be tested is determined according to the second feature map and the third feature map corresponding to that movable device to be tested.
Specifically, in this embodiment, the second feature map may be spliced with each third feature map to obtain the corresponding fifth feature maps. The second and third feature maps are in essence multi-dimensional matrices, so the fusion of the two can take the form of multi-dimensional matrix concatenation; the new multi-dimensional matrix obtained by the splicing is the fifth feature map.
The splicing process of this embodiment is shown in fig. 6: second feature map A is spliced with third feature map a, third feature map b, third feature map c, and third feature map d respectively, yielding the corresponding fifth feature map a, fifth feature map b, fifth feature map c, and fifth feature map d.
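The splicing described above can be sketched as channel-wise concatenation of multi-dimensional matrices. The feature map dimensions below are illustrative assumptions; the embodiment does not specify the sizes of the feature maps.

```python
import numpy as np

# Hypothetical dimensions: each feature map is C x H x W (channels, height, width).
C, H, W = 8, 4, 4
rng = np.random.default_rng(0)

second = rng.standard_normal((C, H, W))                      # second feature map A
thirds = [rng.standard_normal((C, H, W)) for _ in range(4)]  # third feature maps a..d

# "Splicing" two multi-dimensional matrices along the channel axis:
# each fifth feature map stacks the second map with one third map.
fifths = [np.concatenate([second, third], axis=0) for third in thirds]

print(fifths[0].shape)  # (16, 4, 4): channels double, spatial size is unchanged
```

Concatenating along the channel axis keeps both appearance descriptions intact, letting the later perception model compare the target device with each candidate.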
Step 505, a relationship feature map between the target movable device and each movable device to be tested is determined according to the fifth feature map corresponding to each movable device to be tested by using a preset perception model.
The perception model may be a multilayer perceptron (MLP). This embodiment does not limit the specific structure or computation principle of the perception model; any perception model capable of realizing the same or a similar function can be combined into the overall scheme of this embodiment. Each fifth feature map is input into the perception model to obtain a probability distribution matrix. The probability distribution matrix includes the matching probability distribution between the target movable device and each movable device to be tested, and the relationship feature map can be determined from it.
As shown in fig. 6, inputting fifth feature map a, fifth feature map b, fifth feature map c, and fifth feature map d into the perception model yields relationship feature map A, i.e., the relationship feature map between movable device A and movable devices a, b, c, and d. Relationship feature map A includes the matching probability distribution between movable device A and each of movable devices a, b, c, and d.
If the matching probability between movable device A and movable device a is the highest and exceeds a certain threshold, the two can be considered matched; that is, movable device A and movable device a can be regarded as the same movable device in reality. The relationship feature map thus captures the correspondence between the target movable device and the movable devices to be tested; in other words, the target movable device is tracked.
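The matching step can be sketched as follows. The score vector and the threshold value are illustrative assumptions; the embodiment specifies neither the perception model's raw outputs nor a particular threshold.

```python
import numpy as np

def softmax(x):
    """Turn raw scores into a probability distribution."""
    e = np.exp(x - x.max())   # shift for numerical stability
    return e / e.sum()

# Hypothetical matching scores produced by the perception model (MLP) for
# target device A against candidate devices a, b, c, d.
scores = np.array([4.2, 0.3, -1.0, 0.1])
probs = softmax(scores)       # matching probability distribution, sums to 1

THRESHOLD = 0.5               # illustrative threshold
best = int(np.argmax(probs))
if probs[best] > THRESHOLD:   # here probs[best] is approximately 0.96
    match = "abcd"[best]      # device A is tracked as candidate device 'a'
```

Requiring the maximum probability to exceed a threshold, rather than accepting the argmax unconditionally, avoids a spurious match when the target device has left the scene.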
Step 506, the relationship feature map is spliced with the first feature map to obtain a fourth feature map.
Similarly, the relationship feature map and the first feature map are in essence multi-dimensional matrices, so their fusion can take the form of multi-dimensional matrix splicing; the new multi-dimensional matrix obtained is the fourth feature map.
Referring to fig. 6, in this embodiment relationship feature map A is spliced with first feature map A to obtain fourth feature map A.
Step 507, a predicted trajectory line of the target movable device in the subsequent frame image is determined according to the fourth feature map by using the network model.
In this embodiment, the trajectory prediction result is presented in the form of a predicted trajectory line of the target movable device in the subsequent frame image. That is, after fourth feature map A is input into the decoding network of the network model, the network model predicts the travel track of movable device A over a future period of time and renders that track as a predicted trajectory line in the subsequent frame image. This embodiment thus achieves trajectory prediction for the target movable device on the basis of accurately tracking it.
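As a minimal sketch of the decoding step, assume the decoding network reduces to a single linear map from the flattened fourth feature map to T future (x, y) positions. The feature size, T, and the random weights are all illustrative; the patent does not specify the decoder's structure.

```python
import numpy as np

rng = np.random.default_rng(42)

# Hypothetical fourth feature map, flattened to a single vector.
fourth = rng.standard_normal(64)

# Stand-in for the decoding network: one linear layer mapping the fused
# features to T future waypoints (x, y).
T = 5                                           # predict 5 future positions
W = rng.standard_normal((2 * T, fourth.size)) * 0.1
b = np.zeros(2 * T)

trajectory = (W @ fourth + b).reshape(T, 2)     # T rows of (x, y) coordinates
print(trajectory.shape)  # (5, 2)
```

Drawing the predicted trajectory line then amounts to connecting these T waypoints on the subsequent frame image.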
Exemplary devices
Fig. 7 is a schematic structural diagram of a trajectory prediction apparatus of a movable device according to an exemplary embodiment of the present disclosure. The apparatus of this embodiment is a physical apparatus for executing the methods of figs. 4 and 5. Its technical solution is essentially the same as that of the above embodiments, and the corresponding descriptions there also apply to this embodiment. The apparatus of this embodiment includes:
a first feature map determining module 701, configured to determine a first feature map of the target movable device from the image frame sequence by using a preset network model;
a second feature map obtaining module 702, configured to obtain a second feature map of the target movable device from a current frame image in the image frame sequence;
a third feature map obtaining module 703, configured to obtain, from the subsequent frame image, a third feature map corresponding to each movable device to be tested;
a relationship feature map determining module 704, configured to determine a relationship feature map between the target movable device and each movable device to be tested according to the second feature map and the third feature map;
a fourth feature map determining module 705, configured to determine a fourth feature map of the target movable device according to the relationship feature map and the first feature map; and
a trajectory prediction module 706, configured to determine a trajectory prediction result of the target movable device according to the fourth feature map by using the network model.
Fig. 8 is a schematic structural diagram of the relationship feature map determining module 704 in the trajectory prediction apparatus of the movable device according to another exemplary embodiment of the present disclosure. As shown in fig. 8, in an exemplary embodiment, the relationship feature map determining module 704 includes:
a fifth feature map determining unit 811, configured to splice the second feature map with each third feature map respectively to obtain the corresponding fifth feature maps; and
a relationship feature map determining unit 812, configured to determine, by using a preset perception model, the relationship feature map between the target movable device and each movable device to be tested according to the fifth feature map corresponding to each movable device to be tested.
Fig. 9 is a schematic structural diagram of the relationship feature map determining unit 812 in the trajectory prediction apparatus of the movable device according to another exemplary embodiment of the present disclosure. As shown in fig. 9, in an exemplary embodiment, the relationship feature map determining unit 812 includes:
a matching probability distribution determining subunit 921, configured to input each fifth feature map into the perception model to obtain a probability distribution matrix, wherein the probability distribution matrix includes a matching probability distribution between the target movable device and each movable device to be tested; and
a relationship feature map determining subunit 922, configured to determine the relationship feature map according to the probability distribution matrix.
Fig. 10 is a schematic structural diagram of the trajectory prediction module 706 in the trajectory prediction apparatus of the movable device according to another exemplary embodiment of the present disclosure. As shown in fig. 10, in an exemplary embodiment, the trajectory prediction module 706 includes:
a trajectory line determining unit 1001, configured to determine a predicted trajectory line of the target movable device according to the fourth feature map by using the network model; and
a trajectory line drawing unit 1002, configured to draw the predicted trajectory line of the target movable device in the subsequent frame image.
Exemplary electronic device
Next, an electronic apparatus according to an embodiment of the present disclosure is described with reference to fig. 11. The electronic device may be either or both of the first device 100 and the second device 200, or a stand-alone device separate from them that may communicate with the first device and the second device to receive the collected input signals therefrom.
FIG. 11 illustrates a block diagram of an electronic device in accordance with an embodiment of the disclosure.
As shown in fig. 11, the electronic device 10 includes one or more processors 11 and memory 12.
The processor 11 may be a Central Processing Unit (CPU) or other form of processing unit having data processing capabilities and/or instruction execution capabilities, and may control other components in the electronic device 10 to perform desired functions.
Memory 12 may include one or more computer program products that may include various forms of computer-readable storage media, such as volatile memory and/or non-volatile memory. The volatile memory may include, for example, Random Access Memory (RAM), cache memory (cache), and/or the like. The non-volatile memory may include, for example, Read Only Memory (ROM), hard disk, flash memory, etc. One or more computer program instructions may be stored on the computer-readable storage medium and executed by the processor 11 to implement the trajectory prediction method of the movable apparatus of the various embodiments of the present disclosure described above and/or other desired functions. Various contents such as an input signal, a signal component, a noise component, etc. may also be stored in the computer-readable storage medium.
In one example, the electronic device 10 may further include: an input device 13 and an output device 14, which are interconnected by a bus system and/or other form of connection mechanism (not shown).
For example, when the electronic device is the first device 100 or the second device 200, the input device 13 may be a microphone or a microphone array as described above for capturing an input signal of a sound source. When the electronic device is a stand-alone device, the input means 13 may be a communication network connector for receiving the acquired input signals from the first device 100 and the second device 200.
The input device 13 may also include, for example, a keyboard, a mouse, and the like.
The output device 14 may output various information including the determined distance information, direction information, and the like to the outside. The output devices 14 may include, for example, a display, speakers, a printer, and a communication network and its connected remote output devices, among others.
Of course, for simplicity, only some of the components of the electronic device 10 relevant to the present disclosure are shown in fig. 11, omitting components such as buses, input/output interfaces, and the like. In addition, the electronic device 10 may include any other suitable components depending on the particular application.
Exemplary computer program product and computer-readable storage Medium
In addition to the above-described methods and apparatus, embodiments of the present disclosure may also be a computer program product comprising computer program instructions that, when executed by a processor, cause the processor to perform the steps in the trajectory prediction method of the movable device according to various embodiments of the present disclosure described in the "Exemplary methods" section above in this specification.
The computer program product may write program code for carrying out operations of embodiments of the present disclosure in any combination of one or more programming languages, including object-oriented programming languages such as Java and C++, as well as conventional procedural programming languages such as the "C" programming language. The program code may execute entirely on the user's computing device, partly on the user's device as a stand-alone software package, partly on the user's computing device and partly on a remote computing device, or entirely on the remote computing device or server.
Furthermore, embodiments of the present disclosure may also be a computer-readable storage medium having stored thereon computer program instructions that, when executed by a processor, cause the processor to perform steps in a trajectory prediction method of a movable apparatus according to various embodiments of the present disclosure described in the "exemplary methods" section above in this specification.
The computer-readable storage medium may take any combination of one or more readable media. The readable medium may be a readable signal medium or a readable storage medium. A readable storage medium may include, for example, but not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or a combination of any of the foregoing. More specific examples (a non-exhaustive list) of the readable storage medium include: an electrical connection having one or more wires, a portable disk, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
The foregoing describes the general principles of the present disclosure in conjunction with specific embodiments, however, it is noted that the advantages, effects, etc. mentioned in the present disclosure are merely examples and are not limiting, and they should not be considered essential to the various embodiments of the present disclosure. Furthermore, the foregoing disclosure of specific details is for the purpose of illustration and description and is not intended to be limiting, since the disclosure is not intended to be limited to the specific details so described.
The block diagrams of devices, apparatuses, and systems referred to in this disclosure are given only as illustrative examples and are not intended to require or imply that the connections, arrangements, and configurations must be made in the manner shown. As will be appreciated by those skilled in the art, these devices, apparatuses, and systems may be connected, arranged, or configured in any manner. Words such as "including," "comprising," "having," and the like are open-ended words meaning "including, but not limited to," and are used interchangeably therewith. The words "or" and "and" as used herein mean, and are used interchangeably with, the word "and/or," unless the context clearly dictates otherwise. The word "such as" is used herein to mean, and is used interchangeably with, the phrase "such as but not limited to."
It is also noted that in the devices, apparatuses, and methods of the present disclosure, each component or step can be decomposed and/or recombined. These decompositions and/or recombinations are to be considered equivalents of the present disclosure.
The previous description of the disclosed aspects is provided to enable any person skilled in the art to make or use the present disclosure. Various modifications to these aspects will be readily apparent to those skilled in the art, and the generic principles defined herein may be applied to other aspects without departing from the scope of the disclosure. Thus, the present disclosure is not intended to be limited to the aspects shown herein but is to be accorded the widest scope consistent with the principles and novel features disclosed herein.
The foregoing description has been presented for purposes of illustration and description. Furthermore, this description is not intended to limit embodiments of the disclosure to the form disclosed herein. While a number of example aspects and embodiments have been discussed above, those of skill in the art will recognize certain variations, modifications, alterations, additions and sub-combinations thereof.

Claims (10)

1. A trajectory prediction method of a movable device, comprising:
determining a first feature map of a target movable device from an image frame sequence by using a preset network model;
acquiring a second feature map of the target movable device from a current frame image in the image frame sequence;
acquiring, from a subsequent frame image in the image frame sequence, a third feature map corresponding to each movable device to be tested;
determining a relationship feature map between the target movable device and each movable device to be tested according to the second feature map and the third feature map;
determining a fourth feature map of the target movable device according to the relationship feature map and the first feature map; and
determining a trajectory prediction result of the target movable device according to the fourth feature map by using the network model.
2. The method of claim 1, wherein the determining a relationship feature map between the target movable device and each movable device to be tested according to the second feature map and the third feature map comprises:
determining a fifth feature map corresponding to each movable device to be tested according to the second feature map and the third feature map corresponding to that movable device to be tested; and
determining, by using a preset perception model, the relationship feature map between the target movable device and each movable device to be tested according to the fifth feature map corresponding to each movable device to be tested.
3. The method of claim 2, wherein the determining a fifth feature map corresponding to each movable device to be tested comprises:
splicing the second feature map with each third feature map respectively to obtain the corresponding fifth feature maps.
4. The method of claim 2, wherein the determining, by using the preset perception model, the relationship feature map between the target movable device and each movable device to be tested comprises:
inputting each fifth feature map into the perception model to obtain a probability distribution matrix, wherein the probability distribution matrix comprises a matching probability distribution between the target movable device and each movable device to be tested; and
determining the relationship feature map according to the probability distribution matrix.
5. The method of claim 1, wherein the determining a fourth feature map of the target movable device according to the relationship feature map and the first feature map comprises:
splicing the relationship feature map with the first feature map to obtain the fourth feature map.
6. The method of claim 1, wherein the determining a trajectory prediction result of the target movable device comprises:
determining a predicted trajectory line of the target movable device in the subsequent frame image.
7. The method according to any one of claims 1 to 6, wherein:
the second feature map includes color information, size information, and/or shape information of the target movable device; and
the third feature map includes color information, size information, and/or shape information of the movable device to be tested.
8. An apparatus for predicting a trajectory of a movable device, comprising:
a first feature map determining module, configured to determine a first feature map of a target movable device from an image frame sequence by using a preset network model;
a second feature map obtaining module, configured to obtain a second feature map of the target movable device from a current frame image in the image frame sequence;
a third feature map obtaining module, configured to obtain, from a subsequent frame image, a third feature map corresponding to each movable device to be tested;
a relationship feature map determining module, configured to determine a relationship feature map between the target movable device and each movable device to be tested according to the second feature map and the third feature map;
a fourth feature map determining module, configured to determine a fourth feature map of the target movable device according to the relationship feature map and the first feature map; and
a trajectory prediction module, configured to determine a trajectory prediction result of the target movable device according to the fourth feature map by using the network model.
9. A computer-readable storage medium storing a computer program for executing the trajectory prediction method of a movable apparatus according to any one of claims 1 to 7.
10. An electronic device, the electronic device comprising:
a processor;
a memory for storing the processor-executable instructions;
the processor is configured to read the executable instructions from the memory and execute the instructions to implement the trajectory prediction method of the mobile device according to any one of claims 1 to 7.
CN202010038689.2A 2020-01-14 2020-01-14 Track prediction method and device for movable equipment Pending CN113129330A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010038689.2A CN113129330A (en) 2020-01-14 2020-01-14 Track prediction method and device for movable equipment


Publications (1)

Publication Number Publication Date
CN113129330A true CN113129330A (en) 2021-07-16

Family

ID=76771371

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010038689.2A Pending CN113129330A (en) 2020-01-14 2020-01-14 Track prediction method and device for movable equipment

Country Status (1)

Country Link
CN (1) CN113129330A (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6754389B1 (en) * 1999-12-01 2004-06-22 Koninklijke Philips Electronics N.V. Program classification using object tracking
US20170286774A1 (en) * 2016-04-04 2017-10-05 Xerox Corporation Deep data association for online multi-class multi-object tracking
CN107492113A (en) * 2017-06-01 2017-12-19 南京行者易智能交通科技有限公司 A kind of moving object in video sequences position prediction model training method, position predicting method and trajectory predictions method
CN108053410A (en) * 2017-12-11 2018-05-18 厦门美图之家科技有限公司 Moving Object Segmentation method and device
CN108229468A (en) * 2017-06-28 2018-06-29 北京市商汤科技开发有限公司 Vehicle appearance feature recognition and vehicle retrieval method, apparatus, storage medium, electronic equipment
CN108876813A (en) * 2017-11-01 2018-11-23 北京旷视科技有限公司 Image processing method, device and equipment for object detection in video
CN109299305A (en) * 2018-10-30 2019-02-01 湖北工业大学 A kind of spatial image searching system based on multi-feature fusion and search method
CN109934183A (en) * 2019-03-18 2019-06-25 北京市商汤科技开发有限公司 Image processing method and device, detection device and storage medium
CN110532916A (en) * 2019-08-20 2019-12-03 北京地平线机器人技术研发有限公司 A kind of motion profile determines method and device
CN110633718A (en) * 2018-06-21 2019-12-31 北京京东尚科信息技术有限公司 Method and device for determining a driving area in an environment image



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination