WO2022021924A1 - Target tracking method and related system, storage medium, and intelligent driving vehicle - Google Patents
Target tracking method and related system, storage medium, and intelligent driving vehicle
- Publication number
- WO2022021924A1 (PCT/CN2021/084784)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- frame
- image
- objects
- predicted position
- positional relationship
- Prior art date
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/277—Analysis of motion involving stochastic approaches, e.g. using Kalman filters
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/70—Determining position or orientation of objects or cameras
- G06T7/73—Determining position or orientation of objects or cameras using feature-based methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/045—Combinations of networks
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/04—Architecture, e.g. interconnection topology
- G06N3/049—Temporal neural networks, e.g. delay elements, oscillating neurons or pulsed inputs
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06N—COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
- G06N3/00—Computing arrangements based on biological models
- G06N3/02—Neural networks
- G06N3/08—Learning methods
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20024—Filtering details
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30241—Trajectory
Definitions
- The present application relates to the technical field of computer vision, and in particular, to a target tracking method and related system, storage medium, and intelligent driving vehicle.
- Visual target tracking refers to the technology of finding the corresponding target position in subsequent frames in a video sequence given the target position of the current frame.
- Multi-target tracking based on video image sequences is one of the important tasks in autonomous driving systems.
- In the prior art, given the first frame of a video and the initial position of a target, the tracker can give the position of the target in the following frames. However, due to factors such as occlusion, motion blur, illumination changes, target appearance changes, background distractors, and scale changes, drift is prone to occur during tracking, that is, the tracker fails. For example, for a partially occluded vehicle target, erroneous position predictions are prone to occur due to tracking drift.
- Visual target tracking is a fundamental problem in computer vision with wide real-world applications, such as unmanned driving, traffic management, intelligent video surveillance, and AR/VR, so suppressing tracking drift is of great significance and value. Because multi-target tracking involves difficult scenarios such as occlusion, motion blur, illumination changes, target appearance changes, background distractors, and scale changes, the existing technology often cannot determine whether a trajectory has temporarily disappeared due to occlusion or other reasons, or has left the detection area and should stop being tracked, causing some occluded trajectories to be terminated by misjudgment. When the originally tracked target reappears, if its original track has already stopped, the target's ID jumps. Although some existing methods attempt to use temporal features across multiple frames of images, they still easily fail to track a difficult target in complex scenes.
- The present application discloses a target tracking method, a related system, a storage medium, and an intelligent driving vehicle, which can realize accurate prediction of target positions.
- In a first aspect, an embodiment of the present application provides a target tracking method, which includes: acquiring the intra-frame relative positional relationships of N objects in an i-th frame image, and acquiring the position of at least one of the N objects in each of M frames of images, where the i-th frame image is the last image acquired in time among the M frames, and M, N, and i are all positive integers; for any object A among the N objects, obtaining a first predicted position of the object A in a first image according to M' positions, where the M' positions are the positions of the object A in the M' frames, among the M frames, that contain the object A, M' is a positive integer not greater than M, and the first image is an image acquired after the i-th frame in time; obtaining a second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationships of the N objects in the i-th frame image; and determining the predicted position of the object A in the first image according to the first predicted position and the second predicted position of the object A.
- Determining the predicted position of the object A in the first image according to the first predicted position and the second predicted position includes: if the distance between the first predicted position and the second predicted position of the object A is greater than a preset threshold, obtaining the predicted position of the object A in the first image according to a moving-average (sliding-window) filter algorithm.
- Determining the predicted position of the object A in the first image according to the first predicted position and the second predicted position also includes: if the distance between the first predicted position and the second predicted position of the object A is not greater than the preset threshold, obtaining the predicted position of the object A according to the first predicted position of the object A; or obtaining the predicted position of the object A according to the second predicted position of the object A; or obtaining the predicted position of the object A according to both the first predicted position and the second predicted position of the object A.
- Obtaining the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationships of the N objects in the i-th frame image includes: taking an object E as the first parent node, where the object E is the object with the highest confidence in the first image; obtaining the intra-frame relative position between the first parent node and a child node from the intra-frame relative positional relationships of the N objects in the i-th frame image; obtaining the second predicted position of the child node according to the intra-frame relative position between the first parent node and the child node and the first predicted position of the first parent node; taking the child node as the second parent node, and obtaining the intra-frame relative position between the second parent node and its child node from the intra-frame relative positional relationships of the N objects in the i-th frame image; obtaining the second predicted position of that child node according to the intra-frame relative position between the second parent node and the child node and the second predicted position of the second parent node; and so on, until the second predicted position of each of the N objects in the first image is obtained, where the second predicted position of the object E is the same as its first predicted position. This embodiment represents the intra-frame relative positional relationships with a tree structure such as a minimum spanning tree.
- Alternatively, obtaining the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationships of the N objects in the i-th frame image includes: obtaining the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationships of the N objects in the i-th frame image and the first predicted position of an object E, where the intra-frame relative positional relationships of the N objects include the intra-frame relative position of each of N-1 objects relative to the object E, the object E is any one of the N objects, the N-1 objects are the objects other than the object E among the N objects, and the second predicted position of the object E is the same as its first predicted position.
- Obtaining the intra-frame relative positional relationships of the N objects in the i-th frame image includes: obtaining the intra-frame relative positional relationships of W objects in a second image, and obtaining the positions of the N objects in the i-th frame image, where W is a positive integer, the N objects include at least one of the W objects, and the second image is an image acquired before the i-th frame in time; obtaining the relative positional relationships between the positions of the N objects in the i-th frame image according to those positions; and obtaining the intra-frame relative positional relationships of the N objects in the i-th frame image according to the relative positional relationships between the positions of the N objects and the intra-frame relative positional relationships of the W objects in the second image.
- If the N objects do not include an object C among the W objects, obtaining the intra-frame relative positional relationships of the N objects in the i-th frame image according to the relative positional relationships between the positions of the N objects and the intra-frame relative positional relationships of the W objects in the second image includes: deleting the relative positional relationship between each object and the object C from the intra-frame relative positional relationships of the W objects in the second image to obtain a reference intra-frame relative positional relationship for the i-th frame image; and obtaining the intra-frame relative positional relationships of the N objects in the i-th frame image according to the relative positional relationships between the positions of the N objects and the reference intra-frame relative positional relationship.
- The method further includes: determining the position of the object A in the first image according to the predicted position of the object A, which includes: acquiring the first image, and obtaining the detection positions of Q objects in the first image according to the first image, where Q is a positive integer; if the Q objects include the object A, determining the position of the object A in the first image according to the predicted position of the object A and the detected position of the object A.
- If the Q objects do not include the object A among the N objects, it is confirmed that the object A has disappeared in the first image. If the Q objects include an object B that does not match any of the N objects, the position of the object B in the first image is determined according to the detected position of the object B.
- In a second aspect, the present application provides a target tracking system, including: a position acquisition module, configured to obtain the intra-frame relative positional relationships of N objects in an i-th frame image, and obtain the position of at least one of the N objects in each of M frames of images, where the i-th frame image is the last image acquired in time among the M frames, and M, N, and i are all positive integers; a first prediction module, configured to, for any object A among the N objects, obtain a first predicted position of the object A in a first image according to M' positions, where the M' positions are the positions of the object A in the M' frames, among the M frames, that contain the object A, M' is a positive integer not greater than M, and the first image is an image acquired after the i-th frame in time; a second prediction module, configured to obtain a second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationships of the N objects in the i-th frame image; and a target prediction module, configured to determine the predicted position of the object A in the first image according to the first predicted position and the second predicted position of the object A.
- The target prediction module is specifically configured to: if the distance between the first predicted position and the second predicted position of the object A is greater than a preset threshold, obtain the predicted position of the object A in the first image according to the moving-average filter algorithm.
- The target prediction module is also specifically configured to: if the distance between the first predicted position and the second predicted position of the object A is not greater than the preset threshold, obtain the predicted position of the object A according to the first predicted position of the object A; or obtain the predicted position of the object A according to the second predicted position of the object A; or obtain the predicted position of the object A according to both the first predicted position and the second predicted position of the object A.
- The second prediction module is specifically configured to: take an object E as the first parent node, where the object E is the object with the highest confidence in the first image; obtain the intra-frame relative position between the first parent node and a child node from the intra-frame relative positional relationships of the N objects in the i-th frame image; obtain the second predicted position of the child node according to the intra-frame relative position between the first parent node and the child node and the first predicted position of the first parent node; take the child node as the second parent node, and obtain the intra-frame relative position between the second parent node and its child node from the intra-frame relative positional relationships of the N objects in the i-th frame image; obtain the second predicted position of that child node according to the intra-frame relative position between the second parent node and the child node and the second predicted position of the second parent node; and so on, until the second predicted position of each of the N objects in the first image is obtained, where the second predicted position of the object E is the same as its first predicted position.
- Alternatively, the second prediction module is specifically configured to: obtain the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationships of the N objects in the i-th frame image and the first predicted position of an object E, where the intra-frame relative positional relationships of the N objects include the intra-frame relative position of each of N-1 objects relative to the object E, the object E is any one of the N objects, the N-1 objects are the objects other than the object E among the N objects, and the second predicted position of the object E is the same as its first predicted position.
- When obtaining the intra-frame relative positional relationships of the N objects in the i-th frame image, the position acquisition module is specifically configured to: obtain the intra-frame relative positional relationships of W objects in a second image, and obtain the positions of the N objects in the i-th frame image, where W is a positive integer, the N objects include at least one of the W objects, and the second image is an image acquired before the i-th frame in time; obtain the relative positional relationships between the positions of the N objects in the i-th frame image according to those positions; and obtain the intra-frame relative positional relationships of the N objects in the i-th frame image according to the relative positional relationships between the positions of the N objects and the intra-frame relative positional relationships of the W objects in the second image.
- If the N objects do not include an object C among the W objects, the position acquisition module is further configured to: delete the relative positional relationship between each object and the object C from the intra-frame relative positional relationships of the W objects in the second image to obtain a reference intra-frame relative positional relationship for the i-th frame image; and obtain the intra-frame relative positional relationships of the N objects in the i-th frame image according to the relative positional relationships between the positions of the N objects and the reference intra-frame relative positional relationship.
- The system further includes a target position acquisition module, configured to determine the position of the object A in the first image according to the predicted position of the object A, which includes: acquiring the first image, and obtaining the detection positions of Q objects in the first image according to the first image, where Q is a positive integer; if the Q objects include the object A, determining the position of the object A in the first image according to the predicted position of the object A and the detected position of the object A.
- If the Q objects do not include the object A among the N objects, it is confirmed that the object A has disappeared in the first image. If the Q objects include an object B that does not match any of the N objects, the position of the object B in the first image is determined according to the detected position of the object B.
- In a third aspect, the present application provides a computer storage medium, including computer instructions, which, when run on an electronic device, cause the electronic device to perform the method in any possible implementation of the first aspect.
- In a fourth aspect, embodiments of the present application provide a computer program product, which, when run on a computer, enables the computer to execute the method in any possible implementation of the first aspect.
- In a fifth aspect, embodiments of the present application provide an intelligent driving vehicle, including a traveling system, a sensing system, a control system, and a computer system, where the computer system is configured to execute the method in any possible implementation of the first aspect.
- It can be understood that the system of the second aspect, the computer storage medium of the third aspect, the computer program product of the fourth aspect, and the intelligent driving vehicle of the fifth aspect are all used to execute the method provided in any implementation of the first aspect. Therefore, for the beneficial effects they can achieve, reference may be made to the beneficial effects of the corresponding method, which are not repeated here.
- The accompanying drawings used in the embodiments of the present application are introduced below.
- FIG. 1 is a schematic diagram of an application scenario to which a target tracking method provided by an embodiment of the present application is applicable;
- FIG. 2 is a schematic diagram of another application scenario to which a target tracking method provided by an embodiment of the present application is applicable;
- FIG. 3 is a schematic flowchart of a target tracking method provided by an embodiment of the present application.
- FIG. 4a is a schematic diagram of an intra-frame relative positional relationship provided by an embodiment of the present application;
- FIG. 4b is a schematic diagram of another intra-frame relative positional relationship provided by an embodiment of the present application;
- FIG. 5 is a schematic diagram of an application of a target tracking method provided by an embodiment of the present application;
- FIG. 6 is a schematic diagram of another application of a target tracking method provided by an embodiment of the present application;
- FIG. 7 is a schematic structural diagram of a target position prediction system provided by an embodiment of the present application;
- FIG. 8 is a schematic structural diagram of a target tracking apparatus provided by an embodiment of the present application.
- The embodiments of the present application are described below with reference to the accompanying drawings. The terms used in the implementation part are only intended to explain specific embodiments of the present application, not to limit it.
- The present application provides a target tracking method in which the target predicted position of each object in the current frame image is obtained based on the intra-frame relative positional relationships of the objects in the previous frame image. In this way, tracking drift can be effectively suppressed, the false tracking rate during target tracking is reduced, the stability of target tracking is improved, and the tracker can run effectively for a long time.
- As shown in FIG. 1, the embodiments of the present application can be widely applied to the target tracking part of unmanned driving systems.
- Target tracking can compensate for the limited speed of target detection and can smooth the detection results. Therefore, target tracking is a very important part of the visual perception module.
- Commonly used trackers are CT, STC, CSK, KCF, etc.
- The speed of a tracker can generally reach 30-60 FPS, and some trackers even reach 200-300 FPS.
- However, in real tracking scenarios, many trackers cannot self-check their tracking accuracy; once a tracker drifts, it outputs wrong positions.
- In unmanned driving, outputting a wrong position means reporting a car where there is none, which directly prevents the planning and control module from making reasonable decisions. Therefore, suppressing tracking drift is very important.
- This solution can be used to improve the visual perception part of unmanned driving and improve the accuracy of output results.
- As shown in FIG. 2, the embodiments of the present application can also be widely applied to the target tracking part of intelligent video surveillance systems.
- At present, the demand for security protection and on-site recording and alarm systems in fields such as banking, electric power, transportation, security inspection, and military facilities is growing daily, with ever higher requirements, and video surveillance has been widely applied in all aspects of production and life.
- Intelligent video surveillance systems are already widespread in public places such as banks, shopping malls, stations, and traffic intersections.
- The main task of intelligent video surveillance is to detect moving objects in the collected pictures, classify them, find the moving objects of the categories of interest, and track them, identifying their behavior during tracking. Once risky behavior is detected, an alarm is triggered to stop its further deterioration.
- Object tracking can make up for the lack of object detection speed and concatenate the same object in consecutive frames for further analysis.
- This solution can be used to improve the target tracking part of video surveillance, so that the perception results can be accurately delivered to the next processing module, such as identity recognition or anomaly detection.
- The target tracking method provided by the embodiments of the present application is described in detail below. Referring to FIG. 3, it is a schematic flowchart of a target tracking method provided by an embodiment of the present application. The method includes steps 301-304, as follows:
- 301. Acquire the intra-frame relative positional relationships of N objects in the i-th frame image, and acquire the position of at least one of the N objects in each of M frames of images, where the i-th frame image is the last image acquired in time among the M frames, and M, N, and i are all positive integers.
- The objects may be people or things, for example vehicles, pedestrians, or obstacles. The intra-frame relative positional relationship may be obtained based on the relative position vectors of the objects in the image coordinate system; other intra-frame relative relationships, such as the relative speeds of objects within a frame, may also be used and are not specifically limited here. The above M frames of images are images each including at least one of the above N objects.
- The above prediction currently concerns, for example, the predicted position of each object in the (i+1)-th frame image. For example, suppose the target predicted positions of the objects in the 6th frame are currently being predicted, and the 2nd, 4th, and 5th frames acquired in time order each contain at least one of the N objects; then the M frames of images may be, for example, the 2nd, 4th, and 5th frames. Of course, only the 4th and 5th frames, or only the 5th frame, may also be acquired. No specific limitation is made here.
- Any frame after the i-th frame may be predicted, for example an image within a 3-minute interval in time; of course, the time interval is not limited here. Specifically, the (i+2)-th frame, the (i+3)-th frame, and so on may also be predicted from the i-th frame.
- For example, when a tracker is used for target position prediction, the tracker may first obtain the intra-frame relative positional relationships of the N objects in the i-th frame image, and obtain the position of at least one of the N objects in each of the M frames of images.
- The above position may be the position of an object in the image coordinate system, or the position of the object in the world coordinate system; this solution does not specifically limit this. This position is the final position of the object output by the tracker.
- Obtaining the intra-frame relative positional relationships of the N objects in the i-th frame image may include S3011-S3013, as follows:
- S3011. Acquire the intra-frame relative positional relationships of W objects in the (i-1)-th frame image, and acquire the positions of the N objects in the i-th frame image, where W is a positive integer, i is not less than 2, and the N objects include at least one of the W objects.
- As an optional implementation, the intra-frame relative positional relationships of the objects in a frame of image can be obtained by separately acquiring the relative positional relationship between every two objects in the image. For example, if the frame contains object 1, object 2, object 3, and object 4, the intra-frame relative positional relationships between object pairs 1-2, 1-3, 1-4, 2-3, 2-4, and 3-4 can be obtained, as shown in FIG. 4a.
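- The following is a minimal Python sketch, not part of the patent, of this pairwise construction; the dictionary representation and function name are assumptions made here for illustration.

```python
import numpy as np

def pairwise_relative_positions(centers):
    """Intra-frame relative position vectors between every pair of objects.

    `centers` is a sequence of (x, y) object centers in the image coordinate
    system; the result maps an object pair (j, k), j < k, to the vector from
    object j to object k.
    """
    relations = {}
    n = len(centers)
    for j in range(n):
        for k in range(j + 1, n):
            relations[(j, k)] = np.asarray(centers[k], float) - np.asarray(centers[j], float)
    return relations

# Example: four objects as in FIG. 4a -> relations 1-2, 1-3, ..., 3-4.
centers = [(100, 200), (150, 220), (300, 180), (320, 400)]
print(pairwise_relative_positions(centers))
```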
- As another optional implementation, for the objects in the first frame image, the confidence of each object can be obtained, for example through detection by a detector, and the object with the highest confidence is selected as the parent node. Then, starting from the parent node, with the Euclidean distances between targets in the image coordinate system as edge weights, a minimum spanning tree is built over the graph formed by all the targets using Kruskal's algorithm or Prim's algorithm, as shown in FIG. 4b. This establishes the intra-frame target structure model, forms the intra-frame target data association, and yields the intra-frame relative positional relationships.
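- Below is a hedged Python sketch of this tree construction using Prim's algorithm; the function name and the returned edge representation are hypothetical, and a real implementation would typically take the detector's outputs directly.

```python
import numpy as np

def build_intra_frame_tree(centers, confidences):
    """Minimum spanning tree over object centers (Prim's algorithm).

    The highest-confidence object is the root (parent node); edge weights
    are Euclidean distances in the image coordinate system. Returns the root
    index and a list of (parent, child, relative_position) edges, where the
    relative position is the vector from parent to child.
    """
    centers = np.asarray(centers, dtype=float)
    n = len(centers)
    root = int(np.argmax(confidences))
    in_tree = {root}
    edges = []
    while len(in_tree) < n:
        best = None  # (distance, parent, child)
        for p in in_tree:
            for c in range(n):
                if c in in_tree:
                    continue
                d = np.linalg.norm(centers[c] - centers[p])
                if best is None or d < best[0]:
                    best = (d, p, c)
        _, p, c = best
        in_tree.add(c)
        edges.append((p, c, centers[c] - centers[p]))
    return root, edges

root, edges = build_intra_frame_tree(
    [(100, 200), (150, 220), (300, 180), (320, 400)],
    confidences=[0.9, 0.7, 0.95, 0.6])
```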
- The first frame image may be any frame acquired in time, or the image corresponding to the appearance of a specific object; this is not specifically limited here.
- Then, after the second frame is predicted and the target positions of the objects in the second frame are obtained, the intra-frame relative positional relationships of the first frame are updated to obtain those of the second frame, and so on. In this way, the intra-frame relative positional relationships of the W objects in the (i-1)-th frame image can be obtained.
- S3012. Obtain the relative positional relationships between the target positions of the N objects in the i-th frame image according to the target positions of the N objects in the i-th frame image.
- Specifically, the relative positions between the positions of every two objects can be obtained separately, yielding the relative positional relationships between the positions of the N objects in the i-th frame image.
- S3013. Obtain the intra-frame relative positional relationships of the N objects in the i-th frame image according to the relative positional relationships between the target positions of the N objects and the intra-frame relative positional relationships of the W objects in the (i-1)-th frame image.
- Specifically, the intra-frame relative positional relationships of the N objects in the i-th frame image can be obtained by averaging the relative positions between the target positions of the N objects with the intra-frame relative positions of the W objects in the (i-1)-th frame image.
- When the W objects in the (i-1)-th frame image match a subset of the N objects in the i-th frame image, the intra-frame relative positional relationship of each matched pair is obtained by averaging its intra-frame relative positional relationship in the (i-1)-th frame image with the relative positional relationship between the corresponding target positions in the i-th frame image.
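- A one-line sketch of this update rule, assuming relative positions are stored as NumPy vectors; the weighting parameter alpha is an illustrative generalization added here (the text describes a plain average, i.e., alpha = 0.5).

```python
def update_relation(prev_relation, current_relation, alpha=0.5):
    """Update the intra-frame relative position of a matched object pair.

    `prev_relation` is the relative position vector kept from frame i-1;
    `current_relation` is the vector between the corresponding target
    positions in frame i. With alpha = 0.5 this is the plain average the
    patent describes; other alphas are shown only as a possible variation.
    """
    return alpha * prev_relation + (1.0 - alpha) * current_relation
```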
- For objects among the W objects in the (i-1)-th frame image and the N objects in the i-th frame image that are not matched, there are two cases. 1) If an object C in the (i-1)-th frame image matches none of the N objects in the i-th frame image, the object C has disappeared in the i-th frame, and the intra-frame relative positional relationships of the object C in the (i-1)-th frame image are deleted; that is, the intra-frame relative positional relationships of the N objects in the i-th frame image do not include those of the object C. 2) If an object D in the i-th frame image matches none of the W objects in the (i-1)-th frame image, the object D is new, and the intra-frame relative positional relationships of the object D in the i-th frame image are the relative positional relationships between the target position of each object in the i-th frame image and the target position of the object D.
- 302. For any object A among the N objects, obtain a first predicted position of the object A in a first image according to M' positions, where the M' positions are the positions of the object A in the M' frames, among the M frames, that contain the object A, M' is a positive integer not greater than M, and the first image is an image acquired after the i-th frame in time.
- Among the M frames of images obtained above, any frame may contain at least one of the N objects; therefore, for any object, M' target positions can be obtained, where M' is not less than 1 and not greater than M.
- The first predicted position of the object A is obtained based on the above M' positions. For example, it may be obtained by averaging the M' positions of the object A, or calculated with preset weights, for example with larger weights for positions closer in time to the first image.
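- A small sketch of this step, assuming positions are stored as NumPy arrays ordered from oldest to newest; the recency weights in the example are illustrative.

```python
import numpy as np

def first_predicted_position(positions, weights=None):
    """First predicted position of an object from its M' known positions.

    `positions` is an (M', 2) array ordered from oldest to newest. With no
    weights this is a plain average; otherwise a weighted average, e.g. with
    larger weights for frames closer in time to the first image.
    """
    positions = np.asarray(positions, dtype=float)
    if weights is None:
        return positions.mean(axis=0)
    weights = np.asarray(weights, dtype=float)
    return (positions * weights[:, None]).sum(axis=0) / weights.sum()

# Recency-weighted example over three frames that contain the object.
print(first_predicted_position([(10, 10), (12, 11), (14, 12)],
                               weights=[1, 2, 3]))
```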
- This solution can predict any frame after the i-th frame, for example an image within a 3-minute interval in time; of course, the time interval is not limited here. Specifically, the (i+1)-th, (i+2)-th, or (i+3)-th frame may be predicted from the i-th frame; that is, the first image may be the (i+1)-th, (i+2)-th, or (i+3)-th frame image, or the like.
- 303. Obtain a second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationships of the N objects in the i-th frame image.
- Based on the intra-frame relative positional relationships of the N objects in the i-th frame image obtained in step 301 and the first predicted positions of the N objects obtained in step 302, the second predicted position of each of the N objects in the first image is obtained. Specifically, any object A among the N objects can be selected; based on the first predicted position of the object A and the intra-frame relative positional relationship between each object in the i-th frame image and the object A, the second predicted position of each of the N objects in the (i+1)-th frame image can be obtained. For the object A itself, the second predicted position is the same as its first predicted position.
- Optionally, the above object A may be the object with the highest confidence.
- Further, multiple objects may be selected, and the second predicted position of each of the N objects in the first image may be obtained based on the first predicted positions of the multiple objects.
- Exemplarily, referring to FIG. 4a, obtaining the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationships of the N objects in the i-th frame image includes: obtaining the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationships of the N objects in the i-th frame image and the first predicted position of an object E, where the intra-frame relative positional relationships of the N objects include the intra-frame relative position of each of N-1 objects relative to the object E, the object E is any one of the N objects, the N-1 objects are the objects other than the object E among the N objects, and the second predicted position of the object E is the same as its first predicted position.
- Alternatively, referring to FIG. 4b, obtaining the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationships of the N objects in the i-th frame image includes: taking an object E as the first parent node, where the object E is the object with the highest confidence in the first image; obtaining the intra-frame relative position between the first parent node and a child node from the intra-frame relative positional relationships of the N objects in the i-th frame image; obtaining the second predicted position of the child node according to that relative position and the first predicted position of the first parent node; taking the child node as the second parent node and repeating the process with its children; and so on, until the second predicted position of each of the N objects in the first image is obtained, where the second predicted position of the object E is the same as its first predicted position.
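- The following sketch shows the tree-based propagation just described, reusing the (parent, child, relative_position) edges from the earlier minimum-spanning-tree sketch; these interfaces are assumptions made for illustration.

```python
import numpy as np

def second_predicted_positions(root, edges, first_pred_root):
    """Propagate second predicted positions down the intra-frame tree.

    `edges` is a list of (parent, child, relative_position) triples built
    from the i-th frame image; `first_pred_root` is the first predicted
    position of the root object E in the first image. The root keeps its
    first predicted position; each child is placed at its parent's predicted
    position plus the stored intra-frame relative position.
    """
    pred = {root: np.asarray(first_pred_root, dtype=float)}
    # Edges produced by Prim's algorithm grow outward from the root, so a
    # parent is always predicted before its children.
    for parent, child, rel in edges:
        pred[child] = pred[parent] + rel
    return pred
```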
- 304. Determine the predicted position of the object A in the first image according to the first predicted position and the second predicted position of the object A.
- That is, in this solution the predicted position of each object is obtained based on its first predicted position and second predicted position. Specifically, if the distance between the first predicted position and the second predicted position of the object A is greater than a preset threshold, the predicted position of the object A in the first image is obtained according to the moving-average filter algorithm.
- Obtaining the predicted position of the object A in the first image according to the moving-average filter algorithm means obtaining it from the positions of the object A in at least one frame before the first image, for example the average of the positions of the object A in the (i-1)-th frame and the i-th frame.
- If the distance between the first predicted position and the second predicted position of the object A is not greater than the preset threshold, the predicted position of the object A is obtained according to the first predicted position and/or the second predicted position of the object A. For example, the first predicted position of the object A may be used as its predicted position; or the second predicted position may be used; or the predicted position may be obtained from both, for example as the average of the first predicted position and the second predicted position. No specific limitation is made here.
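- A compact sketch of this decision rule; averaging the two predictions and the parameter names are illustrative choices among the options the text allows.

```python
import numpy as np

def fuse_predictions(first_pred, second_pred, history, d_th, alpha=0.5):
    """Final predicted position from the first/second predicted positions.

    If the two predictions disagree by more than the preset threshold d_th,
    the intra-frame structure is considered unstable and a moving average
    over the object's recent positions (`history`, oldest to newest) is used
    instead; otherwise the two predictions are combined (here: averaged, one
    of the options the method allows).
    """
    first_pred = np.asarray(first_pred, dtype=float)
    second_pred = np.asarray(second_pred, dtype=float)
    if np.linalg.norm(first_pred - second_pred) > d_th:
        return np.asarray(history, dtype=float).mean(axis=0)
    return alpha * first_pred + (1.0 - alpha) * second_pred
```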
- Further, after the above position prediction, the solution may further include:
- determining the position of the object A in the first image according to the predicted position of the object A.
- Determining the target position of the object A in the (i+1)-th frame image according to the target predicted position of the object A may specifically include: acquiring the (i+1)-th frame image, and obtaining the detection positions of Q objects in the (i+1)-th frame image according to that image, where Q is a positive integer; if the Q objects include the object A, determining the target position of the object A in the (i+1)-th frame image according to the target predicted position of the object A and the detected position of the object A.
- If the Q objects do not include the object A among the N objects, it is confirmed that the object A has disappeared in the (i+1)-th frame image.
- If the Q objects include an object B that does not match any object A among the N objects, the target position of the object B in the (i+1)-th frame image is determined according to the detected position of the object B.
- The detection positions of the above Q objects may be obtained by a detector, and of course may also be obtained in other ways. When an object has both a predicted position and a detected position, its target position can be determined comprehensively from the two results; generally, the detector's result prevails. If an object is not detected by the detector, it means that the object has disappeared.
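- A sketch of how predicted and detected positions might be associated; the greedy nearest-neighbor strategy and the max_dist gate are assumptions (the patent does not prescribe a matching algorithm), and Hungarian matching would be an equally valid choice.

```python
import numpy as np

def associate(predicted, detected, max_dist):
    """Greedy nearest-neighbor association of predictions with detections.

    `predicted` maps track ids to predicted positions; `detected` is a list
    of detector positions. Returns (matches, disappeared, new): a matched
    track takes the detector's position, an unmatched track is treated as
    disappeared, and an unmatched detection starts a new track.
    """
    matches, used = {}, set()
    for tid, p in predicted.items():
        dists = [np.linalg.norm(np.asarray(p, float) - np.asarray(d, float))
                 if j not in used else np.inf
                 for j, d in enumerate(detected)]
        j = int(np.argmin(dists)) if dists else -1
        if j >= 0 and dists[j] <= max_dist:
            matches[tid] = detected[j]
            used.add(j)
    disappeared = [tid for tid in predicted if tid not in matches]
    new = [d for j, d in enumerate(detected) if j not in used]
    return matches, disappeared, new
```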
- Through the embodiments of the present application, the first predicted position of each object in the first image after the i-th frame image is obtained; then the second predicted position of each object in the first image is obtained based on the intra-frame relative positional relationships of the objects in the i-th frame; and the predicted position of each object in the first image is then obtained based on the first predicted position and the second predicted position of each object.
- By this means, the predicted position of each object in a subsequent frame image is obtained based on the intra-frame relative positional relationships of the objects in a previous frame image, which can effectively suppress tracking drift in scenes with challenging factors such as target occlusion and similar objects.
- The first image may be the (i+1)-th frame image, or the (i+2)-th frame image, the (i+3)-th frame image, or the like.
- In a specific example of the target tracking method provided by the embodiments of the present application, the confidence of each object can be obtained through the detector.
- The target with the highest confidence is selected as the parent node; then, starting from the parent node, with the Euclidean distance of the targets in the image coordinate system as the weight, a minimum spanning tree is generated over the graph formed by all the targets using Kruskal's algorithm or Prim's algorithm, as shown in FIG. 4b.
- In this way, an intra-frame target structure model is established, and an intra-frame target data association is formed.
- For example, the detector outputs, for the i-th object O_i in the image, the coordinates p_i = (x_i, y_i) of its center position in the image coordinate system, and its confidence conf_i.
- The first predicted position corresponding to each object in the second frame image can be obtained based on the position of each object in the first frame image.
- If the distance d_i between the first predicted position and the second predicted position of the i-th target is greater than a set threshold d_th, the position estimated by the multi-target tracker changes the shape of the tree significantly, which does not satisfy a stable intra-frame structure. In this case, the position obtained by the moving-average filter algorithm is selected as the target predicted position. Otherwise, the intra-frame structure relationship is satisfied, and the predicted position of the multi-target tracker, the position inferred from the intra-frame structure, or a combination of the two is used as the target predicted position.
- The distance between the first predicted position and the second predicted position corresponding to each target is obtained, and it is confirmed whether this distance is greater than a preset threshold. If the distance is greater than the preset threshold, the position estimated by the multi-target tracker changes the shape of the tree significantly, which does not satisfy a stable intra-frame structure; in this case, the position obtained by the moving-average filter algorithm is used as the predicted position of each such target.
- Otherwise, the intra-frame structure relationship is satisfied, and the first predicted position corresponding to each target in the second frame image may be used as its target predicted position, or the corresponding second predicted position may be used, or the predicted position of each target may be obtained by weighting the first predicted position and the second predicted position corresponding to each target in the second frame image. No specific limitation is made here.
- The moving-average filter algorithm may obtain the predicted position of a target in the current frame image based on the positions of the target in multiple previous frames. The moving-average filter is used here only as an example; it may be replaced by other algorithms, and no specific limitation is made here.
- Then, the position of each object in the second frame image can be obtained by combining the predicted position with the detected position of the object obtained by the detector.
- Then, the intra-frame relative positional relationships can be updated to obtain those of the current frame. Specifically, the relative positional relationships between the target positions in the second frame image are obtained according to the target positions of the objects in the second frame image, and are then combined with the intra-frame relative positional relationships of the first frame image to obtain the intra-frame relative positional relationships of the second frame image.
- FIG. 5 is an application schematic diagram of a target tracking method provided by an embodiment of the present application.
- Specifically, for each frame image, the tracker first confirms whether a new object appears compared with the previous frame image; if a new object appears, the tracker initializes a tracking trajectory for that object. It then checks whether any object has disappeared compared with the previous frame image; if an object has disappeared, the tracker terminates that object's tracking trajectory. Then, the position of each object in the frame image is predicted using the target position prediction method shown in FIG. 3, and the tracking result is output. If the frame image is not the last frame, the next frame image is input and the above steps are repeated.
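- The per-frame loop of FIG. 5 might look as follows; `detector` and `tracker` are assumed interfaces sketched here for illustration, not APIs defined by the patent.

```python
def track_video(frames, detector, tracker):
    """Per-frame tracking loop following FIG. 5 (a schematic sketch).

    The detector is assumed to return object positions for a frame; the
    tracker is assumed to initialize trajectories for new objects, terminate
    trajectories of disappeared objects, predict positions with the method
    of FIG. 3, and output the tracking result.
    """
    results = []
    for frame in frames:
        detections = detector(frame)
        matches, disappeared, new = tracker.associate(detections)
        for det in new:
            tracker.init_trajectory(det)       # new object appeared
        for tid in disappeared:
            tracker.terminate_trajectory(tid)  # object left or was lost
        results.append(tracker.predict_and_update(frame, matches))
    return results
```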
- The target tracking method provided by the embodiments of the present application can be used with any existing visual multi-target tracker. As shown in FIG. 6, the target tracking method provided in this embodiment of the present application is applied to a joint tracker of KCF and LSTM.
- KCF is a fast tracker that can be used to track the position response center of the target.
- LSTM takes temporal information into account and can be used to estimate the scale of the target.
- In this improved scheme, the maximum response position of each object is first tracked quickly by KCF. Specifically, according to the position of each object in the previous frame image, an image patch of each object is extracted proportionally. For each object's patch, a circulant matrix is used to construct a training sample set, and ridge regression is used to train an independent correlation filter for each of the multiple trajectories of the multiple objects. The KCF then performs detection on the current frame, and the position of each node in the current frame is predicted from the response distribution.
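- For orientation, here is a minimal single-channel correlation-filter sketch of the closed-form ridge regression that KCF builds on; full KCF additionally uses a kernel via the circulant structure and multi-channel features, which this sketch omits.

```python
import numpy as np

def train_filter(patch, target_response, lam=1e-2):
    """Ridge-regression correlation filter solved in the Fourier domain.

    `patch` and `target_response` are 2-D arrays of the same shape; the
    Gaussian-shaped `target_response` peaks at the object center. The
    circulant structure of shifted training samples diagonalizes the ridge
    regression, giving this elementwise closed form.
    """
    X = np.fft.fft2(patch)
    Y = np.fft.fft2(target_response)
    return (np.conj(X) * Y) / (np.conj(X) * X + lam)

def detect(filter_f, patch):
    """Response map over a new patch; its argmax is the predicted center."""
    response = np.real(np.fft.ifft2(filter_f * np.fft.fft2(patch)))
    return np.unravel_index(np.argmax(response), response.shape)
```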
- In addition, a CNN is used to extract appearance features of the target image sequence, an LSTM is used to extract target motion features to estimate the target scale, and a fully connected branch sharing the appearance features is used to estimate the confidence of each target.
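- A hedged PyTorch sketch of such a branch; all layer sizes and the head design are assumptions made here, since the patent does not specify the architecture.

```python
import torch
import torch.nn as nn

class ScaleConfidenceNet(nn.Module):
    """CNN appearance features -> LSTM scale estimate + shared conf head."""

    def __init__(self, feat_dim=128, hidden=64):
        super().__init__()
        self.cnn = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, feat_dim), nn.ReLU())
        self.lstm = nn.LSTM(feat_dim, hidden, batch_first=True)
        self.scale_head = nn.Linear(hidden, 2)   # width/height scale
        self.conf_head = nn.Linear(feat_dim, 1)  # shares appearance features

    def forward(self, patches):                  # patches: (B, T, 3, H, W)
        b, t = patches.shape[:2]
        feats = self.cnn(patches.flatten(0, 1)).view(b, t, -1)
        seq, _ = self.lstm(feats)                # motion features over time
        scale = self.scale_head(seq[:, -1])
        conf = torch.sigmoid(self.conf_head(feats[:, -1]))
        return scale, conf
```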
- This scheme predicts the positions of objects in an image based on the intra-frame structural data association of the objects, such as the intra-frame relative positional relationships, which can effectively suppress tracking drift in scenes with challenging factors such as occluded targets and similar objects. It reduces the mistracking rate during target tracking, improves the stability of target tracking, and enables the tracker to run effectively for a long time.
- As shown in FIG. 7, an embodiment of the present application further provides a target tracking system, including a position acquisition module 701, a first prediction module 702, a second prediction module 703, and a target prediction module 704, as follows:
- a position acquisition module 701, configured to obtain the intra-frame relative positional relationships of N objects in the i-th frame image, and obtain the position of at least one of the N objects in each of M frames of images, where the i-th frame image is the last image acquired in time among the M frames, and M, N, and i are all positive integers;
- a first prediction module 702, configured to, for any object A among the N objects, obtain a first predicted position of the object A in the first image according to M' positions, where the M' positions are the positions of the object A in the M' frames, among the M frames, that contain the object A, M' is a positive integer not greater than M, and the first image is an image acquired after the i-th frame in time;
- a second prediction module 703, configured to obtain the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationship of the N objects in the i-th frame image;
- the target prediction module 704 is configured to determine the predicted position of the object A in the first image according to the first predicted position and the second predicted position of the object A.
- The target prediction module 704 is specifically configured to: if the distance between the first predicted position and the second predicted position of the object A is greater than a preset threshold, obtain the predicted position of the object A in the first image according to the moving-average filter algorithm.
- The target prediction module 704 is also specifically configured to: if the distance between the first predicted position and the second predicted position of the object A is not greater than the preset threshold, obtain the predicted position of the object A according to the first predicted position and/or the second predicted position of the object A.
- When obtaining the intra-frame relative positional relationships of the N objects in the i-th frame image, the position acquisition module 701 is specifically configured to: obtain the intra-frame relative positional relationships of W objects in a second image, and obtain the positions of the N objects in the i-th frame image, where W is a positive integer, the N objects include at least one of the W objects, and the second image is an image acquired before the i-th frame in time; obtain the relative positional relationships between the positions of the N objects in the i-th frame image according to those positions; and obtain the intra-frame relative positional relationships of the N objects in the i-th frame image according to the relative positional relationships between the positions of the N objects and the intra-frame relative positional relationships of the W objects in the second image.
- If the N objects do not include an object C among the W objects, the position acquisition module 701 is further configured to: delete the relative positional relationship between each object and the object C from the intra-frame relative positional relationships of the W objects in the second image to obtain a reference intra-frame relative positional relationship for the i-th frame image; and obtain the intra-frame relative positional relationships of the N objects in the i-th frame image according to the relative positional relationships between the positions of the N objects and the reference intra-frame relative positional relationship.
- The system further includes a target position acquisition module, configured to determine the position of the object A in the first image according to the predicted position of the object A, which includes: acquiring the first image, and obtaining the detection positions of Q objects in the first image according to the first image, where Q is a positive integer; if the Q objects include the object A, determining the position of the object A in the first image according to the predicted position of the object A and the detected position of the object A. If the Q objects do not include the object A, it is confirmed that the object A has disappeared in the first image.
- If the Q objects include an object B that does not match any object A among the N objects, the position of the object B in the first image is determined according to the detected position of the object B.
- As shown in FIG. 8, the target tracking apparatus 8000 includes at least one processor 8001, at least one memory 8002, and at least one communication interface 8003.
- The processor 8001, the memory 8002, and the communication interface 8003 are connected through a communication bus and communicate with each other.
- The processor 8001 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits used to control the execution of the above programs.
- The communication interface 8003 is used to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
- The memory 8002 may be a read-only memory (ROM) or another type of static storage device that can store static information and instructions, a random access memory (RAM) or another type of dynamic storage device that can store information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage (including compact discs, laser discs, digital versatile discs, Blu-ray discs, etc.), magnetic disk storage media or other magnetic storage devices, or any other medium that can carry or store desired program code in the form of instructions or data structures and can be accessed by a computer, without limitation.
- The memory can exist independently and be connected to the processor through a bus, or it can be integrated with the processor.
- The memory 8002 is used to store the application program code for executing the above solution, and execution is controlled by the processor 8001. The processor 8001 is configured to execute the application code stored in the memory 8002; the code stored in the memory 8002 can perform the target tracking method provided above.
- Embodiments of the present application further provide an intelligent driving vehicle, including a traveling system, a sensing system, a control system, and a computer system, wherein the computer system is used to execute the method.
- An embodiment of the present application further provides a chip system applied to an electronic device. The chip system includes one or more interface circuits and one or more processors; the interface circuits and the processors are interconnected through lines; the interface circuits are used to receive signals from the memory of the electronic device and send the signals to the processors, where the signals include computer instructions stored in the memory; when the processors execute the computer instructions, the electronic device performs the method.
- Embodiments of the present application also provide a computer-readable storage medium, where instructions are stored in the computer-readable storage medium; when the instructions are run on a computer or a processor, the computer or the processor is caused to perform one or more steps of any of the above methods.
- Embodiments of the present application also provide a computer program product including instructions. When the computer program product is run on a computer or processor, the computer or processor is caused to perform one or more steps of any of the above methods. The computer program product includes one or more computer instructions.
- The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus.
- The computer instructions may be stored in a computer-readable storage medium, and may be transmitted from one website, computer, server, or data center to another website, computer, server, or data center.
- The computer-readable storage medium may be any available medium accessible by a computer, or a data storage device such as a server or data center integrating one or more available media. The available media may be magnetic media (e.g., floppy disks, hard disks, magnetic tapes), optical media (e.g., DVDs), or semiconductor media (e.g., solid state drives (SSDs)), and the like.
Abstract
Embodiments of the present application provide a target tracking method and a related system, storage medium, and intelligent driving vehicle, including: acquiring the intra-frame relative positional relationships of N objects in an i-th frame image, acquiring the target position of at least one of the N objects in each of M frames of images, and acquiring a first predicted position of an object A in a first image; obtaining a second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationships of the N objects in the i-th frame image; and determining the predicted position of the object A in the first image according to the first predicted position and the second predicted position of the object A. By this means, the predicted position of each object in the current frame image is obtained based on the intra-frame relative positional relationships of the objects in an earlier-acquired image, which effectively suppresses tracking drift and improves the stability of target tracking.
Description
This application claims priority to Chinese Patent Application No. 2020107397277, filed with the China National Intellectual Property Administration on July 28, 2020 and entitled "Target tracking method and related system, storage medium, and intelligent driving vehicle", which is incorporated herein by reference in its entirety.
本申请涉及计算机视觉技术领域,尤其涉及一种目标跟踪方法及相关系统、存储介质、智能驾驶车辆。
视觉目标跟踪是指视频序列在给定当前帧目标位置的前提下,在后续的帧中找到对应目标位置的技术。
基于视频图像序列的多目标跟踪是自动驾驶系统中重要任务之一。现有技术在给定首帧图片以及目标的初始位置时,在接下来的各帧中跟踪器可给出目标的位置。然而由于遮挡、运动模糊、光照变化、目标表观变化、背景疑似目标、尺度变化等因素的影响,跟踪过程中容易出现漂移现象,即跟踪器跟踪失败。例如对于被部分遮挡的车辆目标,由于发生跟踪漂移,容易发生错误的位置预测。
视觉目标跟踪是计算机视觉的基础问题,在实际生活中有着广泛的应用,如无人驾驶,交通管理,智能视频监控AR/VR等。因此抑制目标跟踪漂移问题的意义和价值重大。
由于多目标跟踪过程中存在目标遮挡,运动模糊、光照变化、目标表观变化、背景疑似目标、尺度变化等困难场景,现有技术在进行目标跟踪时,常常无法判别该轨迹是因遮挡等原因暂时消失还是离开检测区域停止跟踪,造成一部分被遮挡的轨迹因为误判终止跟踪。在原跟踪的目标再次出现时,若原跟踪轨迹已停止跟踪,会导致目标的ID发生跳变。一些现有方法虽然尝试使用了多帧图像之间时序特征,然而在复杂场景中,仍容易对某个困难目标出现跟踪失败。
发明内容
本申请公开了一种目标跟踪方法及相关系统、存储介质、智能驾驶车辆,可以实现目标位置的准确预测。
第一方面,本申请实施例提供一种目标跟踪方法,包括:获取第i帧图像中N个对象的帧内相对位置关系,并获取M帧图像中每帧图像中所述N个对象中至少一个对象的位置,且所述第i帧图像为所述M帧中在时间上最后获取的图像,所述M、N、i均为正整数;对于N个对象中的任一个对象A,根据M’个位置获取在第一图像中所述对象A的第一预测位置,所述M’个位置为所述M帧图像中包含所述对象A的M’帧图像中所述对象A的位置,M’为不大于M的正整数,所述第一图像为时间上在所述第i帧之后获取的图像;根据所述第i帧图像中N个对象的帧内相对位置关系得到所述第一图像中N个对象中每个对象的第二预测位置;根据所述对象A的第一预测位置和第二预测位置确定所述第一图像 中所述对象A的预测位置。
通过本申请实施例,通过获取第i帧图像之后的第一图像中各对象的第一预测位置,然后基于第i帧中各对象的帧内相对位置关系得到第一图像中各对象的第二预测位置,进而基于所得第一图像中各对象的第一预测位置和第二预测位置得到所述第一图像中各对象的预测位置。采用该手段,基于前帧图像中各对象的帧内相对位置关系得到各对象在后续帧图像中的预测位置,可在存在目标被遮挡、存在相似物体等挑战性因素的场景中有效抑制跟踪漂移的问题,减少了目标跟踪过程中的误跟踪率,提升了目标跟踪的稳定性,可以使跟踪器长时间有效运行。其中,该第一图像可以是第i+1帧图像,也可以是第i+2帧图像、第i+3帧图像等。
其中,所述根据所述对象A的第一预测位置和第二预测位置确定所述第一图像中所述对象A的预测位置,包括:若所述对象A的第一预测位置与第二预测位置之间的距离大于预设阈值,根据平均滑动滤波算法获取所述第一图像中所述对象A的预测位置。
其中,所述根据所述对象A的第一预测位置和第二预测位置确定所述第一图像中所述对象A的预测位置,包括:若所述对象A的第一预测位置与第二预测位置之间的距离不大于预设阈值,则根据所述对象A的第一预测位置得到所述对象A的预测位置;或者,根据所述对象A的第二预测位置得到所述对象A的预测位置;或者,根据所述对象A的第一预测位置和第二预测位置得到所述对象A的预测位置。
其中,所述根据所述第i帧图像中N个对象的帧内相对位置关系得到所述第一图像中N个对象中每个对象的第二预测位置,包括:将对象E作为第一父节点,其中,所述对象E为所述第一图像中置信度最高的对象;获取所述第i帧图像中N个对象的帧内相对位置关系中所述第一父节点与子节点之间的帧内相对位置;根据所述第一父节点与子节点之间的帧内相对位置以及所述第一父节点的第一预测位置得到所述子节点的第二预测位置;将所述子节点作为第二父节点,获取所述第i帧图像中N个对象的帧内相对位置关系中所述第二父节点与子节点之间的帧内相对位置;根据所述第二父节点与子节点之间的帧内相对位置以及所述第二父节点的第二预测位置得到所述子节点的第二预测位置;以此类推,直到得到所述第一图像中N个对象中每个对象的第二预测位置,其中,所述对象E的第二预测位置与第一预测位置相同。
该实施例提供以最小生成树等树状结构来表示帧内相对位置关系。
其中,所述根据所述第i帧图像中N个对象的帧内相对位置关系得到所述第一图像中N个对象中每个对象的第二预测位置,包括:根据所述第i帧图像中N个对象的帧内相对位置关系以及对象E的第一预测位置得到所述第一图像中N个对象中每个对象的第二预测位置,其中,所述N个对象的帧内相对位置关系包括N-1个对象中每个对象相对于所述对象E的帧内相对位置,所述对象E为所述N个对象中的任一个对象,所述N-1个对象为所述N个对象中除所述对象E之外的对象,其中,所述对象E的第二预测位置与第一预测位置相同。
该实施例提供的帧内相对位置关系包括N-1个对象中每个对象相对于对象E的帧内相对位置。
其中,所述获取第i帧图像中N个对象的帧内相对位置关系,包括:获取第二图像中W个对象的帧内相对位置关系,并获取所述第i帧图像中N个对象的位置,其中,W为正整数,所述N个对象中包含所述W个对象中的至少一个对象,所述第二图像为时间上在所述第i帧之前获取的图像;根据所述第i帧图像中N个对象的位置得到所述第i帧图像中N个对象的位置之间的相对位置关系;根据所述N个对象的位置之间的相对位置关系和所述第二图像中W个对象的帧内相对位置关系,得到所述第i帧图像中N个对象的帧内相对位置关系。
其中,若所述N个对象不包含所述W个对象中的对象C,所述根据所述N个对象的位置之间的相对位置关系和所述第二图像中W个对象的帧内相对位置关系,得到所述第i帧图像中N个对象的帧内相对位置关系,包括:删除所述第二图像中W个对象的帧内相对位置关系中各对象与所述对象C的相对位置关系,以得到所述第i帧图像的参考帧内相对位置关系;根据所述N个对象的位置之间的相对位置关系以及所述第i帧图像的参考帧内相对位置关系,得到所述第i帧图像中N个对象的帧内相对位置关系。
其中,所述方法还包括:根据所述对象A的预测位置确定第一图像中所述对象A的位置;其中,所述根据所述对象A的预测位置确定第一图像中所述对象A的位置,包括:获取第一图像,根据所述第一图像得到所述第一图像中Q个对象的检测位置,Q为正整数;若所述Q个对象中包括所述对象A,则根据所述对象A的预测位置和所述对象A的检测位置,确定所述第一图像中所述对象A的位置。
若所述Q个对象不包括所述N个对象中的对象A,则确认所述第一图像中所述对象A消失。
其中,若所述Q个对象包括对象B,所述对象B与所述N个对象中的任一个对象A均不匹配,则根据所述对象B的检测位置确定所述第一图像中所述对象B的位置。
第二方面,本申请提供了一种目标跟踪系统,包括:位置获取模块,用于获取第i帧图像中N个对象的帧内相对位置关系,并获取M帧图像中每帧图像中所述N个对象中至少一个对象的位置,且所述第i帧图像为所述M帧中在时间上最后获取的图像,所述M、N、i均为正整数;第一预测模块,用于对于N个对象中的任一个对象A,根据M’个位置获取在第一图像中所述对象A的第一预测位置,所述M’个位置为所述M帧图像中包含所述对象A的M’帧图像中所述对象A的位置,M’为不大于M的正整数,所述第一图像为时间上在所述第i帧之后获取的图像;第二预测模块,用于根据所述第i帧图像中N个对象的帧内相对位置关系得到所述第一图像中N个对象中每个对象的第二预测位置;目标预测模块,用于根据所述对象A的第一预测位置和第二预测位置确定所述第一图像中所述对象A的预测位置。
其中,所述目标预测模块,具体用于:若所述对象A的第一预测位置与第二预测位置之间的距离大于预设阈值,根据平均滑动滤波算法获取所述第一图像中所述对象A的预测位置。
所述目标预测模块,还具体用于:若所述对象A的第一预测位置与第二预测位置之间的距离不大于预设阈值,则根据所述对象A的第一预测位置得到所述对象A的预测位置;或者,根据所述对象A的第二预测位置得到所述对象A的预测位置;或者,根据所述对象 A的第一预测位置和第二预测位置得到所述对象A的预测位置。
其中,所述第二预测模块具体用于:将对象E作为第一父节点,其中,所述对象E为所述第一图像中置信度最高的对象;获取所述第i帧图像中N个对象的帧内相对位置关系中所述第一父节点与子节点之间的帧内相对位置;根据所述第一父节点与子节点之间的帧内相对位置以及所述第一父节点的第一预测位置得到所述子节点的第二预测位置;将所述子节点作为第二父节点,获取所述第i帧图像中N个对象的帧内相对位置关系中所述第二父节点与子节点之间的帧内相对位置;根据所述第二父节点与子节点之间的帧内相对位置以及所述第二父节点的第二预测位置得到所述子节点的第二预测位置;以此类推,直到得到所述第一图像中N个对象中每个对象的第二预测位置,其中,所述对象E的第二预测位置与第一预测位置相同。
其中,所述第二预测模块具体用于:根据所述第i帧图像中N个对象的帧内相对位置关系以及对象E的第一预测位置得到所述第一图像中N个对象中每个对象的第二预测位置,其中,所述N个对象的帧内相对位置关系包括N-1个对象中每个对象相对于所述对象E的帧内相对位置,所述对象E为所述N个对象中的任一个对象,所述N-1个对象为所述N个对象中除所述对象E之外的对象,其中,所述对象E的第二预测位置与第一预测位置相同。
其中,所述位置获取模块在获取第i帧图像中N个对象的帧内相对位置关系时,具体用于:获取第二图像中W个对象的帧内相对位置关系,并获取所述第i帧图像中N个对象的位置,其中,W为正整数,所述N个对象中包含所述W个对象中的至少一个对象,所述第二图像为时间上在所述第i帧之前获取的图像;根据所述第i帧图像中N个对象的位置得到所述第i帧图像中N个对象的位置之间的相对位置关系;根据所述N个对象的位置之间的相对位置关系和所述第二图像中W个对象的帧内相对位置关系,得到所述第i帧图像中N个对象的帧内相对位置关系。
其中,若所述N个对象不包含所述W个对象中的对象C,所述位置获取模块还用于:删除所述第二图像中W个对象的帧内相对位置关系中各对象与所述对象C的相对位置关系,以得到所述第i帧图像的参考帧内相对位置关系;根据所述N个对象的位置之间的相对位置关系以及所述第i帧图像的参考帧内相对位置关系,得到所述第i帧图像中N个对象的帧内相对位置关系。
所述系统还包括目标位置获取模块,用于:根据所述对象A的预测位置确定第一图像中所述对象A的位置;其中,所述根据所述对象A的预测位置确定第一图像中所述对象A的位置,包括:获取第一图像,根据所述第一图像得到所述第一图像中Q个对象的检测位置,Q为正整数;若所述Q个对象中包括所述对象A,则根据所述对象A的预测位置和所述对象A的检测位置,确定所述第一图像中所述对象A的位置。
若所述Q个对象不包括所述N个对象中的对象A,则确认所述第一图像中所述对象A消失。
其中,若所述Q个对象包括对象B,所述对象B与所述N个对象中的任一个对象A均不匹配,则根据所述对象B的检测位置确定所述第一图像中所述对象B的位置。
第三方面,本申请提供了一种计算机存储介质,包括计算机指令,当所述计算机指令在电子设备上运行时,使得所述电子设备执行如第一方面任一种可能的实施方法。
第四方面,本申请实施例提供一种计算机程序产品,当计算机程序产品在计算机上运行时,使得计算机执行如第一方面任一种可能的实施方式。
第五方面,本申请实施例提供一种智能驾驶车辆,包括行进系统、传感系统、控制系统和计算机系统,其中,所述计算机系统用于执行如第一方面任一种可能的实施方式。
可以理解地,上述提供的第二方面所述的系统、第三方面所述的计算机存储介质或者第四方面所述的计算机程序产品、第五方面所述的智能驾驶车辆均用于执行第一方面中任一所提供的方法。因此,其所能达到的有益效果可参考对应方法中的有益效果,此处不再赘述。
下面对本申请实施例用到的附图进行介绍。
图1是本申请实施例提供的一种目标跟踪方法适用的一应用场景示意图;
图2是本申请实施例提供的一种目标跟踪方法适用的又一应用场景示意图;
图3是本申请实施例提供的一种目标跟踪方法的流程示意图;
图4a是本申请实施例提供的一种帧内相对位置关系示意图;
图4b是本申请实施例提供的另一种帧内相对位置关系示意图;
图5是本申请实施例提供的一种目标跟踪方法的一应用示意图;
图6是本申请实施例提供的一种目标跟踪方法的又一应用示意图;
图7是本申请实施例提供的一种目标位置预测系统的结构示意图;
图8是本申请实施例提供的一种目标跟踪装置的结构示意图。
下面结合本申请实施例中的附图对本申请实施例进行描述。本申请实施例的实施方式部分使用的术语仅用于对本申请的具体实施例进行解释,而非旨在限定本申请。
基于在目标跟踪过程中存在目标遮挡,运动模糊、光照变化、目标表观变化、背景疑似目标、尺度变化等困难场景,容易对某个困难目标出现跟踪失败。为此,本申请提供一种目标跟踪方法,其中,通过基于上一帧图像中各对象的帧内相对位置关系来得到各对象在当前帧图像中的目标预测位置,可在存在目标被遮挡、存在相似物体等挑战性因素的场景中有效抑制跟踪漂移的问题,减少了目标跟踪过程中的误跟踪率,提升了目标跟踪的稳定性,可以使跟踪器长时间有效运行。
其中,如图1所示,本申请实施例可以广泛应用于无人驾驶系统的目标跟踪部分。其中,目标跟踪可以弥补目标检测速度的不足,且可以平滑检测结果。因此目标跟踪是视觉感知模块非常重要的一部分。常用的跟踪器有CT、STC、CSK、KCF等。跟踪器的速度一般可以达到30~60FPS。有的甚至高达200~300FPS。但是在真实跟踪场景中,很多跟踪器无法自检跟踪准确性,一旦跟踪器发生漂移,那么会输出错误的位置。在无人驾驶中,输出错误的位置意味着在没车的地方输出有车,直接影响规控做出合理的决策。因此,抑制跟踪漂移非常重要。本方案可用于改进无人驾驶视觉感知部分,提高输出结果的准确性。
如图2所示,本申请实施例还可以广泛应用于智能视频监控系统的目标跟踪部分。目前银行、电力、交通、安检以及军事设施等领域对安全防范和现场记录报警系统的需求与日俱增,要求越来越高,视频监控在生产生活各方面得到了非常广泛的应用。智能视频监控系统己经广泛地存在于银行、商场、车站和交通路口等公共场所。智能视频监控的主要任务是检测采集到的图片中的运动目标,并对其进行分类,找到感兴趣类别的运动目标并予以跟踪。在跟踪过程中对其行为进行识别。一旦检测到危险行为促发报警,制止危险行为的进一步恶化。目标跟踪可以弥补目标检测速度的不足、以及在连续帧中将同一个目标串联起来,便于进一步分析。本方案可用于改进视频监控中的目标跟踪部分,使感知结果准确的输送到下一处理模块,如身份识别或异常检测等。
下面具体介绍本申请实施例提供的目标跟踪方法。参照图3所示,为本申请实施例提供的一种目标跟踪方法的流程示意图。该方法包括步骤301-304,具体如下:
301、获取第i帧图像中N个对象的帧内相对位置关系,并获取M帧图像中每帧图像中所述N个对象中至少一个对象的位置,且所述第i帧图像为所述M帧中在时间上最后获取的图像,所述M、N、i均为正整数;
上述对象可以是指人、物体等,如车辆、行人、障碍物等。
上述帧内相对位置关系,可以是基于图像坐标系中各对象的相对位置向量得到的。本方案仅以帧内相对位置关系进行说明,其还可以是其他帧内相对关系,如帧内对象的相对速度等,此处不做具体限定。
上述M帧图像是至少包括上述N个对象中的至少一个对象的图像。上述当前预测如第i+1帧图像中各对象的预测位置。如当前预测第6帧图像中各对象的目标预测位置,其中,按照时间顺序获取的第2帧、第4帧、第5帧中均包含有上述N个对象中的至少一个对象,则该M帧图像如可以是第2帧、第4帧、第5帧。当然,其也可以只获取第4帧、第5帧,或者仅获取第5帧。此处不做具体限定。
Any frame after the i-th frame may be predicted, for example an image within 3 minutes in time, although this time is of course not limited here. Specifically, the (i+2)-th frame, the (i+3)-th frame, and so on may also be predicted from the i-th frame.
For example, a tracker is used for target position prediction. The tracker may first obtain the intra-frame relative positional relationship of the N objects in the i-th frame image, and obtain the position of at least one of the N objects in each of the M frame images.
The position may be the position of an object in the image coordinate system or in the world coordinate system, which is not specifically limited in this solution. This position is the final position of the object output by the tracker.
Obtaining the intra-frame relative positional relationship of the N objects in the i-th frame image may include S3011-S3013, as follows:
S3011. Obtain the intra-frame relative positional relationship of W objects in the (i-1)-th frame image, and obtain the positions of the N objects in the i-th frame image, where W is a positive integer, i is not less than 2, and the N objects include at least one of the W objects.
As an optional implementation, the intra-frame relative positional relationship of the objects in a frame image can be obtained by separately acquiring the relative positional relationship between every two objects in the image. For example, if the frame contains object 1, object 2, object 3, and object 4, the following intra-frame relative positional relationships can be obtained: object 1-2 between objects 1 and 2, object 1-3 between objects 1 and 3, object 1-4 between objects 1 and 4, object 2-3 between objects 2 and 3, object 2-4 between objects 2 and 4, and object 3-4 between objects 3 and 4, as shown in FIG. 4a.
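As a rough illustration of this pairwise variant, the following Python sketch computes the relative position vector between every two objects in a frame. The function name and the coordinates are illustrative only and do not come from the original:

```python
from itertools import combinations

def pairwise_relations(positions):
    """Compute the relative position vector p_k - p_j for every unordered
    pair of objects; positions maps object id -> (x, y) in image coordinates."""
    relations = {}
    for j, k in combinations(sorted(positions), 2):
        xj, yj = positions[j]
        xk, yk = positions[k]
        relations[(j, k)] = (xk - xj, yk - yj)
    return relations

# Four objects as in the example above (coordinates are made up)
frame = {1: (10.0, 20.0), 2: (30.0, 25.0), 3: (15.0, 60.0), 4: (80.0, 40.0)}
print(pairwise_relations(frame))  # keys (1,2), (1,3), (1,4), (2,3), (2,4), (3,4)
```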
As another optional implementation, for the objects in a first frame image, the confidence of each object is obtained (for example, by running a detector), and the object with the highest confidence is selected as the parent node. Then, starting from the parent node, with the Euclidean distance between targets in the image coordinate system as the weight, a minimum spanning tree is built over the graph formed by all targets using Kruskal's algorithm or Prim's algorithm, as shown in FIG. 4b. An intra-frame target structure model is thereby established, forming intra-frame target data association and yielding the intra-frame relative positional relationship. The first frame image here may be any frame acquired in time, or the frame corresponding to the appearance of a particular object; this is not specifically limited.
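A minimal sketch of this tree-building step, assuming 2-D image coordinates and per-object detection confidences; Prim's algorithm is written out directly rather than taken from a library, and all identifiers are illustrative:

```python
import heapq
import math

def build_tree(positions, confidences):
    """Build a minimum spanning tree over the objects with Prim's algorithm,
    rooted at the object with the highest detection confidence; edge weights
    are Euclidean distances in the image coordinate system."""
    root = max(confidences, key=confidences.get)
    in_tree, edges, heap = {root}, [], []

    def push(u):
        # enqueue candidate edges from u to every object not yet in the tree
        for v, p in positions.items():
            if v not in in_tree:
                heapq.heappush(heap, (math.dist(positions[u], p), u, v))

    push(root)
    while len(in_tree) < len(positions):
        d, u, v = heapq.heappop(heap)
        if v in in_tree:
            continue  # stale edge: v was attached via a shorter edge already
        in_tree.add(v)
        edges.append((u, v))  # parent -> child edge of the intra-frame tree
        push(v)
    return root, edges
```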
Then, after the 2nd frame has been predicted and the target positions of the objects in the 2nd frame obtained, the intra-frame relative positional relationship of the 1st frame is updated to obtain the intra-frame relative positional relationship of the 2nd frame, and so on; the intra-frame relative positional relationship of the W objects in the (i-1)-th frame image can thus be obtained.
S3012. Obtain the relative positional relationship between the target positions of the N objects in the i-th frame image according to the target positions of the N objects in the i-th frame image.
Based on the target positions of the N objects in the i-th frame image, the relative position between the positions of every two objects can be acquired separately, thereby obtaining the relative positional relationship between the positions of the N objects in the i-th frame image.
S3013. Obtain the intra-frame relative positional relationship of the N objects in the i-th frame image according to the relative positional relationship between the target positions of the N objects and the intra-frame relative positional relationship of the W objects in the (i-1)-th frame image.
The intra-frame relative positional relationship of the N objects in the i-th frame image may be obtained by averaging the relative positions between the target positions of the N objects with the corresponding intra-frame relative positions of the W objects in the (i-1)-th frame image.
Specifically, when the W objects in the (i-1)-th frame image match some of the N objects in the i-th frame image, the intra-frame relative positional relationship of those matched objects is obtained by summing their intra-frame relative positional relationship in the (i-1)-th frame image with the relative positional relationship between the target positions of the corresponding objects in the i-th frame image, and then averaging.
For objects that are not matched between the W objects in the (i-1)-th frame image and the N objects in the i-th frame image, there are two cases. 1) If an object C in the (i-1)-th frame image matches none of the N objects in the i-th frame image, object C has disappeared in the i-th frame, and the intra-frame relative positional relationships of object C in the (i-1)-th frame image are deleted; that is, the intra-frame relative positional relationship of the N objects in the i-th frame image does not include the intra-frame relative positional relationships of object C. 2) If an object D in the i-th frame image matches none of the W objects in the (i-1)-th frame image, object D is newly appeared in the i-th frame, and the intra-frame relative positional relationship of object D in the i-th frame image is the relative positional relationship between the target positions of the other objects in the i-th frame image and the target position of object D.
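A minimal sketch of this update rule, under the assumption that the matching between the two frames has already been decided elsewhere; the helper name and the data layout are ours, not from the original:

```python
def update_relations(prev_rel, curr_rel, matched):
    """Merge the previous frame's intra-frame relations with the relative
    positions measured in the current frame: averaged for pairs of matched
    objects, dropped for disappeared objects (their pairs are absent from
    curr_rel, like object C above), adopted as measured for pairs involving
    a new object (like object D above). prev_rel and curr_rel map
    (id_a, id_b) -> (dx, dy); matched is the set of ids in both frames."""
    merged = {}
    for pair, (dx, dy) in curr_rel.items():
        if pair[0] in matched and pair[1] in matched and pair in prev_rel:
            pdx, pdy = prev_rel[pair]
            merged[pair] = ((dx + pdx) / 2.0, (dy + pdy) / 2.0)  # average old and new
        else:
            merged[pair] = (dx, dy)  # a new object's relation is taken as measured
    return merged
```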
302. For any object A among the N objects, obtain a first predicted position of the object A in a first image according to M' positions, where the M' positions are the positions of the object A in the M' frame images, among the M frame images, that contain the object A, M' is a positive integer not greater than M, and the first image is an image acquired after the i-th frame in time.
Among the M frame images obtained above, any frame may contain at least one of the N objects, so for any object, M' target positions can be obtained, where M' is not less than 1 and not greater than M.
The first predicted position of the object A is obtained based on the M' positions. For example, the first predicted position of the object A may be the average of the M' positions of the object A, or may be computed with preset weights, for example giving larger weights to frames closer in time to the first image.
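As one possible reading of this step, the sketch below averages the M' historical positions with linearly increasing weights; the linear weighting is an illustrative assumption, not something the method prescribes:

```python
def first_prediction(history):
    """Predict an object's position in the next frame from its positions in
    the M' frames that contain it (M' >= 1). history is a list of (x, y)
    ordered oldest to newest; weights 1..M' favor recent frames."""
    weights = range(1, len(history) + 1)
    total = sum(weights)
    x = sum(w * px for w, (px, _) in zip(weights, history)) / total
    y = sum(w * py for w, (_, py) in zip(weights, history)) / total
    return (x, y)

# Example: three past positions of object A, newest last
print(first_prediction([(10.0, 20.0), (12.0, 21.0), (14.0, 22.5)]))
```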
This solution can predict any frame after the i-th frame, for example an image within 3 minutes in time, although this time is of course not limited here. Specifically, the (i+1)-th, (i+2)-th, or (i+3)-th frame may be predicted from the i-th frame; that is, the first image may be the (i+1)-th, (i+2)-th, or (i+3)-th frame image, and so on.
303. Obtain a second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationship of the N objects in the i-th frame image.
Based on the intra-frame relative positional relationship of the N objects in the i-th frame image obtained in step 301 and the first predicted positions of the N objects obtained in step 302, the second predicted position of each of the N objects in the first image is obtained.
Specifically, for example, any object A among the N objects may be selected; based on the first predicted position of the object A and the intra-frame relative positional relationships between the other objects and the object A in the i-th frame image, the second predicted position of each of the N objects in the (i+1)-th frame image can be obtained. For the object A itself, the second predicted position is the same as its first predicted position. Optionally, the object A may be the object with the highest confidence.
Further, multiple objects may be selected, and the second predicted position of each of the N objects in the first image may be obtained based on the first predicted positions of the multiple objects.
Illustratively, referring to FIG. 4a, obtaining the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationship of the N objects in the i-th frame image includes:
obtaining the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationship of the N objects in the i-th frame image and the first predicted position of an object E, where the intra-frame relative positional relationship of the N objects includes the intra-frame relative position of each of N-1 objects relative to the object E, the object E is any one of the N objects, the N-1 objects are the objects among the N objects other than the object E, and the second predicted position of the object E is the same as its first predicted position.
Alternatively, referring to FIG. 4b, obtaining the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationship of the N objects in the i-th frame image includes the following steps (sketched in code after this list):
taking an object E as a first parent node, where the object E is the object with the highest confidence in the first image;
obtaining, from the intra-frame relative positional relationship of the N objects in the i-th frame image, the intra-frame relative position between the first parent node and its child node;
obtaining the second predicted position of the child node according to the intra-frame relative position between the first parent node and the child node and the first predicted position of the first parent node;
taking the child node as a second parent node, and obtaining, from the intra-frame relative positional relationship of the N objects in the i-th frame image, the intra-frame relative position between the second parent node and its child node;
obtaining the second predicted position of that child node according to the intra-frame relative position between the second parent node and the child node and the second predicted position of the second parent node;
and so on, until the second predicted position of each of the N objects in the first image is obtained, where the second predicted position of the object E is the same as its first predicted position.
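The steps above amount to a traversal of the intra-frame tree; the sketch below assumes the data layout of the earlier `build_tree` example (our assumption, not the original's notation):

```python
def second_predictions(root, edges, relations, root_first_pred):
    """Propagate second predicted positions down the intra-frame tree: the
    root (highest-confidence object E) keeps its first prediction, and each
    child is placed at its parent's prediction plus the stored offset.
    edges is a list of (parent, child); relations maps (parent, child) to
    the intra-frame offset (dx, dy) from parent to child."""
    pred = {root: root_first_pred}
    children = {}
    for u, v in edges:
        children.setdefault(u, []).append(v)
    stack = [root]
    while stack:
        u = stack.pop()
        for v in children.get(u, []):
            dx, dy = relations[(u, v)]
            pred[v] = (pred[u][0] + dx, pred[u][1] + dy)
            stack.append(v)
    return pred
```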
304. Determine the predicted position of the object A in the first image according to the first predicted position and the second predicted position of the object A.
That is, in this solution, the predicted position of each object is obtained based on the first and second predicted positions of that object.
Specifically, if the distance between the first predicted position and the second predicted position of the object A is greater than a preset threshold, the predicted position of the object A in the first image is obtained according to a moving average filtering algorithm.
Obtaining the predicted position of the object A in the first image according to the moving average filtering algorithm means obtaining it from the positions of the object A in at least one frame before the first image, for example as the average of the positions of the object A in the (i-1)-th and i-th frames.
The moving average filtering algorithm above is only an example; any other algorithm may be used, and no specific limitation is imposed here.
If the distance between the first predicted position and the second predicted position of the object A is not greater than the preset threshold, the predicted position of the object A is obtained according to the first predicted position and/or the second predicted position of the object A.
Obtaining the predicted position of the object A according to the first predicted position and/or the second predicted position may, for example, mean taking the first predicted position of the object A as its predicted position, or taking the second predicted position as its predicted position; the predicted position may also be obtained from both, for example as the average of the first and second predicted positions. This is not specifically limited here.
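The decision logic of step 304 might look like the following sketch, where the simple arithmetic mean used for both the fallback filter and the fusion is an illustrative choice among the options the text allows:

```python
import math

def fuse_predictions(p1, p2, history, d_th):
    """Choose the final predicted position of an object from its first
    prediction p1 (per-object tracker) and second prediction p2 (inferred
    from the intra-frame structure). If they disagree by more than d_th,
    fall back to a moving average over `history`, the object's positions
    in recent frames; otherwise average p1 and p2."""
    if math.dist(p1, p2) > d_th:
        n = len(history)  # fallback: moving average of recent positions
        return (sum(x for x, _ in history) / n,
                sum(y for _, y in history) / n)
    return ((p1[0] + p2[0]) / 2.0, (p1[1] + p2[1]) / 2.0)
```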
Further, after the above position prediction is completed, this solution may further include:
determining the position of the object A in the first image according to the predicted position of the object A.
Determining the target position of the object A in the (i+1)-th frame image according to the target predicted position of the object A may specifically include:
acquiring the (i+1)-th frame image, and obtaining the detected positions of Q objects in the (i+1)-th frame image according to the (i+1)-th frame image, Q being a positive integer;
if the Q objects include the object A, determining the target position of the object A in the (i+1)-th frame image according to the target predicted position of the object A and the detected position of the object A.
If the Q objects do not include the object A among the N objects, it is confirmed that the object A has disappeared from the (i+1)-th frame image.
If the Q objects include an object B that matches none of the N objects, the target position of the object B in the (i+1)-th frame image is determined according to the detected position of the object B.
The detected positions of the Q objects above may be obtained by a detector, or of course in other ways. For an object that the detector and the tracker can both match, the target position of that object can be determined jointly from the two results. For an object that the detector and the tracker cannot match: if the object is new to the detector, the detector's result prevails; if the detector does not detect the object, the object has disappeared.
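A possible sketch of this detector-tracker reconciliation, assuming the association step itself (for example, by distance or IoU) has been done elsewhere, and that matched results are fused by simple averaging, which is only one of the ways the text permits:

```python
def resolve_positions(predicted, detected, match):
    """Combine tracker predictions with detector output for one frame.
    predicted maps track id -> (x, y); detected maps detection id -> (x, y);
    match maps track id -> detection id for associated pairs."""
    final = {}
    matched_dets = set(match.values())
    for tid, pred in predicted.items():
        if tid in match:
            det = detected[match[tid]]
            # matched by both tracker and detector: fuse the two results
            final[tid] = ((pred[0] + det[0]) / 2.0, (pred[1] + det[1]) / 2.0)
        # a tracked object with no matching detection is treated as disappeared
    for did, det in detected.items():
        if did not in matched_dets:
            final[("new", did)] = det  # newly appeared object B: trust the detector
    return final
```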
Through the embodiments of this application, the first predicted position of each object in the first image after the i-th frame image is obtained; the second predicted position of each object in the first image is then obtained based on the intra-frame relative positional relationship of the objects in the i-th frame; and the predicted position of each object in the first image is obtained based on these first and second predicted positions. With this approach, in which the predicted positions of the objects in a subsequent frame image are obtained based on the intra-frame relative positional relationship of the objects in a previous frame image, tracking drift can be effectively suppressed in scenarios with challenging factors such as occluded targets and similar objects; the mis-tracking rate during target tracking is reduced, tracking stability is improved, and the tracker can run effectively for a long time. The first image may be the (i+1)-th frame image, the (i+2)-th frame image, the (i+3)-th frame image, and so on.
This solution is described below with a specific embodiment. An embodiment of this application provides a target tracking method. In the initial stage, for the first frame image, the confidence of each object can be obtained by a detector; the target with the highest confidence is selected as the parent node, and then, starting from the parent node, with the Euclidean distance between targets in the image coordinate system as the weight, a minimum spanning tree is generated over the graph formed by all targets using Kruskal's algorithm or Prim's algorithm, as shown in FIG. 4b, thereby establishing the intra-frame target structure model and forming intra-frame target data association.
As an example, first define the coordinates of the center position of the i-th target $O_i$ in the image coordinate system as $p_i=(x_i,y_i)$, with confidence $\mathrm{conf}_i$. The relative positional relationship between targets within a frame can be expressed by the relative position vector $v_{ij}=p_j-p_i=(x_j-x_i,\,y_j-y_i)$. Computing the relative position vectors between all targets in the frame yields the position vector set $V=\{v_{ij}\}$.
Then the target with the highest confidence is selected as the parent node, i.e., $O_r=\arg\max_i\{\mathrm{conf}_i\}$. Starting from this root node, with the Euclidean distance between targets in the image coordinate system as the weight, a minimum spanning tree $T$ is built over the graph $G$ formed by all targets using Kruskal's algorithm or Prim's algorithm, i.e., $T(G)=\min\sum_{(i,j)\in T}\lVert v_{ij}\rVert$.
Then, for the second frame image, suppose 4 objects appeared in the first frame. Any tracking algorithm may be used to predict the positions of the 4 objects in the second frame, assuming here that all 4 objects will appear; for example, the first predicted position of each object in the second frame image can be obtained based on the positions of the objects in the first frame image.
From the first predicted positions of the targets in the second frame image obtained above, the target with the highest confidence is selected as the new parent node. Starting from the first predicted position of this parent node, the second predicted positions of the other nodes can be inferred using the relative position vector set of the intra-frame tree structure obtained above, i.e., the second predicted positions of the other targets are obtained. The second predicted position of the parent node is the same as its first predicted position. The minimum spanning tree is used here only as an example.
Let $d_i$ denote the distance between the first predicted position and the second predicted position of the i-th target. If $d_i$ is greater than a set threshold $d_{th}$, the position estimated by the multi-object tracker has substantially changed the shape of the tree and does not satisfy a stable intra-frame structure; in this case, the position obtained by the moving average filtering algorithm is used as the target predicted position. Otherwise, the intra-frame structural relationship is satisfied, and the predicted position of the multi-object tracker (the first predicted position), the position inferred from the intra-frame structure (the second predicted position), or a combination of the two is used as the target predicted position.
Based on the first predicted positions and the second predicted positions of the targets in the second frame image obtained above, the distance between the first and second predicted positions of each target is acquired, and it is then checked whether this distance is greater than a preset threshold. If the distance is greater than the preset threshold, the position estimated by the multi-object tracker has substantially changed the shape of the tree and does not satisfy a stable intra-frame structure; in this case, the position of each target obtained by the moving average filtering algorithm is used as its predicted position. If the distance is not greater than the preset threshold, the intra-frame structural relationship is satisfied: the first predicted position of each target in the second frame image may be used as its predicted position, or the second predicted position may be used, or the predicted position may be obtained by, for example, weighting the first and second predicted positions together. This is not specifically limited here.
The moving average filtering algorithm above may obtain the predicted position of a target in the current frame image based on the positions of that target in multiple frame images. The moving average filtering algorithm is used here only as an example; other algorithms may replace it, and no specific limitation is imposed here.
After the predicted positions of the targets in the second frame image are obtained by the above method, the positions of the objects in the second frame image can be obtained by combining them with the detected positions of the objects obtained by the detector.
After the positions of the objects are obtained, the intra-frame relative positional relationship above can be updated to obtain the intra-frame relative positional relationship of the current frame: the relative positional relationship between the target positions in the second frame image is acquired according to the target positions of the objects in the second frame image, and is then combined with the intra-frame relative positional relationship of the first frame image to obtain the intra-frame relative positional relationship of the second frame image.
On this basis, repeating the above steps in a loop enables tracking prediction for any frame image.
The above embodiment has described the method for predicting the positions of objects in detail. FIG. 5 is a schematic diagram of an application of a target tracking method provided by an embodiment of this application. After an image is input, the tracker checks whether any new object appears in this frame image compared with the previous frame image; if so, the tracker initializes a tracking trajectory for that object. It then checks whether any object has disappeared compared with the previous frame image; if so, the tracker terminates the tracking trajectory of that object. Then, the position of each object in this frame image is predicted using the target position prediction method described with reference to FIG. 3, and the tracking result is output. When this frame image is not the last frame, the next frame image is input and the above steps are repeated.
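The per-frame loop of FIG. 5 might be organized as below; the `tracker` interface (`new_objects`, `init_track`, and so on) is entirely hypothetical and only mirrors the steps just described:

```python
def track_sequence(frames, tracker):
    """Per-frame loop of FIG. 5: start tracks for objects that newly appear,
    terminate tracks for objects that disappeared, predict positions for the
    remaining objects, and emit the tracking result for each frame."""
    results = []
    for frame in frames:
        for obj in tracker.new_objects(frame):
            tracker.init_track(obj)       # new object: initialize its trajectory
        for obj in tracker.vanished_objects(frame):
            tracker.end_track(obj)        # disappeared object: terminate its trajectory
        results.append(tracker.predict(frame))  # prediction method of FIG. 3, steps 301-304
    return results
```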
The target tracking method provided by the embodiments of this application can be used with any existing visual multi-object tracker. FIG. 6 shows the target tracking method of an embodiment of this application applied to a joint KCF and LSTM tracker. KCF is a fast tracker that can be used to track the position response center of a target; LSTM takes temporal information into account and can be used to estimate the scale of a target.
In this improved example, the maximum response position of each object is first tracked quickly by KCF. Specifically, according to the position of each object in the previous frame image, an image patch of each object is extracted proportionally. For each object's image patch, a circulant matrix is used to obtain the training sample set of that object, and ridge regression is used to train an independent correlation filter for each object, serving as the multiple trajectories of the multiple objects. KCF then performs detection on the current frame and predicts the position of each node in the current frame from the response distribution. Based on these positions, and taking temporal information into account, a CNN extracts appearance features of the target image sequence, an LSTM then extracts target motion features for target scale estimation, and a fully connected branch sharing the appearance features estimates the confidence of each target. Applying the method provided by the embodiments of this application to this joint KCF and LSTM tracker clearly improves the tracker's performance.
By predicting the positions of objects in an image based on intra-frame structural data association, such as the intra-frame relative positional relationship of the objects, this solution can effectively suppress tracking drift in scenarios with challenging factors such as occluded targets and similar objects; it reduces the mis-tracking rate during target tracking, improves tracking stability, and allows the tracker to run effectively for a long time.
Referring to FIG. 7, an embodiment of this application further provides a target tracking system, including a position obtaining module 701, a first prediction module 702, a second prediction module 703, and a target prediction module 704, as follows:
the position obtaining module 701 is configured to obtain the intra-frame relative positional relationship of N objects in an i-th frame image, and obtain the position of at least one of the N objects in each of M frame images, where the i-th frame image is the image acquired last in time among the M frames, and M, N, and i are all positive integers;
the first prediction module 702 is configured to, for any object A among the N objects, obtain a first predicted position of the object A in a first image according to M' positions, where the M' positions are the positions of the object A in the M' frame images, among the M frame images, that contain the object A, M' is a positive integer not greater than M, and the first image is an image acquired after the i-th frame in time;
the second prediction module 703 is configured to obtain a second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationship of the N objects in the i-th frame image;
the target prediction module 704 is configured to determine the predicted position of the object A in the first image according to the first predicted position and the second predicted position of the object A.
The target prediction module 704 is specifically configured to: if the distance between the first predicted position and the second predicted position of the object A is greater than a preset threshold, obtain the predicted position of the object A in the first image according to a moving average filtering algorithm.
The target prediction module 704 is further specifically configured to:
if the distance between the first predicted position and the second predicted position of the object A is not greater than the preset threshold, obtain the predicted position of the object A according to the first predicted position of the object A;
or obtain the predicted position of the object A according to the second predicted position of the object A;
or obtain the predicted position of the object A according to the first predicted position and the second predicted position of the object A.
The second prediction module 703 is specifically configured to:
take an object E as a first parent node, where the object E is the object with the highest confidence in the first image;
obtain, from the intra-frame relative positional relationship of the N objects in the i-th frame image, the intra-frame relative position between the first parent node and its child node;
obtain the second predicted position of the child node according to the intra-frame relative position between the first parent node and the child node and the first predicted position of the first parent node;
take the child node as a second parent node, and obtain, from the intra-frame relative positional relationship of the N objects in the i-th frame image, the intra-frame relative position between the second parent node and its child node;
obtain the second predicted position of that child node according to the intra-frame relative position between the second parent node and the child node and the second predicted position of the second parent node;
and so on, until the second predicted position of each of the N objects in the first image is obtained, where the second predicted position of the object E is the same as its first predicted position.
Alternatively, the second prediction module 703 is specifically configured to: obtain the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationship of the N objects in the i-th frame image and the first predicted position of an object E, where the intra-frame relative positional relationship of the N objects includes the intra-frame relative position of each of N-1 objects relative to the object E, the object E is any one of the N objects, the N-1 objects are the objects among the N objects other than the object E, and the second predicted position of the object E is the same as its first predicted position.
When obtaining the intra-frame relative positional relationship of the N objects in the i-th frame image, the position obtaining module 701 is specifically configured to:
obtain the intra-frame relative positional relationship of W objects in a second image, and obtain the positions of the N objects in the i-th frame image, where W is a positive integer, the N objects include at least one of the W objects, and the second image is an image acquired before the i-th frame in time;
obtain the relative positional relationship between the positions of the N objects in the i-th frame image according to the positions of the N objects in the i-th frame image;
obtain the intra-frame relative positional relationship of the N objects in the i-th frame image according to the relative positional relationship between the positions of the N objects and the intra-frame relative positional relationship of the W objects in the second image.
If the N objects do not include an object C among the W objects, the position obtaining module 701 is further specifically configured to:
delete, from the intra-frame relative positional relationship of the W objects in the second image, the relative positional relationship between each object and the object C, to obtain a reference intra-frame relative positional relationship of the i-th frame image;
obtain the intra-frame relative positional relationship of the N objects in the i-th frame image according to the relative positional relationship between the positions of the N objects and the reference intra-frame relative positional relationship of the i-th frame image.
The system further includes a target position obtaining module, configured to: determine the position of the object A in the first image according to the predicted position of the object A;
where determining the position of the object A in the first image according to the predicted position of the object A includes:
acquiring the first image, and obtaining the detected positions of Q objects in the first image according to the first image, Q being a positive integer;
if the Q objects include the object A, determining the position of the object A in the first image according to the predicted position of the object A and the detected position of the object A.
If the Q objects do not include the object A among the N objects, it is confirmed that the object A has disappeared from the first image.
If the Q objects include an object B that matches none of the N objects, the position of the object B in the first image is determined according to the detected position of the object B.
FIG. 8 is a schematic structural diagram of a target tracking apparatus provided by an embodiment of this application. The target tracking apparatus 8000 includes at least one processor 8001, at least one memory 8002, and at least one communication interface 8003. The processor 8001, the memory 8002, and the communication interface 8003 are connected through a communication bus and communicate with each other.
The processor 8001 may be a general-purpose central processing unit (CPU), a microprocessor, an application-specific integrated circuit (ASIC), or one or more integrated circuits configured to control the execution of the programs of the above solutions.
The communication interface 8003 is configured to communicate with other devices or communication networks, such as Ethernet, a radio access network (RAN), or a wireless local area network (WLAN).
The memory 8002 may be a read-only memory (ROM) or another type of static storage device capable of storing static information and instructions, a random access memory (RAM) or another type of dynamic storage device capable of storing information and instructions, an electrically erasable programmable read-only memory (EEPROM), a compact disc read-only memory (CD-ROM) or other optical disc storage, optical disc storage (including compact discs, laser discs, optical discs, digital versatile discs, Blu-ray discs, and the like), a magnetic disk storage medium or other magnetic storage device, or any other medium that can be used to carry or store desired program code in the form of instructions or data structures and that can be accessed by a computer, but is not limited thereto. The memory may exist independently and be connected to the processor through the bus, or may be integrated with the processor.
The memory 8002 is configured to store application program code for executing the above solutions, and execution is controlled by the processor 8001. The processor 8001 is configured to execute the application program code stored in the memory 8002.
The code stored in the memory 8002 can execute the target tracking method provided above.
An embodiment of this application further provides an intelligent driving vehicle including a traveling system, a sensing system, a control system, and a computer system, where the computer system is configured to perform the described method.
An embodiment of this application further provides a chip system applied to an electronic device. The chip system includes one or more interface circuits and one or more processors; the interface circuits and the processors are interconnected through lines; the interface circuits are configured to receive signals from a memory of the electronic device and send the signals to the processors, the signals including computer instructions stored in the memory; and when the processors execute the computer instructions, the electronic device performs the method.
An embodiment of this application further provides a computer-readable storage medium storing instructions that, when run on a computer or a processor, cause the computer or the processor to perform one or more steps of any of the above methods.
An embodiment of this application further provides a computer program product containing instructions that, when run on a computer or a processor, cause the computer or the processor to perform one or more steps of any of the above methods.
The above embodiments may be implemented in whole or in part by software, hardware, firmware, or any combination thereof. When implemented by software, they may be implemented in whole or in part in the form of a computer program product. The computer program product includes one or more computer instructions. When the computer program instructions are loaded and executed on a computer, the procedures or functions according to the embodiments of this application are generated in whole or in part. The computer may be a general-purpose computer, a special-purpose computer, a computer network, or another programmable apparatus. The computer instructions may be stored in, or transmitted through, a computer-readable storage medium. The computer instructions may be transmitted from one website, computer, server, or data center to another by wired means (for example, coaxial cable, optical fiber, or digital subscriber line) or wireless means (for example, infrared, radio, or microwave). The computer-readable storage medium may be any usable medium accessible to a computer, or a data storage device, such as a server or data center, integrating one or more usable media. The usable medium may be a magnetic medium (for example, a floppy disk, hard disk, or magnetic tape), an optical medium (for example, a DVD), or a semiconductor medium (for example, a solid state disk (SSD)).
A person of ordinary skill in the art can understand that all or part of the procedures of the methods in the above embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium, and when executed, may include the procedures of the above method embodiments. The aforementioned storage medium includes various media capable of storing program code, such as a ROM, a random access memory (RAM), a magnetic disk, or an optical disc.
The above is only a specific implementation of the embodiments of this application, but the protection scope of the embodiments of this application is not limited thereto. Any variation or replacement within the technical scope disclosed in the embodiments of this application shall be covered by the protection scope of the embodiments of this application. Therefore, the protection scope of the embodiments of this application shall be subject to the protection scope of the claims.
Claims (23)
- A target tracking method, comprising: obtaining an intra-frame relative positional relationship of N objects in an i-th frame image, and obtaining a position of at least one of the N objects in each of M frame images, wherein the i-th frame image is the image acquired last in time among the M frames, and M, N, and i are all positive integers; for any object A among the N objects, obtaining a first predicted position of the object A in a first image according to M' positions, wherein the M' positions are positions of the object A in M' frame images, among the M frame images, that contain the object A, M' is a positive integer not greater than M, and the first image is an image acquired after the i-th frame in time; obtaining a second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationship of the N objects in the i-th frame image; and determining a predicted position of the object A in the first image according to the first predicted position and the second predicted position of the object A.
- The method according to claim 1, wherein determining the predicted position of the object A in the first image according to the first predicted position and the second predicted position of the object A comprises: if a distance between the first predicted position and the second predicted position of the object A is greater than a preset threshold, obtaining the predicted position of the object A in the first image according to a moving average filtering algorithm.
- The method according to claim 1, wherein determining the predicted position of the object A in the first image according to the first predicted position and the second predicted position of the object A comprises: if a distance between the first predicted position and the second predicted position of the object A is not greater than a preset threshold, obtaining the predicted position of the object A according to the first predicted position of the object A; or obtaining the predicted position of the object A according to the second predicted position of the object A; or obtaining the predicted position of the object A according to the first predicted position and the second predicted position of the object A.
- The method according to any one of claims 1 to 3, wherein obtaining the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationship of the N objects in the i-th frame image comprises: taking an object E as a first parent node, wherein the object E is the object with the highest confidence in the first image; obtaining, from the intra-frame relative positional relationship of the N objects in the i-th frame image, an intra-frame relative position between the first parent node and a child node; obtaining a second predicted position of the child node according to the intra-frame relative position between the first parent node and the child node and the first predicted position of the first parent node; taking the child node as a second parent node, and obtaining, from the intra-frame relative positional relationship of the N objects in the i-th frame image, an intra-frame relative position between the second parent node and a child node; obtaining a second predicted position of that child node according to the intra-frame relative position between the second parent node and the child node and the second predicted position of the second parent node; and so on, until the second predicted position of each of the N objects in the first image is obtained, wherein the second predicted position of the object E is the same as its first predicted position.
- The method according to any one of claims 1 to 3, wherein obtaining the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationship of the N objects in the i-th frame image comprises: obtaining the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationship of the N objects in the i-th frame image and the first predicted position of an object E, wherein the intra-frame relative positional relationship of the N objects comprises an intra-frame relative position of each of N-1 objects relative to the object E, the object E is any one of the N objects, the N-1 objects are the objects among the N objects other than the object E, and the second predicted position of the object E is the same as its first predicted position.
- The method according to any one of claims 1 to 5, wherein obtaining the intra-frame relative positional relationship of the N objects in the i-th frame image comprises: obtaining an intra-frame relative positional relationship of W objects in a second image, and obtaining positions of the N objects in the i-th frame image, wherein W is a positive integer, the N objects include at least one of the W objects, and the second image is an image acquired before the i-th frame in time; obtaining a relative positional relationship between the positions of the N objects in the i-th frame image according to the positions of the N objects in the i-th frame image; and obtaining the intra-frame relative positional relationship of the N objects in the i-th frame image according to the relative positional relationship between the positions of the N objects and the intra-frame relative positional relationship of the W objects in the second image.
- The method according to claim 6, wherein, if the N objects do not include an object C among the W objects, obtaining the intra-frame relative positional relationship of the N objects in the i-th frame image according to the relative positional relationship between the positions of the N objects and the intra-frame relative positional relationship of the W objects in the second image comprises: deleting, from the intra-frame relative positional relationship of the W objects in the second image, the relative positional relationship between each object and the object C, to obtain a reference intra-frame relative positional relationship of the i-th frame image; and obtaining the intra-frame relative positional relationship of the N objects in the i-th frame image according to the relative positional relationship between the positions of the N objects and the reference intra-frame relative positional relationship of the i-th frame image.
- The method according to any one of claims 1 to 7, further comprising: determining a position of the object A in the first image according to the predicted position of the object A; wherein determining the position of the object A in the first image according to the predicted position of the object A comprises: acquiring the first image, and obtaining detected positions of Q objects in the first image according to the first image, Q being a positive integer; and, if the Q objects include the object A, determining the position of the object A in the first image according to the predicted position of the object A and the detected position of the object A.
- The method according to claim 8, wherein, if the Q objects do not include the object A among the N objects, it is confirmed that the object A has disappeared from the first image.
- The method according to claim 8 or 9, wherein, if the Q objects include an object B that matches none of the N objects, a position of the object B in the first image is determined according to the detected position of the object B.
- A target tracking system, comprising: a position obtaining module configured to obtain an intra-frame relative positional relationship of N objects in an i-th frame image, and obtain a position of at least one of the N objects in each of M frame images, wherein the i-th frame image is the image acquired last in time among the M frames, and M, N, and i are all positive integers; a first prediction module configured to, for any object A among the N objects, obtain a first predicted position of the object A in a first image according to M' positions, wherein the M' positions are positions of the object A in M' frame images, among the M frame images, that contain the object A, M' is a positive integer not greater than M, and the first image is an image acquired after the i-th frame in time; a second prediction module configured to obtain a second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationship of the N objects in the i-th frame image; and a target prediction module configured to determine a predicted position of the object A in the first image according to the first predicted position and the second predicted position of the object A.
- The system according to claim 11, wherein the target prediction module is specifically configured to: if a distance between the first predicted position and the second predicted position of the object A is greater than a preset threshold, obtain the predicted position of the object A in the first image according to a moving average filtering algorithm.
- The system according to claim 11, wherein the target prediction module is specifically configured to: if a distance between the first predicted position and the second predicted position of the object A is not greater than a preset threshold, obtain the predicted position of the object A according to the first predicted position of the object A; or obtain the predicted position of the object A according to the second predicted position of the object A; or obtain the predicted position of the object A according to the first predicted position and the second predicted position of the object A.
- The system according to any one of claims 11 to 13, wherein the second prediction module is specifically configured to: take an object E as a first parent node, wherein the object E is the object with the highest confidence in the first image; obtain, from the intra-frame relative positional relationship of the N objects in the i-th frame image, an intra-frame relative position between the first parent node and a child node; obtain a second predicted position of the child node according to the intra-frame relative position between the first parent node and the child node and the first predicted position of the first parent node; take the child node as a second parent node, and obtain, from the intra-frame relative positional relationship of the N objects in the i-th frame image, an intra-frame relative position between the second parent node and a child node; obtain a second predicted position of that child node according to the intra-frame relative position between the second parent node and the child node and the second predicted position of the second parent node; and so on, until the second predicted position of each of the N objects in the first image is obtained, wherein the second predicted position of the object E is the same as its first predicted position.
- The system according to any one of claims 11 to 13, wherein the second prediction module is specifically configured to: obtain the second predicted position of each of the N objects in the first image according to the intra-frame relative positional relationship of the N objects in the i-th frame image and the first predicted position of an object E, wherein the intra-frame relative positional relationship of the N objects comprises an intra-frame relative position of each of N-1 objects relative to the object E, the object E is any one of the N objects, the N-1 objects are the objects among the N objects other than the object E, and the second predicted position of the object E is the same as its first predicted position.
- The system according to any one of claims 11 to 15, wherein, when obtaining the intra-frame relative positional relationship of the N objects in the i-th frame image, the position obtaining module is specifically configured to: obtain an intra-frame relative positional relationship of W objects in a second image, and obtain positions of the N objects in the i-th frame image, wherein W is a positive integer, the N objects include at least one of the W objects, and the second image is an image acquired before the i-th frame in time; obtain a relative positional relationship between the positions of the N objects in the i-th frame image according to the positions of the N objects in the i-th frame image; and obtain the intra-frame relative positional relationship of the N objects in the i-th frame image according to the relative positional relationship between the positions of the N objects and the intra-frame relative positional relationship of the W objects in the second image.
- The system according to claim 16, wherein, if the N objects do not include an object C among the W objects, the position obtaining module is further configured to: delete, from the intra-frame relative positional relationship of the W objects in the second image, the relative positional relationship between each object and the object C, to obtain a reference intra-frame relative positional relationship of the i-th frame image; and obtain the intra-frame relative positional relationship of the N objects in the i-th frame image according to the relative positional relationship between the positions of the N objects and the reference intra-frame relative positional relationship of the i-th frame image.
- The system according to any one of claims 11 to 17, further comprising a target position obtaining module configured to: determine a position of the object A in the first image according to the predicted position of the object A; wherein determining the position of the object A in the first image according to the predicted position of the object A comprises: acquiring the first image, and obtaining detected positions of Q objects in the first image according to the first image, Q being a positive integer; and, if the Q objects include the object A, determining the position of the object A in the first image according to the predicted position of the object A and the detected position of the object A.
- The system according to claim 18, wherein, if the Q objects do not include the object A among the N objects, it is confirmed that the object A has disappeared from the first image.
- The system according to claim 18 or 19, wherein, if the Q objects include an object B that matches none of the N objects, a position of the object B in the first image is determined according to the detected position of the object B.
- A chip system applied to an electronic device, the chip system comprising one or more interface circuits and one or more processors, wherein the interface circuits and the processors are interconnected through lines; the interface circuits are configured to receive signals from a memory of the electronic device and send the signals to the processors, the signals comprising computer instructions stored in the memory; and, when the processors execute the computer instructions, the electronic device performs the method according to any one of claims 1-10.
- A computer storage medium storing a computer program, the computer program comprising program instructions that, when executed by a processor, cause the processor to perform the method according to any one of claims 1-10.
- An intelligent driving vehicle, comprising a traveling system, a sensing system, a control system, and a computer system, wherein the computer system is configured to perform the method according to any one of claims 1-10.
Applications Claiming Priority (2)

| Application Number | Priority Date | Filing Date | Title |
|---|---|---|---|
| CN202010739727.7A | 2020-07-28 | 2020-07-28 | Target tracking method and related system, storage medium, and intelligent driving vehicle |
| CN202010739727.7 | 2020-07-28 | | |

Publications (1)

| Publication Number | Publication Date |
|---|---|
| WO2022021924A1 | 2022-02-03 |

Family (ID=79920649)

Family Applications (1)

| Application Number | Title | Priority Date | Filing Date |
|---|---|---|---|
| PCT/CN2021/084784 | Target tracking method and related system, storage medium, and intelligent driving vehicle | 2020-07-28 | 2021-03-31 |
Also Published As

| Publication Number | Publication Date |
|---|---|
| CN114004861A (zh) | 2022-02-01 |
| CN114004861B (zh) | 2023-04-07 |
Legal Events

| Date | Code | Title | Description |
|---|---|---|---|
| | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 21850462; Country of ref document: EP; Kind code of ref document: A1 |
| | NENP | Non-entry into the national phase | Ref country code: DE |
| | 122 | Ep: pct application non-entry in european phase | Ref document number: 21850462; Country of ref document: EP; Kind code of ref document: A1 |