CN116310403A - Target tracking method, device, electronic equipment and readable storage medium


Info

Publication number
CN116310403A
CN116310403A (application CN202310127549.6A)
Authority
CN
China
Prior art keywords
target
tracking
unassociated
pair
detection
Prior art date
Legal status
Pending
Application number
CN202310127549.6A
Other languages
Chinese (zh)
Inventor
路金诚
张伟
谭啸
李莹莹
Current Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Original Assignee
Beijing Baidu Netcom Science and Technology Co Ltd
Application filed by Beijing Baidu Netcom Science and Technology Co Ltd filed Critical Beijing Baidu Netcom Science and Technology Co Ltd
Priority to CN202310127549.6A
Publication of CN116310403A


Classifications

    • G06V 10/62: Extraction of image or video features relating to a temporal dimension, e.g. time-based feature extraction; pattern tracking
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/50: Extraction of image or video features by performing operations within image blocks; by using histograms, e.g. histogram of oriented gradients [HoG]; by summing image-intensity values; projection analysis
    • G06V 10/56: Extraction of image or video features relating to colour
    • G06V 10/75: Organisation of the matching processes, e.g. simultaneous or sequential comparisons of image or video features; coarse-fine approaches, e.g. multi-scale approaches; context analysis; selection of dictionaries
    • G06V 20/46: Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • G06V 20/52: Surveillance or monitoring of activities, e.g. for recognising suspicious objects
    • G06V 2201/07: Target detection

Abstract

The disclosure provides a target tracking method and apparatus, an electronic device, and a readable storage medium, relating to the technical field of artificial intelligence, in particular to computer vision, image processing, and deep learning, and applicable to scenarios such as smart cities. The method comprises the following steps: performing target detection on the current image frame to obtain at least one detection target; transforming coordinate values of pixels in the at least one detection target according to at least one first feature point in the current image frame and at least one second feature point in a historical image frame to obtain at least one updated detection target; performing first matching between the at least one updated detection target and at least one confirmed tracking target according to a first preset feature to obtain at least one first target pair; and updating the confirmed tracking target in the at least one first target pair according to the updated detection target in the at least one first target pair. By fully accounting for the influence of camera motion on target tracking, the method can improve the accuracy and robustness of target tracking.

Description

Target tracking method, device, electronic equipment and readable storage medium
Technical Field
The disclosure relates to the technical field of artificial intelligence, in particular to computer vision, image processing, and deep learning, and can be applied to scenarios such as smart cities. Provided are a target tracking method and apparatus, an electronic device, and a readable storage medium.
Background
With the continuous growth of the total number of expressway vehicles and the total mileage of highways in China, road traffic management faces new challenges. Conventional expressway cameras are deployed at limited intervals and cannot guarantee full coverage of the road area, and they suffer from a fixed field of view and high labor and material costs. Unmanned aerial vehicles, motorcycles, and other mobile platforms offer strong maneuverability, a large field of view, and flexible deployment; mounting surveillance cameras on such platforms compensates for the shortcomings of traditional video surveillance, contributing to an all-around, three-dimensional, and intuitive monitoring system, realizing intelligent traffic management, and improving the response speed to emergencies.
Disclosure of Invention
According to a first aspect of the present disclosure, there is provided a target tracking method, including: performing target detection on the current image frame to obtain at least one detection target; transforming coordinate values of pixels in the at least one detection target according to at least one first feature point in the current image frame and at least one second feature point in the historical image frame to obtain at least one updated detection target; according to the first preset characteristics, performing first matching on the at least one updated detection target and the at least one confirmed tracking target to obtain at least one first target pair; updating the confirmed tracking target in the at least one first target pair according to the updated detection target in the at least one first target pair.
According to a second aspect of the present disclosure, there is provided an object tracking apparatus including: the detection unit is used for carrying out target detection on the current image frame to obtain at least one detection target; an alignment unit, configured to transform coordinate values of pixels in the at least one detection target according to at least one first feature point in the current image frame and at least one second feature point in the historical image frame, so as to obtain at least one updated detection target; the first matching unit is used for carrying out first matching on the at least one updated detection target and the at least one confirmed tracking target according to a first preset characteristic to obtain at least one first target pair; and the tracking unit is used for updating the confirmed tracking target in the at least one first target pair according to the updated detection target in the at least one first target pair.
According to a third aspect of the present disclosure, there is provided an electronic device comprising: at least one processor; and a memory communicatively coupled to the at least one processor; wherein the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method as described above.
According to a fourth aspect of the present disclosure, there is provided a non-transitory computer readable storage medium storing computer instructions for causing the computer to perform the method as described above.
According to a fifth aspect of the present disclosure, there is provided a computer program product comprising a computer program which, when executed by a processor, implements a method as described above.
In the above technical solution, motion compensation is achieved by aligning the coordinate systems of the two image frames, which fully accounts for the influence of camera motion on target tracking. The accuracy and robustness of target tracking can therefore be improved in tracking scenarios with large scene changes or fast relative motion between the camera and the target.
It should be understood that the description in this section is not intended to identify key or critical features of the embodiments of the disclosure, nor is it intended to be used to limit the scope of the disclosure. Other features of the present disclosure will become apparent from the following specification.
Drawings
The drawings are for a better understanding of the present solution and are not to be construed as limiting the present disclosure. Wherein:
FIG. 1 is a schematic diagram according to a first embodiment of the present disclosure;
FIG. 2 is a schematic diagram according to a second embodiment of the present disclosure;
FIG. 3 is a schematic diagram according to a third embodiment of the present disclosure;
FIG. 4 is a schematic diagram according to a fourth embodiment of the present disclosure;
fig. 5 is a block diagram of an electronic device for implementing the target tracking method of an embodiment of the present disclosure.
Detailed Description
Exemplary embodiments of the present disclosure are described below in conjunction with the accompanying drawings, which include various details of the embodiments of the present disclosure to facilitate understanding, and should be considered as merely exemplary. Accordingly, one of ordinary skill in the art will recognize that various changes and modifications of the embodiments described herein can be made without departing from the scope and spirit of the present disclosure. Also, descriptions of well-known functions and constructions are omitted in the following description for clarity and conciseness.
Fig. 1 is a schematic diagram according to a first embodiment of the present disclosure. As shown in fig. 1, the target tracking method of the present embodiment specifically includes the following steps:
s101, performing target detection on a current image frame to obtain at least one detection target;
s102, transforming coordinate values of pixels in the at least one detection target according to at least one first feature point in the current image frame and at least one second feature point in the historical image frame to obtain at least one updated detection target;
S103, according to a first preset characteristic, performing first matching on the at least one updated detection target and the at least one confirmed tracking target to obtain at least one first target pair;
s104, updating the confirmed tracking target in the at least one first target pair according to the updated detection target in the at least one first target pair.
According to the target tracking method of this embodiment, after at least one detection target in the current image frame is obtained, the coordinate values of pixels in each detection target are transformed according to feature points extracted from the current image frame and the historical image frame, yielding updated detection targets; this aligns the coordinate systems of the two image frames. The updated detection targets are then subjected to first matching with the confirmed tracking targets, and target tracking is completed according to the resulting at least one first target pair.
When executing S101, this embodiment may obtain an image frame captured by an image acquisition device (for example, an on-board camera) on an unmanned aerial vehicle as the current image frame. Performing target tracking with a camera-equipped unmanned aerial vehicle provides better coverage of areas that fixed image acquisition devices cannot easily capture, making it more convenient to execute target tracking tasks in scenarios such as intelligent security, rescue and disaster relief, intelligent transportation, and smart cities.
The detection target obtained by the embodiment executing S101 may be a vehicle; that is, the target tracking method of the embodiment can be applied to an unmanned aerial vehicle road inspection scene, and the purpose of monitoring abnormal or illegal behaviors of the vehicle by using the unmanned aerial vehicle is achieved by tracking the vehicle in the road.
In the embodiment, when S101 is executed, the target detection model obtained by training in advance may be used to perform target detection on the current image frame, so as to obtain at least one detection target, where different detection targets correspond to different detection frames.
After the at least one detection target is obtained in the step S101, the step S102 is performed to transform the coordinate values of the pixels in the at least one detection target according to the at least one first feature point in the current image frame and the at least one second feature point in the historical image frame to obtain at least one updated detection target; wherein the historical image frame is the previous frame image of the current image frame.
Specifically, when executing S102 to transform the coordinate values of pixels in the at least one detection target according to the at least one first feature point in the current image frame and the at least one second feature point in the historical image frame to obtain at least one updated detection target, this embodiment may adopt the following optional implementation: matching the at least one first feature point with the at least one second feature point to obtain at least one feature point pair, where the matching between feature points may be performed according to their feature descriptors; obtaining a coordinate transformation matrix between the current image frame and the historical image frame according to the at least one feature point pair, for which this embodiment may use the RANSAC method; and transforming the coordinate values of pixels in the at least one detection target according to the obtained coordinate transformation matrix to obtain the at least one updated detection target.
When executing S102 to extract the at least one first feature point from the current image frame, this embodiment may adopt the following optional implementation: performing feature point detection on the current image frame to obtain at least one feature point, where the feature points obtained in this embodiment may be ORB (Oriented FAST and Rotated BRIEF) feature points, together with their feature descriptors, which are used for feature point matching; and taking the obtained feature points located outside the detection targets (e.g., outside the bounding boxes of the detection targets) as the at least one first feature point.
In the embodiment, when S102 is executed to extract at least one second feature point from the historical image frame, an optional implementation manner may be adopted as follows: detecting characteristic points of the historical image frames to obtain at least one characteristic point; and taking the obtained at least one feature point outside the tracking target (such as a bounding box of the tracking target) as at least one second feature point.
That is, since the feature points located inside the detection target or the tracking target affect the accuracy of the coordinate transformation, the embodiment only selects the feature points located outside the target as the first feature point or the second feature point, so that the first feature point and the second feature point for matching are both located on the background of the image frame, and the accuracy of the coordinate transformation is improved.
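To make the alignment step concrete, the following is a minimal sketch of S102 under stated assumptions: OpenCV's ORB features and a RANSAC homography stand in for the feature points and the coordinate transformation matrix, detections are (x1, y1, x2, y2) boxes, the mapping direction (current frame into the historical frame's coordinate system) is assumed, and all function and parameter names are illustrative rather than taken from the patent.

```python
# A sketch of motion compensation (S102), assuming ORB features and a
# RANSAC homography; box format and mapping direction are assumptions.
import cv2
import numpy as np

def keypoints_outside_boxes(keypoints, boxes):
    # Keep only feature points on the background, i.e. outside every
    # target bounding box, as described above.
    kept = []
    for kp in keypoints:
        x, y = kp.pt
        if not any(x1 <= x <= x2 and y1 <= y <= y2 for (x1, y1, x2, y2) in boxes):
            kept.append(kp)
    return kept

def align_detections(curr_gray, hist_gray, curr_boxes, hist_boxes):
    orb = cv2.ORB_create(nfeatures=1000)
    kp_curr = keypoints_outside_boxes(orb.detect(curr_gray, None), curr_boxes)
    kp_hist = keypoints_outside_boxes(orb.detect(hist_gray, None), hist_boxes)
    kp_curr, des_curr = orb.compute(curr_gray, kp_curr)  # first feature points
    kp_hist, des_hist = orb.compute(hist_gray, kp_hist)  # second feature points

    # Match feature points by their binary descriptors (Hamming distance).
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(des_curr, des_hist)

    # Estimate the coordinate transformation matrix from the feature
    # point pairs with RANSAC.
    src = np.float32([kp_curr[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp_hist[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, ransacReprojThreshold=3.0)

    # Transform the corner coordinates of each detection box to obtain
    # the updated detection targets.
    updated = []
    for (x1, y1, x2, y2) in curr_boxes:
        pts = np.float32([[x1, y1], [x2, y2]]).reshape(-1, 1, 2)
        (nx1, ny1), (nx2, ny2) = cv2.perspectiveTransform(pts, H).reshape(-1, 2)
        updated.append((nx1, ny1, nx2, ny2))
    return updated
```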
After the step S102 of obtaining at least one updated detection target, the step S103 of performing a first matching on the at least one updated detection target and the at least one confirmed tracking target according to a first preset feature to obtain at least one first target pair; each first target pair obtained in S103 includes an updated detection target and a confirmed tracking target matched with the updated detection target.
In the present embodiment, the tracking targets located in the history image frame are divided into confirmed tracking targets and unconfirmed tracking targets; wherein, the unconfirmed tracking target is a tracking target with the continuous occurrence frequency smaller than or equal to the preset frequency in the plurality of historical image frames, and the confirmed tracking target is a tracking target with the continuous occurrence frequency larger than the preset frequency in the plurality of historical image frames; the present embodiment can determine the confirmed tracking target and the unconfirmed tracking target in the tracking targets by setting the target tags of the tracking targets (for example, the target tag of the confirmed tracking target is 1, the target tag of the unconfirmed tracking target is 0).
Specifically, in the embodiment, in executing S103, according to the first preset feature, the at least one updated detection target and the at least one confirmed tracking target are subjected to first matching, so as to obtain at least one first target pair, where an optional implementation manner may be adopted: acquiring at least one color histogram feature of an updated detection target; acquiring color histogram features of at least one confirmed tracking target; and performing first matching on at least one updated detection target and at least one confirmed tracking target according to the acquired color histogram characteristics to obtain at least one first target pair.
That is, this embodiment uses the color histogram features of targets as the first preset feature for matching between targets; since color histogram features are cheap to compute and are invariant to translation, rotation, and scaling, matching between targets achieves high accuracy while matching efficiency is improved, thereby improving the efficiency of target tracking.
The present embodiment, when executing S103, may acquire the color histogram feature of the updated detection target or the confirmed tracking target using the following calculation formula:
P = (p_0, p_1, ..., p_23)

p_i = c_i / (c_0 + c_1 + ... + c_23)

In the above formulas: P represents the color histogram feature of the target; p_i represents the proportion of pixels of color i among all pixels of the target; c_i represents the number of pixels of color i.
When executing S103, this embodiment may use as the color histogram feature a 24-dimensional feature obtained from the three channels R, G, and B, each divided into 8 intervals (0-31, 32-63, 64-95, 96-127, 128-159, 160-191, 192-223, 224-255) according to the pixel gray values (0-255).
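As a minimal sketch, the 24-dimensional feature above can be computed as follows; the convention that the input is an HxWx3 RGB crop cut out by the target's bounding box is an assumption for illustration.

```python
# A sketch of the 24-dimensional color histogram feature: 8 gray-value
# intervals per R/G/B channel, normalised to pixel-count ratios as in
# the formula above. The RGB patch convention is an assumption.
import numpy as np

def color_histogram_feature(patch):
    """patch: HxWx3 uint8 crop of a target; returns P = (p_0, ..., p_23)."""
    counts = []
    for ch in range(3):  # R, G, B channels
        c, _ = np.histogram(patch[:, :, ch], bins=8, range=(0, 256))
        counts.append(c)
    c = np.concatenate(counts).astype(np.float64)  # c_i: pixel count of color i
    return c / c.sum()                             # p_i = c_i / sum_j c_j
```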
When executing S103 to perform first matching between the at least one updated detection target and the at least one confirmed tracking target according to the acquired color histogram features to obtain at least one first target pair, this embodiment may adopt the following optional implementation: calculating a cosine distance cost matrix between the at least one updated detection target and the at least one confirmed tracking target according to the color histogram features; and obtaining at least one first target pair according to the cosine distance cost matrix, for example by first determining the elements of the cosine distance cost matrix that are smaller than a preset threshold, and then forming a first target pair from the updated detection target and the confirmed tracking target corresponding to each such element.
That is, in this embodiment, the matching relationship between the updated detection target and the confirmed tracking target is determined according to the cost matrix obtained by calculating the color histogram features of the at least one updated detection target and the at least one confirmed tracking target, so as to obtain the first target pair, so that the matching step between the targets can be simplified, the matching efficiency between the targets can be improved, and further the performance requirement of target tracking under the unmanned aerial vehicle road inspection scene can be satisfied.
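A sketch of this matching stage is given below; the threshold value and the use of the Hungarian algorithm (rather than plain thresholding) to resolve the cost matrix are illustrative assumptions, not details from the patent.

```python
# A sketch of the first matching (S103): cosine distance cost matrix
# over the color histogram features, resolved with the Hungarian
# algorithm; threshold and assignment strategy are assumptions.
import numpy as np
from scipy.optimize import linear_sum_assignment

def first_matching(det_feats, trk_feats, max_cost=0.3):
    D = np.asarray(det_feats)  # (num_dets, 24) updated-detection features
    T = np.asarray(trk_feats)  # (num_trks, 24) confirmed-track features
    # Cosine distance cost matrix: 1 - cosine similarity.
    norm_d = np.linalg.norm(D, axis=1, keepdims=True)
    norm_t = np.linalg.norm(T, axis=1, keepdims=True)
    cost = 1.0 - (D @ T.T) / (norm_d * norm_t.T + 1e-12)
    # Resolve the cost matrix and keep pairs below the preset threshold;
    # each kept (detection, track) index pair is a "first target pair".
    rows, cols = linear_sum_assignment(cost)
    return [(r, c) for r, c in zip(rows, cols) if cost[r, c] < max_cost]
```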
In the embodiment, when executing S103, the appearance feature of the target may be used as the first preset feature, or the target may be input into a feature extraction model trained in advance, and the output result of the model may be used as the first preset feature.
After the execution S103 of the present embodiment obtains at least one first target pair, the execution S104 updates the confirmed tracking target in the at least one first target pair according to the updated detection target in the at least one first target pair.
When executing S104 to update the confirmed tracking target in a first target pair according to the updated detection target in that pair, this embodiment may update the ReID of the confirmed tracking target to the ReID of the updated detection target, and may update parameters such as the position of the bounding box of the confirmed tracking target in the current image frame, the size of the bounding box, the motion direction of the target, and the motion speed of the target to the corresponding parameters of the updated detection target.
In this embodiment, after S104 updating the confirmed tracking target according to the updated detection target, the parameters of the confirmed tracking target in the next frame image may be predicted by using a kalman filtering method according to the parameters (such as the position of the bounding box, the size of the bounding box, the moving direction of the target, the moving speed of the target, etc.) after updating the confirmed tracking target; that is, the present embodiment can perform the first matching of the next frame image using the predicted correlation parameters of the confirmed tracking target.
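As a minimal sketch of the prediction step, a constant-velocity Kalman prediction over a track state is shown below; the state layout (center, size, and their velocities) and the noise values are assumptions for illustration, not taken from the patent.

```python
# A constant-velocity Kalman prediction step over a track state
# (cx, cy, w, h, vx, vy, vw, vh); state layout and noise are assumptions.
import numpy as np

def kalman_predict(x, P, dt=1.0, q=1e-2):
    """x: (8,) state vector, P: (8, 8) covariance; returns predicted (x, P)."""
    F = np.eye(8)
    F[:4, 4:] = dt * np.eye(4)  # position/size += velocity * dt
    Q = q * np.eye(8)           # process noise covariance
    return F @ x, F @ P @ F.T + Q
```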
Fig. 2 is a schematic diagram according to a second embodiment of the present disclosure. As shown in fig. 2, the object tracking method of the present embodiment may further include the following steps after performing S103 "after obtaining at least one first object pair:
s201, acquiring at least one unassociated tracking target;
s202, acquiring updated detection targets which are not contained in the at least one first target pair as at least one unassociated detection target;
s203, performing second matching on the at least one unassociated detection target and the at least one unassociated tracking target according to a second preset feature to obtain at least one second target pair;
s204, updating the unassociated tracking target in the at least one second target pair according to the unassociated detection target in the at least one second target pair.
That is, after the first matching between the updated detection targets and the confirmed tracking targets is completed, some updated detection targets may not be included in any first target pair. By acquiring at least one unassociated tracking target for a second matching with those updated detection targets, this embodiment avoids omitting updated detection targets from matching and further improves the accuracy and robustness of target tracking.
When S201 is performed to acquire at least one unassociated tracking target, the present embodiment may acquire a confirmed tracking target that is not included in at least one first target pair as the at least one unassociated tracking target.
That is, in this embodiment, the confirmed tracking target that is not matched with the updated detection target in the first matching is secondarily matched, so that the confirmed tracking target that is not successfully matched in the first stage and the updated detection target pair can be matched in the second stage, thereby improving the accuracy of matching the tracking target and the detection target.
When executing S201 to acquire the at least one unassociated tracking target, this embodiment may also acquire an unconfirmed tracking target as the at least one unassociated tracking target.
That is, this embodiment matches an unconfirmed tracking target only once, avoiding the computing resources that would be consumed if unconfirmed targets participated in the first matching; this further improves the efficiency of target tracking and thus meets the performance requirements of target tracking in unmanned aerial vehicle road inspection scenarios.
It is to be understood that, when executing S201, this embodiment may also simultaneously acquire both the unconfirmed tracking targets and the confirmed tracking targets not included in the at least one first target pair as the at least one unassociated tracking target.
Specifically, when executing S203 to perform second matching between the at least one unassociated detection target and the at least one unassociated tracking target according to the second preset feature to obtain at least one second target pair, this embodiment may adopt the following optional implementation: obtaining a center point weighted distance between the at least one unassociated detection target and the at least one unassociated tracking target according to the motion direction of the unassociated detection target and the line connecting the center point of the unassociated detection target with the trajectory center point of the unassociated tracking target; and obtaining at least one second target pair according to the obtained center point weighted distances, for example by forming a second target pair from an unassociated detection target and an unassociated tracking target whose center point weighted distance is smaller than a preset distance.
That is, this embodiment uses the center point weighted distance between the unassociated tracking target and the unassociated detection target as the second preset feature for the second matching between targets. This fully accounts for the influence of camera motion on target tracking and retains a certain tracking capability even when a target is completely occluded for a short time, thereby improving the accuracy and robustness of target tracking.
In the present embodiment, when S203 is performed to obtain the weighted distance between the center point of the at least one unassociated detection target and the at least one unassociated tracking target, the following calculation formula may be used:
d = (w_0 + cos θ) d_0

In the above formula: d is the center point weighted distance between the unassociated detection target and the unassociated tracking target; w_0 is a preset weight coefficient; θ is the angle between the motion direction of the unassociated detection target and the line connecting its center point with the trajectory center point of the unassociated tracking target; d_0 is the Euclidean distance between the center point of the unassociated detection target and the center point of the unassociated tracking target.
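The weighted distance above can be computed as in the following sketch; the convention that the connecting line points from the detection center toward the track's trajectory center is an assumption based on the formula.

```python
# A sketch of the center point weighted distance d = (w_0 + cos θ) d_0;
# the direction convention for θ is an assumption based on the formula.
import numpy as np

def weighted_center_distance(det_center, det_motion_dir, trk_center, w0=1.0):
    det_center = np.asarray(det_center, dtype=float)
    trk_center = np.asarray(trk_center, dtype=float)
    v = trk_center - det_center        # connecting line between the centers
    d0 = np.linalg.norm(v)             # Euclidean distance d_0
    u = np.asarray(det_motion_dir, dtype=float)
    cos_theta = (v @ u) / (d0 * np.linalg.norm(u) + 1e-12)
    return (w0 + cos_theta) * d0
```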
Each second target pair obtained by executing S203 in this embodiment includes an unassociated detection target and an unassociated tracking target matched with the unassociated detection target.
The process of executing S204 when updating the unassociated tracking target according to the unassociated detection target in the second target pair is similar to the process of executing S104 when updating the confirmed tracking target according to the updated detection target in the first target pair, and will not be described in detail here.
Since this embodiment may use unconfirmed tracking targets as unassociated tracking targets when executing S201, executing S204 to update the unassociated tracking target in the at least one second target pair may further include: in the case that the unassociated tracking target is an unconfirmed tracking target, acquiring the number of consecutive occurrences of that unassociated tracking target; and updating the unassociated tracking target to a confirmed tracking target in the case that the acquired number of consecutive occurrences is determined to be greater than the preset number.
This embodiment may further include the following after executing S204: acquiring the unassociated detection targets not included in any second target pair as new unconfirmed tracking targets, which may likewise be updated (for example, predicted by the Kalman filtering method).

That is, this embodiment uses the updated detection targets that remain unmatched after the two-stage matching as unconfirmed tracking targets for the next second matching, thereby updating the tracking targets in real time and ensuring that every updated detection target in the current image frame can participate in the next matching.
This embodiment may further include the following after executing S204: acquiring the unassociated tracking targets not included in any second target pair as tracking targets to be processed; in the case that an acquired tracking target to be processed is a confirmed tracking target, acquiring its non-updated duration; and updating the tracking target to be processed (for example, predicting it by the Kalman filtering method) in the case that the acquired non-updated duration is less than a preset duration, and otherwise deleting it, i.e., deleting any tracking target to be processed whose non-updated duration is greater than or equal to the preset duration.
This embodiment may further include the following after executing S204: deleting the tracking target to be processed in the case that the acquired tracking target to be processed is an unconfirmed tracking target.
That is, after the two-stage matching is completed, this embodiment deletes or updates the tracking targets that were not successfully matched according to their type (confirmed or unconfirmed) and their non-updated duration, thereby updating the tracking targets in real time and further improving the accuracy and robustness of target tracking.
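The track maintenance described in the preceding paragraphs can be sketched as follows; the Track fields, the thresholds, and the exact promotion and deletion rules are illustrative assumptions consistent with the description above.

```python
# A sketch of post-matching track maintenance: promotion of unconfirmed
# tracks, and deletion of stale or never-confirmed tracks; field names
# and thresholds are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Track:
    confirmed: bool
    consecutive_hits: int = 0     # consecutive frames with a matched detection
    frames_since_update: int = 0  # "non-updated duration" in frames

def maintain_tracks(tracks, matched_indices, hit_threshold=3, max_age=30):
    kept = []
    for i, trk in enumerate(tracks):
        if i in matched_indices:
            trk.consecutive_hits += 1
            trk.frames_since_update = 0
            # Promote an unconfirmed track once it has appeared in more
            # than the preset number of consecutive frames.
            if not trk.confirmed and trk.consecutive_hits > hit_threshold:
                trk.confirmed = True
            kept.append(trk)
        else:
            trk.consecutive_hits = 0
            trk.frames_since_update += 1
            # Unmatched unconfirmed tracks are deleted immediately;
            # confirmed tracks are kept (and predicted forward) until
            # their non-updated duration reaches the preset limit.
            if trk.confirmed and trk.frames_since_update < max_age:
                kept.append(trk)
    return kept
```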
Fig. 3 is a schematic diagram according to a third embodiment of the present disclosure, showing the flow of target tracking in this embodiment:

S301, performing target detection on the current image frame to obtain at least one detection target;

S302, transforming the coordinate values of pixels in the at least one detection target to complete motion compensation, thereby obtaining at least one updated detection target;

S303, performing first matching between the at least one updated detection target and at least one confirmed tracking target to obtain at least one first target pair, as well as the updated detection targets and confirmed tracking targets not included in any first target pair;

S304, updating the tracking target according to the detection target in each first target pair, and, after the tracking result (i.e., the updated tracking target) is obtained, predicting it by the Kalman filtering method for use as a confirmed tracking target in the next first matching;

S305, performing second matching between the updated detection targets not included in any first target pair and the unconfirmed tracking targets together with the confirmed tracking targets not included in any first target pair, obtaining at least one second target pair, as well as the tracking targets and detection targets not included in any second target pair;

S306, updating the tracking target according to the detection target in each second target pair, and, after the tracking result is obtained, predicting it by the Kalman filtering method for use as a confirmed tracking target in the next first matching;

S307, determining the tracking targets not included in any second target pair and processing them according to their type and non-updated duration, for example directly deleting unconfirmed tracking targets, deleting confirmed tracking targets whose non-updated duration is greater than or equal to the preset duration, or predicting, by the Kalman filtering method, confirmed tracking targets whose non-updated duration is less than the preset duration for use as confirmed tracking targets in the next first matching;

S308, taking the detection targets not included in any second target pair as newly created unconfirmed tracking targets and predicting them by the Kalman filtering method for use as unconfirmed tracking targets in the next second matching, where an unconfirmed tracking target is updated to a confirmed tracking target once its number of consecutive occurrences is determined to be greater than the preset number.
Fig. 4 is a schematic diagram according to a fourth embodiment of the present disclosure. As shown in fig. 4, the target tracking apparatus 400 of the present embodiment includes:
a detecting unit 401, configured to perform object detection on a current image frame to obtain at least one detection object;
an alignment unit 402, configured to transform coordinate values of pixels in the at least one detection target according to at least one first feature point in the current image frame and at least one second feature point in the historical image frame, so as to obtain at least one updated detection target;
a first matching unit 403, configured to perform a first matching on the at least one updated detection target and the at least one confirmed tracking target according to a first preset feature, to obtain at least one first target pair;
the tracking unit 404 is configured to update the confirmed tracking target in the at least one first target pair according to the updated detection target in the at least one first target pair.
The detection unit 401 may acquire an image frame captured by an image capturing device (for example, an on-board camera) on the unmanned aerial vehicle as a current image frame; the detection target obtained by the detection unit 401 may be a vehicle.
The detection unit 401 may perform object detection on the current image frame using an object detection model obtained by training in advance, thereby obtaining at least one detection object, where different detection objects correspond to different detection frames.
In this embodiment, after at least one detection target is obtained by the detection unit 401, the alignment unit 402 transforms coordinate values of pixels in at least one detection target according to at least one first feature point in a current image frame and at least one second feature point in a history image frame to obtain at least one updated detection target; wherein the historical image frame is the previous frame image of the current image frame.
Specifically, when the alignment unit 402 transforms the coordinate values of the pixels in the at least one detection target according to the at least one first feature point in the current image frame and the at least one second feature point in the historical image frame to obtain at least one updated detection target, the following alternative implementation manners may be adopted: matching at least one first characteristic point with at least one second characteristic point to obtain at least one characteristic point pair; obtaining a coordinate transformation matrix between the current image frame and the historical image frame according to at least one characteristic point pair; and transforming the coordinate values of the pixels in at least one detection target according to the obtained coordinate transformation matrix to obtain at least one updated detection target.
The alignment unit 402 may adopt an alternative implementation manner when extracting at least one first feature point from the current image frame: detecting characteristic points of the current image frame to obtain at least one characteristic point; and taking the obtained at least one feature point outside the detection target (such as a surrounding frame of the detection target) as at least one first feature point.
The alignment unit 402 may adopt an alternative implementation manner when extracting at least one second feature point from the historical image frame: detecting characteristic points of the historical image frames to obtain at least one characteristic point; and taking the obtained at least one feature point outside the tracking target (such as a bounding box of the tracking target) as at least one second feature point.
That is, since the feature points located inside the detection target or the tracking target affect the accuracy of the coordinate transformation, the alignment unit 402 only selects the feature points located outside the target as the first feature point or the second feature point, so that it is ensured that the first feature point and the second feature point for matching are located on the background of the image frame, thereby improving the accuracy of the coordinate transformation.
After the alignment unit 402 obtains at least one updated detection target, the first matching unit 403 performs first matching on the at least one updated detection target and the at least one confirmed tracking target according to the first preset feature, so as to obtain at least one first target pair; each first target pair obtained by the first matching unit 403 includes an updated detection target and a confirmed tracking target matched with the updated detection target.
In the present embodiment, the tracking targets located in the history image frame are divided into confirmed tracking targets and unconfirmed tracking targets; wherein, the unconfirmed tracking target is a tracking target with the continuous occurrence frequency smaller than or equal to the preset frequency in the plurality of historical image frames, and the confirmed tracking target is a tracking target with the continuous occurrence frequency larger than the preset frequency in the plurality of historical image frames; the present embodiment can determine the confirmed tracking target and the unconfirmed tracking target in the tracking targets by setting the target tags of the tracking targets (for example, the target tag of the confirmed tracking target is 1, the target tag of the unconfirmed tracking target is 0).
Specifically, when the first matching unit 403 performs first matching on at least one updated detection target and at least one confirmed tracking target according to the first preset feature, at least one first target pair is obtained, which may be implemented by using the following alternative implementation manners: acquiring at least one color histogram feature of an updated detection target; acquiring color histogram features of at least one confirmed tracking target; and performing first matching on at least one updated detection target and at least one confirmed tracking target according to the acquired color histogram characteristics to obtain at least one first target pair.
That is, the first matching unit 403 uses the color histogram features of targets as the first preset feature for matching between targets; since color histogram features are cheap to compute and are invariant to translation, rotation, and scaling, matching between targets can be performed with high accuracy and high efficiency, thereby improving the efficiency of target tracking.
The first matching unit 403 may acquire the color histogram feature of the updated detection target or the confirmed tracking target using the following calculation formula:
P = (p_0, p_1, ..., p_23)

p_i = c_i / (c_0 + c_1 + ... + c_23)

In the above formulas: P represents the color histogram feature of the target; p_i represents the proportion of pixels of color i among all pixels of the target; c_i represents the number of pixels of color i.
The first matching unit 403 may use as the color histogram feature a 24-dimensional feature obtained from the three channels R, G, and B, each divided into 8 intervals (0-31, 32-63, 64-95, 96-127, 128-159, 160-191, 192-223, 224-255) according to the pixel gray values (0-255).
The first matching unit 403 performs first matching on the at least one updated detection target and the at least one confirmed tracking target according to the acquired color histogram feature to obtain at least one first target pair, where the optional implementation manner may be: according to the color histogram characteristics, calculating a cosine distance cost matrix between at least one updated detection target and at least one confirmed tracking target; and obtaining at least one first target pair according to the cosine distance cost matrix.
That is, the first matching unit 403 determines the matching relationship between the updated detection target and the confirmed tracking target according to the cost matrix obtained by calculating the color histogram features of the at least one updated detection target and the at least one confirmed tracking target, so as to obtain the first target pair, which can simplify the matching step between the targets, improve the matching efficiency between the targets, and further meet the performance requirement of target tracking in the unmanned aerial vehicle road inspection scene.
The first matching unit 403 may use the appearance feature of the target as the first preset feature, or may input the target into a pre-trained feature extraction model and use the output of the model as the first preset feature.
The present embodiment updates the confirmed tracking target in the at least one first target pair by the tracking unit 404 based on the updated detection target in the at least one first target pair after the at least one first target pair is obtained by the first matching unit 403.
When updating the confirmed tracking target in the first target pair according to the updated detection target in the first target pair, the tracking unit 404 may update the ReID of the confirmed tracking target to the ReID of the updated detection target, or may update parameters such as the position of the bounding box of the confirmed tracking target in the current image frame, the size of the bounding box, the moving direction of the target, and the moving speed of the target to the relevant parameters of the updated detection target.
The tracking unit 404 may predict the parameters of the confirmed tracking target in the next frame image using a kalman filtering method according to the parameters after the confirmed tracking target is updated according to the updated detection target; that is, the present embodiment can perform the first matching of the next frame image using the predicted correlation parameters of the confirmed tracking target.
The target tracking apparatus 400 of the present embodiment may further include a second matching unit 405 for performing the following: acquiring at least one unassociated tracking target; acquiring updated detection targets not included in the at least one first target pair as at least one unassociated detection target; according to the second preset characteristics, performing second matching on at least one unassociated detection target and at least one unassociated tracking target to obtain at least one second target pair; updating the unassociated tracking target in the at least one second target pair according to the unassociated detection target in the at least one second target pair.
That is, since some updated detection targets may not be included in any first target pair after the first matching between updated detection targets and confirmed tracking targets is completed, the second matching unit 405 acquires at least one unassociated tracking target for a second matching with those updated detection targets, avoiding their omission from matching and further improving the accuracy and robustness of target tracking.
The second matching unit 405, when acquiring at least one unassociated tracking target, may acquire a confirmed tracking target not included in the at least one first target pair as the at least one unassociated tracking target.
That is, the second matching unit 405 performs the second matching of the confirmed tracking target that is not matched with the updated detection target at the time of the first matching, and ensures that the confirmed tracking target that is not successfully matched in the first stage and the updated detection target pair can be subjected to the matching of the second stage, thereby improving the accuracy of the matching of the tracking target and the detection target.
The second matching unit 405, when acquiring the at least one unassociated tracking target, may also acquire an unconfirmed tracking target as the at least one unassociated tracking target.
That is, the second matching unit 405 matches an unconfirmed tracking target only once, avoiding the computing resources that would be consumed if unconfirmed targets participated in the first matching; this further improves the efficiency of target tracking and thus meets the performance requirements of target tracking in unmanned aerial vehicle road inspection scenarios.
It is to be understood that the second matching unit 405 may also simultaneously acquire both the unconfirmed tracking targets and the confirmed tracking targets not included in the at least one first target pair as the at least one unassociated tracking target.
Specifically, when the second matching unit 405 performs second matching between the at least one unassociated detection target and the at least one unassociated tracking target according to the second preset feature to obtain at least one second target pair, the following optional implementation may be adopted: obtaining a center point weighted distance between the at least one unassociated detection target and the at least one unassociated tracking target according to the motion direction of the unassociated detection target and the line connecting the center point of the unassociated detection target with the trajectory center point of the unassociated tracking target; and obtaining at least one second target pair according to the obtained center point weighted distances.
That is, the second matching unit 405 uses the center point weighted distance between the unassociated tracking target and the unassociated detection target as the second preset feature for the second matching between targets, fully accounting for the influence of camera motion on target tracking and retaining a certain tracking capability even when a target is completely occluded for a short time, thereby improving the accuracy and robustness of target tracking.
The second matching unit 405 may use the following calculation formula when obtaining the weighted distance of the center point between the at least one unassociated detection target and the at least one unassociated tracking target:
d = (w_0 + cos θ) d_0

In the above formula: d is the center point weighted distance between the unassociated detection target and the unassociated tracking target; w_0 is a preset weight coefficient; θ is the angle between the motion direction of the unassociated detection target and the line connecting its center point with the trajectory center point of the unassociated tracking target; d_0 is the Euclidean distance between the center point of the unassociated detection target and the center point of the unassociated tracking target.
Each second target pair obtained by the second matching unit 405 includes an unassociated detection target and an unassociated tracking target matched with the unassociated detection target.
The procedure of the second matching unit 405 when updating the unassociated tracking target according to the unassociated detection target in the second target pair is similar to the procedure when the tracking unit 404 updates the confirmed tracking target according to the updated detection target in the first target pair, and will not be described here.
Since the second matching unit 405 may use unconfirmed tracking targets as unassociated tracking targets, updating the unassociated tracking target in the at least one second target pair may further include: in the case that the unassociated tracking target is an unconfirmed tracking target, acquiring the number of consecutive occurrences of that unassociated tracking target; and updating the unassociated tracking target to a confirmed tracking target in the case that the acquired number of consecutive occurrences is determined to be greater than the preset number.
The second matching unit 405 may also perform the following: acquiring the unassociated detection targets not included in any second target pair as new unconfirmed tracking targets, which may likewise be updated.

That is, the second matching unit 405 uses the updated detection targets that remain unmatched after the two-stage matching as unconfirmed tracking targets for the next second matching, thereby updating the tracking targets in real time and ensuring that every updated detection target in the current image frame can participate in the next matching.
The second matching unit 405 may also perform the following: obtaining unassociated tracking targets which are not contained in at least one second target pair as tracking targets to be processed; under the condition that the acquired tracking target to be processed is confirmed, acquiring the non-updated time length of the tracking target to be processed; and updating the to-be-processed tracking target under the condition that the acquired non-updated time length is smaller than the preset time length, otherwise deleting the to-be-processed tracking target, namely deleting the to-be-processed tracking target with the non-updated time length being longer than or equal to the preset time length.
The second matching unit 405 may also perform the following: and deleting the tracking target to be processed under the condition that the acquired tracking target to be processed is not confirmed.
That is, after the two-stage matching is completed, the second matching unit 405 deletes or updates the tracking targets that were not successfully matched according to their type (confirmed or unconfirmed) and their non-updated duration, thereby updating the tracking targets in real time and further improving the accuracy and robustness of target tracking.
In the technical solution of the present disclosure, the acquisition, storage, and application of the personal information of users involved all comply with the provisions of relevant laws and regulations and do not violate public order and good customs.
According to embodiments of the present disclosure, the present disclosure also provides an electronic device, a readable storage medium and a computer program product.
Fig. 5 is a block diagram of an electronic device for the target tracking method according to an embodiment of the present disclosure. Electronic devices are intended to represent various forms of digital computers, such as laptops, desktops, workstations, personal digital assistants, servers, blade servers, mainframes, and other appropriate computers. The electronic device may also represent various forms of mobile devices, such as personal digital assistants, cellular telephones, smartphones, wearable devices, and other similar computing devices. The components shown herein, their connections and relationships, and their functions are meant to be exemplary only, and are not meant to limit implementations of the disclosure described and/or claimed herein.
As shown in fig. 5, the apparatus 500 includes a computing unit 501 that can perform various suitable actions and processes according to a computer program stored in a Read Only Memory (ROM) 502 or a computer program loaded from a storage unit 508 into a Random Access Memory (RAM) 503. In the RAM 503, various programs and data required for the operation of the device 500 can also be stored. The computing unit 501, the ROM 502, and the RAM 503 are connected to each other by a bus 504. An input/output (I/O) interface 505 is also connected to the bus 504.
Various components in the device 500 are connected to the I/O interface 505, including: an input unit 506 such as a keyboard, a mouse, etc.; an output unit 507 such as various types of displays, speakers, and the like; a storage unit 508 such as a magnetic disk, an optical disk, or the like; and a communication unit 509 such as a network card, modem, wireless communication transceiver, etc. The communication unit 509 allows the device 500 to exchange information/data with other devices via a computer network such as the internet and/or various telecommunication networks.
The computing unit 501 may be a variety of general and/or special purpose processing components having processing and computing capabilities. Some examples of computing unit 501 include, but are not limited to, a Central Processing Unit (CPU), a Graphics Processing Unit (GPU), various specialized Artificial Intelligence (AI) computing chips, various computing units running machine learning model algorithms, a Digital Signal Processor (DSP), and any suitable processor, controller, microcontroller, etc. The computing unit 501 performs the respective methods and processes described above, such as the target tracking method. For example, in some embodiments, the target tracking method may be implemented as a computer software program tangibly embodied on a machine-readable medium, such as storage unit 508.
In some embodiments, part or all of the computer program may be loaded and/or installed onto the device 500 via the ROM 502 and/or the communication unit 509. When the computer program is loaded into the RAM 503 and executed by the computing unit 501, one or more steps of the target tracking method described above may be performed. Alternatively, in other embodiments, the computing unit 501 may be configured to perform the target tracking method by any other suitable means (e.g., by means of firmware).
Various implementations of the systems and techniques described here can be implemented in digital electronic circuitry, integrated circuitry, Field Programmable Gate Arrays (FPGAs), Application Specific Integrated Circuits (ASICs), Application Specific Standard Products (ASSPs), Systems on Chip (SOCs), Complex Programmable Logic Devices (CPLDs), computer hardware, firmware, software, and/or combinations thereof. These various embodiments may include implementation in one or more computer programs that are executable and/or interpretable on a programmable system including at least one programmable processor, which may be a special-purpose or general-purpose programmable processor and may receive data and instructions from, and transmit data and instructions to, a storage system, at least one input device, and at least one output device.
Program code for carrying out the methods of the present disclosure may be written in any combination of one or more programming languages. This program code may be provided to a processor or controller of a general-purpose computer, a special-purpose computer, or another programmable target tracking device, such that the program code, when executed by the processor or controller, causes the functions/operations specified in the flowcharts and/or block diagrams to be implemented. The program code may execute entirely on the machine, partly on the machine, as a stand-alone software package, partly on the machine and partly on a remote machine, or entirely on the remote machine or server.
In the context of this disclosure, a machine-readable medium may be a tangible medium that can contain, or store a program for use by or in connection with an instruction execution system, apparatus, or device. The machine-readable medium may be a machine-readable signal medium or a machine-readable storage medium. The machine-readable medium may include, but is not limited to, an electronic, magnetic, optical, electromagnetic, infrared, or semiconductor system, apparatus, or device, or any suitable combination of the foregoing. More specific examples of a machine-readable storage medium would include an electrical connection based on one or more wires, a portable computer diskette, a hard disk, a Random Access Memory (RAM), a read-only memory (ROM), an erasable programmable read-only memory (EPROM or flash memory), an optical fiber, a portable compact disc read-only memory (CD-ROM), an optical storage device, a magnetic storage device, or any suitable combination of the foregoing.
To provide for interaction with a user, the systems and techniques described here can be implemented on a computer having: a presentation device (e.g., a CRT (cathode ray tube) or LCD (liquid crystal display) monitor) for presenting information to a user; and a keyboard and pointing device (e.g., a mouse or trackball) by which a user can provide input to the computer. Other kinds of devices may also be used to provide for interaction with a user; for example, feedback provided to the user may be any form of sensory feedback (e.g., visual feedback, auditory feedback, or tactile feedback); and input from the user may be received in any form, including acoustic input, speech input, or tactile input.
The systems and techniques described here can be implemented in a computing system that includes a back-end component (e.g., a data server), or that includes a middleware component (e.g., an application server), or that includes a front-end component (e.g., a user computer having a graphical user interface or a web browser through which a user can interact with an implementation of the systems and techniques described here), or any combination of such back-end, middleware, or front-end components. The components of the system can be interconnected by any form or medium of digital data communication (e.g., a communication network). Examples of communication networks include: local area networks (LANs), wide area networks (WANs), and the Internet.
The computer system may include a client and a server. The client and server are typically remote from each other and typically interact through a communication network. The relationship of client and server arises by virtue of computer programs running on the respective computers and having a client-server relationship to each other. The server may be a cloud server, also called a cloud computing server or cloud host, which is a host product in the cloud computing service system and overcomes the defects of difficult management and weak service scalability found in traditional physical hosts and VPS ("Virtual Private Server") services. The server may also be a server of a distributed system, or a server combined with a blockchain.
It should be appreciated that the various forms of flows shown above may be used, with steps reordered, added, or deleted. For example, the steps recited in the present disclosure may be performed in parallel, sequentially, or in a different order, provided that the desired results of the technical solutions of the present disclosure can be achieved; no limitation is imposed herein.
The above detailed description should not be taken as limiting the scope of the present disclosure. It will be apparent to those skilled in the art that various modifications, combinations, sub-combinations and alternatives are possible, depending on design requirements and other factors. Any modifications, equivalent substitutions and improvements made within the spirit and principles of the present disclosure are intended to be included within the scope of the present disclosure.

Claims (25)

1. A target tracking method, comprising:
performing target detection on the current image frame to obtain at least one detection target;
transforming coordinate values of pixels in the at least one detection target according to at least one first feature point in the current image frame and at least one second feature point in the historical image frame to obtain at least one updated detection target;
according to the first preset characteristics, performing first matching on the at least one updated detection target and the at least one confirmed tracking target to obtain at least one first target pair;
updating the confirmed tracking target in the at least one first target pair according to the updated detection target in the at least one first target pair.
2. The method of claim 1, wherein transforming coordinate values of pixels in the at least one detection target according to at least one first feature point in the current image frame and at least one second feature point in the historical image frame to obtain at least one updated detection target comprises:
matching the at least one first feature point with the at least one second feature point to obtain at least one feature point pair;
obtaining a coordinate transformation matrix between the current image frame and the historical image frame according to the at least one feature point pair;
and transforming the coordinate values of the pixels in the at least one detection target according to the coordinate transformation matrix to obtain the at least one updated detection target.
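Purely as a non-limiting sketch, such a transformation could be estimated with OpenCV from the matched feature point pairs and applied to the corners of each detection box; the choice of estimateAffinePartial2D with RANSAC is an assumption of this example, as the claim only requires some coordinate transformation matrix.

```python
# Illustrative sketch of claim 2: estimate a transform between the current
# and historical frames from matched feature points, then move the detection
# boxes. estimateAffinePartial2D/RANSAC is this example's choice only.
import numpy as np
import cv2

def align_detections(pts_curr, pts_hist, boxes):
    """pts_curr, pts_hist: Nx2 matched feature points in the current and
    historical frames; boxes: Mx4 detection boxes as [x1, y1, x2, y2]."""
    boxes = np.asarray(boxes, np.float32)
    m, _ = cv2.estimateAffinePartial2D(
        np.asarray(pts_curr, np.float32),
        np.asarray(pts_hist, np.float32),
        method=cv2.RANSAC)
    if m is None:                         # too few inliers: leave boxes as-is
        return boxes
    corners = boxes.reshape(-1, 2)        # (2M, 2) box corners
    ones = np.ones((corners.shape[0], 1), np.float32)
    moved = np.hstack([corners, ones]) @ m.T   # apply the 2x3 matrix
    return moved.reshape(-1, 4)                # the updated detection targets
```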
3. The method of claim 1, wherein acquiring at least one first feature point in the current image frame comprises:
detecting feature points in the current image frame to obtain at least one feature point;
and taking the feature point located outside the detection target as the at least one first feature point.
4. The method of claim 1, wherein acquiring at least one second feature point in the historical image frame comprises:
detecting feature points in the historical image frame to obtain at least one feature point;
and taking the feature point located outside the tracking target as the at least one second feature point.
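A minimal sketch of claims 3 and 4 might look like the following; goodFeaturesToTrack is this example's choice of detector (the claims do not name one), and the test simply discards any feature point that falls inside a target box so that only background points constrain the transform.

```python
# Illustrative sketch of claims 3-4: detect feature points and keep only
# those lying outside every detection/tracking box. The detector choice
# and its parameters are assumptions of this example.
import numpy as np
import cv2

def background_points(gray, boxes, max_corners=200):
    """gray: single-channel image; boxes: iterable of [x1, y1, x2, y2]."""
    pts = cv2.goodFeaturesToTrack(gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2), np.float32)
    pts = pts.reshape(-1, 2)
    keep = np.ones(len(pts), bool)
    for x1, y1, x2, y2 in boxes:          # drop points inside any target box
        inside = ((pts[:, 0] >= x1) & (pts[:, 0] <= x2) &
                  (pts[:, 1] >= y1) & (pts[:, 1] <= y2))
        keep &= ~inside
    return pts[keep]
```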
5. The method of claim 1, wherein the performing first matching on the at least one updated detection target and the at least one confirmed tracking target according to the first preset feature to obtain at least one first target pair comprises:
acquiring a color histogram feature of the at least one updated detection target;
acquiring color histogram features of the at least one confirmed tracking target;
and according to the color histogram features, performing first matching on the at least one updated detection target and the at least one confirmed tracking target to obtain the at least one first target pair.
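As a hedged sketch of such a first matching, the color histogram feature could be an HSV hue-saturation histogram compared by correlation, with the Hungarian method turning pairwise similarities into one-to-one pairs; the color space, bin counts, and the 0.5 gate are all assumptions of this example, not details stated in the claim.

```python
# Illustrative sketch of claim 5: HSV color histograms plus Hungarian
# assignment. Color space, bins and the gate value are assumptions.
import numpy as np
import cv2
from scipy.optimize import linear_sum_assignment

def color_hist(bgr_crop, bins=(16, 16)):
    hsv = cv2.cvtColor(bgr_crop, cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0, 1], None, list(bins), [0, 180, 0, 256])
    return cv2.normalize(h, h).flatten()

def first_match(det_crops, trk_hists, gate=0.5):
    """det_crops: BGR image crops of detections; trk_hists: stored track
    histograms. Returns (detection index, track index) pairs."""
    cost = np.zeros((len(det_crops), len(trk_hists)), np.float32)
    for i, crop in enumerate(det_crops):
        hd = color_hist(crop)
        for j, ht in enumerate(trk_hists):
            # HISTCMP_CORREL is a similarity in [-1, 1]; turn it into a cost
            cost[i, j] = 1.0 - cv2.compareHist(hd, ht, cv2.HISTCMP_CORREL)
    rows, cols = linear_sum_assignment(cost)
    return [(i, j) for i, j in zip(rows, cols) if cost[i, j] < gate]
```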
6. The method of claim 1, further comprising:
after the at least one first target pair is obtained, at least one unassociated tracking target is obtained;
acquiring updated detection targets not included in the at least one first target pair as at least one unassociated detection target;
according to a second preset feature, performing second matching on the at least one unassociated detection target and the at least one unassociated tracking target to obtain at least one second target pair;
updating the unassociated tracking target in the at least one second target pair according to the unassociated detection target in the at least one second target pair.
7. The method of claim 6, wherein the obtaining at least one unassociated tracking target comprises:
acquiring confirmed tracking targets not included in the at least one first target pair as the at least one unassociated tracking target; and/or
acquiring at least one unconfirmed tracking target as the at least one unassociated tracking target.
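Read together, claims 6 and 7 make the unassociated sets the leftovers of the first matching plus every still-unconfirmed track; a small sketch follows, with all function and variable names hypothetical.

```python
# Illustrative sketch of claims 6-7: collect what the first matching left
# unpaired. All names here are hypothetical, not from the disclosure.
def unassociated_sets(detections, confirmed_tracks, unconfirmed_tracks, first_pairs):
    matched_dets = {id(d) for d, _ in first_pairs}
    matched_trks = {id(t) for _, t in first_pairs}
    un_dets = [d for d in detections if id(d) not in matched_dets]
    un_trks = [t for t in confirmed_tracks if id(t) not in matched_trks]
    un_trks += list(unconfirmed_tracks)   # per claim 7: and/or unconfirmed ones
    return un_dets, un_trks
```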
8. The method of claim 6, wherein the second matching the at least one unassociated detection target with the at least one unassociated tracking target according to a second preset feature to obtain at least one second target pair comprises:
obtaining a center point weighted distance between the at least one unassociated detection target and the at least one unassociated tracking target according to the movement direction of the unassociated tracking target and a connecting line between the center point of the unassociated detection target and the track center point of the unassociated tracking target;
and obtaining the at least one second target pair according to the center point weighted distance.
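One plausible reading of this weighted distance, offered only as a sketch: scale the Euclidean center distance by the angle between the movement direction and the line joining the two centers, so candidates lying along the direction of motion cost less. The weighting formula itself is an assumption of this example; the claim only names the inputs.

```python
# Illustrative sketch of claim 8; the (1 + alpha * (1 - cos)) weighting is
# an assumption of this example, not a formula stated in the claim.
import numpy as np

def weighted_center_distance(det_center, trk_center, trk_direction, alpha=0.5):
    det_center = np.asarray(det_center, float)
    trk_center = np.asarray(trk_center, float)
    direction = np.asarray(trk_direction, float)
    link = det_center - trk_center        # line joining the two center points
    dist = np.linalg.norm(link)
    if dist == 0.0 or np.linalg.norm(direction) == 0.0:
        return dist
    cos = np.dot(link, direction) / (dist * np.linalg.norm(direction))
    # cos = 1 along the motion direction (cheapest), -1 against it (dearest)
    return dist * (1.0 + alpha * (1.0 - cos))
```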
9. The method of claim 6, wherein the updating unassociated tracking targets in the at least one second target pair comprises:
acquiring the number of consecutive occurrences of the unassociated tracking target in the at least one second target pair under the condition that the unassociated tracking target is an unconfirmed tracking target;
and updating the unassociated tracking target to a confirmed tracking target under the condition that the number of consecutive occurrences is determined to be greater than a preset number.
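A minimal sketch of this confirmation rule, with an assumed threshold of three consecutive matches and a hypothetical PendingTrack container:

```python
# Illustrative sketch of claim 9; PendingTrack and min_hits are assumptions.
from dataclasses import dataclass

@dataclass
class PendingTrack:
    confirmed: bool = False
    consecutive_hits: int = 0   # consecutive second-pair appearances

def confirm_if_stable(track, min_hits=3):
    """Promote once matched in more than min_hits consecutive frames."""
    track.consecutive_hits += 1
    if not track.confirmed and track.consecutive_hits > min_hits:
        track.confirmed = True
    return track
```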
10. The method of claim 6, further comprising:
after the at least one second target pair is obtained, acquiring an unassociated detection target not included in the at least one second target pair as an unconfirmed tracking target.
11. The method of claim 6, further comprising:
after the at least one second target pair is obtained, acquiring an unassociated tracking target not included in the at least one second target pair as a tracking target to be processed;
acquiring the non-updated duration of the tracking target to be processed under the condition that the tracking target to be processed is determined to be a confirmed tracking target;
and updating the tracking target to be processed under the condition that the non-updated duration is less than a preset duration, and otherwise deleting the tracking target to be processed.
12. An object tracking device comprising:
a detection unit, configured to perform target detection on the current image frame to obtain at least one detection target;
an alignment unit, configured to transform coordinate values of pixels in the at least one detection target according to at least one first feature point in the current image frame and at least one second feature point in the historical image frame, so as to obtain at least one updated detection target;
a first matching unit, configured to perform first matching on the at least one updated detection target and the at least one confirmed tracking target according to a first preset feature to obtain at least one first target pair;
and a tracking unit, configured to update the confirmed tracking target in the at least one first target pair according to the updated detection target in the at least one first target pair.
13. The apparatus of claim 12, wherein the alignment unit, when transforming the coordinate values of the pixels in the at least one detection target according to the at least one first feature point in the current image frame and the at least one second feature point in the historical image frame to obtain at least one updated detection target, specifically performs:
matching the at least one first feature point with the at least one second feature point to obtain at least one feature point pair;
obtaining a coordinate transformation matrix between the current image frame and the historical image frame according to the at least one characteristic point pair;
and transforming the coordinate values of the pixels in the at least one detection target according to the coordinate transformation matrix to obtain the at least one updated detection target.
14. The apparatus of claim 13, wherein the alignment unit, when acquiring at least one first feature point in the current image frame, specifically performs:
detecting feature points in the current image frame to obtain at least one feature point;
and taking the feature point located outside the detection target as the at least one first feature point.
15. The apparatus of claim 12, wherein the alignment unit, when acquiring at least one second feature point in the historical image frame, specifically performs:
detecting feature points in the historical image frame to obtain at least one feature point;
and taking the feature point located outside the tracking target as the at least one second feature point.
16. The apparatus of claim 12, wherein the first matching unit, when performing first matching on the at least one updated detection target and the at least one confirmed tracking target according to the first preset feature to obtain at least one first target pair, specifically performs:
acquiring a color histogram feature of the at least one updated detection target;
acquiring color histogram features of the at least one confirmed tracking target;
and according to the color histogram features, performing first matching on the at least one updated detection target and the at least one confirmed tracking target to obtain the at least one first target pair.
17. The apparatus of claim 12, further comprising a second matching unit configured to perform:
after the first matching unit obtains the at least one first target pair, at least one unassociated tracking target is obtained;
acquiring updated detection targets not included in the at least one first target pair as at least one unassociated detection target;
according to a second preset feature, performing second matching on the at least one unassociated detection target and the at least one unassociated tracking target to obtain at least one second target pair;
updating the unassociated tracking target in the at least one second target pair according to the unassociated detection target in the at least one second target pair.
18. The apparatus of claim 17, wherein the second matching unit, when acquiring at least one unassociated tracking target, specifically performs:
acquiring confirmed tracking targets not included in the at least one first target pair as the at least one unassociated tracking target; and/or
acquiring at least one unconfirmed tracking target as the at least one unassociated tracking target.
19. The apparatus of claim 17, wherein the second matching unit, when performing second matching on the at least one unassociated detection target and the at least one unassociated tracking target according to the second preset feature to obtain at least one second target pair, specifically performs:
obtaining a center point weighted distance between the at least one unassociated detection target and the at least one unassociated tracking target according to the movement direction of the unassociated tracking target and a connecting line between the center point of the unassociated detection target and the track center point of the unassociated tracking target;
and obtaining the at least one second target pair according to the center point weighted distance.
20. The apparatus of claim 17, wherein the second matching unit, when updating the unassociated tracking target in the at least one second target pair, specifically performs:
acquiring the number of consecutive occurrences of the unassociated tracking target in the at least one second target pair under the condition that the unassociated tracking target is an unconfirmed tracking target;
and updating the unassociated tracking target to a confirmed tracking target under the condition that the number of consecutive occurrences is determined to be greater than a preset number.
21. The apparatus of claim 17, wherein the second matching unit is further configured to perform:
after the at least one second target pair is obtained, acquiring an unassociated detection target not included in the at least one second target pair as an unconfirmed tracking target.
22. The apparatus of claim 17, wherein the second matching unit is further configured to perform:
after the at least one second target pair is obtained, acquiring an unassociated tracking target not included in the at least one second target pair as a tracking target to be processed;
acquiring the non-updated duration of the tracking target to be processed under the condition that the tracking target to be processed is determined to be a confirmed tracking target;
and updating the tracking target to be processed under the condition that the non-updated duration is less than a preset duration, and otherwise deleting the tracking target to be processed.
23. An electronic device, comprising:
at least one processor; and
a memory communicatively coupled to the at least one processor; wherein,
the memory stores instructions executable by the at least one processor to enable the at least one processor to perform the method of any one of claims 1-11.
24. A non-transitory computer-readable storage medium storing computer instructions for causing a computer to perform the method of any one of claims 1-11.
25. A computer program product comprising a computer program which, when executed by a processor, implements the method according to any of claims 1-11.
CN202310127549.6A 2023-02-02 2023-02-02 Target tracking method, device, electronic equipment and readable storage medium Pending CN116310403A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310127549.6A CN116310403A (en) 2023-02-02 2023-02-02 Target tracking method, device, electronic equipment and readable storage medium

Publications (1)

Publication Number Publication Date
CN116310403A (en)

Family

ID=86824940

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202310127549.6A Pending CN116310403A (en) 2023-02-02 2023-02-02 Target tracking method, device, electronic equipment and readable storage medium

Country Status (1)

Country Link
CN (1) CN116310403A (en)

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination