CN112837349A - Target tracking method, target tracking equipment and computer-readable storage medium - Google Patents

Target tracking method, target tracking equipment and computer-readable storage medium

Info

Publication number
CN112837349A
Authority
CN
China
Prior art keywords
target
image frame
predicted
current image
detection
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202110174340.6A
Other languages
Chinese (zh)
Inventor
符顺
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
TP Link Technologies Co Ltd
Original Assignee
TP Link Technologies Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by TP Link Technologies Co Ltd filed Critical TP Link Technologies Co Ltd
Priority to CN202110174340.6A
Publication of CN112837349A
Legal status: Pending

Classifications

    • G06T 7/246: Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/73: Determining position or orientation of objects or cameras using feature-based methods
    • G06V 10/44: Local feature extraction by analysis of parts of the pattern, e.g. by detecting edges, contours, loops, corners, strokes or intersections; connectivity analysis, e.g. of connected components
    • G06V 10/751: Comparing pixel values or logical combinations thereof, or feature values having positional relevance, e.g. template matching
    • G06T 2207/10016: Video; image sequence
    • G06T 2207/20081: Training; learning
    • G06T 2207/30241: Trajectory
    • G06V 2201/07: Target detection

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Evolutionary Computation (AREA)
  • General Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Software Systems (AREA)
  • Databases & Information Systems (AREA)
  • Computing Systems (AREA)
  • Artificial Intelligence (AREA)
  • Health & Medical Sciences (AREA)
  • Image Analysis (AREA)

Abstract

The application is applicable to the technical field of target tracking and provides a target tracking method comprising the following steps: acquiring a current image frame and its corresponding predicted target information, wherein the predicted target information is obtained from the target motion track of the previous image frame of the current image frame; performing target detection on the current image frame to obtain a target detection result; and determining the target motion track of the current image frame according to the predicted target information and the target detection result. The calculation process is simple and does not require a large amount of computation, so the device does not need high computing power; and because the predicted target information is used, target tracking can be performed even on some complex or occluded objects.

Description

Target tracking method, target tracking equipment and computer-readable storage medium
Technical Field
The present application relates to the field of target tracking technologies, and in particular to a target tracking method, a target tracking device, and a computer-readable storage medium.
Background
In the field of computer vision, Multiple Object Tracking (MOT) refers to, given a temporally continuous image sequence, finding the moving objects in the sequence and obtaining their motion tracks.
Existing target tracking algorithms need to compute detection targets, feature points between tracks, and the like frame by frame, or use a deep convolutional network to extract features. These methods place high requirements on the computing power of the device and have limitations when tracking some complex or occluded objects.
Disclosure of Invention
The embodiments of the application provide a target tracking method, a target tracking device, and a computer-readable storage medium, which can solve the problems that existing target tracking methods place high requirements on the computing power of the device and have limitations when tracking some complex or occluded objects.
In a first aspect, an embodiment of the present application provides a target tracking method, including:
acquiring a current image frame and corresponding predicted target information thereof, wherein the predicted target information is obtained from a target motion track of a previous image frame of the current image frame;
carrying out target detection on the current image frame to obtain a target detection result;
and determining the target motion track of the current image frame according to the predicted target information and the target detection result.
Further, the performing target detection on the current image frame to obtain a target detection result includes:
if the current image frame is a detection frame, performing target detection on the current image frame to obtain a target detection result;
and if the current image frame is a tracking frame, performing target detection on the target detection area to obtain a target detection result.
Further, the determining the target motion trajectory of the current image frame according to the predicted target information and the target detection result includes:
and if the target detection result is that the detection target corresponding to the current image frame is not obtained, determining the target motion track of the current image frame according to the predicted target information.
Further, the determining the target motion trajectory of the current image frame according to the predicted target information and the target detection result includes:
and if the target detection result is that the detection target corresponding to the current image frame is obtained, determining the target motion track of the current image frame according to the predicted target information and the first target feature of the detection target.
Further, the first target feature includes one or more of: a circumscribed rectangle of the detection target, a color histogram, a gradient histogram, and a scale-invariant feature.
Further, the predicted target information includes a predicted target feature of the predicted target;
determining a target motion trajectory of the current image frame according to the predicted target information and a first target feature of the detected target, including:
matching the predicted target feature with the first target feature, and acquiring an existing track corresponding to a matched target, wherein the matched target is a predicted target corresponding to the predicted target feature matched with the first target feature, and the existing track is a target motion track of a previous image frame of the current image frame;
and updating the existing track corresponding to the matched target according to the first position information of the detected target to obtain the motion track of the detected target.
Further, after the matching the predicted target feature and the first target feature, the method further includes:
and if the matching target does not exist, establishing a motion track corresponding to the detection target according to the first position information of the detection target.
Further, after the matching the predicted target feature and the first target feature, the method further includes:
if a predicted target feature which is not matched with the first target feature exists, acquiring the continuous unmatched frame number of the existing track corresponding to the predicted target feature;
if the continuous mismatch frame number is larger than a preset threshold value, deleting the existing track corresponding to the predicted target feature;
and if the continuous mismatch frame number is less than or equal to the preset threshold, acquiring second position information of the predicted target corresponding to the predicted target characteristic, and updating the existing track corresponding to the predicted target characteristic according to the second position information to obtain the motion track corresponding to the predicted target characteristic.
Further, after the updating the existing trajectory corresponding to the matching target according to the first position information of the detection target to obtain the motion trajectory of the detection target, the method further includes:
and correcting the position information of a historical prediction target in the motion trail of the detection target to obtain an updated motion trail corresponding to the detection target, wherein the historical prediction target is a prediction target of an image frame before the current image frame.
Further, after the determining the target motion trajectory of the current image frame according to the predicted target information and the target detection result, the method further includes:
and determining the predicted target information corresponding to the next frame of image frame according to the target motion track of the current image frame.
In a second aspect, an embodiment of the present application provides an apparatus for target tracking, including:
the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring a current image frame and corresponding predicted target information thereof, and the predicted target information is obtained from a target motion track of a previous image frame of the current image frame;
the detection unit is used for carrying out target detection on the current image frame to obtain a target detection result;
and the determining unit is used for determining the target motion track of the current image frame according to the predicted target information and the target detection result.
Further, the predicted target information includes a target detection area, and the detection unit is specifically configured to:
if the current image frame is a detection frame, performing target detection on the current image frame to obtain a target detection result;
and if the current image frame is a tracking frame, performing target detection on the target detection area to obtain a target detection result.
Further, the determining unit is specifically configured to:
and if the target detection result is that the detection target corresponding to the current image frame is not obtained, determining the target motion track of the current image frame according to the predicted target information.
Further, the determining unit is specifically configured to:
and if the target detection result is that the detection target corresponding to the current image frame is obtained, determining the target motion track of the current image frame according to the predicted target information and the first target feature of the detection target.
Further, the first target feature includes one or more of: a circumscribed rectangle of the detection target, a color histogram, a gradient histogram, and a scale-invariant feature.
Further, the predicted target information includes a predicted target feature of the predicted target;
the determining unit is specifically configured to:
matching the predicted target feature with the first target feature, and acquiring an existing track corresponding to a matched target, wherein the matched target is a predicted target corresponding to the predicted target feature matched with the first target feature, and the existing track is a target motion track of a previous image frame of the current image frame;
and updating the existing track corresponding to the matched target according to the first position information of the detected target to obtain the motion track of the detected target.
Further, the determining unit is specifically further configured to:
and if the matching target does not exist, establishing a motion track corresponding to the detection target according to the first position information of the detection target.
Further, the determining unit is specifically further configured to:
if a predicted target feature which is not matched with the first target feature exists, acquiring the continuous unmatched frame number of the existing track corresponding to the predicted target feature;
if the continuous mismatch frame number is larger than a preset threshold value, deleting the existing track corresponding to the predicted target feature;
and if the continuous mismatch frame number is less than or equal to the preset threshold, acquiring second position information of the predicted target corresponding to the predicted target characteristic, and updating the existing track corresponding to the predicted target characteristic according to the second position information to obtain the motion track corresponding to the predicted target characteristic.
Further, the determining unit is specifically further configured to:
and correcting the position information of a historical prediction target in the motion trail of the detection target to obtain an updated motion trail corresponding to the detection target, wherein the historical prediction target is a prediction target of an image frame before the current image frame.
Further, the target tracking apparatus further includes:
and the second processing unit is used for determining the predicted target information corresponding to the next frame of image frame according to the target motion track of the current image frame.
In a third aspect, an embodiment of the present application provides an object tracking device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor, when executing the computer program, implements the object tracking method according to the first aspect.
In a fourth aspect, the present application provides a computer-readable storage medium, which stores a computer program, and when the computer program is executed by a processor, the computer program implements the target tracking method according to the first aspect.
In the embodiments of the application, a current image frame and its corresponding predicted target information are obtained, where the predicted target information is obtained from the target motion track of the previous image frame of the current image frame; target detection is performed on the current image frame to obtain a target detection result; and the target motion track of the current image frame is determined according to the predicted target information and the target detection result. The calculation process is simple and does not require a large amount of computation, so the device does not need high computing power; moreover, because the predicted target information is used to obtain the motion track, target tracking can still be performed on some complex or occluded objects, which gives the method a certain robustness and improves its applicability.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings needed to be used in the embodiments or the prior art descriptions will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present application, and it is obvious for those skilled in the art to obtain other drawings based on these drawings without inventive exercise.
Fig. 1 is a schematic flow chart of a target tracking method according to a first embodiment of the present application;
FIG. 2 is a schematic diagram of a target tracking device provided in a second embodiment of the present application;
fig. 3 is a schematic diagram of an object tracking device according to a third embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon" or "in response to" determining "or" in response to detecting ". Similarly, the phrase "if it is determined" or "if a [ described condition or event ] is detected" may be interpreted contextually to mean "upon determining" or "in response to determining" or "upon detecting [ described condition or event ]" or "in response to detecting [ described condition or event ]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Referring to fig. 1, fig. 1 is a schematic flow chart of a target tracking method according to a first embodiment of the present application. In this embodiment, the main executing body of the target tracking method is a device with a target tracking function, such as a server, a personal computer, and the like. The target tracking method as shown in fig. 1 may include:
s101: the method comprises the steps of obtaining a current image frame and corresponding predicted target information thereof, wherein the predicted target information is obtained from a target motion track of a previous image frame of the current image frame.
In the present embodiment, target tracking mainly means determining the motion track of a detected object in each frame of a video.
The device acquires a current image frame, wherein the current image frame may be acquired by an image acquisition device or acquired from video information, which is not limited herein.
The device acquires the predicted target information corresponding to the current image frame, where the predicted target information is obtained from the target motion track of the previous image frame of the current image frame. That is, after the device obtains the target motion track identified from the previous image frame, it performs prediction according to a preset track prediction algorithm to obtain the predicted target information. In other words, the predicted target information is information about the current image frame predicted from the previous image frame; it may include the predicted target, the position of the predicted target, the features of the predicted target, the target detection area, and the like, which is not limited herein.
S102: and carrying out target detection on the current image frame to obtain a target detection result.
The device performs target detection on the current image frame to obtain a target detection result, where the target detection result may include the detection target corresponding to the current image frame.
For target detection, the detection algorithm may be a deep-learning target detection algorithm such as YOLO or SSD, or a conventional method that extracts foreground point targets by background modeling, which is not limited herein.
If the computing power of the equipment is weak, the method for extracting the foreground point target by background modeling can be used for target detection.
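As a rough illustration of this low-compute option, the following Python sketch uses OpenCV background subtraction to extract foreground targets; the MOG2 model and all parameter values (history length, blur kernel, area threshold) are illustrative assumptions, not prescribed by the application:

```python
import cv2

# Background-modelling foreground extraction: a lightweight alternative to
# deep detectors such as YOLO/SSD on devices with weak computing power.
# MOG2 and the parameter values below are illustrative assumptions.
subtractor = cv2.createBackgroundSubtractorMOG2(history=200, detectShadows=False)

def detect_foreground_targets(frame):
    """Return circumscribed rectangles (x, y, w, h) of foreground blobs."""
    mask = subtractor.apply(frame)          # foreground mask of current frame
    mask = cv2.medianBlur(mask, 5)          # suppress isolated noise pixels
    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    # keep only blobs large enough to be plausible targets
    return [cv2.boundingRect(c) for c in contours if cv2.contourArea(c) > 50]
```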
In one embodiment, to avoid the heavy time consumption caused by performing full-image detection on every frame, and thus to speed up detection, the image frames are distinguished in this embodiment: during detection, image frames are divided into detection frames and tracking frames. The manner of distinguishing detection frames from tracking frames is not limited herein.
Specifically, one detection period is divided into 1 detection frame and (n-1) tracking frames, where n (n ≥ 1) can be set according to the actual situation. For the k-th image frame, counting from 0, it can be set that when k mod n = 0 the current image frame is a detection frame, and otherwise it is a tracking frame, where mod(·) is the remainder operation.
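For illustration, this frame-division rule can be written as a short Python sketch (the function name and the example period n = 4 are illustrative, not from the application):

```python
def is_detection_frame(k: int, n: int) -> bool:
    """True if the k-th frame (counting from 0) is a detection frame.

    One detection period consists of 1 detection frame followed by
    (n - 1) tracking frames, i.e. frame k is a detection frame exactly
    when k mod n == 0.
    """
    return k % n == 0

# Example with n = 4: frames 0, 4, ... are detection frames.
print(["detect" if is_detection_frame(k, 4) else "track" for k in range(6)])
# ['detect', 'track', 'track', 'track', 'detect', 'track']
```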
In the present embodiment, the predicted target information includes a target detection area. The target detection area is the detection area of the current image frame predicted from the previous image frame.
According to the preset method for distinguishing detection frames from tracking frames, if the current image frame is a detection frame, target detection is performed on the whole current image frame to obtain the target detection result; if the current image frame is a tracking frame, target detection is performed on the target detection area to obtain the target detection result. The detection algorithm may be a deep-learning target detection algorithm such as the YOLO (You Only Look Once) algorithm or the SSD (Single Shot MultiBox Detector) algorithm, or a conventional method that extracts foreground point targets by background modeling, which is not limited herein.
In this embodiment, when the device performs target detection, it may detect multiple detection targets, a single detection target, or no detection target at all.
If the detection target in the current image frame is occluded, for example, it may fail to be detected. After the device performs target detection on the current image frame, if no detection target corresponding to the current image frame is obtained, the motion track of the current image frame is determined according to the predicted target information, that is, according to the predicted target, the position of the predicted target, the features of the predicted target, and so on. The motion track includes the position information of the same target in temporally consecutive frames, such as the coordinates of the center point of the target's circumscribed rectangle in the picture; in addition, the motion track information may include data such as the features of the object in the temporally last frame.
S103: and determining the target motion track of the current image frame according to the predicted target information and the target detection result.
The device determines the target motion track of the current image frame according to the predicted target information and the target detection result.
In one embodiment, if the target detection result is that the detection target corresponding to the current image frame is not obtained, the target motion track of the current image frame is determined according to the predicted target information.
In one embodiment, if the target detection result is that the detection target corresponding to the current image frame is obtained, the target motion track of the current image frame is determined according to the predicted target information and the first target feature of the detection target. Specifically, the device may perform feature extraction on the detection target to obtain a first target feature of the detection target. The feature extraction method may be adjusted according to the first target feature, and is not limited herein.
The first target feature may include: the circumscribed rectangle of the detection target, a color histogram, a gradient histogram, a scale-invariant feature, and the like, which is not limited herein. Note that the first target feature may comprise one or more target features.
In one embodiment, if the detection target is a small object, or an object whose appearance changes obviously so that its features are unstable, only the circumscribed rectangle may be detected, and the position of the center point of the circumscribed rectangle may be used as the feature.
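As a sketch of what extracting such a first target feature could look like (using OpenCV; the 8-bin color histogram, the function names, and the box layout are illustrative assumptions, not specified by the application):

```python
import cv2
import numpy as np

def color_histogram_feature(frame: np.ndarray, box: tuple) -> np.ndarray:
    """Color-histogram feature of the region inside the circumscribed
    rectangle box = (x, y, w, h)."""
    x, y, w, h = box
    roi = frame[y:y + h, x:x + w]
    hist = cv2.calcHist([roi], [0, 1, 2], None, [8, 8, 8],
                        [0, 256, 0, 256, 0, 256])
    return cv2.normalize(hist, hist).flatten()

def center_point_feature(box: tuple) -> np.ndarray:
    """Fallback feature for small or appearance-unstable objects: the
    center point of the circumscribed rectangle."""
    x, y, w, h = box
    return np.array([x + w / 2.0, y + h / 2.0])
```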
The device performs feature matching according to the predicted target information and the first target feature to obtain a matching result, and determines the motion track of the current image frame according to the matching result; when determining the motion track of the current image frame, operations such as updating, correcting, creating, and deleting may be performed on the existing tracks.
In one embodiment, the predicted target information includes a predicted target feature of the predicted target; the device matches the predicted target feature with the first target feature and acquires the existing track corresponding to the matched target. The matched target is the predicted target corresponding to the predicted target feature matched with the first target feature, and the existing track is the target motion track of the previous image frame of the current image frame.
Specifically, denote the predicted targets by EST_OBJ_i (i ∈ [1, p]), with p predicted targets in total, and the existing tracks by TRAJ_i (i ∈ [1, p]). The predicted target information may include the position EST_POS_i of the predicted target, the predicted target feature EST_FEA_i, and information such as the target detection area EST_RANGE_i.
Here, the position EST_POS_i of the predicted target may be the center of the circumscribed rectangle of the predicted target, or a feature point at another position of the target; the movement distance EST_MV_DIS_i refers to the distance between the centers of the circumscribed rectangles of the predicted target and the last target on the corresponding track; the predicted target feature EST_FEA_i refers to the feature of the predicted target; and the target detection area EST_RANGE_i refers to the maximum range within which a detection target of the current image frame is considered to possibly match the predicted target, so that detection targets outside this range are not matched against the predicted target.
The device matches the predicted target features of EST_OBJ_i (i ∈ [1, p]) one-to-one against the first target features of the detection targets DET_OBJ_j (j ∈ [1, q]). First, for each predicted target EST_OBJ_i, the device finds all detection targets DET_OBJ_k (k ∈ [1, K], K ≤ q) that fall within its target detection area EST_RANGE_i. Then, for each detection target DET_OBJ_k in the target detection area, the Euclidean distance DIS_ki between the feature vectors of DET_OBJ_k and the predicted target EST_OBJ_i is calculated, and the detection target DET_OBJ_k_min corresponding to the minimum Euclidean distance DIS_ki_min is found. If, for the detection target DET_OBJ_k_min, the predicted target EST_OBJ_i is at the same time the one whose feature is closest to DET_OBJ_k_min among all predicted targets EST_OBJ_i (i ∈ [1, p]), then the detection target DET_OBJ_k_min and the predicted target EST_OBJ_i are matched.
Further, if the detection target is a small object or an object with obvious appearance change, so that its features are unstable, the center point of the circumscribed rectangle can be used as the feature; the Euclidean distance is then the distance between the circumscribed-rectangle centers of DET_OBJ_k and EST_OBJ_i.
This operation is repeated for each predicted target, yielding all detection targets and predicted targets that match one to one, together with the remaining unmatched predicted targets and detection targets.
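A minimal Python sketch of this mutual nearest-neighbour matching, assuming features are numeric vectors and each detection area is an axis-aligned box (x_min, x_max, y_min, y_max); the list-based data layout is an illustrative assumption:

```python
import numpy as np

def match_targets(pred_feas, pred_ranges, det_feas, det_positions):
    """Mutual-nearest-neighbour matching sketch.

    pred_feas: predicted feature vectors EST_FEA_i
    pred_ranges: detection areas EST_RANGE_i as (x_min, x_max, y_min, y_max)
    det_feas: detected feature vectors DET_FEA_j
    det_positions: detected centers DET_POS_j as (x, y)
    Returns a dict {i: j} of matched (predicted, detected) index pairs.
    """
    p, q = len(pred_feas), len(det_feas)

    def nearest_det(i):
        # candidate detections are restricted to the predicted target's area
        x_min, x_max, y_min, y_max = pred_ranges[i]
        best_j, best_d = None, float("inf")
        for j in range(q):
            x, y = det_positions[j]
            if not (x_min <= x <= x_max and y_min <= y <= y_max):
                continue
            d = np.linalg.norm(np.asarray(pred_feas[i]) - np.asarray(det_feas[j]))
            if d < best_d:
                best_j, best_d = j, d
        return best_j

    def nearest_pred(j):
        dists = [np.linalg.norm(np.asarray(pred_feas[i]) - np.asarray(det_feas[j]))
                 for i in range(p)]
        return int(np.argmin(dists))

    matches = {}
    for i in range(p):
        j = nearest_det(i)
        # accept the pair only if the choice is mutual
        if j is not None and nearest_pred(j) == i:
            matches[i] = j
    return matches
```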
After the existing track corresponding to the matched target is determined, the device updates the existing track corresponding to the matched target according to the first position information of the detection target to obtain the motion track of the detection target. First, the detection target DET_OBJ_j is added to the existing track TRAJ_i of its matched target EST_OBJ_i; specifically, the first position information DET_POS_j of the detection target DET_OBJ_j is updated into the existing track as the accurate information of the corresponding track object TRAJ_i in the current image frame.
Further, the first feature information DET_FEA_j may also be updated into the feature information corresponding to the existing track.
In one embodiment, after updating the existing track corresponding to the matching target to obtain the motion track of the detection target, the device may correct the position information of the historical prediction target in the motion track of the detection target to obtain an updated motion track corresponding to the detection target, where the historical prediction target is a prediction target of an image frame before the current image frame.
Specifically, it is judged whether the existing track TRAJ_i on which the detection target DET_OBJ_j lies contains predicted targets that were added to the track because no detection target was matched; if so, the position information of these predicted targets is corrected.
The correction method is as follows: find the other detection target DET_OBJ_j' on the existing track that is nearest to DET_OBJ_j, and calculate the displacement vector between the two:
(Δx, Δy) = (x_j - x_j', y_j - y_j')
where (x_j, y_j) and (x_j', y_j') are respectively the position coordinates of DET_OBJ_j and DET_OBJ_j'.
The intermediate predicted targets are distributed evenly along this displacement vector, and the evenly distributed positions are taken as the corrected positions. Specifically, assuming there are z predicted targets in between, the position (x_t, y_t) of the t-th (t ∈ [1, z]) predicted target can be calculated as follows, (x_t, y_t) being the corrected position:
x_t = x_j' + t·Δx / (z + 1), y_t = y_j' + t·Δy / (z + 1)
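Under this equal-distribution reading of the correction formula, the step can be sketched in Python as follows (the tuple-based data layout is an assumption):

```python
def correct_predicted_positions(pos_j, pos_j_prime, z):
    """Evenly redistribute z intermediate predicted positions.

    pos_j, pos_j_prime: (x, y) of the two nearest detection targets
    DET_OBJ_j and DET_OBJ_j' on the track; z predicted targets lie
    between them. Returns the z corrected positions, assuming the t-th
    point sits at fraction t / (z + 1) along the displacement vector.
    """
    (xj, yj), (xp, yp) = pos_j, pos_j_prime
    dx, dy = xj - xp, yj - yp
    return [(xp + t * dx / (z + 1), yp + t * dy / (z + 1))
            for t in range(1, z + 1)]

# Example: detections at (0, 0) and (8, 4) with z = 3 predictions between.
print(correct_predicted_positions((8, 4), (0, 0), 3))
# [(2.0, 1.0), (4.0, 2.0), (6.0, 3.0)]
```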
in one embodiment, after the predicted target feature is matched with the first target feature, if there is no matching target, a motion trajectory corresponding to the detection target is created according to the first position information of the detection target, and a new motion trajectory is created.
In one embodiment, after the predicted target feature is matched with the first target feature, if the predicted target feature which is not matched with the first target feature exists, the continuous unmatched frame number of the existing track corresponding to the predicted target feature is obtained; if the continuous mismatching frame number is larger than a preset threshold value, deleting the existing track corresponding to the predicted target feature; and if the continuous mismatch frame number is less than or equal to the preset threshold, acquiring second position information of the predicted target corresponding to the predicted target characteristic, and updating the existing track corresponding to the predicted target characteristic according to the second position information to obtain the motion track corresponding to the predicted target characteristic.
Specifically, it is first determined for how many frames the existing track TRAJ_i on which the predicted target EST_OBJ_i lies has not been matched to any detection target. If this frame number NUM_frame is greater than the threshold THRES_fake, the existing track TRAJ_i is deleted; otherwise, the predicted target EST_OBJ_i is added to the track on which it lies. The specific adding operation is to update the information of the predicted target EST_OBJ_i, such as the predicted target feature EST_FEA_i and the target detection area EST_RANGE_i, into the existing track as the accurate information of the corresponding track object in the current frame.
The threshold THRES_fake can be adaptively set to half of the current video frame rate.
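The deletion/extension rule for a track whose predicted target found no match can be sketched as follows; the dictionary layout of a track and of the predicted target is an illustrative assumption:

```python
def update_unmatched_track(track, predicted, frame_rate):
    """Lifecycle rule for a track whose predicted target found no match.

    track: dict with keys 'points', 'feature', 'range', 'unmatched_frames'
    (illustrative structure). predicted: dict with 'pos', 'fea', 'range'.
    Returns the updated track, or None if the track should be deleted.
    A successful match elsewhere would reset 'unmatched_frames' to 0.
    """
    thres_fake = frame_rate // 2        # THRES_fake: half the video frame rate
    track["unmatched_frames"] += 1
    if track["unmatched_frames"] > thres_fake:
        return None                     # delete the stale track
    # otherwise extend the track with the predicted target's information
    track["points"].append(predicted["pos"])
    track["feature"] = predicted["fea"]
    track["range"] = predicted["range"]
    return track
```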
After the feature matching is performed according to the predicted target information and the first target feature and the motion track of the current image frame is determined, the method further includes: determining the predicted target information corresponding to the next image frame according to the target motion track of the current image frame.
Specifically, for each existing track TRAJ_i (i ∈ [1, p]), the object information on the motion track in the current video frame is obtained, including the position POS_i_now and the feature FEA_i_now, and the position information POS_i_last of the object on the existing track in the previous video frame is also obtained.
For each existing track TRAJ_i (i ∈ [1, p]), the predicted target EST_OBJ_i on the next frame is then calculated, including its position EST_POS_i, its feature EST_FEA_i, its detection area EST_RANGE_i, and so on.
The position information may refer to the center of the circumscribed rectangle of the object; in this case the calculation formula of EST_POS_i is:
EST_POS_i = POS_i_now + (POS_i_now - POS_i_last), i.e. x_est = 2·x_now - x_last and y_est = 2·y_now - y_last,
where (x_now, y_now), (x_last, y_last) and (x_est, y_est) are respectively the coordinates of POS_i_now, POS_i_last and EST_POS_i.
For features that do not change significantly over time (unlike position information), such as the gray-level histogram of a large object, the value of the feature EST_FEA_i is the same as FEA_i_now, namely:
EST_FEA_i = FEA_i_now
For features that change significantly over time, such as position features, the feature EST_FEA_i may be the predicted position information EST_POS_i.
The detection area EST_RANGE_i may take a fixed value or an adaptive value. For example, one possible adaptive choice, consistent with the movement distance EST_MV_DIS_i defined above, is a box around the predicted position:
x_min = x_est - g·EST_MV_DIS_i, x_max = x_est + g·EST_MV_DIS_i,
y_min = y_est - g·EST_MV_DIS_i, y_max = y_est + g·EST_MV_DIS_i,
where g is a constant greater than 0 that can be determined according to the actual situation, such as 1, and (x_min, x_max, y_min, y_max) are respectively the minimum and maximum values of the detection area EST_RANGE_i on the X and Y coordinate axes of the image.
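Putting the prediction step together, a Python sketch under the assumptions made above (linear position extrapolation, a time-stable feature, and a detection area box whose half-width is g times the movement distance; the box construction is an illustrative reading of the adaptive-range formula):

```python
def predict_next(pos_now, pos_last, fea_now, mv_dis, g=1.0):
    """Predict next-frame target information from the current track.

    Linear-extrapolation sketch: EST_POS = POS_now + (POS_now - POS_last).
    The detection area is assumed to be a box around the predicted
    position with half-width g * mv_dis (EST_MV_DIS_i); this box
    construction is an illustrative assumption.
    """
    x_now, y_now = pos_now
    x_last, y_last = pos_last
    est_pos = (2 * x_now - x_last, 2 * y_now - y_last)
    est_fea = fea_now                        # feature assumed stable over time
    half = g * mv_dis
    est_range = (est_pos[0] - half, est_pos[0] + half,   # x_min, x_max
                 est_pos[1] - half, est_pos[1] + half)   # y_min, y_max
    return est_pos, est_fea, est_range
```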
In the embodiments of the application, a current image frame and its corresponding predicted target information are obtained, where the predicted target information is obtained from the target motion track of the previous image frame of the current image frame; target detection is performed on the current image frame to obtain a target detection result; and the target motion track of the current image frame is determined according to the predicted target information and the target detection result. The calculation process is simple and does not require a large amount of computation, so the device does not need high computing power; moreover, because the predicted target information is used to obtain the motion track, target tracking can still be performed on some complex or occluded objects, which gives the method a certain robustness and improves its applicability.
It should be understood that, the sequence numbers of the steps in the foregoing embodiments do not imply an execution sequence, and the execution sequence of each process should be determined by its function and inherent logic, and should not constitute any limitation to the implementation process of the embodiments of the present application.
Referring to fig. 2, fig. 2 is a schematic diagram of a target tracking device according to a second embodiment of the present application. The device includes units for performing the steps in the corresponding embodiment of fig. 1. For related descriptions, please refer to the corresponding embodiment of fig. 1. For convenience of explanation, only the portions related to the present embodiment are shown. Referring to fig. 2, the target tracking apparatus 2 includes:
an obtaining unit 210, configured to obtain a current image frame and prediction target information corresponding to the current image frame, where the prediction target information is obtained from a target motion trajectory of a previous image frame of the current image frame;
a detecting unit 220, configured to perform target detection on the current image frame to obtain a target detection result;
a determining unit 230, configured to determine a target motion trajectory of the current image frame according to the predicted target information and the target detection result.
Further, the predicted target information includes a target detection area, and the detection unit 220 is specifically configured to:
if the current image frame is a detection frame, performing target detection on the current image frame to obtain a target detection result;
and if the current image frame is a tracking frame, performing target detection on the target detection area to obtain a target detection result.
Further, the determining unit 230 is specifically configured to:
and if the target detection result is that the detection target corresponding to the current image frame is not obtained, determining the target motion track of the current image frame according to the predicted target information.
Further, the determining unit 230 is specifically configured to:
and if the target detection result is that the detection target corresponding to the current image frame is obtained, determining the target motion track of the current image frame according to the predicted target information and the first target feature of the detection target.
Further, the first target feature includes one or more of: a circumscribed rectangle of the detection target, a color histogram, a gradient histogram, and a scale-invariant feature.
Further, the predicted target information includes a predicted target feature of the predicted target;
the determining unit 230 is specifically configured to:
matching the predicted target feature with the first target feature, and acquiring an existing track corresponding to a matched target, wherein the matched target is a predicted target corresponding to the predicted target feature matched with the first target feature, and the existing track is a target motion track of a previous image frame of the current image frame;
and updating the existing track corresponding to the matched target according to the first position information of the detected target to obtain the motion track of the detected target.
Further, the determining unit 230 is specifically further configured to:
and if the matching target does not exist, establishing a motion track corresponding to the detection target according to the first position information of the detection target.
Further, the determining unit 230 is specifically further configured to:
if a predicted target feature which is not matched with the first target feature exists, acquiring the continuous unmatched frame number of the existing track corresponding to the predicted target feature;
if the continuous mismatch frame number is larger than a preset threshold value, deleting the existing track corresponding to the predicted target feature;
and if the continuous mismatch frame number is less than or equal to the preset threshold, acquiring second position information of the predicted target corresponding to the predicted target characteristic, and updating the existing track corresponding to the predicted target characteristic according to the second position information to obtain the motion track corresponding to the predicted target characteristic.
Further, the determining unit 230 is specifically further configured to:
and correcting the position information of a historical prediction target in the motion trail of the detection target to obtain an updated motion trail corresponding to the detection target, wherein the historical prediction target is a prediction target of an image frame before the current image frame.
Further, the target tracking apparatus 2 further includes:
and the second processing unit is used for determining the predicted target information corresponding to the next frame of image frame according to the target motion track of the current image frame.
Fig. 3 is a schematic diagram of an object tracking device according to a third embodiment of the present application. As shown in fig. 3, the target tracking device 3 of this embodiment includes: a processor 30, a memory 31 and a computer program 32, such as an object tracking program, stored in said memory 31 and executable on said processor 30. The processor 30, when executing the computer program 32, implements the steps in the various target tracking method embodiments described above, such as the steps 101 to 103 shown in fig. 1. Alternatively, the processor 30, when executing the computer program 32, implements the functions of the modules/units in the above-mentioned device embodiments, such as the functions of the modules 210 to 230 shown in fig. 2.
Illustratively, the computer program 32 may be partitioned into one or more modules/units that are stored in the memory 31 and executed by the processor 30 to accomplish the present application. The one or more modules/units may be a series of computer program instruction segments capable of performing specific functions, which are used to describe the execution process of the computer program 32 in the target tracking device 3. For example, the computer program 32 may be divided into an acquisition unit, a detection unit, and a determination unit, and the function of each unit is as follows:
the device comprises an acquisition unit, a processing unit and a display unit, wherein the acquisition unit is used for acquiring a current image frame and corresponding predicted target information thereof, and the predicted target information is obtained from a target motion track of a previous image frame of the current image frame;
the detection unit is used for carrying out target detection on the current image frame to obtain a target detection result;
and the determining unit is used for determining the target motion track of the current image frame according to the predicted target information and the target detection result.
The target tracking device may include, but is not limited to, a processor 30, a memory 31. Those skilled in the art will appreciate that fig. 3 is merely an example of the target tracking device 3 and does not constitute a limitation of the target tracking device 3 and may include more or fewer components than shown, or combine certain components, or different components, e.g., the target tracking device may also include input-output devices, network access devices, buses, etc.
The Processor 30 may be a Central Processing Unit (CPU), another general purpose processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field-Programmable Gate Array (FPGA) or other programmable logic device, a discrete gate or transistor logic device, a discrete hardware component, etc. A general purpose processor may be a microprocessor, or the processor may be any conventional processor or the like.
The memory 31 may be an internal storage unit of the target tracking device 3, such as a hard disk or a memory of the target tracking device 3. The memory 31 may also be an external storage device of the target tracking device 3, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the target tracking device 3. Further, the target tracking device 3 may also include both an internal storage unit and an external storage device of the target tracking device 3. The memory 31 is used for storing the computer program and other programs and data required by the object tracking device. The memory 31 may also be used to temporarily store data that has been output or is to be output.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/units, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and specific reference may be made to the part of the embodiment of the method, which is not described herein again.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides a network device, where the network device includes: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing the steps of any of the various method embodiments described above when executing the computer program.
The embodiments of the present application further provide a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above-mentioned method embodiments.
The embodiments of the present application provide a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and can implement the steps of the embodiments of the methods described above when the computer program is executed by a processor. Wherein the computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form, etc. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer Memory, Read-Only Memory (ROM), Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium. Such as a usb-disk, a removable hard disk, a magnetic or optical disk, etc. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (12)

1. A target tracking method, comprising:
acquiring a current image frame and corresponding predicted target information thereof, wherein the predicted target information is obtained from a target motion track of a previous image frame of the current image frame;
carrying out target detection on the current image frame to obtain a target detection result;
and determining the target motion track of the current image frame according to the predicted target information and the target detection result.
2. The target tracking method of claim 1, wherein the predicted target information includes a target detection area, and the performing target detection on the current image frame to obtain a target detection result comprises:
if the current image frame is a detection frame, performing target detection on the current image frame to obtain a target detection result;
and if the current image frame is a tracking frame, performing target detection on the target detection area to obtain a target detection result.
3. The target tracking method of claim 1, wherein said determining a target motion trajectory for the current image frame based on the predicted target information and the target detection result comprises:
and if the target detection result is that the detection target corresponding to the current image frame is not obtained, determining the target motion track of the current image frame according to the predicted target information.
4. The target tracking method of claim 1, wherein said determining a target motion trajectory for the current image frame based on the predicted target information and the target detection result comprises:
and if the target detection result indicates that a detection target corresponding to the current image frame is obtained, determining the target motion track of the current image frame according to the predicted target information and a first target feature of the detection target.
5. The target tracking method of claim 4, wherein the first target feature comprises: a circumscribed rectangle, a color histogram, a gradient histogram, and a scale-invariant feature of the detection target.
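The four features listed in claim 5 could be computed, for example, with OpenCV; the calls below are one possible realization (the histogram bin counts and the HOG window size are illustrative choices), not the patented implementation.

import cv2

def extract_first_target_feature(frame, box):
    x, y, w, h = box
    patch = frame[y:y + h, x:x + w]
    gray = cv2.cvtColor(patch, cv2.COLOR_BGR2GRAY)
    # circumscribed rectangle of the detection target
    rect = (x, y, w, h)
    # color histogram over the BGR patch (8 bins per channel), normalized
    color_hist = cv2.calcHist([patch], [0, 1, 2], None,
                              [8, 8, 8], [0, 256, 0, 256, 0, 256])
    color_hist = cv2.normalize(color_hist, color_hist).flatten()
    # gradient histogram: HOG over a fixed-size copy of the patch
    hog = cv2.HOGDescriptor()
    grad_hist = hog.compute(cv2.resize(gray, (64, 128))).flatten()
    # scale-invariant feature: SIFT descriptors of the grayscale patch
    _, sift_desc = cv2.SIFT_create().detectAndCompute(gray, None)
    return rect, color_hist, grad_hist, sift_desc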
6. The target tracking method of claim 4, wherein the predicted target information includes a predicted target feature of a predicted target;
and said determining the target motion track of the current image frame according to the predicted target information and the first target feature of the detection target comprises:
matching the predicted target feature with the first target feature, and acquiring an existing track corresponding to a matched target, wherein the matched target is a predicted target corresponding to the predicted target feature matched with the first target feature, and the existing track is a target motion track of a previous image frame of the current image frame;
and updating the existing track corresponding to the matched target according to first position information of the detection target to obtain the motion track of the detection target.
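Claim 6 matches predicted target features against first target features and updates the matched targets' existing tracks. The greedy nearest-feature matching below is only one possible association strategy (Hungarian assignment is another common choice); it reuses the hypothetical Track record from the sketch under claim 1.

import numpy as np

def match_and_update(predicted_feats, detections, tracks, max_dist=0.5):
    # predicted_feats: track_id -> predicted target feature vector
    # detections: list of (box, first_target_feature_vector) pairs
    by_id = {t.track_id: t for t in tracks}
    unmatched = list(range(len(detections)))
    for track_id, pred_feat in predicted_feats.items():
        if not unmatched:
            break
        dists = [np.linalg.norm(pred_feat - detections[i][1]) for i in unmatched]
        j = int(np.argmin(dists))
        if dists[j] <= max_dist:  # a matched target is found
            box, _ = detections[unmatched.pop(j)]
            # update the matched target's existing track with the
            # first position information of the detection target
            by_id[track_id].boxes.append(box)
            by_id[track_id].misses = 0
    return tracks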
7. The target tracking method of claim 6, further comprising, after said matching the predicted target feature with the first target feature:
and if no matched target exists, establishing a motion track corresponding to the detection target according to the first position information of the detection target.
8. The target tracking method of claim 6, further comprising, after said matching the predicted target feature with the first target feature:
if a predicted target feature that does not match the first target feature exists, acquiring the number of consecutive unmatched frames of the existing track corresponding to the predicted target feature;
if the number of consecutive unmatched frames is greater than a preset threshold, deleting the existing track corresponding to the predicted target feature;
and if the number of consecutive unmatched frames is less than or equal to the preset threshold, acquiring second position information of the predicted target corresponding to the predicted target feature, and updating the existing track corresponding to the predicted target feature according to the second position information to obtain the motion track corresponding to the predicted target feature.
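The track maintenance of claim 8 amounts to a consecutive-miss counter per existing track; the 30-frame threshold below is an illustrative assumption, and the Track record is the hypothetical one from the earlier sketches. A caller would drop any track for which this helper returns None.

def maintain_unmatched_track(track, predicted_box, miss_threshold=30):
    track.misses += 1  # number of consecutive unmatched frames
    if track.misses > miss_threshold:
        return None    # delete the existing track
    # otherwise extend the existing track with the predicted target's
    # second position information
    track.boxes.append(predicted_box)
    return track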
9. The target tracking method according to claim 6, wherein after the updating of the existing track corresponding to the matched target according to the first position information of the detection target to obtain the motion track of the detection target, the method further comprises:
and correcting the position information of a historical predicted target in the motion track of the detection target to obtain an updated motion track corresponding to the detection target, wherein the historical predicted target is a predicted target of an image frame before the current image frame.
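Claim 9 only requires that the position information of historical predicted targets be corrected once a fresh detection arrives; linear interpolation between the two nearest detected positions, as sketched below, is one plausible correction and is purely an assumption.

import numpy as np

def correct_history(boxes, was_predicted):
    # boxes: the motion track, one (x, y, w, h) entry per frame
    # was_predicted[i] is True where entry i came from a historical
    # predicted target rather than from a detection
    boxes = np.asarray(boxes, dtype=float)
    detected = [i for i, p in enumerate(was_predicted) if not p]
    for a, b in zip(detected, detected[1:]):
        for i in range(a + 1, b):  # predicted entries between two detections
            t = (i - a) / (b - a)
            boxes[i] = (1 - t) * boxes[a] + t * boxes[b]
    return boxes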
10. The target tracking method according to claim 1, further comprising, after the determining of the target motion track of the current image frame according to the predicted target information and the target detection result:
and determining the predicted target information corresponding to the next image frame according to the target motion track of the current image frame.
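For claim 10, the predicted target information for the next image frame can be derived from the current motion track; the constant-velocity extrapolation and the widening factor below are illustrative assumptions, not the claimed prediction method.

def predict_next_info(track, expand=1.5):
    x, y, w, h = track.boxes[-1]
    if len(track.boxes) >= 2:
        px, py, _, _ = track.boxes[-2]
        x, y = x + (x - px), y + (y - py)  # extrapolate the last frame-to-frame motion
    # widen the predicted box into a target detection area for tracking frames
    dx, dy = w * (expand - 1) / 2, h * (expand - 1) / 2
    return (x - dx, y - dy, w + 2 * dx, h + 2 * dy)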
11. A target tracking device comprising a memory, a processor, and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any one of claims 1 to 10 when executing the computer program.
12. A computer-readable storage medium storing a computer program which, when executed by a processor, implements the method according to any one of claims 1 to 10.
CN202110174340.6A 2021-02-09 2021-02-09 Target tracking method, target tracking equipment and computer-readable storage medium Pending CN112837349A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110174340.6A CN112837349A (en) 2021-02-09 2021-02-09 Target tracking method, target tracking equipment and computer-readable storage medium

Publications (1)

Publication Number Publication Date
CN112837349A (en) 2021-05-25

Family

ID=75932891

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110174340.6A Pending CN112837349A (en) 2021-02-09 2021-02-09 Target tracking method, target tracking equipment and computer-readable storage medium

Country Status (1)

Country Link
CN (1) CN112837349A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115965657A (en) * 2023-02-28 2023-04-14 安徽蔚来智驾科技有限公司 Target tracking method, electronic device, storage medium, and vehicle

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018133666A1 (en) * 2017-01-17 2018-07-26 腾讯科技(深圳)有限公司 Method and apparatus for tracking video target
CN110807377A (en) * 2019-10-17 2020-02-18 浙江大华技术股份有限公司 Target tracking and intrusion detection method, device and storage medium
CN110910422A (en) * 2019-11-13 2020-03-24 北京环境特性研究所 Target tracking method and device, electronic equipment and readable storage medium
CN111179311A (en) * 2019-12-23 2020-05-19 全球能源互联网研究院有限公司 Multi-target tracking method and device and electronic equipment
CN111292352A (en) * 2020-01-20 2020-06-16 杭州电子科技大学 Multi-target tracking method, device, equipment and storage medium
WO2020147348A1 (en) * 2019-01-17 2020-07-23 北京市商汤科技开发有限公司 Target tracking method and device, and storage medium
WO2020211624A1 (en) * 2019-04-18 2020-10-22 腾讯科技(深圳)有限公司 Object tracking method, tracking processing method, corresponding apparatus and electronic device
CN112070807A (en) * 2020-11-11 2020-12-11 湖北亿咖通科技有限公司 Multi-target tracking method and electronic device
CN112330715A (en) * 2020-10-09 2021-02-05 深圳英飞拓科技股份有限公司 Tracking method, tracking device, terminal equipment and readable storage medium

Similar Documents

Publication Publication Date Title
CN111640140B (en) Target tracking method and device, electronic equipment and computer readable storage medium
CN111179311B (en) Multi-target tracking method and device and electronic equipment
CN110060276B (en) Object tracking method, tracking processing method, corresponding device and electronic equipment
CN107633526B (en) Image tracking point acquisition method and device and storage medium
CN108898624B (en) Moving object tracking method and device, electronic equipment and storage medium
CN109598744B (en) Video tracking method, device, equipment and storage medium
CN111860398B (en) Remote sensing image target detection method and system and terminal equipment
CN112997190B (en) License plate recognition method and device and electronic equipment
CN111798483A (en) Anti-blocking pedestrian tracking method and device and storage medium
CN113112542A (en) Visual positioning method and device, electronic equipment and storage medium
CN109035257B (en) Portrait segmentation method, device and equipment
CN112966654A (en) Lip movement detection method and device, terminal equipment and computer readable storage medium
CN111915657A (en) Point cloud registration method and device, electronic equipment and storage medium
CN116452631A (en) Multi-target tracking method, terminal equipment and storage medium
CN112634316A (en) Target tracking method, device, equipment and storage medium
CN114187333A (en) Image alignment method, image alignment device and terminal equipment
CN112837349A (en) Target tracking method, target tracking equipment and computer-readable storage medium
CN113409353B (en) Motion prospect detection method, motion prospect detection device, terminal equipment and storage medium
JP7014005B2 (en) Image processing equipment and methods, electronic devices
CN112087593A (en) Video configuration updating device and method and electronic equipment
CN115705651A (en) Video motion estimation method, device, equipment and computer readable storage medium
CN110866484B (en) Driver face detection method, computer device and computer readable storage medium
CN112101135A (en) Moving target detection method and device and terminal equipment
CN112634319A (en) Video background and foreground separation method and system, electronic device and storage medium
CN112085002A (en) Portrait segmentation method, portrait segmentation device, storage medium and electronic equipment

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination