CN110490902A - Target tracking method, device, and computer equipment applied to a smart city - Google Patents
- Publication number: CN110490902A (application CN201910711307.5A)
- Authority
- CN
- China
- Prior art keywords
- target
- tracking
- appearance features
- detection
- tracking target
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Granted
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/20—Analysis of motion
- G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20081—Training; Learning
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20084—Artificial neural networks [ANN]
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
This application relates to a target tracking method applied to a smart city. The method comprises: obtaining multiple frames of images; determining, according to the multiple frames, first appearance feature information of a tracked target and second appearance feature information of a detected target; and determining, according to the first appearance feature information and the second appearance feature information, tracking information of the tracked target in the multiple frames, and tracking the tracked target according to the tracking information. Based on the first appearance feature information of the tracked target and the second appearance feature information of the detected target, the method can learn the degree of similarity between the detected target and the tracked target, and thereby judge whether the detected target is the tracked target, which realizes tracking of the tracked target. Since the tracked target can be determined more accurately, the reliability of target tracking is improved. The application further relates to a target tracking apparatus applied to a smart city, a computer device, and a computer-readable storage medium.
Description
Technical field
This application relates to the field of image processing technology, and in particular to a target tracking method, apparatus, computer device, and computer-readable storage medium applied to a smart city.
Background technique
In an era of rapidly developing smart cities, precise surveillance has become indispensable, and target tracking technology is a key link in precise surveillance. In applications such as urban traffic, monitoring of key locations, and apprehending offenders, carrying out tracking tasks fully or semi-automatically can greatly reduce staff workload.
Traditional target tracking methods generally proceed in two steps: the first step obtains the location information of the initial target, including its horizontal and vertical coordinates and the width and height of the target; the second step predicts the location of the target in the next frame.
However, traditional target tracking methods easily lose the tracked target, so tracking reliability is not high.
Summary of the invention
Based on this, to address the technical problem that traditional target tracking methods have low reliability, it is necessary to provide a target tracking method, apparatus, computer device, and computer-readable storage medium applied to a smart city.
A target tracking method applied to a smart city, the method comprising:
obtaining multiple frames of images;
determining, according to the multiple frames, first appearance feature information of a tracked target and second appearance feature information of a detected target;
determining, according to the first appearance feature information and the second appearance feature information, tracking information of the tracked target in the multiple frames, and tracking the tracked target according to the tracking information.
In one embodiment, determining the first appearance feature information of the tracked target and the second appearance feature information of the detected target according to the multiple frames comprises:
obtaining a reference frame image from the multiple frames;
determining the tracked target in the reference frame image;
extracting the first appearance feature information of the tracked target.
In one embodiment, extracting the first appearance feature information of the tracked target comprises:
determining, according to the position of the tracked target in the reference frame image, a first bounding-box image corresponding to the tracked target;
normalizing the first bounding-box image to obtain a first standard bounding-box image;
inputting the first standard bounding-box image into an appearance feature extraction model for feature extraction to obtain the first appearance feature information of the tracked target.
In one embodiment, determining the first appearance feature information of the tracked target and the second appearance feature information of the detected target according to the multiple frames comprises:
obtaining a current frame image from the multiple frames;
performing target detection on the current frame image to obtain the detected target;
extracting the second appearance feature information of the detected target.
In one embodiment, performing target detection on the current frame image to obtain the detected target comprises:
inputting the current frame image into a target detection model for target detection, to obtain the probability that at least one initial detection result in the current frame image belongs to a specified class, wherein the target detection model comprises a pre-trained deep learning model;
if the probability that an initial detection result belongs to the specified class is greater than a preset probability threshold, determining that the initial detection result is a detected target.
In one embodiment, extracting the second appearance feature information of the detected target comprises:
determining, according to the position of the detected target in the current frame image, a second bounding-box image corresponding to the detected target;
normalizing the second bounding-box image to obtain a second standard bounding-box image;
inputting the second standard bounding-box image into the appearance feature extraction model for feature extraction to obtain the second appearance feature information of the detected target.
In one embodiment, the first appearance feature information comprises a first appearance feature vector, and the second appearance feature information comprises a second appearance feature vector;
determining the tracking information of the tracked target in the multiple frames according to the first appearance feature information and the second appearance feature information comprises:
calculating the minimum cosine distance between the first appearance feature vector and the second appearance feature vector;
if the minimum cosine distance is less than a preset cosine distance threshold, determining that the detected target is the tracked target;
obtaining the location information of the detected target in the multiple frames, and determining the location information as the tracking information.
In one embodiment, the method further comprises:
obtaining a first motion feature of the tracked target in the current frame image of the multiple frames;
obtaining a second motion feature of the detected target in the current frame image;
judging, according to the first motion feature, the second motion feature, and a preset similarity judgment condition, whether the detected target is a candidate tracked target;
wherein determining that the detected target is the tracked target if the minimum cosine distance is less than the preset cosine distance threshold comprises:
if the detected target is a candidate tracked target and the minimum cosine distance is less than the preset cosine distance threshold, determining that the detected target is the tracked target.
In one embodiment, obtaining the first motion feature of the tracked target in the current frame image of the multiple frames comprises:
obtaining an initial region corresponding to the tracked target in the reference frame image of the multiple frames;
predicting, according to the initial region and a trajectory prediction method, a final region corresponding to the tracked target in the current frame image;
determining the final region as the first motion feature;
obtaining the second motion feature of the detected target in the current frame image comprises:
obtaining a first region corresponding to the detected target in the current frame image;
determining the first region as the second motion feature;
and judging, according to the first motion feature, the second motion feature, and the preset similarity judgment condition, whether the detected target is a candidate tracked target comprises:
determining the overlapping region of the final region and the first region, and calculating the percentage of the final region occupied by the overlapping region;
if the percentage is greater than a preset percentage threshold, determining that the detected target is a candidate tracked target.
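The overlap-percentage condition described above can be sketched as follows. The box format (x1, y1, x2, y2) and the 0.5 default threshold are illustrative assumptions; the patent only specifies a preset percentage threshold.

```python
def overlap_percentage(final_box, detection_box):
    """Fraction of the predicted final region covered by its overlap with the
    detection's region. Boxes are (x1, y1, x2, y2) in pixels."""
    ix1 = max(final_box[0], detection_box[0])
    iy1 = max(final_box[1], detection_box[1])
    ix2 = min(final_box[2], detection_box[2])
    iy2 = min(final_box[3], detection_box[3])
    iw = max(0.0, ix2 - ix1)          # zero width/height when disjoint
    ih = max(0.0, iy2 - iy1)
    final_area = (final_box[2] - final_box[0]) * (final_box[3] - final_box[1])
    return (iw * ih) / final_area

def is_candidate(final_box, detection_box, threshold=0.5):
    # threshold stands in for the preset percentage threshold
    return overlap_percentage(final_box, detection_box) > threshold
```

Note that the denominator is the final (predicted) region only, not the union, because the patent measures the percentage of the final region occupied by the overlap.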
In one embodiment, obtaining the first motion feature of the tracked target in the current frame image of the multiple frames further comprises:
predicting the speed of the tracked target according to the initial region and the trajectory prediction method;
determining the speed of the tracked target as the first motion feature;
wherein determining that the detected target is a candidate tracked target if the percentage is greater than the preset percentage threshold comprises:
if the percentage is greater than the preset percentage threshold, determining that the detected target is an initial candidate tracked target;
obtaining the position offset between the initial candidate tracked target and the tracked target;
judging, according to the position offset and the speed of the tracked target, whether the initial candidate tracked target is a candidate tracked target;
if the initial candidate tracked target is a candidate tracked target, determining that the detected target is a candidate tracked target.
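One possible form of the offset-versus-speed judgment above is sketched below. The patent does not spell out the comparison rule, so the rule used here (the box-centre offset must not exceed the predicted speed times the frame interval, with some slack) is purely an assumption for illustration.

```python
import math

def motion_consistent(candidate_box, track_box, track_speed,
                      frame_interval=1.0, slack=1.5):
    """Judge whether an initial candidate is a plausible candidate tracked
    target given the predicted speed of the tracked target.

    Boxes are (x1, y1, x2, y2) in pixels; track_speed is pixels per frame.
    frame_interval and slack are illustrative assumptions."""
    def center(box):
        return ((box[0] + box[2]) / 2.0, (box[1] + box[3]) / 2.0)
    cx1, cy1 = center(candidate_box)
    cx2, cy2 = center(track_box)
    offset = math.hypot(cx1 - cx2, cy1 - cy2)   # position offset
    return offset <= track_speed * frame_interval * slack
```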
In one embodiment, the initial region is the first bounding-box image containing the tracked target, and the first region is the second bounding-box image containing the detected target;
after the detected target is determined to be a candidate tracked target, the method further comprises:
saving the first bounding-box image corresponding to the tracked target.
A target tracking apparatus applied to a smart city, the apparatus comprising:
an image acquisition module for obtaining multiple frames of images;
a feature extraction module for determining, according to the multiple frames, first appearance feature information of a tracked target and second appearance feature information of a detected target;
a target tracking module for determining, according to the first appearance feature information and the second appearance feature information, tracking information of the tracked target in the multiple frames, and tracking the tracked target according to the tracking information.
A computer device comprising a memory and a processor, the memory storing a computer program, wherein the processor, when executing the computer program, implements the steps of the method described in any of the above embodiments.
With the above target tracking method, apparatus, computer device, and computer-readable storage medium applied to a smart city, the first appearance feature information of the tracked target and the second appearance feature information of the detected target are obtained from multiple frames of images, and the tracking information of the tracked target is determined based on the first and second appearance feature information, thereby realizing tracking of the tracked target. Because the appearance features of different targets reflect the degree of similarity between targets, the degree of similarity between the detected target and the tracked target can be learned from the first appearance feature information of the tracked target and the second appearance feature information of the detected target, and whether the detected target is the tracked target can then be judged, which realizes tracking of the tracked target. Since the tracked target can be determined more accurately, the reliability of target tracking is improved.
Brief description of the drawings
Fig. 1 is a diagram of the application environment of the target tracking method applied to a smart city in one embodiment;
Fig. 2 is a schematic flowchart of the target tracking method applied to a smart city in one embodiment;
Fig. 3 is a schematic flowchart of extracting the first appearance feature information of the tracked target from the reference frame image in one embodiment;
Fig. 4 is a schematic flowchart of extracting the first appearance feature information of the tracked target in one embodiment;
Fig. 5 is a schematic flowchart of extracting the second appearance feature information of the detected target from the current frame image in one embodiment;
Fig. 6 is a schematic flowchart of extracting the second appearance feature information of the detected target in one embodiment;
Fig. 7 is a schematic flowchart of determining the tracking information of the tracked target in the multiple frames according to the first and second appearance feature vectors in one embodiment;
Fig. 8 is a schematic flowchart of determining that the detected target is the tracked target through multiple similarity judgment conditions in one embodiment;
Fig. 9 is a structural block diagram of the target tracking apparatus applied to a smart city in one embodiment;
Fig. 10 is an internal structure diagram of a computer device in one embodiment;
Fig. 11 is an application scene diagram of detecting pedestrians in one embodiment;
Fig. 12 is a chart of a 128-dimensional appearance feature vector in one embodiment;
Fig. 13 is an application scene diagram of tracking a pedestrian in Fig. 11.
Detailed description of the embodiments
To make the objectives, technical solutions, and advantages of this application clearer, the application is further described below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are only used to explain the application, not to limit it.
The target tracking method applied to a smart city provided by this application can be applied in the environment shown in Fig. 1, where a target tracking device 102 is connected to an image capture device 104. The image capture device 104 captures multiple frames of images. The target tracking device 102 obtains the frames returned by the image capture device 104 and, according to these frames, determines the first appearance feature information of the tracked target and the second appearance feature information of the detected target; it then determines, according to the first and second appearance feature information, the tracking information of the tracked target in the frames, and tracks the tracked target according to the tracking information. Optionally, tracking may take the form of the target tracking device 102 controlling the image capture device 104 to follow and film the tracked target.
Optionally, the target tracking device 102 can be, but is not limited to, various computers, laptops, smartphones, tablet computers, and portable wearable devices. Optionally, the image capture device 104 includes one or more of a webcam, a camera, and a dome camera.
In one embodiment, as shown in Fig. 2, a target tracking method applied to a smart city is provided. Taking the method as applied to the target tracking device 102 in Fig. 1 as an example, it comprises the following steps:
S202: obtain multiple frames of images.
Here, "multiple frames of images" means multiple images, i.e., at least two frames.
Specifically, the image capture device captures multiple frames of images, and the target tracking device obtains the frames returned by the image capture device.
Optionally, the frames can be obtained by extracting images from video data shot by the image capture device, such as a camera.
S204: determine, according to the multiple frames, first appearance feature information of a tracked target and second appearance feature information of a detected target.
Here, the tracked target is a target whose tracking has been determined, while the detected target is a target whose tracking is yet to be determined.
Appearance feature information refers to the characteristic information reflected by a target's appearance. Taking a pedestrian as an example, the pedestrian's appearance feature information may include the person's hair color, hair length, skin color, height, gender, type of clothing worn, type of bag carried, and so on.
Among the multiple frames, the image used to determine the tracked target is the reference frame image, and the image used to identify the detected target is the current frame image.
Specifically, after obtaining the multiple frames, the target tracking device determines the tracked target in the reference frame image and identifies the detected target in the current frame image, then extracts appearance features for the tracked target and the detected target from the corresponding images, thereby obtaining the first appearance feature information of the tracked target and the second appearance feature information of the detected target.
Optionally, the reference frame image can be the first frame of the multiple frames, or the frame immediately preceding the current frame.
S206: determine, according to the first appearance feature information and the second appearance feature information, the tracking information of the tracked target in the multiple frames, and track the tracked target according to the tracking information.
Specifically, after obtaining the first appearance feature information of the tracked target and the second appearance feature information of the detected target, the target tracking device judges the similarity between the detected target and the tracked target according to the first and second appearance feature information. If the detected target is determined to be the tracked target, the location information of the detected target is determined as the tracking information, and the tracked target is tracked according to that tracking information.
In the above target tracking method applied to a smart city, the first appearance feature information of the tracked target and the second appearance feature information of the detected target are obtained from multiple frames of images, and the tracking information of the tracked target is determined based on the first and second appearance feature information, realizing tracking of the tracked target. Because the appearance features of different targets reflect the degree of similarity between targets, the similarity between the detected target and the tracked target can be learned from the first appearance feature information of the tracked target and the second appearance feature information of the detected target, and whether the detected target is the tracked target can then be judged, which realizes tracking of the tracked target. Since the tracked target can be determined more accurately, the reliability of target tracking is improved.
In one embodiment, referring to Fig. 3, which concerns the specific process of extracting the first appearance feature information of the tracked target from the reference frame image, on the basis of the above embodiments S204 comprises the following steps:
S212: obtain the reference frame image from the multiple frames;
S214: determine the tracked target in the reference frame image;
S216: extract the first appearance feature information of the tracked target.
Optionally, the target tracking device can determine the first frame of the multiple frames as the reference frame image, or determine the frame preceding the current frame as the reference frame image. The reference frame image needs to contain the tracked target.
Specifically, after determining the reference frame image among the multiple frames, the target tracking device determines the tracked target in the reference frame image through a target detection algorithm, or selects the tracked target through manual framing. After the tracked target is determined, the target tracking device extracts its first appearance feature information from the reference frame image.
As one implementation, referring to Fig. 4, S216 comprises the following steps:
S2162: determine, according to the position of the tracked target in the reference frame image, the first bounding-box image corresponding to the tracked target;
S2164: normalize the first bounding-box image to obtain the first standard bounding-box image;
S2166: input the first standard bounding-box image into the appearance feature extraction model for feature extraction to obtain the first appearance feature information of the tracked target.
Here, the first bounding-box image is the image containing the tracked target that is framed in the reference frame image using a bounding box.
Specifically, the target tracking device frames the tracked target with a bounding box according to its position in the reference frame image, obtaining the corresponding first bounding-box image. The target tracking device then normalizes the first bounding-box image to obtain the first standard bounding-box image, inputs the first standard bounding-box image into the appearance feature extraction model for feature extraction, and obtains the first appearance feature information of the tracked target.
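The framing-and-normalization step above can be sketched in plain Python. This is a minimal sketch using nearest-neighbour resampling on a single-channel frame; a real system would use an image library, and the 128*64 target size follows the embodiment below.

```python
def crop_and_normalize(frame, box, out_w=64, out_h=128):
    """Crop the bounding-box image from a frame and normalize it to a
    standard 128x64 (height x width) size with nearest-neighbour sampling.

    frame is a 2-D list of pixel values; box is (x1, y1, x2, y2)."""
    x1, y1, x2, y2 = box
    crop = [row[x1:x2] for row in frame[y1:y2]]       # bounding-box image
    ch, cw = len(crop), len(crop[0])
    # map each output pixel back to its nearest source pixel
    return [[crop[(y * ch) // out_h][(x * cw) // out_w]
             for x in range(out_w)] for y in range(out_h)]
```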
Optionally, take the appearance feature extraction model to be a convolutional neural network. Although the composition of this convolutional neural network is fairly simple, it can extract the appearance feature information of targets well. Optionally, it may include 2 convolutional layers, 1 max-pooling layer, 6 residual layers, 1 dense layer, and 1 batch normalization layer. Specifically, after obtaining the first bounding-box image, the target tracking device normalizes it into a first standard bounding-box image of 128*64 (pixels), then inputs this 128*64 (pixel) image into the convolutional neural network, which extracts the first appearance feature information of the tracked target. The first appearance feature information can be a 128-dimensional first appearance feature vector.
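The layer inventory above (2 convolutional layers, 1 max-pooling layer, 6 residual layers, 1 dense layer, 1 batch-normalization layer, a 128*64 input, and a 128-dimensional output) can be sketched roughly as follows. The patent does not give kernel sizes, channel widths, or strides, so all of those are assumptions here; this is an illustrative PyTorch sketch, not the patented network.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class Residual(nn.Module):
    """A basic residual block; channel/stride handling is simplified."""
    def __init__(self, c_in, c_out, stride=1):
        super().__init__()
        self.conv1 = nn.Conv2d(c_in, c_out, 3, stride, 1, bias=False)
        self.bn1 = nn.BatchNorm2d(c_out)
        self.conv2 = nn.Conv2d(c_out, c_out, 3, 1, 1, bias=False)
        self.bn2 = nn.BatchNorm2d(c_out)
        self.down = None
        if stride != 1 or c_in != c_out:   # match shapes for the skip path
            self.down = nn.Sequential(
                nn.Conv2d(c_in, c_out, 1, stride, bias=False),
                nn.BatchNorm2d(c_out))

    def forward(self, x):
        identity = x if self.down is None else self.down(x)
        out = F.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return F.relu(out + identity)

class AppearanceNet(nn.Module):
    """2 conv layers, 1 max pool, 6 residual layers, 1 dense layer,
    1 batch-norm layer; emits a 128-dimensional appearance vector."""
    def __init__(self):
        super().__init__()
        self.conv1 = nn.Conv2d(3, 32, 3, 1, 1)
        self.conv2 = nn.Conv2d(32, 32, 3, 1, 1)
        self.pool = nn.MaxPool2d(2)                       # 128x64 -> 64x32
        self.res = nn.Sequential(
            Residual(32, 32), Residual(32, 32),
            Residual(32, 64, 2), Residual(64, 64),        # -> 32x16
            Residual(64, 128, 2), Residual(128, 128))     # -> 16x8
        self.fc = nn.Linear(128 * 16 * 8, 128)
        self.bn = nn.BatchNorm1d(128)

    def forward(self, x):                                 # x: (N, 3, 128, 64)
        x = F.relu(self.conv2(F.relu(self.conv1(x))))
        x = self.pool(x)
        x = self.res(x)
        x = self.fc(x.flatten(1))
        x = self.bn(x)
        return F.normalize(x, dim=1)   # unit-length 128-d appearance vector
```

The final L2 normalization is an assumption that makes the vectors directly comparable by cosine distance, matching the matching step described later in this embodiment.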
By determining the tracked target in the reference frame image and obtaining its first appearance feature information, this embodiment makes the first appearance feature information of the tracked target more reliable, improving the accuracy of target tracking.
In one embodiment, referring to Fig. 5, which concerns the specific process of extracting the second appearance feature information of the detected target from the current frame image, on the basis of the above embodiments S204 comprises the following steps:
S222: obtain the current frame image from the multiple frames;
S224: perform target detection on the current frame image to obtain the detected target;
S226: extract the second appearance feature information of the detected target.
Here, the current frame image refers to the image currently being read by the target tracking device.
Specifically, after determining the current frame image among the multiple frames, the target tracking device performs target detection on the current frame image through a target detection algorithm to obtain the detected target. The target tracking device then extracts the second appearance feature information of the detected target from the current frame image.
As one implementation, referring to Fig. 6, S226 comprises the following steps:
S2262: determine, according to the position of the detected target in the current frame image, the second bounding-box image corresponding to the detected target;
S2264: normalize the second bounding-box image to obtain the second standard bounding-box image;
S2266: input the second standard bounding-box image into the appearance feature extraction model for feature extraction to obtain the second appearance feature information of the detected target.
Here, the second bounding-box image is the image containing the detected target that is framed in the current frame image using a bounding box.
Specifically, the target tracking device frames the detected target with a bounding box according to its position in the current frame image, obtaining the corresponding second bounding-box image. The target tracking device then normalizes the second bounding-box image to obtain the second standard bounding-box image, inputs the second standard bounding-box image into the appearance feature extraction model for feature extraction, and obtains the second appearance feature information of the detected target.
Optionally, take the appearance feature extraction model to be a convolutional neural network. Although its composition is fairly simple, it can extract the appearance feature information of targets well. Optionally, it may include 2 convolutional layers, 1 max-pooling layer, 6 residual layers, 1 dense layer, and 1 batch normalization layer. Specifically, after obtaining the second bounding-box image, the target tracking device normalizes it into a second standard bounding-box image of 128*64 (pixels), then inputs this 128*64 (pixel) image into the convolutional neural network, which extracts the second appearance feature information of the detected target. The second appearance feature information can be a 128-dimensional second appearance feature vector.
By determining the detected target in the current frame image and obtaining its second appearance feature information, this embodiment makes the second appearance feature information of the detected target more complete, further improving the accuracy of target tracking.
In one embodiment, concerning the specific process of performing target detection on the current frame image through a pre-trained deep learning model to obtain the detected target, on the basis of the above embodiments S224 comprises the following steps:
S232: input the current frame image into the target detection model for target detection, to obtain the probability that at least one initial detection result in the current frame image belongs to a specified class, wherein the target detection model comprises a pre-trained deep learning model;
S234: if the probability that an initial detection result belongs to the specified class is greater than a preset probability threshold, determine that the initial detection result is a detected target.
Specifically, the target tracking device inputs the current frame image into the target detection model for target detection, obtaining at least one initial detection result and the probability that each initial detection result belongs to a specified class. It can be understood that an initial detection result may have probabilities for multiple classes; for example, a certain initial detection result may have a probability of 50% of being a person, 20% of being a plant, and 30% of being a stone. The probability that the initial detection result belongs to the specified class is then compared with the corresponding probability threshold: if the probability is greater than the preset threshold, the initial detection result is retained and determined to be a detected target; otherwise, the initial detection result is discarded. The preset probability threshold can be set to any number between 0 and 1.
Further, the deep learning model can be trained as follows. First, image samples of various classes of targets can be extracted from captured video images to form a training sample set. The image samples in the training set are then organized, and the targets of each class are labeled in the samples. Finally, the labeled image samples are used to train the deep learning model, yielding the target detection model. Further, when labeling the targets, bounding boxes may be chosen for the labeling, so that the trained target detection model can also output the bounding-box information of targets; this bounding-box information can be used to determine the position of a target in the image.
In this embodiment, to address the difficulty of tracking targets in complex environments and the resulting low tracking accuracy, deep learning is used to perform target detection on the current frame image. The deep learning model is trained mainly on image samples of various classes of targets and uses the powerful feature extraction ability of deep learning to detect and identify targets, achieving accurate target detection and thereby effectively assisting accurate target tracking.
In one embodiment, referring to Fig. 7, this relates to the specific process of determining the tracking information of the tracking target in the multiple frames of images according to the first appearance feature vector and the second appearance feature vector. The first appearance feature information includes the first appearance feature vector, and the second appearance feature information includes the second appearance feature vector. On the basis of the above embodiments, S206 includes the following steps:
S242: calculate the minimum cosine distance between the first appearance feature vector and the second appearance feature vector;
S244: if the minimum cosine distance is less than a preset cosine distance threshold, determine that the detection target is the tracking target;
S246: obtain the position information of the detection target in the multiple frames of images, and determine the position information as the tracking information.
Specifically, using the convolutional neural network described in the above embodiments, the target tracking device can extract the first appearance feature vector of the tracking target and the second appearance feature vector of the detection target. The target tracking device then calculates the minimum cosine distance between the first appearance feature vector and the second appearance feature vector and compares it with a preset cosine distance threshold; if the minimum cosine distance is less than the preset cosine distance threshold, the detection target is determined to be the tracking target. The target tracking device then obtains the position information of the detection target in the multiple frames of images and determines that position information as the tracking information of the tracking target in the multiple frames of images.
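One way to realize steps S242 and S244 is sketched below, assuming the appearance features are stored as real-valued vectors (for instance, several vectors accumulated for one tracking target). The function names and the threshold default are illustrative, not from the patent.

```python
import numpy as np

def cosine_distance(a, b):
    # 1 - cosine similarity; 0 means the vectors point the same way
    a = a / np.linalg.norm(a)
    b = b / np.linalg.norm(b)
    return 1.0 - float(np.dot(a, b))

def min_cosine_distance(track_features, det_feature):
    # Minimum over all appearance vectors stored for the tracking target
    return min(cosine_distance(f, det_feature) for f in track_features)

def is_same_target(track_features, det_feature, threshold=0.2):
    # S244: accept the detection as the tracking target when the minimum
    # cosine distance falls below the preset cosine distance threshold
    return min_cosine_distance(track_features, det_feature) < threshold
```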
Optionally, the position information of the detection target may be the coordinate information of the detection target.
In the embodiments of the present application, by comparing the minimum cosine distance with the preset cosine distance threshold, a detection target whose minimum cosine distance is less than the preset cosine distance threshold is determined to be the tracking target, which improves the accuracy of identifying the tracking target.
In one embodiment, referring to Fig. 8, this relates to the specific process of determining that the detection target is the tracking target through multiple similarity judgment conditions. On the basis of the above embodiments, the method further includes the following steps:
S252: obtain the first motion feature of the tracking target in the current frame image of the multiple frames of images;
S254: obtain the second motion feature of the detection target in the current frame image;
S256: judge, according to the first motion feature, the second motion feature and preset similarity judgment conditions, whether the detection target is a candidate tracking target;
S258: if the detection target is a candidate tracking target and the minimum cosine distance is less than the preset cosine distance threshold, determine that the detection target is the tracking target.
Optionally, the motion features may include position, speed, and the like.
Specifically, based on the position information of the tracking target in the reference frame image, the target tracking device obtains the first motion feature of the tracking target in the current frame image through a trajectory prediction method. The target tracking device also obtains the second motion feature of the detection target in the current frame image using the target detection model. The target tracking device then judges, according to the first motion feature, the second motion feature and the preset similarity judgment conditions, whether the detection target is a candidate tracking target: if the first motion feature and the second motion feature satisfy the preset similarity judgment conditions, the detection target is determined to be a candidate tracking target. Further, if the detection target is a candidate tracking target and the minimum cosine distance is less than the preset cosine distance threshold, the target tracking device determines that the detection target is the tracking target.
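The two-stage decision of S258 combines a motion gate with the appearance check; a minimal sketch (names and the threshold default are illustrative, not from the patent):

```python
def is_tracking_target(is_candidate, min_cos_dist, cos_threshold=0.2):
    # S258: a detection is confirmed as the tracking target only when it has
    # already passed the motion-similarity conditions (candidate tracking
    # target) AND its minimum cosine distance to the tracking target's
    # appearance features is below the preset cosine distance threshold.
    return is_candidate and min_cos_dist < cos_threshold
```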
In the embodiments of the present application, determining that the detection target is the tracking target through multiple similarity judgment conditions further improves the accuracy of identifying the tracking target.
As one implementation, this relates to a possible process of determining, through multiple similarity judgment conditions, that the detection target is a candidate tracking target. On the basis of the above embodiments, this process includes the following steps:
S2522: obtain the initial area corresponding to the tracking target in the reference frame image of the multiple frames of images;
S2524: predict, according to the initial area and a trajectory prediction method, the final area corresponding to the tracking target in the current frame image;
S2526: determine the final area as the first motion feature;
S2542: obtain the first area corresponding to the detection target in the current frame image;
S2544: determine the first area as the second motion feature;
S2562: determine the overlapping region of the final area and the first area, and calculate the percentage of the final area occupied by the overlapping region;
S2564: if the percentage is greater than a preset percentage threshold, determine that the detection target is a candidate tracking target.
Optionally, the initial area is a first bounding box image containing the tracking target, and the first area is a second bounding box image containing the detection target.
Specifically, the target tracking device obtains the initial area corresponding to the tracking target in the reference frame image of the multiple frames of images, and predicts, according to the initial area and the trajectory prediction method, the final area corresponding to the tracking target in the current frame image; the final area is then determined as the first motion feature. The target tracking device also obtains the first area corresponding to the detection target in the current frame image and determines the first area as the second motion feature. The target tracking device then determines the overlapping region of the final area and the first area and calculates the percentage of the final area occupied by the overlapping region; if the percentage is greater than the preset percentage threshold, the detection target is determined to be a candidate tracking target.
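The overlap test of S2562-S2564 can be sketched as follows, assuming boxes are given in (x1, y1, x2, y2) form; the function names and the threshold default are illustrative, not from the patent.

```python
def overlap_fraction(pred_box, det_box):
    """Fraction of the predicted final area covered by the overlap with
    the detection's first area (per S2562). Boxes are (x1, y1, x2, y2)."""
    ix1 = max(pred_box[0], det_box[0]); iy1 = max(pred_box[1], det_box[1])
    ix2 = min(pred_box[2], det_box[2]); iy2 = min(pred_box[3], det_box[3])
    iw = max(0.0, ix2 - ix1); ih = max(0.0, iy2 - iy1)
    pred_area = (pred_box[2] - pred_box[0]) * (pred_box[3] - pred_box[1])
    return (iw * ih) / pred_area

def is_candidate(pred_box, det_box, pct_threshold=0.5):
    # S2564: candidate when the overlap percentage exceeds the preset threshold
    return overlap_fraction(pred_box, det_box) > pct_threshold
```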
As another implementation, this relates to another possible process of determining, through multiple similarity judgment conditions, that the detection target is a candidate tracking target. On the basis of the above embodiments, this process includes the following steps:
S252a: predict the speed of the tracking target according to the initial area and the trajectory prediction method;
S252b: determine the speed of the tracking target as the first motion feature;
S256a: if the percentage is greater than the preset percentage threshold, determine that the detection target is an initial candidate tracking target;
S256b: obtain the position offset between the initial candidate tracking target and the tracking target;
S256c: judge, according to the position offset and the speed of the tracking target, whether the initial candidate tracking target is a candidate tracking target;
S256d: if the initial candidate tracking target is a candidate tracking target, determine that the detection target is a candidate tracking target.
Specifically, the target tracking device predicts the speed of the tracking target according to the initial area and the trajectory prediction method, and determines the speed of the tracking target as the first motion feature. If the percentage is greater than the preset percentage threshold, the target tracking device determines that the detection target is an initial candidate tracking target, then obtains the position offset between the initial candidate tracking target and the tracking target, and judges, according to the position offset and the speed of the tracking target, whether the initial candidate tracking target is a candidate tracking target. If the initial candidate tracking target is a candidate tracking target, the target tracking device determines that the detection target is a candidate tracking target.
Optionally, the trajectory prediction method includes Kalman filtering, Bayesian inference, and the like. As one implementation, Kalman filtering is selected to predict the position of the tracking target in the current frame image, i.e. the final area, while also obtaining the speed of the tracking target, thereby realizing the extraction of the motion features of the tracking target.
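As one possible realization of the Kalman-filter-based trajectory prediction mentioned above, the following sketch applies a constant-velocity predict step to a bounding box centre, which yields both a predicted position and a speed estimate. The state layout and process-noise value are assumptions for illustration, not taken from the patent.

```python
import numpy as np

def kalman_predict(x, P, dt=1.0, q=1e-2):
    """One constant-velocity Kalman predict step.
    State x = [cx, cy, vx, vy] (box centre and velocity); P is the 4x4
    state covariance; q is an illustrative process-noise scale."""
    F = np.array([[1, 0, dt, 0],
                  [0, 1, 0, dt],
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]], dtype=float)
    Q = q * np.eye(4)
    x_pred = F @ x                # predicted position and carried-over speed
    P_pred = F @ P @ F.T + Q      # propagated uncertainty
    return x_pred, P_pred

# Example: centre at (100, 50) moving with velocity (3, -1) per frame
x = np.array([100.0, 50.0, 3.0, -1.0])
x_pred, P_pred = kalman_predict(x, np.eye(4))
# predicted centre: (103.0, 49.0); velocity components unchanged
```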
The embodiments of the present application take into account the continuity of target motion: the extraction of motion features mainly determines the position and speed of the tracking target through the trajectory prediction method, which reduces the number of targets for which appearance features must be extracted.
Optionally, implementations of S256c include:
Implementation one: the target tracking device first converts the position offset into a corresponding actual speed; if the ratio of the actual speed to the speed of the tracking target lies within [1-Δv, 1+Δv], the initial candidate tracking target is determined to be a candidate tracking target. The value of Δv can be set according to the actual tracking accuracy requirements.
Implementation two: the target tracking device first converts the position offset into a corresponding actual speed; if the difference between the actual speed and the speed of the tracking target lies within [-Δv, +Δv], the initial candidate tracking target is determined to be a candidate tracking target. The value of Δv can be set according to the actual tracking accuracy requirements.
In one embodiment, after the detection target is determined to be a candidate tracking target, the method further includes: saving the first bounding box image corresponding to the tracking target. Further, if it is determined that no detection target in the current frame image is the tracking target, processing proceeds directly to the next frame of the multiple frames of images, while the first bounding box image corresponding to the tracking target in the previous frame image is saved.
One embodiment of the present application relates to an actual application scenario of the target tracking method applied to a smart city. Specifically, a corridor surveillance video is used, and the processing mainly targets pedestrians walking through the monitored scene during shooting.
After the video starts, when a pedestrian appears in the monitored scene, the pedestrian target is detected and tracked, as shown in Fig. 11. First, detection is performed using the deep learning target detection model; in Fig. 11, the blue box is the detection result for the pedestrian target. At the same time, feature extraction is performed on the pedestrian target to obtain the target's feature vector. To facilitate subsequent tracking of the pedestrian target, this embodiment uses the target's feature vector in the current frame image as a reference feature vector, and every target detected in subsequent frame images is subjected to a similarity judgment against this feature vector; the reference feature vector of the target is the 128-dimensional vector displayed by the terminal in Fig. 12. In addition, the red box in Fig. 11 is the tracking target determined for the current frame image, and the pedestrian target is subsequently tracked.
To verify that this method can accurately achieve stable target tracking, the pedestrian serving as the tracking target in the video images was processed continuously. As shown in Fig. 13, the pedestrian serving as the tracking target has moved away from the centre of the picture across the screen and has undergone a certain amount of deformation. In Fig. 13, the red box is the detection result for the pedestrian target, and the blue box is the tracking result for the pedestrian target. This illustrates that the method of this embodiment, by extracting the target feature vector, can accurately track the target through the similarity judgment without being affected by target deformation.
It should be understood that although the steps in the flowcharts of Figs. 2-8 are displayed in sequence as indicated by the arrows, these steps are not necessarily executed in the order indicated by the arrows. Unless explicitly stated herein, there is no strict ordering constraint on the execution of these steps, and they may be executed in other orders. Moreover, at least some of the steps in Figs. 2-8 may include multiple sub-steps or stages, which are not necessarily completed at the same time but may be executed at different times; the execution order of these sub-steps or stages is also not necessarily sequential, and they may be executed in turn or alternately with at least part of the sub-steps or stages of other steps.
In one embodiment, as shown in Fig. 9, a target tracking apparatus 30 applied to a smart city is provided. The apparatus includes:
an image acquisition module 302, configured to obtain multiple frames of images;
a feature extraction module 304, configured to determine, according to the multiple frames of images, the first appearance feature information of the tracking target and the second appearance feature information of the detection target;
a target tracking module 306, configured to determine, according to the first appearance feature information and the second appearance feature information, the tracking information of the tracking target in the multiple frames of images, and to track the tracking target according to the tracking information.
The above target tracking apparatus applied to a smart city obtains the first appearance feature information of the tracking target and the second appearance feature information of the detection target in the multiple frames of images, and determines the tracking information of the tracking target based on the first appearance feature information and the second appearance feature information, thereby tracking the tracking target. Since the appearance features of different targets reflect the degree of similarity between targets, the degree of similarity between the detection target and the tracking target can be learned from the first appearance feature information of the tracking target and the second appearance feature information of the detection target, from which it can be judged whether the detection target is the tracking target, thereby achieving tracking of the tracking target. Because the tracking target can thus be determined more accurately, the reliability of target tracking is improved.
For specific limitations of the target tracking apparatus applied to a smart city, reference may be made to the above limitations of the target tracking method applied to a smart city, which are not repeated here. Each module in the above target tracking apparatus applied to a smart city may be implemented in whole or in part by software, hardware, or a combination thereof. Each of the above modules may be embedded in hardware form in, or independent of, a processor in the computer device, or may be stored in software form in a memory in the computer device, so that the processor can invoke and execute the operations corresponding to each of the above modules.
In one embodiment, a computer device is provided. The computer device may be a terminal, and its internal structure diagram may be as shown in Fig. 10. The computer device includes a processor, a memory, a network interface, a display screen and an input apparatus connected through a system bus. The processor of the computer device provides computing and control capabilities. The memory of the computer device includes a non-volatile storage medium and an internal memory. The non-volatile storage medium stores an operating system and a computer program. The internal memory provides an environment for the operation of the operating system and the computer program in the non-volatile storage medium. The network interface of the computer device is used to communicate with an external terminal through a network connection. The computer program, when executed by the processor, implements a target tracking method applied to a smart city. The display screen of the computer device may be a liquid crystal display screen or an electronic ink display screen, and the input apparatus of the computer device may be a touch layer covering the display screen, a key, trackball or trackpad provided on the housing of the computer device, or an external keyboard, trackpad, mouse, or the like.
Those skilled in the art will understand that the structure shown in Fig. 10 is only a block diagram of part of the structure relevant to the solution of the present application, and does not constitute a limitation on the computer device to which the solution of the present application is applied; a specific computer device may include more or fewer components than shown in the figure, combine certain components, or have a different arrangement of components.
In one embodiment, a computer device is provided, including a memory, a processor, and a computer program stored in the memory and executable by the processor. When the processor executes the computer program, the following steps are implemented:
obtaining multiple frames of images;
determining, according to the multiple frames of images, the first appearance feature information of the tracking target and the second appearance feature information of the detection target;
determining, according to the first appearance feature information and the second appearance feature information, the tracking information of the tracking target in the multiple frames of images, and tracking the tracking target according to the tracking information.
The above computer device obtains the first appearance feature information of the tracking target and the second appearance feature information of the detection target in the multiple frames of images, and determines the tracking information of the tracking target based on the first appearance feature information and the second appearance feature information, thereby tracking the tracking target. Since the appearance features of different targets reflect the degree of similarity between targets, the degree of similarity between the detection target and the tracking target can be learned from the first appearance feature information of the tracking target and the second appearance feature information of the detection target, from which it can be judged whether the detection target is the tracking target, thereby achieving tracking of the tracking target. Because the tracking target can thus be determined more accurately, the reliability of target tracking is improved.
In one embodiment, a computer-readable storage medium is provided, on which a computer program is stored. When the computer program is executed by a processor, the following steps are implemented:
obtaining multiple frames of images;
determining, according to the multiple frames of images, the first appearance feature information of the tracking target and the second appearance feature information of the detection target;
determining, according to the first appearance feature information and the second appearance feature information, the tracking information of the tracking target in the multiple frames of images, and tracking the tracking target according to the tracking information.
The above computer-readable storage medium obtains the first appearance feature information of the tracking target and the second appearance feature information of the detection target in the multiple frames of images, and determines the tracking information of the tracking target based on the first appearance feature information and the second appearance feature information, thereby tracking the tracking target. Since the appearance features of different targets reflect the degree of similarity between targets, the degree of similarity between the detection target and the tracking target can be learned from the first appearance feature information of the tracking target and the second appearance feature information of the detection target, from which it can be judged whether the detection target is the tracking target, thereby achieving tracking of the tracking target. Because the tracking target can thus be determined more accurately, the reliability of target tracking is improved.
Those of ordinary skill in the art will understand that all or part of the processes in the methods of the above embodiments may be completed by a computer program instructing the relevant hardware. The computer program may be stored in a non-volatile computer-readable storage medium, and when executed, may include the processes of the embodiments of the above methods. Any reference to a memory, storage, database or other medium used in the embodiments provided in this application may include non-volatile and/or volatile memory. Non-volatile memory may include read-only memory (ROM), programmable ROM (PROM), electrically programmable ROM (EPROM), electrically erasable programmable ROM (EEPROM) or flash memory. Volatile memory may include random access memory (RAM) or external cache memory. By way of illustration and not limitation, RAM is available in many forms, such as static RAM (SRAM), dynamic RAM (DRAM), synchronous DRAM (SDRAM), double data rate SDRAM (DDR SDRAM), enhanced SDRAM (ESDRAM), Synchlink DRAM (SLDRAM), Rambus direct RAM (RDRAM), direct Rambus dynamic RAM (DRDRAM), Rambus dynamic RAM (RDRAM), and the like.
The technical features of the above embodiments may be combined arbitrarily. For brevity of description, not all possible combinations of the technical features in the above embodiments have been described; however, as long as a combination of these technical features contains no contradiction, it should be considered to be within the scope of this specification.
The above embodiments express only several implementations of the present application, and their descriptions are relatively specific and detailed, but they should not therefore be construed as limiting the scope of the patent. It should be pointed out that those of ordinary skill in the art may make various modifications and improvements without departing from the concept of the present application, all of which fall within the scope of protection of the present application.
Claims (14)
1. A target tracking method applied to a smart city, characterized in that the method comprises:
obtaining multiple frames of images;
determining, according to the multiple frames of images, first appearance feature information of a tracking target and second appearance feature information of a detection target, wherein the tracking target is a target whose tracking has been determined, and the detection target is a target whose tracking is to be determined;
determining, according to the first appearance feature information and the second appearance feature information, tracking information of the tracking target in the multiple frames of images, and tracking the tracking target according to the tracking information.
2. The method according to claim 1, characterized in that determining, according to the multiple frames of images, the first appearance feature information of the tracking target and the second appearance feature information of the detection target comprises:
obtaining a reference frame image in the multiple frames of images;
determining the tracking target in the reference frame image;
extracting the first appearance feature information of the tracking target.
3. The method according to claim 2, characterized in that extracting the first appearance feature information of the tracking target comprises:
determining, according to the position of the tracking target in the reference frame image, a first bounding box image corresponding to the tracking target;
normalizing the first bounding box image to obtain a first standard bounding box image;
inputting the first standard bounding box image into an appearance feature extraction model for feature extraction, to obtain the first appearance feature information of the tracking target.
4. The method according to claim 1, characterized in that determining, according to the multiple frames of images, the first appearance feature information of the tracking target and the second appearance feature information of the detection target comprises:
obtaining a current frame image in the multiple frames of images;
performing target detection on the current frame image to obtain the detection target;
extracting the second appearance feature information of the detection target.
5. The method according to claim 4, characterized in that performing target detection on the current frame image to obtain the detection target comprises:
inputting the current frame image into a target detection model for target detection, to obtain the probability that at least one initial detection result in the current frame image belongs to a specified classification, wherein the target detection model includes a pre-trained deep learning model;
if the probability that the initial detection result belongs to the specified classification is greater than a preset specified probability threshold, determining that the initial detection result is a detection target.
6. The method according to claim 4, characterized in that extracting the second appearance feature information of the detection target comprises:
determining, according to the position of the detection target in the current frame image, a second bounding box image corresponding to the detection target;
normalizing the second bounding box image to obtain a second standard bounding box image;
inputting the second standard bounding box image into an appearance feature extraction model for feature extraction, to obtain the second appearance feature information of the detection target.
7. The method according to claim 1, characterized in that the first appearance feature information includes a first appearance feature vector, and the second appearance feature information includes a second appearance feature vector;
and determining, according to the first appearance feature information and the second appearance feature information, the tracking information of the tracking target in the multiple frames of images comprises:
calculating the minimum cosine distance between the first appearance feature vector and the second appearance feature vector;
if the minimum cosine distance is less than a preset cosine distance threshold, determining that the detection target is the tracking target;
obtaining position information of the detection target in the multiple frames of images, and determining the position information as the tracking information.
8. The method according to claim 7, characterized in that the method further comprises:
obtaining a first motion feature of the tracking target in a current frame image of the multiple frames of images;
obtaining a second motion feature of the detection target in the current frame image;
judging, according to the first motion feature, the second motion feature and preset similarity judgment conditions, whether the detection target is a candidate tracking target;
wherein, if the minimum cosine distance is less than the preset cosine distance threshold, determining that the detection target is the tracking target comprises:
if the detection target is a candidate tracking target and the minimum cosine distance is less than the preset cosine distance threshold, determining that the detection target is the tracking target.
9. The method according to claim 8, characterized in that obtaining the first motion feature of the tracking target in the current frame image of the multiple frames of images comprises:
obtaining an initial area corresponding to the tracking target in a reference frame image of the multiple frames of images;
predicting, according to the initial area and a trajectory prediction method, a final area corresponding to the tracking target in the current frame image;
determining the final area as the first motion feature;
obtaining the second motion feature of the detection target in the current frame image comprises:
obtaining a first area corresponding to the detection target in the current frame image;
determining the first area as the second motion feature;
and judging, according to the first motion feature, the second motion feature and the preset similarity judgment conditions, whether the detection target is a candidate tracking target comprises:
determining an overlapping region of the final area and the first area, and calculating a percentage of the final area occupied by the overlapping region;
if the percentage is greater than a preset percentage threshold, determining that the detection target is a candidate tracking target.
10. The method according to claim 9, characterized in that obtaining the first motion feature of the tracking target in the current frame image of the multiple frames of images further comprises:
predicting the speed of the tracking target according to the initial area and the trajectory prediction method;
determining the speed of the tracking target as the first motion feature;
wherein, if the percentage is greater than the preset percentage threshold, determining that the detection target is a candidate tracking target comprises:
if the percentage is greater than the preset percentage threshold, determining that the detection target is an initial candidate tracking target;
obtaining a position offset between the initial candidate tracking target and the tracking target;
judging, according to the position offset and the speed of the tracking target, whether the initial candidate tracking target is a candidate tracking target;
if the initial candidate tracking target is a candidate tracking target, determining that the detection target is a candidate tracking target.
11. The method according to claim 9, characterized in that the initial area is a first bounding box image containing the tracking target, and the first area is a second bounding box image containing the detection target;
and after determining that the detection target is a candidate tracking target, the method further comprises:
saving the first bounding box image corresponding to the tracking target.
12. A target tracking apparatus applied to a smart city, characterized in that the apparatus comprises:
an image acquisition module, configured to obtain multiple frames of images;
a feature extraction module, configured to determine, according to the multiple frames of images, first appearance feature information of a tracking target and second appearance feature information of a detection target;
a target tracking module, configured to determine, according to the first appearance feature information and the second appearance feature information, tracking information of the tracking target in the multiple frames of images, and to track the tracking target according to the tracking information.
13. A computer device, comprising a memory and a processor, the memory storing a computer program, characterized in that the processor, when executing the computer program, implements the steps of the method according to any one of claims 1 to 11.
14. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program, when executed by a processor, implements the steps of the method according to any one of claims 1 to 11.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910711307.5A CN110490902B (en) | 2019-08-02 | 2019-08-02 | Target tracking method and device applied to smart city and computer equipment |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110490902A true CN110490902A (en) | 2019-11-22 |
CN110490902B CN110490902B (en) | 2022-06-14 |
Family
ID=68549187
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910711307.5A Active CN110490902B (en) | 2019-08-02 | 2019-08-02 | Target tracking method and device applied to smart city and computer equipment |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110490902B (en) |
Citations (18)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN101527838A (en) * | 2008-03-04 | 2009-09-09 | 华为技术有限公司 | Method and system for feedback-type object detection and tracking of a video object |
CN101867798A (en) * | 2010-05-18 | 2010-10-20 | 武汉大学 | Mean shift moving object tracking method based on compressed-domain analysis |
CN101887588A (en) * | 2010-08-04 | 2010-11-17 | 中国科学院自动化研究所 | Appearance block-based occlusion handling method |
CN101923716A (en) * | 2009-06-10 | 2010-12-22 | 新奥特(北京)视频技术有限公司 | Method for improving particle filter tracking performance |
US20110115920A1 (en) * | 2009-11-18 | 2011-05-19 | Industrial Technology Research Institute | Multi-state target tracking method and system |
JP2013193573A (en) * | 2012-03-19 | 2013-09-30 | Fujitsu Ten Ltd | Vehicle follow-up device |
CN103473791A (en) * | 2013-09-10 | 2013-12-25 | 惠州学院 | Method for automatically recognizing abnormal-velocity events in surveillance video |
CN104200495A (en) * | 2014-09-25 | 2014-12-10 | 重庆信科设计有限公司 | Multi-target tracking method for video surveillance |
US9129397B2 (en) * | 2012-01-19 | 2015-09-08 | Electronics And Telecommunications Research Institute | Human tracking method and apparatus using color histogram |
CN105654139A (en) * | 2015-12-31 | 2016-06-08 | 北京理工大学 | Real-time online multi-target tracking method using a temporally dynamic appearance model |
CN107403439A (en) * | 2017-06-06 | 2017-11-28 | 沈阳工业大学 | Predictive tracking method based on CamShift |
CN107992790A (en) * | 2017-10-13 | 2018-05-04 | 西安天和防务技术股份有限公司 | Long-term target tracking method and system, storage medium and electronic terminal |
CN108010067A (en) * | 2017-12-25 | 2018-05-08 | 北京航空航天大学 | Visual target tracking method based on a combined decision strategy |
CN108491816A (en) * | 2018-03-30 | 2018-09-04 | 百度在线网络技术(北京)有限公司 | Method and apparatus for target tracking in video |
CN108694724A (en) * | 2018-05-11 | 2018-10-23 | 西安天和防务技术股份有限公司 | Long-term target tracking method |
CN108985162A (en) * | 2018-06-11 | 2018-12-11 | 平安科技(深圳)有限公司 | Real-time object tracking method and apparatus, computer equipment and storage medium |
CN109035304A (en) * | 2018-08-07 | 2018-12-18 | 北京清瑞维航技术发展有限公司 | Target tracking method and apparatus, medium, and computing equipment |
CN109903310A (en) * | 2019-01-23 | 2019-06-18 | 平安科技(深圳)有限公司 | Target tracking method and apparatus, computer device and computer storage medium |
Cited By (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111144404B (en) * | 2019-12-06 | 2023-08-11 | 恒大恒驰新能源汽车科技(广东)有限公司 | Method, apparatus, system, computer device and storage medium for detecting legacy object |
CN111144404A (en) * | 2019-12-06 | 2020-05-12 | 恒大新能源汽车科技(广东)有限公司 | Legacy object detection method, device, system, computer device, and storage medium |
CN111179343A (en) * | 2019-12-20 | 2020-05-19 | 西安天和防务技术股份有限公司 | Target detection method, target detection device, computer equipment and storage medium |
CN111179343B (en) * | 2019-12-20 | 2024-03-19 | 西安天和防务技术股份有限公司 | Target detection method, device, computer equipment and storage medium |
CN111275741A (en) * | 2020-01-19 | 2020-06-12 | 北京迈格威科技有限公司 | Target tracking method and device, computer equipment and storage medium |
CN111275741B (en) * | 2020-01-19 | 2023-09-08 | 北京迈格威科技有限公司 | Target tracking method, device, computer equipment and storage medium |
CN111238829A (en) * | 2020-02-12 | 2020-06-05 | 上海眼控科技股份有限公司 | Method and device for determining moving state, computer equipment and storage medium |
CN111539986A (en) * | 2020-03-25 | 2020-08-14 | 西安天和防务技术股份有限公司 | Target tracking method and device, computer equipment and storage medium |
CN111539986B (en) * | 2020-03-25 | 2024-03-22 | 西安天和防务技术股份有限公司 | Target tracking method, device, computer equipment and storage medium |
CN112819859A (en) * | 2021-02-02 | 2021-05-18 | 重庆特斯联智慧科技股份有限公司 | Multi-target tracking method and device applied to intelligent security |
CN113177967A (en) * | 2021-03-31 | 2021-07-27 | 千里眼(广州)人工智能科技有限公司 | Object tracking method, system and storage medium for video data |
CN113674318A (en) * | 2021-08-16 | 2021-11-19 | 支付宝(杭州)信息技术有限公司 | Target tracking method, device and equipment |
CN114419097A (en) * | 2021-12-30 | 2022-04-29 | 西安天和防务技术股份有限公司 | Target tracking method and device |
CN116991182A (en) * | 2023-09-26 | 2023-11-03 | 北京云圣智能科技有限责任公司 | Unmanned aerial vehicle holder control method, device, system, computer device and medium |
CN116991182B (en) * | 2023-09-26 | 2023-12-22 | 北京云圣智能科技有限责任公司 | Unmanned aerial vehicle holder control method, device, system, computer device and medium |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110490902A (en) | Target tracking method and device applied to smart city, and computer equipment | |
CN110852285B (en) | Object detection method and device, computer equipment and storage medium | |
CN110516620B (en) | Target tracking method and device, storage medium and electronic equipment | |
CN111754541B (en) | Target tracking method, device, equipment and readable storage medium | |
CN108446585A (en) | Target tracking method and device, computer equipment and storage medium | |
WO2020024851A1 (en) | Target tracking method, computer device, and storage medium | |
CN108810620A (en) | Method for identifying key time points in video, computer equipment and storage medium | |
JP7246104B2 (en) | License plate identification method based on text line identification | |
CN110516559A (en) | Target tracking method and device suitable for precise monitoring, and computer equipment | |
CN111753782B (en) | False face detection method and device based on double-current network and electronic equipment | |
CN109285105A (en) | Method of detecting watermarks, device, computer equipment and storage medium | |
CN110598687A (en) | Vehicle identification code detection method and device and computer equipment | |
CN111680675B (en) | Face living body detection method, system, device, computer equipment and storage medium | |
CN113348465B (en) | Method, device, equipment and storage medium for predicting relevance of objects in image | |
CN112712703A (en) | Vehicle video processing method and device, computer equipment and storage medium | |
CN111832561B (en) | Character sequence recognition method, device, equipment and medium based on computer vision | |
CN110796039B (en) | Face flaw detection method and device, electronic equipment and storage medium | |
CN108564045A (en) | Data processing method, device, storage medium and the computer equipment of augmented reality | |
CN113557546B (en) | Method, device, equipment and storage medium for detecting associated objects in image | |
CN114120220A (en) | Target detection method and device based on computer vision | |
Xie et al. | A method of small face detection based on CNN | |
Liu et al. | WebAR Object Detection Method Based on Lightweight Multiscale Feature Fusion | |
CN110633666A (en) | Gesture track recognition method based on finger color patches | |
Embarak et al. | Intelligent image detection system based on internet of things and cloud computing | |
CN110675428B (en) | Target tracking method and device for human-computer interaction and computer equipment |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||
GR01 | Patent grant | ||