CN106650630A - Target tracking method and electronic equipment - Google Patents


Info

Publication number
CN106650630A
Authority
CN
China
Prior art keywords
target
candidate target
tracking
color
pixel
Prior art date
Legal status
Granted
Application number
CN201611041675.6A
Other languages
Chinese (zh)
Other versions
CN106650630B (en)
Inventor
唐矗
Current Assignee
Ninebot Beijing Technology Co Ltd
Original Assignee
Ninebot Beijing Technology Co Ltd
Priority date
Filing date
Publication date
Application filed by Ninebot Beijing Technology Co Ltd
Priority to CN201611041675.6A
Publication of CN106650630A
Priority to PCT/CN2017/110577 (WO2018086607A1)
Application granted
Publication of CN106650630B
Legal status: Active
Anticipated expiration


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06N COMPUTING ARRANGEMENTS BASED ON SPECIFIC COMPUTATIONAL MODELS
    • G06N3/00 Computing arrangements based on biological models
    • G06N3/02 Neural networks
    • G06N3/04 Architecture, e.g. interconnection topology
    • G06N3/045 Combinations of networks
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/40 Extraction of image or video features
    • G06V10/56 Extraction of image or video features relating to colour
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Computational Linguistics (AREA)
  • Evolutionary Computation (AREA)
  • Artificial Intelligence (AREA)
  • Biomedical Technology (AREA)
  • Biophysics (AREA)
  • Health & Medical Sciences (AREA)
  • Data Mining & Analysis (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Molecular Biology (AREA)
  • Computing Systems (AREA)
  • General Engineering & Computer Science (AREA)
  • Mathematical Physics (AREA)
  • Software Systems (AREA)
  • Image Analysis (AREA)

Abstract

The invention discloses a target tracking method applied to an electronic device. The electronic device has an image acquisition unit for acquiring image data. The method comprises: determining a tracking target in an initial frame image of the image data; extracting multiple candidate targets in a subsequent frame image of the image data, the subsequent frame image being any frame image after the initial frame image; calculating the similarity between each candidate target and the tracking target; and determining the candidate target with the highest similarity to the tracking target among the multiple candidate targets as the tracking target. This solves the technical problems of prior-art online-learning visual tracking methods, which cannot judge whether the tracking target has been lost and have difficulty recovering the tracking target once it is lost. The invention also discloses the electronic device.

Description

Target tracking method and electronic device
Technical field
The present invention relates to the field of electronic technology, and in particular to a target tracking method and an electronic device.
Background technology
Visual tracking methods based on online learning have risen in recent years and become a focus of visual tracking research. Without any prior knowledge from offline learning, such methods extract a feature template from the tracking target specified in the initial frame image and train a model used to track that target in the subsequent video; during tracking, the model is updated according to the tracking state to adapt to changes in the target's pose. Because no offline training is needed, any object specified by the user can be tracked, giving these methods high versatility.
However, because the tracked target's features and template are singular, it is difficult to judge during tracking whether the target has been lost; and once the target is lost, the continuous updating of the tracking template keeps amplifying the error, so the target is hard to recover and a stable tracking system is hard to form.
Summary of the invention
By providing a target tracking method and an electronic device, the embodiments of the present invention solve the technical problems of prior-art online-learning visual tracking methods: they cannot judge whether the tracking target has been lost, and the tracking target is difficult to recover after being lost.
In one aspect, the present invention provides the following technical scheme through one embodiment:
A target tracking method, applied to an electronic device having an image acquisition unit for acquiring image data, the method comprising:
determining a tracking target in an initial frame image of the image data;
extracting multiple candidate targets in a subsequent frame image of the image data, the subsequent frame image being any frame image after the initial frame image;
calculating the similarity between each candidate target and the tracking target;
determining the candidate target with the highest similarity to the tracking target among the multiple candidate targets as the tracking target.
Preferably, determining a tracking target in the initial frame image of the image data comprises:
when the initial frame image is output through a display screen, obtaining a selection operation of a user, and determining the tracking target in the initial frame image based on the user's selection operation; or
obtaining feature information describing the tracking target, and determining the tracking target in the initial frame image based on the feature information.
Preferably, extracting multiple candidate targets in a subsequent frame image of the image data comprises:
determining the (i-1)th bounding box of the tracking target in the (i-1)th frame image, wherein the (i-1)th frame image belongs to the image data and i is an integer greater than or equal to 2; when i equals 2, the (i-1)th frame image is the initial frame image;
determining, based on the (i-1)th bounding box, the ith image block in the ith frame image, wherein the ith frame image is the subsequent frame image, the center of the ith image block coincides with the center of the (i-1)th bounding box, and the area of the ith image block is larger than the area of the (i-1)th bounding box;
determining the multiple candidate targets in the ith image block.
Preferably, calculating the similarity between each candidate target and the tracking target comprises:
selecting a first candidate target from the multiple candidate targets, the first candidate target being any one of the multiple candidate targets;
calculating a first color feature vector of the first candidate target and a second color feature vector of the tracking target;
calculating the distance between the first color feature vector and the second color feature vector, the distance being the similarity between the first candidate target and the tracking target.
Preferably, calculating the first color feature vector of the first candidate target and the second color feature vector of the tracking target comprises:
performing principal component segmentation on the image of the first candidate target to obtain a first mask image, and performing principal component segmentation on the image of the tracking target to obtain a second mask image;
scaling the first mask image and the second mask image to the same size;
dividing the first mask image evenly into M regions and the second mask image evenly into M regions, M being a positive integer;
calculating the color feature vector of each region in the first mask image and the color feature vector of each region in the second mask image;
connecting the color feature vectors of the regions in the first mask image in sequence to obtain the first color feature vector, and connecting the color feature vectors of the regions in the second mask image in sequence to obtain the second color feature vector.
Preferably, calculating the color feature vector of each region in the first mask image and the color feature vector of each region in the second mask image comprises:
determining W dominant colors, W being a positive integer;
calculating, in the first mask image, the projection weight of each pixel in a first region on each dominant color, the first region being any one of the M regions in the first mask image; and calculating, in the second mask image, the projection weight of each pixel in a second region on each dominant color, the second region being any one of the M regions in the second mask image;
obtaining, based on the projection weight of each pixel in the first region on each dominant color, the W-dimensional color feature vector corresponding to each pixel in the first region; and obtaining, based on the projection weight of each pixel in the second region on each dominant color, the W-dimensional color feature vector corresponding to each pixel in the second region;
normalizing the W-dimensional color feature vector corresponding to each pixel in the first region to obtain the color feature vector of each pixel in the first region; and normalizing the W-dimensional color feature vector corresponding to each pixel in the second region to obtain the color feature vector of each pixel in the second region;
adding up the color feature vectors of the pixels in the first region to obtain the color feature vector of the first region, and adding up the color feature vectors of the pixels in the second region to obtain the color feature vector of the second region.
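The region-wise color feature construction above can be sketched as follows. This is a minimal stand-in under stated assumptions: the principal component segmentation step is omitted (the input is treated as an already-masked grayscale image), per-region color histograms replace the per-pixel dominant-color projections detailed later, and all names are illustrative, not from the patent.

```python
import numpy as np

def region_color_vector(img, M=4, bins=8):
    """Split the image into M horizontal regions, compute a normalized color
    histogram per region, and connect the region vectors in sequence."""
    h = img.shape[0]
    feats = []
    for m in range(M):
        region = img[m * h // M : (m + 1) * h // M]
        hist, _ = np.histogram(region, bins=bins, range=(0, 256))
        feats.append(hist / max(hist.sum(), 1))  # normalize each region's vector
    return np.concatenate(feats)                 # ordered concatenation: M*bins dims

def color_distance(v1, v2):
    """Euclidean distance between two color feature vectors (the similarity
    measure named in the claim: smaller distance means more similar)."""
    return float(np.linalg.norm(v1 - v2))
```

Comparing a candidate and the target then reduces to `color_distance(region_color_vector(a), region_color_vector(b))` after both images are scaled to the same size.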
Preferably, the projection weight of a first pixel on the nth dominant color is calculated based on the following equation:
wherein the first pixel is any pixel in the first region or the second region, the nth dominant color is any one of the W dominant colors, wn is the projection weight of the first pixel on the nth dominant color, Ir, Ig and Ib are the RGB values of the first pixel, and Rn, Gn and Bn are the RGB values of the nth dominant color.
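The equation itself does not survive in this text, so the sketch below assumes an inverse-distance weighting over the W dominant colors and normalizes the result; treat it as an illustrative stand-in, not the patent's actual formula.

```python
import numpy as np

def projection_weights(pixel_rgb, dominant_rgbs, eps=1e-6):
    """Assumed stand-in for the missing equation: each dominant color gets a
    weight inversely related to its RGB distance from the pixel, and the W
    weights are normalized to sum to 1 (the claim's W-dimensional vector)."""
    px = np.asarray(pixel_rgb, dtype=float)          # (Ir, Ig, Ib)
    dom = np.asarray(dominant_rgbs, dtype=float)     # shape (W, 3): (Rn, Gn, Bn) rows
    d = np.linalg.norm(dom - px, axis=1)             # distance to each dominant color
    w = 1.0 / (d + eps)
    return w / w.sum()
```

A pure red pixel projected onto red/green/blue dominant colors puts nearly all of its weight on the red component.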
Preferably, calculating the similarity between each candidate target and the tracking target comprises:
selecting a first candidate target from the multiple candidate targets, the first candidate target being any one of the multiple candidate targets;
normalizing the image of the first candidate target and the image of the tracking target to the same size;
inputting the image of the tracking target into a first convolutional network of a first deep neural network for feature computation to obtain the feature vector of the tracking target, the first deep neural network being based on a Siamese structure;
inputting the image of the first candidate target into a second convolutional network of the first deep neural network for feature computation to obtain the feature vector of the first candidate target;
inputting the feature vector of the tracking target and the feature vector of the first candidate target into a first fully connected network of the first deep neural network for similarity computation to obtain the similarity between the first candidate target and the tracking target.
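The Siamese similarity computation can be sketched with a toy numpy network. The two branches share one weight matrix (standing in for the first and second convolutional networks), and a cosine score stands in for the first fully connected network; a real implementation would use trained convolutional layers, so this is only a structural illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

class TinySiamese:
    """Toy Siamese structure: both branches use the same embedding weights,
    and a fixed pairwise score replaces the trained fully connected head."""
    def __init__(self, in_dim=64, feat_dim=16):
        self.W = rng.normal(size=(in_dim, feat_dim)) / np.sqrt(in_dim)

    def embed(self, x):
        # Shared-weight branch: linear map plus an abs nonlinearity,
        # keeping the toy features nonnegative.
        return np.abs(np.asarray(x) @ self.W)

    def similarity(self, target, candidate):
        # Cosine score over the two branch outputs, in [0, 1] here
        # because the features are nonnegative.
        ft, fc = self.embed(target), self.embed(candidate)
        denom = np.linalg.norm(ft) * np.linalg.norm(fc) + 1e-9
        return float(ft @ fc / denom)
```

Identical inputs score near 1, so ranking candidates by this score picks the one most like the target.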
Preferably, determining the multiple candidate targets in the ith image block comprises:
inputting the ith image block into a third convolutional network of a second deep neural network for feature computation to obtain the feature map of the ith image block, the second deep neural network being based on a Siamese structure;
inputting the feature map of the ith image block into an RPN (region proposal network) of the second deep neural network to obtain the multiple candidate targets and the feature vectors of the multiple candidate targets.
Preferably, calculating the similarity between each candidate target and the tracking target comprises:
extracting the feature vector of a first candidate target from the feature vectors of the multiple candidate targets, the first candidate target being any one of the multiple candidate targets;
inputting the image of the tracking target into a fourth convolutional network of the second deep neural network for feature computation to obtain the feature vector of the tracking target, the fourth convolutional network sharing convolutional layer parameters with the third convolutional network;
inputting the feature vector of the tracking target and the feature vector of the first candidate target into a second fully connected network of the second deep neural network for similarity computation to obtain the similarity between the first candidate target and the tracking target.
In another aspect, the present invention provides the following technical scheme through one embodiment:
An electronic device having an image acquisition unit for acquiring image data, the electronic device comprising:
a first determining unit for determining a tracking target in an initial frame image of the image data;
an extraction unit for extracting multiple candidate targets in a subsequent frame image of the image data, the subsequent frame image being any frame image after the initial frame image;
a computing unit for calculating the similarity between each candidate target and the tracking target;
a second determining unit for determining the candidate target with the highest similarity to the tracking target among the multiple candidate targets as the tracking target.
Preferably, the first determining unit comprises:
a first determining subunit for obtaining a selection operation of a user when the initial frame image is output through a display screen, and determining the tracking target in the initial frame image based on the user's selection operation; or
a second determining subunit for obtaining feature information describing the tracking target, and determining the tracking target in the initial frame image based on the feature information.
Preferably, the extraction unit comprises:
a first determining subunit for determining the (i-1)th bounding box of the tracking target in the (i-1)th frame image, wherein the (i-1)th frame image belongs to the image data and i is an integer greater than or equal to 2; when i equals 2, the (i-1)th frame image is the initial frame image;
a second determining subunit for determining, based on the (i-1)th bounding box, the ith image block in the ith frame image, wherein the ith frame image is the subsequent frame image, the center of the ith image block coincides with the center of the (i-1)th bounding box, and the area of the ith image block is larger than the area of the (i-1)th bounding box;
a third determining subunit for determining the multiple candidate targets in the ith image block.
Preferably, the computing unit comprises:
a first selecting subunit for selecting a first candidate target from the multiple candidate targets, the first candidate target being any one of the multiple candidate targets;
a first computing subunit for calculating a first color feature vector of the first candidate target and a second color feature vector of the tracking target;
a second computing subunit for calculating the distance between the first color feature vector and the second color feature vector, the distance being the similarity between the first candidate target and the tracking target.
Preferably, the first computing subunit is specifically configured to:
perform principal component segmentation on the image of the first candidate target to obtain a first mask image, and perform principal component segmentation on the image of the tracking target to obtain a second mask image; scale the first mask image and the second mask image to the same size; divide the first mask image evenly into M regions and the second mask image into M regions, M being a positive integer; calculate the color feature vector of each region in the first mask image and the color feature vector of each region in the second mask image; and connect the color feature vectors of the regions in the first mask image in sequence to obtain the first color feature vector, and connect the color feature vectors of the regions in the second mask image in sequence to obtain the second color feature vector.
Preferably, the first computing subunit is specifically configured to:
determine W dominant colors, W being a positive integer; calculate, in the first mask image, the projection weight of each pixel in a first region on each dominant color, the first region being any one of the M regions in the first mask image, and calculate, in the second mask image, the projection weight of each pixel in a second region on each dominant color, the second region being any one of the M regions in the second mask image; obtain, based on these projection weights, the W-dimensional color feature vector corresponding to each pixel in the first region and the W-dimensional color feature vector corresponding to each pixel in the second region; normalize the W-dimensional color feature vector corresponding to each pixel in the first region to obtain the color feature vector of each pixel in the first region, and normalize the W-dimensional color feature vector corresponding to each pixel in the second region to obtain the color feature vector of each pixel in the second region; and add up the color feature vectors of the pixels in the first region to obtain the color feature vector of the first region, and add up the color feature vectors of the pixels in the second region to obtain the color feature vector of the second region.
Preferably, the first computing subunit is specifically configured to calculate the projection weight of a first pixel on the nth dominant color based on the following equation:
wherein the first pixel is any pixel in the first region or the second region, the nth dominant color is any one of the W dominant colors, wn is the projection weight of the first pixel on the nth dominant color, Ir, Ig and Ib are the RGB values of the first pixel, and Rn, Gn and Bn are the RGB values of the nth dominant color.
Preferably, the computing unit comprises:
a second selecting subunit for selecting a first candidate target from the multiple candidate targets, the first candidate target being any one of the multiple candidate targets;
a normalizing subunit for normalizing the image of the first candidate target and the image of the tracking target to the same size;
a first input subunit for inputting the image of the tracking target into a first convolutional network of a first deep neural network for feature computation to obtain the feature vector of the tracking target, the first deep neural network being based on a Siamese structure;
a second input subunit for inputting the image of the first candidate target into a second convolutional network of the first deep neural network for feature computation to obtain the feature vector of the first candidate target, the second convolutional network sharing convolutional layer parameters with the first convolutional network;
a third input subunit for inputting the feature vector of the tracking target and the feature vector of the first candidate target into a first fully connected network of the first deep neural network for similarity computation to obtain the similarity between the first candidate target and the tracking target.
Preferably, the third determining subunit is specifically configured to:
input the ith image block into a third convolutional network of a second deep neural network for feature computation to obtain the feature map of the ith image block, the second deep neural network being based on a Siamese structure; and input the feature map of the ith image block into an RPN of the second deep neural network to obtain the multiple candidate targets and the feature vectors of the multiple candidate targets.
Preferably, the computing unit comprises:
an extracting subunit for extracting the feature vector of a first candidate target from the feature vectors of the multiple candidate targets, the first candidate target being any one of the multiple candidate targets;
a fourth input subunit for inputting the image of the tracking target into a fourth convolutional network of the second deep neural network for feature computation to obtain the feature vector of the tracking target, the fourth convolutional network sharing convolutional layer parameters with the third convolutional network;
a fifth input subunit for inputting the feature vector of the tracking target and the feature vector of the first candidate target into a second fully connected network of the second deep neural network for similarity computation to obtain the similarity between the first candidate target and the tracking target.
The one or more technical schemes provided in the embodiments of the present invention have at least the following technical effects or advantages:
The embodiments of the present invention disclose a target tracking method applied to an electronic device having an image acquisition unit for acquiring image data. The method comprises: determining a tracking target in an initial frame image of the image data; extracting multiple candidate targets in a subsequent frame image of the image data; calculating the similarity between each candidate target and the tracking target; and determining the candidate target with the highest similarity as the tracking target. Because the candidate targets in each subsequent frame image are compared with the tracking target in the initial frame image, and the candidate target with the highest similarity is determined to be the tracking target, tracking of the target is achieved. Compared with prior-art online-learning visual tracking methods, the processing of each frame after the initial frame can be regarded as judging whether the target has been lost, which gives a reliable judgment of whether the tracking target is lost; and no tracking template needs to be maintained, which avoids the continuous amplification of errors caused by template updates and helps recover a lost tracking target, thereby improving the robustness of the tracking system.
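The four steps above can be sketched as a single loop. The helper functions for target selection, candidate extraction, and similarity are hypothetical placeholders (the patent leaves their implementations to the embodiments); only the loop structure is taken from the method.

```python
import numpy as np

def track(frames, select_target, extract_candidates, similarity):
    """Run the claimed method over an iterable of frames: pick the target in
    the initial frame, then re-identify it in every subsequent frame as the
    highest-similarity candidate. No template is updated between frames."""
    frames = iter(frames)
    target = select_target(next(frames))          # step 1: initial frame
    trajectory = [target]
    for frame in frames:
        candidates = extract_candidates(frame, target)        # step 2
        scores = [similarity(c, target) for c in candidates]  # step 3
        target = candidates[int(np.argmax(scores))]           # step 4
        trajectory.append(target)
    return trajectory
```

Because each frame is scored against the original target rather than a running template, a frame where every candidate scores poorly can be flagged as a loss without corrupting future matching.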
Brief description of the drawings
To illustrate the technical schemes in the embodiments of the present invention more clearly, the accompanying drawings needed for describing the embodiments are briefly introduced below. Obviously, the drawings described below show only some embodiments of the present invention; those of ordinary skill in the art can obtain other drawings from them without creative effort.
Fig. 1 is a flow chart of a target tracking method in an embodiment of the present invention;
Fig. 2 is a schematic diagram of the initial frame image in an embodiment of the present invention;
Fig. 3 is a schematic diagram of the initial tracking target in an embodiment of the present invention;
Fig. 4 is a schematic diagram of the 2nd frame image in an embodiment of the present invention;
Fig. 5 is a schematic diagram of the candidate targets determined in the 2nd frame image in an embodiment of the present invention;
Fig. 6 is a schematic diagram of the first deep neural network in an embodiment of the present invention;
Fig. 7 is a schematic diagram of the second deep neural network in an embodiment of the present invention;
Fig. 8 is a structural schematic diagram of an electronic device in an embodiment of the present invention.
Detailed description of the embodiments
By providing a target tracking method and device, the embodiments of the present invention solve the technical problems of prior-art online-learning visual tracking methods: they cannot judge whether the tracking target has been lost, and the tracking target is difficult to recover after being lost.
The general idea of the technical scheme of the embodiments of the present invention for solving the above technical problems is as follows:
A target tracking method, applied to an electronic device having an image acquisition unit for acquiring image data, the method comprising: determining a tracking target in an initial frame image of the image data; extracting multiple candidate targets in a subsequent frame image of the image data, the subsequent frame image being any frame image after the initial frame image; calculating the similarity between each candidate target and the tracking target; and determining the candidate target with the highest similarity to the tracking target among the multiple candidate targets as the tracking target.
To better understand the above technical scheme, it is described in detail below with reference to the accompanying drawings and specific embodiments.
Embodiment one
This embodiment provides a target tracking method applied to an electronic device. The electronic device may be a ground robot (for example, a self-balancing vehicle), an unmanned aerial vehicle (for example, a multi-rotor or fixed-wing UAV), an electric automobile, or similar equipment; this embodiment does not specifically limit which kind of device the electronic device is. The electronic device has an image acquisition unit (for example, a camera) for acquiring image data.
As shown in Fig. 1, the target tracking method comprises:
Step S101: determining a tracking target in an initial frame image of the image data.
As an optional implementation, step S101 comprises:
when the initial frame image is output through a display screen, obtaining a selection operation of a user, and determining the tracking target in the initial frame image based on the user's selection operation; or
obtaining feature information describing the tracking target, and determining the tracking target in the initial frame image based on the feature information.
In a specific implementation, as shown in Fig. 2, the image collected by the image acquisition unit can be obtained and output through a display screen arranged on the electronic device (for example, the initial frame image 300), and a selection operation performed by the user is obtained (for example, when the display screen is a touch screen, the user's selection operation is obtained through the touch screen); the tracking target (i.e., the initial tracking target 000) is then determined from the initial frame image 300 based on the selection operation. Alternatively, feature information describing the tracking target is obtained, and the tracking target (i.e., the initial tracking target 000) is determined in the initial frame image 300 with a saliency detection or object detection algorithm. Here, as shown in Fig. 3, the image 311 of the initial tracking target 000, i.e. the image inside the 1st bounding box 310, can be extracted and saved for later use.
Step S102: extracting multiple candidate targets in a subsequent frame image of the image data, the subsequent frame image being any frame image after the initial frame image.
As an optional implementation, step S102 comprises:
determining the (i-1)th bounding box of the tracking target in the (i-1)th frame image (wherein the (i-1)th frame image belongs to the image data, i is an integer greater than or equal to 2, and when i equals 2 the (i-1)th frame image is the initial frame image); determining, based on the (i-1)th bounding box, the ith image block in the ith frame image, wherein the ith frame image is the subsequent frame image, the center of the ith image block coincides with the center of the (i-1)th bounding box, and the area of the ith image block is larger than the area of the (i-1)th bounding box; and determining the multiple candidate targets in the ith image block.
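The enlarged search window described above (same center as the previous bounding box, larger area) can be sketched as a clipped crop; the (x, y, w, h) box format and the scale factor are assumptions for illustration, not values fixed by the patent.

```python
import numpy as np

def search_window(frame, box, scale=2.0):
    """Crop the ith image block: same center as the (i-1)th bounding box,
    width and height enlarged by `scale`, clipped to the frame bounds.
    `box` is (x, y, w, h); returns the crop and its top-left origin."""
    H, W = frame.shape[:2]
    x, y, w, h = box
    cx, cy = x + w / 2.0, y + h / 2.0      # shared center
    nw, nh = w * scale, h * scale          # enlarged area
    x0 = int(max(cx - nw / 2.0, 0))
    y0 = int(max(cy - nh / 2.0, 0))
    x1 = int(min(cx + nw / 2.0, W))
    y1 = int(min(cy + nh / 2.0, H))
    return frame[y0:y1, x0:x1], (x0, y0)
```

Candidate detection (saliency detection, object detection, or the RPN variant) then runs only inside this block rather than over the whole frame.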
For example, as shown in Fig. 2 Fig. 2 is initial two field picture, wherein comprising multiple human targets, needing to be tracked Tracking target be personage in the 1st encirclement frame 310.As shown in figure 4, Fig. 4 is the 2nd two field picture, wherein each human target Position or attitude there occurs change.
When i is equal to 2, as shown in figure 3, determining tracking target (i.e.:Initial tracking target 000) in initial two field picture 300 In encirclement frame (i.e.:1st encirclement frame 310), the encirclement frame is generally rectangular, and can just surround tracking target (i.e.:Initially Tracking target 000).As shown in figure 4, (the 1st encirclement frame 310 is in initial two field picture 300 position based on the 1st encirclement frame 310 Position is identical with the position in the 2nd two field picture 400), determine an image block (i.e. in the 2nd two field picture 400:2nd image block 420), the 2nd image block 420 is identical with the center of the 1st encirclement frame 310, but the encirclement frame 310 of the 2nd image block 420 to the 1 Area is larger, may have multiple targets in the 2nd image block 420, wherein, the tracking mesh determined in initial two field picture 300 Mark is (i.e.:Initial tracking target 000) side such as significance analysis or target detection just can be utilized in the 2nd image block 420, herein Method determines the plurality of target in the 2nd image block 420, and these targets are defined as into candidate target (i.e.:Candidate target 401st, candidate target 402, candidate target 403, candidate target 404).Further, then based on step S103~step S104, from this The tracking target is determined in a little candidate targets, that is, initial tracking target 000 is identified from the 2nd two field picture.Wherein, close In the specific embodiment of S103~step S104, it is discussed in detail below.
Similarly, when i equals 3, after the tracking target has been identified in the 2nd frame image 400, the bounding box of the tracking target in the 2nd frame image 400 (i.e. the 2nd bounding box) is determined. Based on the 2nd bounding box, an image block (i.e. the 3rd image block) is determined in the 3rd frame image. The 3rd image block has the same center as the 2nd bounding box but a larger area, so there may be multiple targets in the 3rd image block, among which is the tracking target determined in the initial frame image. Here, a method such as saliency analysis or target detection can be used to determine the multiple targets in the 3rd image block, and the multiple targets are taken as candidate targets. Then, based on step S103 to step S104, the tracking target is determined from these candidate targets, that is, the initial tracking target 000 is identified in the 3rd frame image.
Similarly, when i equals 4, the 4th image block is determined in the 4th frame image, and multiple candidate targets are determined in the 4th image block; then, based on step S103 to step S104, the tracking target (i.e. the initial tracking target 000) is determined from these candidate targets. By analogy, when i equals 5, 6, 7, 8, ..., multiple candidate targets are determined in each frame image, and then, based on step S103 to step S104, the tracking target (i.e. the initial tracking target 000) is determined from these candidate targets, thereby realizing the recognition and tracking of the tracking target.
In a specific implementation process, each time multiple candidate targets are determined in the i-th image block, the image of each candidate target is extracted and saved for later use. As shown in Fig. 5, the image 421 of candidate target 401, the image 422 of candidate target 402, the image 423 of candidate target 403 and the image 424 of candidate target 404 are extracted and saved.
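The construction of the i-th image block from the (i-1)-th bounding box described above can be sketched in a few lines. The per-side scale factor of 2 and the clipping to the frame boundary are assumptions of this sketch, since the embodiment only requires that the block share the bounding box's center and have a larger area:

```python
def candidate_region(prev_box, frame_w, frame_h, scale=2.0):
    """Build the i-th image block: same center as the (i-1)-th bounding
    box, but a larger area (here each side is scaled by `scale`, then
    clipped to the frame). Returns (left, top, width, height)."""
    x, y, w, h = prev_box          # top-left corner plus width/height
    cx, cy = x + w / 2.0, y + h / 2.0
    bw, bh = w * scale, h * scale  # enlarged block size
    left = max(0.0, cx - bw / 2.0)
    top = max(0.0, cy - bh / 2.0)
    right = min(float(frame_w), cx + bw / 2.0)
    bottom = min(float(frame_h), cy + bh / 2.0)
    return (left, top, right - left, bottom - top)
```

With a 50x80 box at (100, 100) in a 640x480 frame, this yields a 100x160 block centered on the same point.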
Step S103: calculating the similarity between each candidate target and the tracking target.
In a specific implementation process, the similarity between each candidate target and the tracking target needs to be calculated. The tracking target is the initial tracking target 000 determined in the initial frame image 300 (as shown in Fig. 3), and the candidate targets come from the i-th image block in the i-th frame image, the i-th frame image being a subsequent frame image (i.e. any frame image after the initial frame image). For example, as shown in Fig. 4, the candidate targets include candidate target 401, candidate target 402, candidate target 403 and candidate target 404 determined in the 2nd frame image 400.
In a specific implementation process, a target re-identification algorithm can be used to calculate the similarity between each candidate target and the tracking target. Step S103 can be implemented in the following three ways.
Mode one: using a target re-identification algorithm based on color features, the similarity between each candidate target and the tracking target is calculated.
As an optional embodiment, step S103 includes:
selecting a first candidate target from the multiple candidate targets, where the first candidate target is any one of the multiple candidate targets; calculating a first color feature vector of the first candidate target and a second color feature vector of the tracking target; and calculating the distance between the first color feature vector and the second color feature vector, where the distance serves as the similarity between the first candidate target and the tracking target.
For example, as shown in Fig. 3, the color feature vector of the initial tracking target 000 is calculated, where the initial tracking target 000 is the tracking target determined in the initial frame image 300. Then, as shown in Fig. 5, the color feature vector of candidate target 401 is calculated. Finally, the distance between the color feature vector of the initial tracking target 000 and the color feature vector of candidate target 401 is calculated; this distance value represents the similarity between candidate target 401 and the initial tracking target 000. Similarly, the similarities between the initial tracking target 000 and candidate target 402, candidate target 403 and candidate target 404 are calculated respectively.
In a specific implementation process, the distance between the first color feature vector and the second color feature vector can be calculated based on the Euclidean distance formula.
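As a minimal sketch of this distance calculation, applying the Euclidean formula to the two color feature vectors:

```python
import math

def euclidean_distance(v1, v2):
    """Euclidean distance between the first and second color feature
    vectors; a smaller distance means the candidate target is more
    similar to the tracking target."""
    assert len(v1) == len(v2), "vectors must have the same dimension"
    return math.sqrt(sum((a - b) ** 2 for a, b in zip(v1, v2)))
```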
As an optional embodiment, in more detail, calculating the first color feature vector of the first candidate target and the second color feature vector of the tracking target includes:
subjecting the image of the first candidate target to saliency segmentation to obtain a first mask image, and subjecting the image of the tracking target to saliency segmentation to obtain a second mask image; scaling the first mask image and the second mask image to the same size; evenly dividing the first mask image into M regions, and evenly dividing the second mask image into M regions, M being a positive integer; calculating the color feature vector of each region in the first mask image, and calculating the color feature vector of each region in the second mask image; and connecting the color feature vectors of the regions in the first mask image in sequence to obtain the first color feature vector, and connecting the color feature vectors of the regions in the second mask image in sequence to obtain the second color feature vector.
For example, when calculating the color feature vector of the tracking target (i.e. the initial tracking target 000), namely the second color feature vector, the image 311 of the initial tracking target 000 can first be subjected to saliency segmentation to obtain the second mask image (in a mask image, only the pixels of the salient region keep the pixel values of the original image; all other pixel values are 0), where the image 311 of the initial tracking target 000 is rectangular and just encloses the initial tracking target 000. The second mask image is then scaled to a preset size and evenly divided into 4 regions (halved vertically and halved horizontally). The color feature vector of each of the 4 regions is calculated, and the 4 color feature vectors are connected in sequence (if the color feature vector of each region is a 10-dimensional vector, connecting them in sequence yields a 40-dimensional vector). After normalization, the color feature vector of the tracking target (i.e. the initial tracking target 000), namely the second color feature vector, is obtained.
Similarly, when calculating the color feature vector of candidate target 401, the image 421 of candidate target 401 can first be subjected to saliency segmentation to obtain the first mask image, where the image 421 of candidate target 401 is rectangular and just encloses candidate target 401. The first mask image is also scaled to a preset size, the same size as the second mask image, and evenly divided into 4 regions (halved vertically and halved horizontally). The color feature vector of each of the 4 regions is calculated, and the 4 color feature vectors are connected in sequence (again, if the color feature vector of each region is a 10-dimensional vector, connecting them in sequence yields a 40-dimensional vector). After normalization, the color feature vector of candidate target 401 is obtained. In the same way, the color feature vectors of candidate target 402, candidate target 403 and candidate target 404 are calculated respectively.
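The division into 4 regions and the sequential connection of the per-region vectors can be sketched as follows. Representing the mask as a list of pixel rows and L1-normalizing the concatenated vector are assumptions of this sketch; the text only says the concatenated vector is normalized:

```python
def split_into_quadrants(mask):
    """Evenly divide a mask image (a list of pixel rows) into M = 4
    regions: top-left, top-right, bottom-left, bottom-right."""
    h, w = len(mask), len(mask[0])
    hh, hw = h // 2, w // 2
    return [
        [row[:hw] for row in mask[:hh]],   # top-left
        [row[hw:] for row in mask[:hh]],   # top-right
        [row[:hw] for row in mask[hh:]],   # bottom-left
        [row[hw:] for row in mask[hh:]],   # bottom-right
    ]

def concat_region_vectors(region_vectors):
    """Connect the per-region color feature vectors in sequence (four
    10-dim vectors would give one 40-dim vector) and L1-normalize the
    result (an assumed normalization scheme)."""
    flat = [x for vec in region_vectors for x in vec]
    total = sum(flat) or 1.0
    return [x / total for x in flat]
```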
As an optional embodiment, in more detail, calculating the color feature vector of each region in the first mask image and the color feature vector of each region in the second mask image includes:
determining W dominant colors, W being a positive integer; calculating the projection weight of each pixel in a first region of the first mask image on each dominant color, the first region being any one of the M regions of the first mask image, and calculating the projection weight of each pixel in a second region of the second mask image on each dominant color, the second region being any one of the M regions of the second mask image; obtaining, based on the projection weights of each pixel in the first region on the dominant colors, a W-dimensional color feature vector for each pixel in the first region, and obtaining, based on the projection weights of each pixel in the second region on the dominant colors, a W-dimensional color feature vector for each pixel in the second region; normalizing the W-dimensional color feature vector of each pixel in the first region to obtain the color feature vector of each pixel in the first region, and normalizing the W-dimensional color feature vector of each pixel in the second region to obtain the color feature vector of each pixel in the second region; and adding up the color feature vectors of the pixels in the first region to obtain the color feature vector of the first region, and adding up the color feature vectors of the pixels in the second region to obtain the color feature vector of the second region.
For example, 10 dominant colors can be defined: red, yellow, blue, green, cyan, purple, orange, white, black and gray, numbered consecutively from 1 to 10 (i.e. red is No. 1, yellow is No. 2, blue is No. 3, ..., gray is No. 10). The RGB value of each color is then recorded as Rn, Gn, Bn, where n is the number of one of the 10 dominant colors (for example, R1 is the R value of red, G2 is the G value of yellow, and B10 is the B value of gray).
After the first mask image is evenly divided into 4 regions (halved vertically and halved horizontally), when calculating the color feature vector of each region in the first mask image, first select any one of the 4 regions (i.e. the first region) and calculate the projection weight of each pixel in the first region on each dominant color, obtaining the projection weights of each pixel in the first region on the 10 dominant colors; each pixel thus yields a 10-dimensional color feature vector. After this 10-dimensional color feature vector is normalized, it serves as the color feature vector of the pixel. After the color feature vectors of all the pixels in the first region have been obtained, they are added up to finally obtain the color feature vector of the first region. Based on this method, the color feature vector of each of the 4 regions in the first mask image can be calculated.
Similarly, after the second mask image is evenly divided into 4 regions (halved vertically and halved horizontally), when calculating the color feature vector of each region in the second mask image, first select any one of the 4 regions (i.e. the second region) and calculate the projection weight of each pixel in the second region on each dominant color, obtaining the projection weights of each pixel in the second region on the 10 dominant colors; each pixel thus yields a 10-dimensional color feature vector. After this 10-dimensional color feature vector is normalized, it serves as the color feature vector of the pixel. After the color feature vectors of all the pixels in the second region have been obtained, they are added up to finally obtain the color feature vector of the second region. Based on this method, the color feature vector of each of the 4 regions in the second mask image can be calculated.
As an optional embodiment, in more detail, the projection weight of a first pixel on the n-th dominant color can be calculated based on the following equation:
where the first pixel is any pixel in the first region or the second region, the n-th dominant color is any one of the W dominant colors, wn is the projection weight of the first pixel on the n-th dominant color, Ir, Ig, Ib are the RGB values of the first pixel, and Rn, Gn, Bn are the RGB values of the n-th dominant color.
For example, with n being the number of one of the above 10 dominant colors, when calculating the projection weight of a pixel in the first region or the second region on yellow (numbered 2), the calculation can be based on the following equation:
where w2 is the projection weight of the pixel on yellow, R2, G2, B2 are the RGB values of yellow, and Ir, Ig, Ib are the RGB values of the pixel.
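The equations referenced above are not reproduced in this text. A hypothetical sketch, assuming the projection weight is the normalized dot product (cosine) of the pixel's RGB vector and the dominant color's RGB vector, is:

```python
import math

def projection_weight(pixel_rgb, color_rgb):
    """Hypothetical projection weight w_n of a pixel (Ir, Ig, Ib) on the
    n-th dominant color (Rn, Gn, Bn). The patent's exact formula is not
    available here; this sketch assumes a cosine projection, i.e. the
    dot product of the two RGB vectors divided by their norms."""
    dot = sum(p * c for p, c in zip(pixel_rgb, color_rgb))
    norm_p = math.sqrt(sum(p * p for p in pixel_rgb)) or 1.0
    norm_c = math.sqrt(sum(c * c for c in color_rgb)) or 1.0
    return dot / (norm_p * norm_c)
```

Under this assumption, a pure red pixel projects with weight 1 onto the red dominant color and weight 0 onto green.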
Mode two: using a target re-identification algorithm based on a deep neural network, the similarity between each candidate target and the tracking target is calculated.
As an optional embodiment, step S103 includes:
As shown in Fig. 6, a first candidate target is selected from the multiple candidate targets, where the first candidate target is any one of the multiple candidate targets. The image of the first candidate target and the image of the tracking target are normalized to the same size. The image of the tracking target is input through the first input end 611 into the first convolutional network 601 of the first deep neural network for feature calculation, obtaining the feature vector of the tracking target, where the first deep neural network is based on a Siamese structure. The image of the first candidate target is input through the second input end 612 into the second convolutional network 602 of the first deep neural network for feature calculation, obtaining the feature vector of the first candidate target, where the second convolutional network 602 and the first convolutional network 601 share convolutional-layer parameters, that is, their convolutional-layer parameters are identical. The feature vector of the tracking target and the feature vector of the first candidate target are input into the first fully-connected network 603 of the first deep neural network for similarity calculation, and the similarity between the first candidate target and the tracking target is finally obtained at the first output end 621, where the outputs of the first convolutional network 601 and the second convolutional network 602 automatically serve as the inputs of the first fully-connected network 603.
In a specific implementation process, the first deep neural network (as shown in Fig. 6) needs to be trained offline. The first deep neural network includes the first convolutional network 601, the second convolutional network 602, the first fully-connected network 603, the first input end 611, the second input end 612 and the first output end 621, where the first convolutional network 601 and the second convolutional network 602 form a two-branch deep neural network adopting a Siamese structure; each branch adopts the structure of the AlexNet network before FC6. Both the first convolutional network 601 and the second convolutional network 602 contain multiple convolutional layers; the convolutional layers in the first convolutional network 601 and the convolutional layers in the second convolutional network 602 are shared with each other, and their parameters are identical. The images input into the first convolutional network 601 and the second convolutional network 602 need to be normalized to the same size. Here, the normalized image of the tracking target is input into the first convolutional network 601 to obtain the feature vector of the tracking target, and the normalized image of the first candidate target is input into the second convolutional network 602 to obtain the feature vector of the first candidate target. The first convolutional network 601 and the second convolutional network 602 are jointly connected to the first fully-connected network 603, which contains multiple fully-connected layers and is used to calculate the distance between the feature vectors input from the two branches, thereby obtaining the similarity between the first candidate target and the tracking target. The parameters of the first deep neural network are obtained by offline learning, and the method of training the first deep neural network is consistent with the training method of a general convolutional neural network. After the offline training ends, the first deep neural network can be applied in the tracking system.
For example, when calculating the similarity between candidate target 401 and the initial tracking target 000 using the first deep neural network, the image 421 of candidate target 401 and the image 311 of the initial tracking target 000 can first be normalized to the same size. Then the image 311 of the initial tracking target 000 is input into the first convolutional network 601 to obtain the feature vector of the initial tracking target 000, and the image 421 of candidate target 401 is input into the second convolutional network 602 to obtain the feature vector of candidate target 401. Finally, the feature vector of the initial tracking target 000 and the feature vector of candidate target 401 are input into the first fully-connected network 603, thereby obtaining the similarity between candidate target 401 and the initial tracking target 000.
Similarly, after the image 422 of candidate target 402 and the corresponding image 311 of the initial tracking target 000 are normalized, the image 311 of the initial tracking target 000 is input into the first convolutional network 601 and, at the same time, the image 422 of candidate target 402 is input into the second convolutional network 602, so that the similarity between candidate target 402 and the initial tracking target 000 can be obtained. By analogy, the similarity between candidate target 403 and the initial tracking target 000, and the similarity between candidate target 404 and the initial tracking target 000, can also be obtained.
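The weight-sharing property of the two branches can be illustrated with a toy stand-in. The single linear feature map and the negative-distance similarity head are assumptions of this sketch, not the AlexNet-based layers or the learned fully-connected network described above; the point is only that both inputs pass through the same parameters:

```python
def extract_features(image_vec, weights):
    """One shared branch reduced to a single linear map for
    illustration: both branches of a Siamese network apply the SAME
    weights, which is the weight-sharing property described above."""
    return [sum(w * x for w, x in zip(row, image_vec)) for row in weights]

def siamese_similarity(img_a, img_b, weights):
    """Toy stand-in for the fully-connected head: similarity as the
    negative Euclidean distance between the two shared-weight feature
    vectors. The real head is learned offline; this is only a sketch."""
    fa = extract_features(img_a, weights)
    fb = extract_features(img_b, weights)
    dist = sum((a - b) ** 2 for a, b in zip(fa, fb)) ** 0.5
    return -dist
```

Identical inputs give the maximum similarity of 0; more distant feature vectors give more negative values.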
Mode three: using a deep neural network, the generation of the candidate targets and the calculation of the similarity between each candidate target and the tracking target are realized at the same time.
As an optional embodiment, when performing the step of determining multiple candidate targets in the i-th image block, besides methods such as saliency analysis or target detection, the second deep neural network shown in Fig. 7 can also be used.
Specifically, as shown in Fig. 7, the second deep neural network can be trained offline. The second deep neural network is based on a Siamese structure and includes the third convolutional network 604, the fourth convolutional network 605, the RPN (Region Proposal Network) network 607, the second fully-connected network 606, the third input end 613, the fourth input end 614 and the second output end 622. The output of the third convolutional network 604 serves as the input of the RPN network 607, and the fourth convolutional network 605 and the RPN network 607 are both connected to the second fully-connected network 606. The third convolutional network 604 contains multiple convolutional layers and is used to perform feature calculation on the i-th image block; the feature map of the i-th image block can be obtained with the third convolutional network 604. The RPN network 607 is used to extract multiple candidate targets from the i-th image block according to the feature map of the i-th image block and to calculate the feature vector of each candidate target.
The main difference between the second deep neural network shown in Fig. 7 and the first deep neural network shown in Fig. 6 lies in the lower half of Fig. 7. The third convolutional network 604 in Fig. 7 takes the i-th image block as input, and an RPN network 607 is additionally attached. The RPN network 607 extracts candidate targets from the feature map obtained after the i-th image block is processed by the third convolutional network 604. The RPN network 607 directly uses the feature map calculated by the third convolutional network 604 for its calculation, directly locates the position of each candidate target on the feature map, and directly obtains the feature vector of each candidate target on the feature map; the similarity is then calculated by inputting these feature vectors, together with the feature vector corresponding to the initial tracking target 000, into the second fully-connected network 606.
In a specific implementation process, the i-th image block can be input through the fourth input end 614 into the third convolutional network 604 of the second deep neural network for feature calculation, obtaining the feature map of the i-th image block. The feature map of the i-th image block is then input into the RPN network 607 of the second deep neural network for feature calculation, extracting multiple candidate targets and calculating the feature vector of each candidate target.
For example, the 2nd image block 420 can be input into the third convolutional network 604 of the second deep neural network to obtain the feature map of the 2nd image block 420. The feature map of the 2nd image block 420 is input into the RPN network 607 of the second deep neural network, extracting multiple candidate targets (i.e. candidate target 401, candidate target 402, candidate target 403, candidate target 404); the feature vector of each candidate target can also be obtained.
As an optional embodiment, step S103 includes:
extracting the feature vector of the first candidate target from the feature vectors of the multiple candidate targets, where the first candidate target is any one of the multiple candidate targets; inputting the image of the tracking target through the third input end 613 into the fourth convolutional network 605 of the second deep neural network for feature calculation, obtaining the feature vector of the tracking target, where both the fourth convolutional network 605 and the third convolutional network 604 contain multiple convolutional layers, and the convolutional layers in the fourth convolutional network 605 share parameters with those in the third convolutional network 604, that is, their convolutional-layer parameters are identical; and inputting the feature vector of the tracking target and the feature vector of the first candidate target into the second fully-connected network 606 of the second deep neural network for similarity calculation, finally obtaining the similarity between the first candidate target and the tracking target at the second output end 622.
In a specific implementation process, as shown in Fig. 7, the second deep neural network includes, in addition to the third convolutional network 604 and the RPN network 607, the fourth convolutional network 605 and the second fully-connected network 606. The RPN network 607 is used to extract multiple candidate targets based on the feature map output by the third convolutional network 604, calculate the feature vector of each candidate target, and input the feature vector of each candidate target in turn into the second fully-connected network 606. The fourth convolutional network 605 is used to calculate the feature vector of the tracking target and output it to the second fully-connected network 606. The second fully-connected network 606 is used to calculate the similarity between the first candidate target and the tracking target based on the feature vector of the first candidate target and the feature vector of the tracking target.
For example, as mentioned above, after the 2nd image block 420 is input into the third convolutional network 604 of the second deep neural network, the feature vectors of candidate target 401, candidate target 402, candidate target 403 and candidate target 404 can be obtained through the calculation of the third convolutional network 604 and the RPN network 607. At the same time, the image 311 corresponding to the initial tracking target 000 is input into the fourth convolutional network 605 of the second deep neural network, so that the second fully-connected network 606 can calculate the similarity between candidate target 401 and the initial tracking target 000, the similarity between candidate target 402 and the initial tracking target 000, the similarity between candidate target 403 and the initial tracking target 000, and the similarity between candidate target 404 and the initial tracking target 000.
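The efficiency idea of mode three, reading each candidate's feature vector directly off the feature map computed once by the third convolutional network instead of re-running a branch per candidate, can be sketched as follows. The grid representation of the feature map, the box format and the use of average pooling are assumptions of this sketch, not the RPN's actual proposal mechanism:

```python
def pool_candidate_feature(feature_map, box):
    """Read a candidate target's feature vector directly off a shared
    feature map. `feature_map` is a grid of per-cell feature vectors;
    `box` = (row0, row1, col0, col1) is the candidate's location on the
    map, a simplified stand-in for an RPN proposal. The cells inside
    the box are average-pooled into one vector."""
    r0, r1, c0, c1 = box
    cells = [feature_map[r][c] for r in range(r0, r1) for c in range(c0, c1)]
    dim = len(cells[0])
    n = float(len(cells))
    return [sum(cell[d] for cell in cells) / n for d in range(dim)]
```

Every candidate reuses the same `feature_map`, which is the point of attaching the RPN to the third convolutional network's output.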
Step S104: determining, among the multiple candidate targets, the candidate target with the highest similarity to the tracking target as the tracking target.
In a specific implementation process, after the similarity between each candidate target and the tracking target has been calculated, the candidate target with the highest similarity can be taken as the tracking target.
For example, if candidate target 402 has the highest similarity to the initial tracking target 000, candidate target 402 is taken as the tracking target and tracking continues.
The above mainly takes the 2nd frame image 400 as an example: for each candidate target in the 2nd image block 420 of the 2nd frame image 400, the similarity between the candidate target and the initial tracking target 000 is calculated, and the candidate target with the highest similarity is taken as the tracking target in the 2nd frame image. The same applies to the subsequent frame images (for example, the 3rd frame image, the 4th frame image, the 5th frame image, and so on): in each frame image, the similarity between each candidate target and the initial tracking target 000 is calculated, and the candidate target with the highest similarity is taken as the tracking target in that frame image.
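The per-frame selection of step S104 then reduces to an argmax over the computed similarities, for example:

```python
def select_tracking_target(similarities):
    """Step S104: given a dict mapping candidate-target ids to their
    similarity with the initial tracking target, return the id of the
    candidate with the highest similarity."""
    return max(similarities, key=similarities.get)
```

With similarities {401: 0.2, 402: 0.9, 403: 0.5, 404: 0.1}, candidate target 402 is selected, matching the example above.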
The technical scheme in the embodiments of the present invention has at least the following technical effects or advantages:
Since the candidate targets of each subsequent frame image are compared with the tracking target in the initial frame image, and the candidate target with the highest similarity among the candidate targets is determined as the tracking target, tracking of the tracking target is realized. Compared with prior-art visual tracking methods based on online learning, the target tracking method in the embodiments of the present invention treats the processing of each frame after the initial frame as judging whether the target has been lost, and thus has the advantage of reliably judging whether the tracking target has been lost. Moreover, no tracking template needs to be maintained, which avoids the continuous amplification of errors caused by continuously updating a tracking template and is beneficial to recovering a lost tracking target, thereby improving the robustness of the tracking system.
Embodiment two
This embodiment provides an electronic device. The electronic device has an image acquisition unit for acquiring image data. As shown in Fig. 8, the electronic device includes:
a first determining unit 801, configured to determine a tracking target in an initial frame image of the image data;
an extraction unit 802, configured to extract multiple candidate targets in a subsequent frame image of the image data, the subsequent frame image being any frame image after the initial frame image;
a computing unit 803, configured to calculate the similarity between each candidate target and the tracking target; and
a second determining unit 804, configured to determine, among the multiple candidate targets, the candidate target with the highest similarity to the tracking target as the tracking target.
As an optional embodiment, the first determining unit 801 includes:
a first determining subunit, configured to obtain a selection operation of a user when the initial frame image is output through a display screen, and determine the tracking target in the initial frame image based on the selection operation of the user; or
a second determining subunit, configured to obtain feature information for describing the tracking target, and determine the tracking target in the initial frame image based on the feature information.
As an optional embodiment, the extraction unit 802 includes:
a first determining subunit, configured to determine an (i-1)-th bounding box of the tracking target in the (i-1)-th frame image, where the (i-1)-th frame image belongs to the image data, and i is an integer greater than or equal to 2; when i equals 2, the (i-1)-th frame image is the initial frame image;
a second determining subunit, configured to determine an i-th image block in the i-th frame image based on the (i-1)-th bounding box, where the i-th frame image is the subsequent frame image, the center of the i-th image block is the same as the center of the (i-1)-th bounding box, and the area of the i-th image block is larger than the area of the (i-1)-th bounding box; and
a third determining subunit, configured to determine multiple candidate targets in the i-th image block.
As an optional embodiment, the computing unit 803 includes:
a first selection subunit, configured to select a first candidate target from the multiple candidate targets, where the first candidate target is any one of the multiple candidate targets;
a first computing subunit, configured to calculate a first color feature vector of the first candidate target and a second color feature vector of the tracking target; and
a second computing subunit, configured to calculate the distance between the first color feature vector and the second color feature vector, where the distance serves as the similarity between the first candidate target and the tracking target.
As an optional embodiment, the first computing subunit is specifically configured to:
subject the image of the first candidate target to saliency segmentation to obtain a first mask image, and subject the image of the tracking target to saliency segmentation to obtain a second mask image; scale the first mask image and the second mask image to the same size; evenly divide the first mask image into M regions, and evenly divide the second mask image into M regions, M being a positive integer; calculate the color feature vector of each region in the first mask image, and calculate the color feature vector of each region in the second mask image; and connect the color feature vectors of the regions in the first mask image in sequence to obtain the first color feature vector, and connect the color feature vectors of the regions in the second mask image in sequence to obtain the second color feature vector.
As an optional embodiment, the first computing subunit is specifically configured to:
determine W dominant colors, W being a positive integer; calculate the projection weight of each pixel in a first region of the first mask image on each dominant color, the first region being any one of the M regions of the first mask image, and calculate the projection weight of each pixel in a second region of the second mask image on each dominant color, the second region being any one of the M regions of the second mask image; obtain, based on the projection weights of each pixel in the first region on the dominant colors, a W-dimensional color feature vector for each pixel in the first region, and obtain, based on the projection weights of each pixel in the second region on the dominant colors, a W-dimensional color feature vector for each pixel in the second region; normalize the W-dimensional color feature vector of each pixel in the first region to obtain the color feature vector of each pixel in the first region, and normalize the W-dimensional color feature vector of each pixel in the second region to obtain the color feature vector of each pixel in the second region; and add up the color feature vectors of the pixels in the first region to obtain the color feature vector of the first region, and add up the color feature vectors of the pixels in the second region to obtain the color feature vector of the second region.
As an optional embodiment, the first computation subunit is specifically configured to compute the projection weight of a first pixel onto the n-th dominant color based on the following formula:
wn = 1 − (|Ir − Rn| + |Ig − Gn| + |Ib − Bn|) / (256 × 3)
wherein the first pixel is any pixel in the first region or the second region, the n-th dominant color is any of the W dominant colors, wn is the projection weight of the first pixel onto the n-th dominant color, Ir, Ig and Ib are the RGB values of the first pixel, and Rn, Gn and Bn are the RGB values of the n-th dominant color.
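The projection-weight formula and the per-region aggregation it feeds can be sketched as follows. The formula is taken directly from the patent; the per-pixel L1 normalization and the summation over pixels follow the optional embodiment above. The choice of dominant colors is left open by the patent, so the `dominant_colors` argument here is whatever palette the caller supplies.

```python
import numpy as np

def projection_weight(pixel_rgb, dominant_rgb):
    """wn = 1 - (|Ir-Rn| + |Ig-Gn| + |Ib-Bn|) / (256*3), per the patent.
    Since the absolute differences sum to at most 765 < 768, wn > 0."""
    diff = np.abs(np.asarray(pixel_rgb, float) - np.asarray(dominant_rgb, float))
    return 1.0 - diff.sum() / (256.0 * 3.0)

def region_feature(region_pixels, dominant_colors):
    """Per-pixel W-dimensional projection weights, normalized to sum to 1,
    then summed over the region's pixels -> the region's color feature."""
    feats = []
    for p in region_pixels:
        w = np.array([projection_weight(p, c) for c in dominant_colors])
        feats.append(w / w.sum())       # normalize each pixel's W-dim vector
    return np.sum(feats, axis=0)        # add pixel vectors -> region vector
```

Because every pixel's normalized vector sums to 1, a region's feature vector sums to its pixel count, so regions of equal size (as produced by the even M-way split) remain directly comparable.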
As an optional embodiment, the computing unit 803 includes:
a second selection subunit, configured to select a first candidate target from the multiple candidate targets, the first candidate target being any one of the multiple candidate targets;
a normalization subunit, configured to normalize the image of the first candidate target and the image of the tracking target to the same size;
a first input subunit, configured to input the image of the tracking target into a first convolutional network of a first deep neural network for feature computation, obtaining the feature vector of the tracking target, the first deep neural network being based on a Siamese architecture;
a second input subunit, configured to input the image of the first candidate target into a second convolutional network of the first deep neural network for feature computation, obtaining the feature vector of the first candidate target; and
a third input subunit, configured to input the feature vector of the tracking target and the feature vector of the first candidate target into a first fully connected network of the first deep neural network for similarity computation, obtaining the similarity between the first candidate target and the tracking target.
As an optional embodiment, the third determination subunit is specifically configured to:
input the i-th image block into a third convolutional network of a second deep neural network for feature computation, obtaining a feature map of the i-th image block, the second deep neural network being based on a Siamese architecture; and input the feature map of the i-th image block into an RPN (Region Proposal Network) of the second deep neural network, extracting the multiple candidate targets and obtaining the feature vectors of the multiple candidate targets.
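An RPN scores a regular grid of anchor boxes laid over the feature map. The sketch below generates such an anchor grid; the learned objectness scoring and box regression that turn anchors into the patent's candidate targets are omitted, and the stride, scales, and aspect ratios are illustrative defaults, not values from the patent.

```python
import numpy as np

def anchor_grid(feat_h, feat_w, stride=16, scales=(64, 128), ratios=(0.5, 1.0, 2.0)):
    """Generate the regular grid of (cx, cy, w, h) anchor boxes an RPN
    scores: one set of scaled/shaped boxes per feature-map cell."""
    anchors = []
    for y in range(feat_h):
        for x in range(feat_w):
            cx, cy = x * stride + stride / 2, y * stride + stride / 2
            for s in scales:
                for r in ratios:
                    anchors.append((cx, cy, s * np.sqrt(r), s / np.sqrt(r)))
    return np.array(anchors)
```

In a full pipeline, each anchor would be scored by a small convolutional head over the shared feature map, and the top-scoring regressed boxes would become the candidate targets.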
As an optional embodiment, the computing unit 803 includes:
an extraction subunit, configured to extract the feature vector of a first candidate target from the feature vectors of the multiple candidate targets, the first candidate target being any one of the multiple candidate targets;
a fourth input subunit, configured to input the image of the tracking target into a fourth convolutional network of the second deep neural network for feature computation, obtaining the feature vector of the tracking target; and
a fifth input subunit, configured to input the feature vector of the tracking target and the feature vector of the first candidate target into a second fully connected network of the second deep neural network for similarity computation, obtaining the similarity between the first candidate target and the tracking target.
Since the electronic device described in this embodiment is the electronic device used to implement the target tracking method of the embodiments of the present invention, a person skilled in the art can, on the basis of that method, understand the specific implementation of the electronic device of this embodiment and its various variations; how the electronic device implements the method is therefore not discussed in detail here. Any electronic device adopted by a person skilled in the art to implement the target tracking method of the embodiments of the present invention falls within the scope of protection of the present invention.
The technical solutions in the embodiments of the present invention provide at least the following technical effects or advantages:
Because the candidate targets of each subsequent frame are compared with the tracking target in the initial frame, and the candidate target with the highest similarity is taken as the tracking target, tracking of the target is achieved. Compared with prior-art electronic devices that use online-learning visual tracking, the electronic device of the embodiments of the present invention treats the processing of each frame after the initial frame as a re-detection of the target, and can therefore reliably judge whether the target has been lost. Moreover, no tracking template needs to be maintained, which avoids the continual amplification of errors caused by template updates and helps recover a lost target, thereby improving the robustness of the tracking system.
A person skilled in the art should understand that the embodiments of the present invention may be provided as a method, a system, or a computer program product. Therefore, the present invention may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Furthermore, the present invention may take the form of a computer program product implemented on one or more computer-usable storage media (including, but not limited to, disk storage, CD-ROM, and optical storage) containing computer-usable program code.
The present invention is described with reference to flowcharts and/or block diagrams of the method, device (system), and computer program product according to the embodiments of the present invention. It should be understood that each flow and/or block in the flowcharts and/or block diagrams, and combinations thereof, can be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or another programmable data-processing device to produce a machine, such that the instructions executed by the processor produce means for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or another programmable data-processing device to work in a particular manner, such that the instructions stored in the computer-readable memory produce an article of manufacture including instruction means that implement the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
These computer program instructions may also be loaded onto a computer or another programmable data-processing device, so that a series of operation steps are performed on the computer or other programmable device to produce computer-implemented processing, such that the instructions executed on the computer or other programmable device provide steps for implementing the functions specified in one or more flows of the flowcharts and/or one or more blocks of the block diagrams.
Although preferred embodiments of the present invention have been described, a person skilled in the art, once aware of the basic inventive concept, can make further changes and modifications to these embodiments. The appended claims are therefore intended to be construed as covering the preferred embodiments and all changes and modifications falling within the scope of the present invention.
Obviously, a person skilled in the art can make various changes and modifications to the present invention without departing from its spirit and scope. If these changes and modifications fall within the scope of the claims of the present invention and their technical equivalents, the present invention is intended to encompass them as well.

Claims (20)

1. A target tracking method, applied to an electronic device having an image acquisition unit configured to acquire image data, characterized in that the method comprises:
determining a tracking target in an initial frame of the image data;
extracting multiple candidate targets in a subsequent frame of the image data, the subsequent frame being any frame after the initial frame;
computing the similarity between each candidate target and the tracking target; and
determining the candidate target with the highest similarity to the tracking target among the multiple candidate targets as the tracking target.
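The four steps of claim 1 reduce to a small selection loop. The sketch below is a skeleton only: `similarity_fn` stands in for either of the two similarity measures the patent later claims (color-feature distance, or the Siamese deep-network score), and candidate extraction is assumed to have happened upstream.

```python
def track_frame(candidates, tracking_target, similarity_fn):
    """Claim-1 skeleton: score every candidate against the tracking target
    and keep the one with the highest similarity as the new target."""
    return max(candidates, key=lambda c: similarity_fn(c, tracking_target))

# Usage with a toy similarity (negative absolute difference of scalars):
best = track_frame([1, 5, 9], 6, lambda c, t: -abs(c - t))
```

Because each frame is scored against the *initial* target rather than an updated template, a uniformly low best score can be read as "target lost", which is the robustness property the description emphasizes.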
2. The target tracking method of claim 1, characterized in that determining the tracking target in the initial frame of the image data comprises:
when the initial frame is output on a display screen, obtaining a selection operation of a user, and determining the tracking target in the initial frame based on the selection operation; or
obtaining feature information describing the tracking target, and determining the tracking target in the initial frame based on the feature information.
3. The target tracking method of claim 1, characterized in that extracting the multiple candidate targets in the subsequent frame of the image data comprises:
determining an (i−1)-th bounding box of the tracking target in an (i−1)-th frame, wherein the (i−1)-th frame belongs to the image data and i is an integer greater than or equal to 2; when i equals 2, the (i−1)-th frame is the initial frame;
determining, based on the (i−1)-th bounding box, an i-th image block in an i-th frame, wherein the i-th frame is the subsequent frame, the center of the i-th image block coincides with the center of the (i−1)-th bounding box, and the area of the i-th image block is larger than the area of the (i−1)-th bounding box; and
determining the multiple candidate targets in the i-th image block.
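The search-region construction of claim 3 can be sketched directly: same center as the previous bounding box, larger area. The claim only requires the area to be strictly larger; the fixed enlargement factor below is an assumption for illustration.

```python
def search_region(prev_box, scale=2.0):
    """Build the i-th image block from the (i-1)-th bounding box:
    identical center, side lengths enlarged by `scale` (assumed factor).
    Boxes are (x, y, w, h) with (x, y) the top-left corner."""
    x, y, w, h = prev_box
    cx, cy = x + w / 2.0, y + h / 2.0      # keep the same center
    nw, nh = w * scale, h * scale          # strictly larger area
    return (cx - nw / 2.0, cy - nh / 2.0, nw, nh)
```

Restricting candidate extraction to this enlarged block exploits temporal continuity: the target cannot move far between consecutive frames, so a bounded neighborhood of the previous box suffices.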
4. The target tracking method of claim 1, characterized in that computing the similarity between each candidate target and the tracking target comprises:
selecting a first candidate target from the multiple candidate targets, the first candidate target being any one of the multiple candidate targets;
computing a first color feature vector of the first candidate target and a second color feature vector of the tracking target; and
computing the distance between the first color feature vector and the second color feature vector, wherein the distance serves as the similarity between the first candidate target and the tracking target.
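Claim 4 states only that a distance between the two color feature vectors is used as the similarity measure; it does not fix the metric. The sketch below assumes Euclidean distance and maps it to a score that grows as the vectors get closer, so that the "highest similarity" selection of claim 1 applies unchanged. Both the metric and the 1/(1+d) mapping are assumptions.

```python
import numpy as np

def color_similarity(feat_a, feat_b):
    """Euclidean distance between two color feature vectors, mapped to a
    similarity in (0, 1]: identical vectors score 1, distant ones near 0.
    Metric and mapping are illustrative, not specified by the patent."""
    d = np.linalg.norm(np.asarray(feat_a, float) - np.asarray(feat_b, float))
    return 1.0 / (1.0 + d)
```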
5. The target tracking method of claim 4, characterized in that computing the first color feature vector of the first candidate target and the second color feature vector of the tracking target comprises:
performing principal-component segmentation on the image of the first candidate target to obtain a first mask image, and performing principal-component segmentation on the image of the tracking target to obtain a second mask image;
scaling the first mask image and the second mask image to the same size;
dividing the first mask image evenly into M regions and dividing the second mask image evenly into M regions, M being a positive integer;
computing the color feature vector of each region in the first mask image and the color feature vector of each region in the second mask image; and
concatenating the color feature vectors of the regions of the first mask image in order to obtain the first color feature vector, and concatenating the color feature vectors of the regions of the second mask image in order to obtain the second color feature vector.
6. The target tracking method of claim 5, characterized in that computing the color feature vector of each region in the first mask image and the color feature vector of each region in the second mask image comprises:
determining W dominant colors, W being a positive integer;
computing, for each pixel in a first region of the first mask image, the projection weight onto each dominant color, the first region being any of the M regions of the first mask image, and computing, for each pixel in a second region of the second mask image, the projection weight onto each dominant color, the second region being any of the M regions of the second mask image;
obtaining, based on the projection weights of each pixel in the first region onto the dominant colors, a W-dimensional color feature vector for each pixel in the first region, and obtaining, based on the projection weights of each pixel in the second region onto the dominant colors, a W-dimensional color feature vector for each pixel in the second region;
normalizing the W-dimensional color feature vector of each pixel in the first region to obtain the color feature vector of each pixel in the first region, and normalizing the W-dimensional color feature vector of each pixel in the second region to obtain the color feature vector of each pixel in the second region; and
adding up the color feature vectors of the pixels in the first region to obtain the color feature vector of the first region, and adding up the color feature vectors of the pixels in the second region to obtain the color feature vector of the second region.
7. The target tracking method of claim 6, characterized in that the projection weight of a first pixel onto the n-th dominant color is computed based on the following formula:
wn = 1 − (|Ir − Rn| + |Ig − Gn| + |Ib − Bn|) / (256 × 3)
wherein the first pixel is any pixel in the first region or the second region, the n-th dominant color is any of the W dominant colors, wn is the projection weight of the first pixel onto the n-th dominant color, Ir, Ig and Ib are the RGB values of the first pixel, and Rn, Gn and Bn are the RGB values of the n-th dominant color.
8. The target tracking method of claim 1, characterized in that computing the similarity between each candidate target and the tracking target comprises:
selecting a first candidate target from the multiple candidate targets, the first candidate target being any one of the multiple candidate targets;
normalizing the image of the first candidate target and the image of the tracking target to the same size;
inputting the image of the tracking target into a first convolutional network of a first deep neural network for feature computation to obtain the feature vector of the tracking target, the first deep neural network being based on a Siamese architecture;
inputting the image of the first candidate target into a second convolutional network of the first deep neural network for feature computation to obtain the feature vector of the first candidate target, the second convolutional network sharing convolutional-layer parameters with the first convolutional network; and
inputting the feature vector of the tracking target and the feature vector of the first candidate target into a first fully connected network of the first deep neural network for similarity computation to obtain the similarity between the first candidate target and the tracking target.
9. The target tracking method of claim 3, characterized in that determining the multiple candidate targets in the i-th image block comprises:
inputting the i-th image block into a third convolutional network of a second deep neural network for feature computation to obtain a feature map of the i-th image block, the second deep neural network being based on a Siamese architecture; and
inputting the feature map of the i-th image block into an RPN of the second deep neural network to obtain the multiple candidate targets and the feature vectors of the multiple candidate targets.
10. The target tracking method of claim 9, characterized in that computing the similarity between each candidate target and the tracking target comprises:
extracting the feature vector of a first candidate target from the feature vectors of the multiple candidate targets, the first candidate target being any one of the multiple candidate targets;
inputting the image of the tracking target into a fourth convolutional network of the second deep neural network for feature computation to obtain the feature vector of the tracking target, the fourth convolutional network sharing convolutional-layer parameters with the third convolutional network; and
inputting the feature vector of the tracking target and the feature vector of the first candidate target into a second fully connected network of the second deep neural network for similarity computation to obtain the similarity between the first candidate target and the tracking target.
11. An electronic device having an image acquisition unit configured to acquire image data, characterized in that the electronic device comprises:
a first determining unit, configured to determine a tracking target in an initial frame of the image data;
an extraction unit, configured to extract multiple candidate targets in a subsequent frame of the image data, the subsequent frame being any frame after the initial frame;
a computing unit, configured to compute the similarity between each candidate target and the tracking target; and
a second determining unit, configured to determine the candidate target with the highest similarity to the tracking target among the multiple candidate targets as the tracking target.
12. The electronic device of claim 11, characterized in that the first determining unit comprises:
a first determination subunit, configured to obtain a selection operation of a user when the initial frame is output on a display screen, and to determine the tracking target in the initial frame based on the selection operation; or
a second determination subunit, configured to obtain feature information describing the tracking target, and to determine the tracking target in the initial frame based on the feature information.
13. The electronic device of claim 11, characterized in that the extraction unit comprises:
a first determination subunit, configured to determine an (i−1)-th bounding box of the tracking target in an (i−1)-th frame, wherein the (i−1)-th frame belongs to the image data and i is an integer greater than or equal to 2; when i equals 2, the (i−1)-th frame is the initial frame;
a second determination subunit, configured to determine, based on the (i−1)-th bounding box, an i-th image block in an i-th frame, wherein the i-th frame is the subsequent frame, the center of the i-th image block coincides with the center of the (i−1)-th bounding box, and the area of the i-th image block is larger than the area of the (i−1)-th bounding box; and
a third determination subunit, configured to determine the multiple candidate targets in the i-th image block.
14. The electronic device of claim 11, characterized in that the computing unit comprises:
a first selection subunit, configured to select a first candidate target from the multiple candidate targets, the first candidate target being any one of the multiple candidate targets;
a first computation subunit, configured to compute a first color feature vector of the first candidate target and a second color feature vector of the tracking target; and
a second computation subunit, configured to compute the distance between the first color feature vector and the second color feature vector, wherein the distance serves as the similarity between the first candidate target and the tracking target.
15. The electronic device of claim 14, characterized in that the first computation subunit is specifically configured to:
perform principal-component segmentation on the image of the first candidate target to obtain a first mask image, and perform principal-component segmentation on the image of the tracking target to obtain a second mask image; scale the first mask image and the second mask image to the same size; divide the first mask image evenly into M regions and divide the second mask image evenly into M regions, M being a positive integer; compute the color feature vector of each region in the first mask image and the color feature vector of each region in the second mask image; and concatenate the color feature vectors of the regions of the first mask image in order to obtain the first color feature vector, and concatenate the color feature vectors of the regions of the second mask image in order to obtain the second color feature vector.
16. The electronic device of claim 15, characterized in that the first computation subunit is specifically configured to:
determine W dominant colors, W being a positive integer; compute, for each pixel in a first region of the first mask image, the projection weight onto each dominant color, the first region being any of the M regions of the first mask image, and compute, for each pixel in a second region of the second mask image, the projection weight onto each dominant color, the second region being any of the M regions of the second mask image; obtain, based on the projection weights of each pixel in the first region onto the dominant colors, a W-dimensional color feature vector for each pixel in the first region, and obtain, based on the projection weights of each pixel in the second region onto the dominant colors, a W-dimensional color feature vector for each pixel in the second region; normalize the W-dimensional color feature vector of each pixel in the first region to obtain the color feature vector of each pixel in the first region, and normalize the W-dimensional color feature vector of each pixel in the second region to obtain the color feature vector of each pixel in the second region; and add up the color feature vectors of the pixels in the first region to obtain the color feature vector of the first region, and add up the color feature vectors of the pixels in the second region to obtain the color feature vector of the second region.
17. The electronic device of claim 16, characterized in that the first computation subunit is specifically configured to compute the projection weight of a first pixel onto the n-th dominant color based on the following formula:
wn = 1 − (|Ir − Rn| + |Ig − Gn| + |Ib − Bn|) / (256 × 3)
wherein the first pixel is any pixel in the first region or the second region, the n-th dominant color is any of the W dominant colors, wn is the projection weight of the first pixel onto the n-th dominant color, Ir, Ig and Ib are the RGB values of the first pixel, and Rn, Gn and Bn are the RGB values of the n-th dominant color.
18. The electronic device of claim 11, characterized in that the computing unit comprises:
a second selection subunit, configured to select a first candidate target from the multiple candidate targets, the first candidate target being any one of the multiple candidate targets;
a normalization subunit, configured to normalize the image of the first candidate target and the image of the tracking target to the same size;
a first input subunit, configured to input the image of the tracking target into a first convolutional network of a first deep neural network for feature computation, obtaining the feature vector of the tracking target, the first deep neural network being based on a Siamese architecture;
a second input subunit, configured to input the image of the first candidate target into a second convolutional network of the first deep neural network for feature computation, obtaining the feature vector of the first candidate target; and
a third input subunit, configured to input the feature vector of the tracking target and the feature vector of the first candidate target into a first fully connected network of the first deep neural network for similarity computation, obtaining the similarity between the first candidate target and the tracking target.
19. The electronic device of claim 13, characterized in that the third determination subunit is specifically configured to:
input the i-th image block into a third convolutional network of a second deep neural network for feature computation to obtain a feature map of the i-th image block, the second deep neural network being based on a Siamese architecture; and input the feature map of the i-th image block into an RPN of the second deep neural network to obtain the multiple candidate targets and the feature vectors of the multiple candidate targets.
20. The electronic device of claim 19, characterized in that the computing unit comprises:
an extraction subunit, configured to extract the feature vector of a first candidate target from the feature vectors of the multiple candidate targets, the first candidate target being any one of the multiple candidate targets;
a fourth input subunit, configured to input the image of the tracking target into a fourth convolutional network of the second deep neural network for feature computation, obtaining the feature vector of the tracking target, the fourth convolutional network sharing convolutional-layer parameters with the third convolutional network; and
a fifth input subunit, configured to input the feature vector of the tracking target and the feature vector of the first candidate target into a second fully connected network of the second deep neural network for similarity computation, obtaining the similarity between the first candidate target and the tracking target.
CN201611041675.6A 2016-11-11 2016-11-11 Target tracking method and electronic device Active CN106650630B (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
CN201611041675.6A CN106650630B (en) 2016-11-11 2016-11-11 Target tracking method and electronic device
PCT/CN2017/110577 WO2018086607A1 (en) 2016-11-11 2017-11-10 Target tracking method, electronic device, and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201611041675.6A CN106650630B (en) 2016-11-11 2016-11-11 Target tracking method and electronic device

Publications (2)

Publication Number Publication Date
CN106650630A true CN106650630A (en) 2017-05-10
CN106650630B CN106650630B (en) 2019-08-23

Family

ID=58811573

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201611041675.6A Active CN106650630B (en) Target tracking method and electronic device

Country Status (2)

Country Link
CN (1) CN106650630B (en)
WO (1) WO2018086607A1 (en)

Cited By (29)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107168343A (en) * 2017-07-14 2017-09-15 灵动科技(北京)有限公司 The control method and luggage case of a kind of luggage case
CN107346413A (en) * 2017-05-16 2017-11-14 北京建筑大学 Traffic sign recognition method and system in a kind of streetscape image
CN107481265A (en) * 2017-08-17 2017-12-15 成都通甲优博科技有限责任公司 Target method for relocating and device
WO2018086607A1 (en) * 2016-11-11 2018-05-17 纳恩博(北京)科技有限公司 Target tracking method, electronic device, and storage medium
CN108133197A (en) * 2018-01-05 2018-06-08 百度在线网络技术(北京)有限公司 For generating the method and apparatus of information
CN108171112A (en) * 2017-12-01 2018-06-15 西安电子科技大学 Vehicle identification and tracking based on convolutional neural networks
CN108230359A (en) * 2017-11-12 2018-06-29 北京市商汤科技开发有限公司 Object detection method and device, training method, electronic equipment, program and medium
CN108229456A (en) * 2017-11-22 2018-06-29 深圳市商汤科技有限公司 Method for tracking target and device, electronic equipment, computer storage media
CN108416780A (en) * 2018-03-27 2018-08-17 福州大学 A kind of object detection and matching process based on twin-area-of-interest pond model
CN108491816A (en) * 2018-03-30 2018-09-04 百度在线网络技术(北京)有限公司 The method and apparatus for carrying out target following in video
CN108596957A (en) * 2018-04-26 2018-09-28 北京小米移动软件有限公司 Object tracking methods and device
Families Citing this family (15)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111105436B (en) * 2018-10-26 2023-05-09 Yaoke Intelligent Technology (Shanghai) Co., Ltd. Target tracking method, computer device and storage medium
CN111428539A (en) * 2019-01-09 2020-07-17 Chengdu Tongjia Youbo Technology Co., Ltd. Target tracking method and device
CN110059661B (en) * 2019-04-26 2022-11-22 Tencent Technology (Shenzhen) Co., Ltd. Action recognition method, human-computer interaction method, device and storage medium
CN110335289B (en) * 2019-06-13 2022-08-05 Hohai University Target tracking method based on online learning
CN110544268B (en) * 2019-07-29 2023-03-24 Yanshan University Multi-target tracking method based on structured light and SiamMask network
CN110766720A (en) * 2019-09-23 2020-02-07 Yancheng Jida Intelligent Terminal Industry Research Institute Co., Ltd. Multi-camera vehicle tracking system based on deep learning
CN110889718B (en) * 2019-11-15 2024-05-14 Tencent Technology (Shenzhen) Co., Ltd. Scheme screening method, scheme screening device, medium and electronic device
CN113538507B (en) * 2020-04-15 2023-11-17 Nanjing University Single-target tracking method based on online training of a fully convolutional network
CN111598928B (en) * 2020-05-22 2023-03-10 Zhengzhou University of Light Industry Abrupt-motion target tracking method based on semantic evaluation and region proposals
CN111914890B (en) * 2020-06-23 2024-05-14 Beijing Megvii Technology Co., Ltd. Inter-image block matching method, image registration method and product
CN111783878B (en) * 2020-06-29 2023-08-04 Beijing Baidu Netcom Science and Technology Co., Ltd. Target detection method and device, electronic device, and readable storage medium
CN111814905A (en) * 2020-07-23 2020-10-23 Shanghai Eye Control Technology Co., Ltd. Target detection method and device, computer equipment and storage medium
CN112037256A (en) * 2020-08-17 2020-12-04 CETC New-Type Smart City Research Institute Co., Ltd. Target tracking method and device, terminal equipment and computer-readable storage medium
CN113838088A (en) * 2021-08-30 2021-12-24 Harbin Institute of Technology Hyperspectral video target tracking method based on a deep tensor
CN114491131B (en) * 2022-01-24 2023-04-18 Beijing Zhijian Moqi Technology Co., Ltd. Method and device for reordering candidate images, and electronic equipment

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20090019149A1 (en) * 2005-08-02 2009-01-15 Mobixell Networks Content distribution and tracking
AU2011265494A1 (en) * 2011-12-22 2013-07-11 Canon Kabushiki Kaisha Kernalized contextual feature
CN103218798A (en) * 2012-01-19 2013-07-24 Sony Corporation Device and method of image processing
CN103339655A (en) * 2011-02-03 2013-10-02 Ricoh Co., Ltd. Image capturing apparatus, image capturing method, and computer program product
US20140064558A1 (en) * 2012-09-06 2014-03-06 Sony Corporation Object tracking apparatus and method and camera
CN105184778A (en) * 2015-08-25 2015-12-23 Guangzhou Shiyuan Electronics Co., Ltd. Detection method and device

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106650630B (en) * 2016-11-11 2019-08-23 Ninebot (Beijing) Technology Co., Ltd. Target tracking method and electronic equipment

Cited By (49)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2018086607A1 (en) * 2016-11-11 2018-05-17 Ninebot (Beijing) Technology Co., Ltd. Target tracking method, electronic device, and storage medium
CN107346413A (en) * 2017-05-16 2017-11-14 Beijing University of Civil Engineering and Architecture Traffic sign recognition method and system for street-view images
CN109214238B (en) * 2017-06-30 2022-06-28 Apollo Intelligent Technology (Beijing) Co., Ltd. Multi-target tracking method, device, equipment and storage medium
CN109214238A (en) * 2017-06-30 2019-01-15 Baidu Online Network Technology (Beijing) Co., Ltd. Multi-target tracking method, device, equipment and storage medium
CN107168343A (en) * 2017-07-14 2017-09-15 Lingdong Technology (Beijing) Co., Ltd. Luggage case and control method therefor
CN107292284B (en) * 2017-07-14 2020-02-28 Chengdu Tongjia Youbo Technology Co., Ltd. Target re-detection method and device and unmanned aerial vehicle
US10592786B2 (en) 2017-08-14 2020-03-17 Huawei Technologies Co., Ltd. Generating labeled data for deep object tracking
WO2019033541A1 (en) * 2017-08-14 2019-02-21 Huawei Technologies Co., Ltd. Generating labeled data for deep object tracking
CN107481265A (en) * 2017-08-17 2017-12-15 Chengdu Tongjia Youbo Technology Co., Ltd. Target relocation method and device
CN107481265B (en) * 2017-08-17 2020-05-19 Chengdu Tongjia Youbo Technology Co., Ltd. Target relocation method and device
CN108230359A (en) * 2017-11-12 2018-06-29 Beijing SenseTime Technology Development Co., Ltd. Object detection method and apparatus, training method, electronic device, program, and medium
CN108230359B (en) * 2017-11-12 2021-01-26 Beijing SenseTime Technology Development Co., Ltd. Object detection method and apparatus, training method, electronic device, program, and medium
CN108229456B (en) * 2017-11-22 2021-05-18 Shenzhen SenseTime Technology Co., Ltd. Target tracking method and device, electronic device, and computer storage medium
CN108229456A (en) * 2017-11-22 2018-06-29 Shenzhen SenseTime Technology Co., Ltd. Target tracking method and device, electronic device, and computer storage medium
CN108171112A (en) * 2017-12-01 2018-06-15 Xidian University Vehicle identification and tracking method based on a convolutional neural network
CN108171112B (en) * 2017-12-01 2021-06-01 Xidian University Vehicle identification and tracking method based on a convolutional neural network
CN108133197A (en) * 2018-01-05 2018-06-08 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for generating information
CN110163029A (en) * 2018-02-11 2019-08-23 Zhongxing Feiliu Information Technology Co., Ltd. Image recognition method, electronic device, and computer-readable storage medium
CN110163029B (en) * 2018-02-11 2021-03-30 Zhongxing Feiliu Information Technology Co., Ltd. Image recognition method, electronic device, and computer-readable storage medium
CN108416780B (en) * 2018-03-27 2021-08-31 Fuzhou University Object detection and matching method based on a Siamese region-of-interest pooling model
CN108416780A (en) * 2018-03-27 2018-08-17 Fuzhou University Object detection and matching method based on a Siamese region-of-interest pooling model
CN108491816A (en) * 2018-03-30 2018-09-04 Baidu Online Network Technology (Beijing) Co., Ltd. Method and apparatus for target tracking in video
CN108665485B (en) * 2018-04-16 2021-07-02 Huazhong University of Science and Technology Target tracking method based on fusion of correlation filtering and a Siamese convolutional network
CN108665485A (en) * 2018-04-16 2018-10-16 Huazhong University of Science and Technology Target tracking method based on fusion of correlation filtering and a Siamese convolutional network
CN108596957B (en) * 2018-04-26 2022-07-22 Beijing Xiaomi Mobile Software Co., Ltd. Object tracking method and device
CN108596957A (en) * 2018-04-26 2018-09-28 Beijing Xiaomi Mobile Software Co., Ltd. Object tracking method and device
CN108898620A (en) * 2018-06-14 2018-11-27 Xiamen University Target tracking method based on multiple Siamese neural networks and a regional neural network
CN108898620B (en) * 2018-06-14 2021-06-18 Xiamen University Target tracking method based on multiple Siamese neural networks and a regional neural network
CN109118519A (en) * 2018-07-26 2019-01-01 Beijing Zongmu Anchi Intelligent Technology Co., Ltd. Instance-segmentation-based target re-identification method, system, terminal, and storage medium
CN109614907B (en) * 2018-11-28 2022-04-19 Anhui University Pedestrian re-identification method and device based on a feature-enhancement-guided convolutional neural network
CN109614907A (en) * 2018-11-28 2019-04-12 Anhui University Pedestrian re-identification method and device based on a feature-enhancement-guided convolutional neural network
CN111428535A (en) * 2019-01-09 2020-07-17 Canon Kabushiki Kaisha Image processing apparatus and method, and image processing system
CN109685805A (en) * 2019-01-09 2019-04-26 Watrix Technology (Beijing) Co., Ltd. Image segmentation method and device
CN111524159B (en) * 2019-02-01 2024-07-19 Beijing Jingdong Qianshi Technology Co., Ltd. Image processing method and apparatus, storage medium, and processor
CN111524159A (en) * 2019-02-01 2020-08-11 Beijing Jingdong Shangke Information Technology Co., Ltd. Image processing method and apparatus, storage medium, and processor
CN110147768B (en) * 2019-05-22 2021-05-28 Yunnan University Target tracking method and device
CN110147768A (en) * 2019-05-22 2019-08-20 Yunnan University Target tracking method and device
CN112347817B (en) * 2019-08-08 2022-05-17 Momenta (Suzhou) Technology Co., Ltd. Video target detection and tracking method and device
CN112347817A (en) * 2019-08-08 2021-02-09 Chusudu (Suzhou) Technology Co., Ltd. Video target detection and tracking method and device
CN110570460B (en) * 2019-09-06 2024-02-13 Tencent Cloud Computing (Beijing) Co., Ltd. Target tracking method and device, computer equipment, and computer-readable storage medium
CN110570460A (en) * 2019-09-06 2019-12-13 Tencent Cloud Computing (Beijing) Co., Ltd. Target tracking method and device, computer equipment, and computer-readable storage medium
CN112800811A (en) * 2019-11-13 2021-05-14 Shenzhen UBTECH Robotics Corp., Ltd. Color block tracking method and device, and terminal equipment
CN112800811B (en) * 2019-11-13 2023-10-13 Shenzhen UBTECH Robotics Corp., Ltd. Color block tracking method and device, and terminal equipment
CN111178284A (en) * 2019-12-31 2020-05-19 Zhuhai Dahengqin Technology Development Co., Ltd. Pedestrian re-identification method and system based on a spatio-temporal union model of map data
CN111524162B (en) * 2020-04-15 2022-04-01 Shanghai Moxiang Network Technology Co., Ltd. Method and device for retrieving a tracking target, and handheld camera
WO2021208261A1 (en) * 2020-04-15 2021-10-21 Shanghai Moxiang Network Technology Co., Ltd. Tracking target retrieval method and device, and handheld camera
CN111524162A (en) * 2020-04-15 2020-08-11 Shanghai Moxiang Network Technology Co., Ltd. Method and device for retrieving a tracking target, and handheld camera
WO2022061615A1 (en) * 2020-09-23 2022-03-31 SZ DJI Technology Co., Ltd. Method and apparatus for determining target to be followed, system, device, and storage medium
CN113273174A (en) * 2020-09-23 2021-08-17 SZ DJI Technology Co., Ltd. Method, device, system, equipment, and storage medium for determining a target to be followed

Also Published As

Publication number Publication date
WO2018086607A1 (en) 2018-05-17
CN106650630B (en) 2019-08-23

Similar Documents

Publication Publication Date Title
CN106650630A (en) Target tracking method and electronic equipment
CN107103613B (en) Three-dimensional hand gesture pose estimation method
CN108038420B (en) Human behavior recognition method based on depth video
Siagian et al. Biologically inspired mobile robot vision localization
CN110414432A (en) Object recognition model training method, object recognition method, and corresponding devices
CN109816689A (en) Moving target tracking method with adaptive fusion of multi-layer convolutional features
CN107808143A (en) Dynamic gesture recognition method based on computer vision
CN110458895A (en) Image coordinate system conversion method, device, equipment, and storage medium
CN107274433A (en) Deep-learning-based target tracking method, device, and storage medium
CN110619638A (en) Multi-modal fusion saliency detection method based on a convolutional block attention module
CN104035557B (en) Kinect action recognition method based on joint activity
CN110147721A (en) Three-dimensional face recognition method, model training method and device
CN111563418A (en) Asymmetric multi-modal fusion saliency detection method based on an attention mechanism
CN105678813A (en) Skin color detection method and device
CN105160310A (en) 3D (three-dimensional) convolutional neural network based human behavior recognition method
CN110378247A (en) Virtual object recognition method and device, storage medium, and electronic device
CN104794737B (en) Depth-information-assisted particle filter tracking method
CN106485199A (en) Body color identification method and device
CN106991147A (en) Plant identification and recognition method
CN112233147A (en) Video moving-target tracking method and device based on a two-way Siamese network
CN109087337B (en) Long-term target tracking method and system based on hierarchical convolutional features
CN105096311A (en) GPU (Graphics Processing Unit)-based technique for depth image restoration and virtual-real scene fusion
CN103903256B (en) Depth estimation method based on relative height-depth cues
CN106897681A (en) Remote sensing image comparative analysis method and system
CN106373160A (en) Active camera target positioning method based on deep reinforcement learning

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant