CN110084829A - Method for tracking target, device, electronic equipment and computer readable storage medium - Google Patents
Method for tracking target, device, electronic equipment and computer readable storage medium
- Publication number
- CN110084829A (application number CN201910186561.8A)
- Authority
- CN
- China
- Prior art keywords
- frame
- image
- target
- tracking
- confidence value
- Prior art date: 2019-03-12
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
- G—PHYSICS › G06—COMPUTING; CALCULATING OR COUNTING › G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis › G06T7/20—Analysis of motion › G06T7/223—Analysis of motion using block-matching › G06T7/238—Analysis of motion using block-matching using non-full search, e.g. three-step search
- G06T7/00—Image analysis › G06T7/20—Analysis of motion › G06T7/246—Analysis of motion using feature-based methods, e.g. the tracking of corners or segments › G06T7/248—Analysis of motion using feature-based methods involving reference images or patches
- G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/10—Image acquisition modality › G06T2207/10016—Video; Image sequence
- G06T2207/00—Indexing scheme for image analysis or image enhancement › G06T2207/20—Special algorithmic details › G06T2207/20092—Interactive image processing based on input by user › G06T2207/20104—Interactive definition of region of interest [ROI]
- G06T2210/00—Indexing scheme for image generation or computer graphics › G06T2210/22—Cropping
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Image Analysis (AREA)
Abstract
The invention discloses a target tracking method, applied in the field of video target tracking, comprising: obtaining a search image based on the current frame image, and obtaining confidence values and coordinate boxes from a network tracking model; selecting a first target number of candidate boxes according to the confidence values and coordinate boxes; obtaining a tracking result according to historical tracking values and the candidate boxes; obtaining the confidence value corresponding to the tracking result and judging whether it falls within a first preset range; if so, cropping the current frame image according to the target template image; and, when the sharpness of the cropped image is judged to be not lower than a preset sharpness, replacing the target template image with the cropped image corresponding to the current video frame. With the embodiments of the invention, the decision of whether to update the template combines the result fed back by the tracking model with the quality of the target image in the current frame, which improves the accuracy of the tracking result while avoiding excessive computation.
Description
Technical field
The present invention relates to the technical field of video target tracking, and in particular to a target tracking method, apparatus, electronic device and computer-readable storage medium.
Background art
With the development of science and technology, convolutional neural networks (CNN) play an increasingly large role in image and video processing. Face recognition technology in particular has become very mature, and many of its applications have made daily life more convenient, for example face check-in, face unlocking on mobile phones, face-scan station entry and face-scan payment.
A target tracked in a video may change over time due to various factors, such as camera motion, target motion and changes in the shooting scene. In the prior art, a fixed target template is generally used for target tracking. When the target template remains unchanged while the target undergoes these changes, the tracking model becomes harder to fit and the accuracy of the tracking result decreases; on the other hand, setting a new target template for every video frame image greatly increases the amount of computation. The prior art therefore lacks an effective method for updating the target template.
Summary of the invention
The purpose of the present invention is to provide a target tracking method, apparatus, electronic device and computer-readable storage medium that decide whether to update the template by combining the result fed back by the tracking model with the quality of the target image in the current frame, thereby improving the accuracy of the tracking result while avoiding excessive computation.
To achieve the above object, the present invention provides a target tracking method, the method comprising:
cropping a search image based on the current frame image;
inputting the search image and a target template image into a network tracking model to obtain confidence values and coordinate boxes, wherein the network tracking model is a tracking model built from a CNN and an RPN, and the current frame image is any video frame image of the video to be processed;
selecting a first target number of coordinate boxes as first coordinate boxes (candidate boxes) according to the confidence values and the coordinate boxes;
obtaining a tracking result according to historical tracking values and the candidate boxes;
obtaining the confidence value corresponding to the tracking result, and judging whether the confidence value falls within a first preset range;
if so, cropping the current frame image according to the target template image; and
when the sharpness of the cropped image is judged to be not lower than a preset sharpness, replacing the target template image with the cropped image corresponding to the current video frame image.
In one implementation of the invention, the step of cropping the current frame image according to the target template image comprises:
obtaining the interval, in frames, between the current video frame and the video frame corresponding to the target template image;
cropping the current frame image according to the target template image when the interval is greater than a preset value.
In one implementation of the invention, the step of selecting a first target number of coordinate boxes as first coordinate boxes according to the confidence values and the coordinate boxes comprises:
sorting the confidence values in descending order;
successively selecting the first target number of confidence values, and using the coordinate boxes corresponding to these confidence values as tracking target candidate boxes.
In one implementation of the invention, the step of obtaining a tracking result according to the historical tracking values and the candidate boxes comprises:
calculating the distance between each first coordinate box and a second coordinate box, wherein the second coordinate box is the coordinate box corresponding to the tracking result of a first video frame image, and the first video frame image is the video frame image preceding the current video frame image;
sorting the candidate boxes by the calculated IoU values in descending order, and taking a second number of candidate boxes in turn as new tracking target candidate boxes;
filtering the new tracking target candidate boxes according to the historical tracking coordinate boxes to obtain a motion trajectory prediction of the tracked target;
selecting the coordinate box with the highest confidence among them as the tracking result of the current frame.
In one implementation of the invention, the method further comprises:
adding the tracking result of the current frame to the historical tracking values;
judging whether the number of historical tracking values exceeds a second predetermined number;
if so, deleting the tracking values that have been stored the longest, in the order in which they were added.
In one implementation of the invention, when the target template image is the first target template image of the video to be processed, the target template image is obtained by:
obtaining the video to be processed, and cropping the target template image from the first frame image of the video to be processed.
In one implementation of the invention, the method further comprises:
not updating the target template image when the confidence value is judged not to fall within the first preset range.
In addition, the invention also discloses a target tracking apparatus, the apparatus comprising:
a first cropping module, configured to crop a search image based on the current frame image;
a first obtaining module, configured to input the search image and a target template image into a network tracking model to obtain confidence values and coordinate boxes, wherein the network tracking model is a tracking model built from a CNN and an RPN, and the current frame image is any video frame image of the video to be processed;
a selection module, configured to select a first target number of coordinate boxes as first coordinate boxes according to the confidence values and the coordinate boxes;
a second obtaining module, configured to obtain a tracking result according to historical tracking values and the candidate boxes;
a judgment module, configured to obtain the confidence value corresponding to the tracking result and judge whether the confidence value falls within a first preset range;
a second cropping module, configured to crop the current frame image according to the target template image when the judgment result of the judgment module is positive;
a replacement module, configured to replace the target template image with the cropped image corresponding to the current video frame image when the sharpness of the cropped image is not lower than a preset sharpness.
The invention further discloses an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of any one of the above target tracking methods when executing the computer program.
A computer-readable storage medium is also disclosed, on which a computer program is stored, wherein the computer program implements the steps of any one of the above target tracking methods when executed by a processor.
Therefore, with the target tracking method, apparatus, electronic device and computer-readable storage medium of the embodiments of the present invention, a search image is first cropped from the current frame image and input, together with the target template image, into the network tracking model to obtain confidence values and coordinate boxes, which are then post-processed to obtain a tracking result. After each round of tracking, the confidence value corresponding to the tracking result is obtained, and the current frame image is cropped when the confidence value falls within the first preset range; when the sharpness of the cropped image is not lower than the preset sharpness, the target template image is replaced with the cropped image corresponding to the current video frame. By applying an online template-update strategy to each tracking result and deciding during tracking whether the template needs to be updated, the template can better adapt to the various changes of the tracked target. The embodiments of the present invention therefore combine the result fed back by the tracking model with the quality of the target image in the current frame to decide whether to update the template, improving the accuracy of the tracking result while avoiding excessive computation.
Brief description of the drawings
Fig. 1 is a schematic flowchart of a target tracking method provided by an embodiment of the present invention.
Fig. 2 is a schematic diagram of the data flow of a target tracking method provided by an embodiment of the present invention.
Detailed description of the embodiments
The embodiments of the present invention are illustrated below by specific examples, and those skilled in the art can easily understand other advantages and effects of the present invention from the content disclosed in this specification. The present invention can also be implemented or applied through other different specific embodiments, and the details in this specification can be modified or changed from different viewpoints and for different applications without departing from the spirit of the present invention.
Referring to Figs. 1 and 2, it should be noted that the drawings provided in this embodiment only illustrate the basic concept of the invention in a schematic way; they show only the components related to the invention rather than the number, shape and size of the components in an actual implementation, in which the type, quantity and proportion of each component may change arbitrarily and the component layout may be more complex.
As shown in Figs. 1 and 2, an embodiment of the present invention provides a target tracking method, which comprises the following steps.
S101: crop a search image based on the current frame image.
During video target tracking, the video to be tracked may be in AVI, MP4, MKV or another format. The current video frame is the video frame image being analyzed and may be any video frame image in the video to be processed. After the previous video frame has been processed, a tracking result is obtained; for each subsequent video frame, the search image can then be cropped from the current frame image after enlarging a certain region around the tracking result of the previous frame. How exactly the current frame image is enlarged to obtain the search image is an existing process and is not repeated here in the embodiments of the present invention.
It should be noted that, because the motion of an object in a video is temporally continuous, the target position does not shift much between adjacent frames, so the position of the target in the current frame can be estimated from the tracking result of the previous frame image, although this estimate is not an accurate position. Based on this estimated position, the region is enlarged by a certain extent and the image is cropped from the current frame to obtain the search image; the target template and the search image are then matched by the network to obtain the accurate position of the target in the current frame.
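For illustration only, the following is a minimal sketch of this search-region cropping step, assuming a square context window roughly twice the size of the previous box and using OpenCV for padding and resizing; the function name and the specific padding scheme are assumptions, not taken from the disclosure.

```python
import cv2          # assumed available for padding/resizing
import numpy as np

def crop_search_region(frame, prev_box, context=2.0, out_size=255):
    """Crop an enlarged region around the previous frame's tracking box.

    frame: HxWx3 image; prev_box: (cx, cy, w, h) of the previous result.
    The box is enlarged by `context` and resized to a fixed search size.
    """
    frame_h, frame_w = frame.shape[:2]
    cx, cy, bw, bh = prev_box
    side = int(round(max(bw, bh) * context))      # side of the enlarged square window
    x1 = int(round(cx - side / 2))
    y1 = int(round(cy - side / 2))
    x2, y2 = x1 + side, y1 + side
    # pad with the mean color if the enlarged window leaves the frame
    pad = max(0, -x1, -y1, x2 - frame_w, y2 - frame_h)
    if pad > 0:
        mean = frame.mean(axis=(0, 1)).astype(frame.dtype)
        frame = cv2.copyMakeBorder(frame, pad, pad, pad, pad,
                                   cv2.BORDER_CONSTANT, value=mean.tolist())
        x1, y1, x2, y2 = x1 + pad, y1 + pad, x2 + pad, y2 + pad
    return cv2.resize(frame[y1:y2, x1:x2], (out_size, out_size))

# example: 640x480 frame, previous result centered at (320, 240) with size 60x80
frame = np.zeros((480, 640, 3), dtype=np.uint8)
search = crop_search_region(frame, prev_box=(320.0, 240.0, 60.0, 80.0))
```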
S102: input the search image and the target template image into the network tracking model to obtain confidence values and coordinate boxes, wherein the network tracking model is a tracking model built from a CNN and an RPN, and the current frame image is any video frame image of the video to be processed.
In practical applications, a template image is cropped, after enlarging the region, from the first frame image of the video according to the target that is specified in advance to be tracked, so as to obtain the initial target template image. Taking the second video frame image as the current video frame image as an example, after the search image is obtained from the second video frame image, the target template image and the search image are, as shown in Fig. 2, separately input into the CNN; the outputs of the CNN are then input into the RPN, and the RPN outputs confidence values and coordinate boxes.
It should be noted that a CNN consists of multiple convolutional layers. After an image is input into the network, each convolutional layer computes a multi-channel feature map, and the resulting feature map is fed into the next convolutional layer to compute a new feature map. There are many classical CNN architectures, such as AlexNet, VGG, Inception and ResNet. Taking AlexNet as an example, it has five convolutional layers interleaved with activation functions (ReLU), pooling layers (max pooling), fully connected layers, etc. When the fully connected layers are omitted, the network outputs a 256-channel feature map of size 6x6.
The RPN generates a fixed number of anchors at each feature point of the feature map according to given parameters such as the feature map size, scaling and scale. After passing through the RPN, the multi-channel feature map output by the CNN yields two output branches: a classification branch that outputs the confidence value of each anchor, and a regression branch that outputs the coordinate regression values of each anchor.
It should be noted that each coordinate box corresponds to one confidence value, that is, confidence values and coordinate boxes are in one-to-one correspondence.
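As a simplified, non-authoritative stand-in for the CNN + RPN tracking model described above (the disclosure does not specify the exact architecture), the sketch below shows how a shared backbone can embed the template and search images and how a depth-wise cross-correlation followed by two 1x1 heads yields the per-anchor confidence map and the per-anchor coordinate regression map; all layer sizes and the class name are assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinySiamRPN(nn.Module):
    """Toy CNN + RPN tracker head: a shared backbone embeds template and search
    images, depth-wise cross-correlation matches them, and two 1x1 convolutions
    produce the classification (confidence per anchor) and regression (4 offsets
    per anchor) branches."""

    def __init__(self, num_anchors=5, feat=64):
        super().__init__()
        self.backbone = nn.Sequential(            # shared by template and search
            nn.Conv2d(3, 32, 7, stride=2), nn.ReLU(),
            nn.MaxPool2d(3, 2),
            nn.Conv2d(32, feat, 3), nn.ReLU())
        self.cls_head = nn.Conv2d(feat, 2 * num_anchors, 1)
        self.reg_head = nn.Conv2d(feat, 4 * num_anchors, 1)

    def forward(self, template, search):
        # assumes batch size 1 for clarity
        zf = self.backbone(template)                      # (1, C, Hz, Wz)
        xf = self.backbone(search)                        # (1, C, Hx, Wx)
        kernel = zf.squeeze(0).unsqueeze(1)               # (C, 1, Hz, Wz)
        corr = F.conv2d(xf, kernel, groups=xf.shape[1])   # depth-wise correlation
        return self.cls_head(corr), self.reg_head(corr)

net = TinySiamRPN()
cls_map, reg_map = net(torch.rand(1, 3, 127, 127), torch.rand(1, 3, 255, 255))
# cls_map: (1, 2*A, H, W) confidence logits; reg_map: (1, 4*A, H, W) box offsets
```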
S103: select a first target number of coordinate boxes as first coordinate boxes according to the confidence values and the coordinate boxes.
It should be noted that there are multiple coordinate boxes: the deep neural network outputs a fixed number of bounding boxes, i.e. coordinate boxes.
In one implementation of the invention, the coordinate boxes are selected as follows: the confidence values are sorted in descending order, the first target number of confidence values are selected in turn, and the coordinate boxes corresponding to these confidence values are used as tracking target candidate boxes.
After the confidence values are sorted, the first target number of confidence values is chosen in descending order; since confidence values and coordinate boxes are in one-to-one correspondence, the first number of coordinate boxes corresponding to the first number of confidence values is obtained, and each of these coordinate boxes is used as a target candidate box.
S104: obtain the tracking result according to the historical tracking values and the candidate boxes.
In the embodiment of the present invention, the distance between each first coordinate box and the second coordinate box is calculated, wherein the second coordinate box is the coordinate box corresponding to the tracking result of the first video frame image, and the first video frame image is the video frame image preceding the current video frame image; the candidate boxes are sorted by the calculated IoU values in descending order, and a second number of candidate boxes is taken in turn as the new tracking target candidate boxes; the new tracking target candidate boxes are filtered according to the historical tracking coordinate boxes to obtain a motion trajectory prediction of the tracked target; and the coordinate box with the highest confidence among them is selected as the tracking result of the current frame.
It should be noted that the coordinate box distance in the embodiment of the present invention is measured with IoU (Intersection over Union), a simple and standard measure of how accurately a detection matches the corresponding object on a given data set. The IoU score is the standard performance measure for object segmentation problems: given a set of images, IoU measures the similarity between the predicted region of an object and its ground-truth region, which here means the similarity between the first coordinate box and the second coordinate box.
The IoU values are then sorted and the top second number of candidate boxes are retained as the new tracking target candidate boxes. The newly generated tracking target candidate boxes are then further filtered using the historical tracking coordinate box information. Based on the historical tracking coordinate boxes and the temporal characteristics of the video, the motion trajectory of the tracked target can be predicted; after filtering, the coordinate box with the highest confidence is selected as the best tracking result of the current frame.
Besides IoU, other measures can also be used, such as the distance between the coordinate box centers or the ratio of the overlapping area to the area of the previous frame's tracking result box.
Further, the best tracking result of the current frame is added to the historical tracking information table for use with subsequent video frames, while the number of entries in the table is kept bounded: once a second predetermined number is exceeded, the entry that was added earliest is deleted whenever a new entry is added, so that the length of the list remains constant.
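A rough sketch of this post-processing step (the specific numbers of retained candidates and history entries are assumptions): IoU against the previous frame's result ranks the candidates, the closest ones are kept, the most confident survivor becomes the tracking result, and the history buffer is maintained first-in-first-out.

```python
from collections import deque
import numpy as np

def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x1, y1, x2, y2)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def pick_tracking_result(cand_boxes, cand_scores, prev_box, history,
                         keep=3, max_hist=10):
    """Rank candidates by IoU with the previous result, keep the closest `keep`,
    return the most confident of those, and append it to the FIFO history."""
    ious = np.array([iou(b, prev_box) for b in cand_boxes])
    keep_idx = np.argsort(ious)[::-1][:keep]            # second-stage candidates
    best = keep_idx[np.argmax(cand_scores[keep_idx])]   # highest confidence among them
    history.append(cand_boxes[best])
    if len(history) > max_hist:                         # drop the oldest entry
        history.popleft()
    return cand_boxes[best], cand_scores[best]

history = deque()
prev = np.array([10.0, 10.0, 50.0, 50.0])
boxes = np.array([[11, 9, 52, 51], [12, 12, 49, 48], [200, 200, 240, 240]], dtype=float)
scores = np.array([0.80, 0.90, 0.95])
best_box, best_score = pick_tracking_result(boxes, scores, prev, history, keep=2)
# the far-away box is discarded by the IoU stage despite its higher confidence
```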
S105: obtain the confidence value corresponding to the tracking result, and judge whether the confidence value is within the first preset range; if so, execute step S106, otherwise do not perform the template update.
The tracking-result post-processing part outputs the best tracking result of the current frame, and at the same time it is judged whether the confidence of this best tracking result is within the first preset range. If the confidence of the current frame's tracking result is relatively low, the tracked target is very likely occluded or already lost; updating the template in this case would introduce erroneous features into the subsequent tracking network and cause the tracking process to fail. Conversely, if the confidence of the current frame's tracking result is very high, the tracked target still matches the template image well and has not changed much, so a template update does not need to be considered. In practical applications, the first preset range is chosen as [0.75, 0.95].
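A one-line check capturing this decision rule, with the [0.75, 0.95] range from the text as default bounds:

```python
def should_consider_update(confidence, low=0.75, high=0.95):
    """Consider replacing the template only when confidence is neither too low
    (likely occlusion or drift) nor too high (template still matches well)."""
    return low <= confidence <= high
```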
S106: crop the current frame image according to the target template image.
Based on the tracking result of the current frame image, a new candidate template image is obtained by enlarging the region according to the template image and cropping it from the current frame image.
It will be appreciated that if the target template image is replaced too frequently, the amount of computation increases without a corresponding improvement in tracking performance. The specific implementation of step S106 therefore includes: obtaining the interval, in frames, between the current video frame and the video frame corresponding to the target template image; and cropping the current frame image according to the target template image when the interval is greater than a preset value.
In one implementation of the invention, it is checked whether the interval between the current frame number and the frame number of the last template update exceeds a set interval threshold I_th; if the interval between the two frames is too short, the template does not need to be updated. Considering the balance between performance and efficiency, the interval threshold I_th was set to 50 during testing.
In actual use, if the interval between the current frame image and the last updated frame image exceeds I_th, the current frame number is saved as the latest update frame number; to avoid frame-number overflow during long-term use, the frame numbers are handled with a remainder (modulo) operation.
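A small sketch of this interval check; the threshold of 50 comes from the text, while the modulus used to keep frame numbers bounded is an assumed value.

```python
FRAME_GAP_THRESHOLD = 50           # I_th from the text
FRAME_COUNTER_MODULUS = 1_000_000  # assumed modulus to keep frame ids bounded

def far_enough_since_last_update(frame_id, last_update_id,
                                 gap=FRAME_GAP_THRESHOLD,
                                 mod=FRAME_COUNTER_MODULUS):
    """Frame ids are stored modulo a large constant so the counter never overflows;
    the elapsed gap is therefore also computed modulo that constant."""
    return ((frame_id - last_update_id) % mod) >= gap

# only after at least 50 frames since the last update is a template refresh considered
assert far_enough_since_last_update(120, 60)
assert not far_enough_since_last_update(80, 60)
```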
S107: when the sharpness of the cropped image is judged to be not lower than the preset sharpness, replace the target template image with the cropped image corresponding to the current video frame image.
After the image is cropped, the sharpness of the cropped candidate template image is evaluated and compared with that of the template image currently in use, to ensure that the sharpness of the template image does not degrade too much. Image sharpness can be evaluated with the Brenner, Tenengrad, SMD2 or energy-gradient functions, among others. If the sharpness of the candidate template image meets the sharpness requirement for a template image, the candidate template image is adopted as the new template image for processing the next video frame.
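As an illustration of one of the sharpness measures named above, here is a Brenner-gradient sketch together with an acceptance test; the 0.8 ratio used to compare candidate and current template sharpness is an assumption, not a value from the disclosure.

```python
import numpy as np

def brenner_sharpness(gray):
    """Brenner focus measure: sum of squared differences between pixels two
    columns apart; larger values indicate a sharper image."""
    gray = gray.astype(np.float64)
    diff = gray[:, 2:] - gray[:, :-2]
    return float(np.sum(diff ** 2))

def accept_new_template(candidate_gray, template_gray, ratio=0.8):
    """Accept the cropped candidate only if its sharpness has not dropped too far
    below that of the template currently in use (ratio is illustrative)."""
    return brenner_sharpness(candidate_gray) >= ratio * brenner_sharpness(template_gray)
```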
The result output by the tracking network model is post-processed in order to further improve the accuracy of the tracking result and the robustness of the tracking process. Post-processing can reduce tracking losses caused by occlusion, background interference and other factors, and by incorporating historical temporal information it can further refine the tracking result of the current frame and reduce tracking errors. However, the post-processing part depends on the confidence values and coordinate boxes output by the tracking network model; for situations such as scene changes or deformation of the tracked target, the online template-update part of the embodiments of the present invention judges, through the update strategy, whether the template needs to be updated, and by updating the template it reduces the interference to the tracking network model caused by changes of the tracked target relative to the template image.
A target tracking apparatus is also disclosed, the apparatus comprising:
a first cropping module, configured to crop a search image based on the current frame image;
a first obtaining module, configured to input the search image and a target template image into a network tracking model to obtain confidence values and coordinate boxes, wherein the network tracking model is a tracking model built from a CNN and an RPN, and the current frame image is any video frame image of the video to be processed;
a selection module, configured to select a first target number of coordinate boxes as first coordinate boxes according to the confidence values and the coordinate boxes;
a second obtaining module, configured to obtain a tracking result according to historical tracking values and the candidate boxes;
a judgment module, configured to obtain the confidence value corresponding to the tracking result and judge whether the confidence value falls within a first preset range;
a second cropping module, configured to crop the current frame image according to the target template image when the judgment result of the judgment module is positive;
a replacement module, configured to replace the target template image with the cropped image corresponding to the current video frame image when the sharpness of the cropped image is not lower than a preset sharpness.
The invention further discloses an electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, wherein the processor implements the steps of any one of the above target tracking methods when executing the computer program.
A computer-readable storage medium is also disclosed, on which a computer program is stored, wherein the computer program implements the steps of any one of the above target tracking methods when executed by a processor.
The above embodiments merely illustrate the principles and effects of the present invention and are not intended to limit the present invention. Anyone familiar with this technology may modify or change the above embodiments without departing from the spirit and scope of the present invention. Therefore, all equivalent modifications or changes completed by those of ordinary skill in the art without departing from the spirit and technical ideas disclosed by the present invention shall be covered by the claims of the present invention.
Claims (10)
1. A target tracking method, characterized in that the method comprises:
cropping a search image based on the current frame image;
inputting the search image and a target template image into a network tracking model to obtain confidence values and coordinate boxes, wherein the network tracking model is a tracking model built from a CNN and an RPN, and the current frame image is any video frame image of the video to be processed;
selecting a first target number of coordinate boxes as first coordinate boxes (candidate boxes) according to the confidence values and the coordinate boxes;
obtaining a tracking result according to historical tracking values and the candidate boxes;
obtaining the confidence value corresponding to the tracking result, and judging whether the confidence value falls within a first preset range;
if so, cropping the current frame image according to the target template image; and
when the sharpness of the cropped image is judged to be not lower than a preset sharpness, replacing the target template image with the cropped image corresponding to the current video frame image.
2. The target tracking method according to claim 1, characterized in that the step of cropping the current frame image according to the target template image comprises:
obtaining the interval, in frames, between the current video frame and the video frame corresponding to the target template image;
cropping the current frame image according to the target template image when the interval is greater than a preset value.
3. The target tracking method according to claim 1, characterized in that the step of selecting a first target number of coordinate boxes as first coordinate boxes according to the confidence values and the coordinate boxes comprises:
sorting the confidence values in descending order;
successively selecting the first target number of confidence values, and using the coordinate boxes corresponding to these confidence values as tracking target candidate boxes.
4. The target tracking method according to claim 3, characterized in that the step of obtaining a tracking result according to the historical tracking values and the candidate boxes comprises:
calculating the distance between each first coordinate box and a second coordinate box, wherein the second coordinate box is the coordinate box corresponding to the tracking result of a first video frame image, and the first video frame image is the video frame image preceding the current video frame image;
sorting the candidate boxes by the calculated IoU values in descending order, and taking a second number of candidate boxes in turn as new tracking target candidate boxes;
filtering the new tracking target candidate boxes according to the historical tracking coordinate boxes to obtain a motion trajectory prediction of the tracked target;
selecting the coordinate box with the highest confidence among them as the tracking result of the current frame.
5. The target tracking method according to claim 4, characterized in that the method further comprises:
adding the tracking result of the current frame to the historical tracking values;
judging whether the number of historical tracking values exceeds a second predetermined number;
if so, deleting the tracking values that have been stored the longest, in the order in which they were added.
6. The target tracking method according to claim 1, characterized in that, when the target template image is the first target template image of the video to be processed, the target template image is obtained by:
obtaining the video to be processed, and cropping the target template image from the first frame image of the video to be processed.
7. The target tracking method according to any one of claims 1 to 6, characterized in that the method further comprises:
not updating the target template image when the confidence value is judged not to fall within the first preset range.
8. A target tracking apparatus, characterized in that the apparatus comprises:
a first cropping module, configured to crop a search image based on the current frame image;
a first obtaining module, configured to input the search image and a target template image into a network tracking model to obtain confidence values and coordinate boxes, wherein the network tracking model is a tracking model built from a CNN and an RPN, and the current frame image is any video frame image of the video to be processed;
a selection module, configured to select a first target number of coordinate boxes as first coordinate boxes according to the confidence values and the coordinate boxes;
a second obtaining module, configured to obtain a tracking result according to historical tracking values and the candidate boxes;
a judgment module, configured to obtain the confidence value corresponding to the tracking result and judge whether the confidence value falls within a first preset range;
a second cropping module, configured to crop the current frame image according to the target template image when the judgment result of the judgment module is positive;
a replacement module, configured to replace the target template image with the cropped image corresponding to the current video frame image when the sharpness of the cropped image is not lower than a preset sharpness.
9. An electronic device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the steps of the target tracking method according to any one of claims 1 to 7 when executing the computer program.
10. A computer-readable storage medium on which a computer program is stored, characterized in that the computer program implements the steps of the target tracking method according to any one of claims 1 to 7 when executed by a processor.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910186561.8A CN110084829A (en) | 2019-03-12 | 2019-03-12 | Method for tracking target, device, electronic equipment and computer readable storage medium |
Publications (1)
Publication Number | Publication Date |
---|---|
CN110084829A true CN110084829A (en) | 2019-08-02 |
Family
ID=67412420
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910186561.8A Pending CN110084829A (en) | 2019-03-12 | 2019-03-12 | Method for tracking target, device, electronic equipment and computer readable storage medium |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110084829A (en) |
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN108062531A (en) * | 2017-12-25 | 2018-05-22 | 南京信息工程大学 | A kind of video object detection method that convolutional neural networks are returned based on cascade |
CN109033955A (en) * | 2018-06-15 | 2018-12-18 | 中国科学院半导体研究所 | A kind of face tracking method and system |
Non-Patent Citations (2)
Title |
---|
BO LI et al.: "High Performance Visual Tracking with Siamese Region Proposal Network", 2018 IEEE/CVF Conference on Computer Vision and Pattern Recognition *
LIU Yuhui et al.: "An Infrared Target Tracking Algorithm Based on Optimized Template Matching", Journal of Northeastern University (Natural Science) *
Cited By (40)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110472608A (en) * | 2019-08-21 | 2019-11-19 | 石翊鹏 | Image recognition tracking processing method and system |
CN110503095A (en) * | 2019-08-27 | 2019-11-26 | 中国人民公安大学 | Alignment quality evaluation method, localization method and the equipment of target detection model |
CN110503095B (en) * | 2019-08-27 | 2022-06-03 | 中国人民公安大学 | Positioning quality evaluation method, positioning method and device of target detection model |
CN110647836A (en) * | 2019-09-18 | 2020-01-03 | 中国科学院光电技术研究所 | Robust single-target tracking method based on deep learning |
CN111027370A (en) * | 2019-10-16 | 2020-04-17 | 合肥湛达智能科技有限公司 | Multi-target tracking and behavior analysis detection method |
CN110956646A (en) * | 2019-10-30 | 2020-04-03 | 北京迈格威科技有限公司 | Target tracking method, device, equipment and storage medium |
CN110956646B (en) * | 2019-10-30 | 2023-04-18 | 北京迈格威科技有限公司 | Target tracking method, device, equipment and storage medium |
CN110766724A (en) * | 2019-10-31 | 2020-02-07 | 北京市商汤科技开发有限公司 | Target tracking network training and tracking method and device, electronic equipment and medium |
CN110766724B (en) * | 2019-10-31 | 2023-01-24 | 北京市商汤科技开发有限公司 | Target tracking network training and tracking method and device, electronic equipment and medium |
CN110766725B (en) * | 2019-10-31 | 2022-10-04 | 北京市商汤科技开发有限公司 | Template image updating method and device, target tracking method and device, electronic equipment and medium |
CN110766725A (en) * | 2019-10-31 | 2020-02-07 | 北京市商汤科技开发有限公司 | Template image updating method and device, target tracking method and device, electronic equipment and medium |
CN111723632A (en) * | 2019-11-08 | 2020-09-29 | 珠海达伽马科技有限公司 | Ship tracking method and system based on twin network |
CN111723632B (en) * | 2019-11-08 | 2023-09-15 | 珠海达伽马科技有限公司 | Ship tracking method and system based on twin network |
CN110910391A (en) * | 2019-11-15 | 2020-03-24 | 安徽大学 | Video object segmentation method with dual-module neural network structure |
CN110910391B (en) * | 2019-11-15 | 2023-08-18 | 安徽大学 | Video object segmentation method for dual-module neural network structure |
CN112908031A (en) * | 2019-11-19 | 2021-06-04 | 丰田自动车株式会社 | Image processing system, processing device, relay device, and recording medium |
CN110930428A (en) * | 2020-02-19 | 2020-03-27 | 成都纵横大鹏无人机科技有限公司 | Target tracking method and device, electronic equipment and storage medium |
CN110930428B (en) * | 2020-02-19 | 2020-08-14 | 成都纵横大鹏无人机科技有限公司 | Target tracking method and device, electronic equipment and storage medium |
CN111696132A (en) * | 2020-05-15 | 2020-09-22 | 深圳市优必选科技股份有限公司 | Target tracking method and device, computer readable storage medium and robot |
CN111696132B (en) * | 2020-05-15 | 2023-12-29 | 深圳市优必选科技股份有限公司 | Target tracking method, device, computer readable storage medium and robot |
CN111898438A (en) * | 2020-06-29 | 2020-11-06 | 北京大学 | Multi-target tracking method and system for monitoring scene |
CN112001946A (en) * | 2020-07-14 | 2020-11-27 | 浙江大华技术股份有限公司 | Target object tracking method, computer equipment and device |
CN111862624B (en) * | 2020-07-29 | 2022-05-03 | 浙江大华技术股份有限公司 | Vehicle matching method and device, storage medium and electronic device |
CN111862624A (en) * | 2020-07-29 | 2020-10-30 | 浙江大华技术股份有限公司 | Vehicle matching method and device, storage medium and electronic device |
WO2022037587A1 (en) * | 2020-08-19 | 2022-02-24 | Zhejiang Dahua Technology Co., Ltd. | Methods and systems for video processing |
CN112132071A (en) * | 2020-09-27 | 2020-12-25 | 上海眼控科技股份有限公司 | Processing method, device and equipment for identifying traffic jam and storage medium |
CN112561956A (en) * | 2020-11-25 | 2021-03-26 | 中移(杭州)信息技术有限公司 | Video target tracking method and device, electronic equipment and storage medium |
CN112561956B (en) * | 2020-11-25 | 2023-04-28 | 中移(杭州)信息技术有限公司 | Video target tracking method and device, electronic equipment and storage medium |
CN112668497A (en) * | 2020-12-30 | 2021-04-16 | 南京佑驾科技有限公司 | Vehicle accurate positioning and identification method and system |
CN112967310A (en) * | 2021-02-04 | 2021-06-15 | 成都国翼电子技术有限公司 | FPGA-based template matching acceleration method |
CN112967310B (en) * | 2021-02-04 | 2023-07-14 | 成都国翼电子技术有限公司 | Template matching acceleration method based on FPGA |
CN112906558A (en) * | 2021-02-08 | 2021-06-04 | 浙江商汤科技开发有限公司 | Image feature extraction method and device, computer equipment and storage medium |
CN112906558B (en) * | 2021-02-08 | 2024-06-11 | 浙江商汤科技开发有限公司 | Image feature extraction method and device, computer equipment and storage medium |
CN113139985B (en) * | 2021-03-16 | 2022-09-16 | 北京理工大学 | Tracking target framing method for eliminating communication delay influence of unmanned aerial vehicle and ground station |
CN113139985A (en) * | 2021-03-16 | 2021-07-20 | 北京理工大学 | Tracking target framing method for eliminating communication delay influence of unmanned aerial vehicle and ground station |
CN114140494A (en) * | 2021-06-30 | 2022-03-04 | 杭州图灵视频科技有限公司 | Single-target tracking system and method in complex scene, electronic device and storage medium |
CN113808162A (en) * | 2021-08-26 | 2021-12-17 | 中国人民解放军军事科学院军事医学研究院 | Target tracking method and device, electronic equipment and storage medium |
CN113808162B (en) * | 2021-08-26 | 2024-01-23 | 中国人民解放军军事科学院军事医学研究院 | Target tracking method, device, electronic equipment and storage medium |
CN113869163A (en) * | 2021-09-18 | 2021-12-31 | 北京远度互联科技有限公司 | Target tracking method and device, electronic equipment and storage medium |
CN117934555A (en) * | 2024-03-21 | 2024-04-26 | 西南交通大学 | Vehicle speed identification method, device, equipment and medium based on deep learning |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | |
SE01 | Entry into force of request for substantive examination | |
RJ01 | Rejection of invention patent application after publication | Application publication date: 20190802 |