CN110473227A - Method for tracking target, device, equipment and storage medium - Google Patents

Method for tracking target, device, equipment and storage medium

Info

Publication number
CN110473227A
CN110473227A (application CN201910776111.4A)
Authority
CN
China
Prior art keywords
image
candidate
frame
target
region image
Prior art date
Legal status
Granted
Application number
CN201910776111.4A
Other languages
Chinese (zh)
Other versions
CN110473227B (en)
Inventor
廖家聪
卢毅
詹皓云
Current Assignee
Atlas Future (nanjing) Artificial Intelligence Research Institute Co Ltd
Original Assignee
Atlas Future (nanjing) Artificial Intelligence Research Institute Co Ltd
Priority date
Filing date
Publication date
Application filed by Atlas Future (Nanjing) Artificial Intelligence Research Institute Co Ltd
Priority to CN201910776111.4A
Publication of CN110473227A
Application granted
Publication of CN110473227B
Legal status: Active
Anticipated expiration

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T2207/00 Indexing scheme for image analysis or image enhancement
    • G06T2207/10 Image acquisition modality
    • G06T2207/10016 Video; Image sequence

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

This application provides a target tracking method, apparatus, device, and storage medium. The target tracking method includes: obtaining a first boundary region image in a previous frame image in a video stream; obtaining, on the current frame image in the video stream, a second boundary region image equal in size to the first boundary region image; performing correlation filtering on the first boundary region image and the second boundary region image to obtain a candidate coordinate; determining, according to the candidate coordinate, a first candidate region image and a second candidate region image, each equal in size to the bounding box image, in the first boundary region image and the second boundary region image respectively; and determining, from the first candidate region image and the second candidate region image, whether the tracked target in the current frame image is the same target as the tracked target in the previous frame image. The application can achieve accurate real-time tracking of a target in a low-frame-rate video stream under relatively low hardware requirements.

Description

Method for tracking target, device, equipment and storage medium
Technical field
This application relates to the technical field of target tracking, and in particular to a target tracking method, apparatus, device, and storage medium.
Background technique
At present, many business scenarios require tracking a specific target in surveillance video or mobile video. For example, face recognition in a surveillance scenario sometimes needs to combine the video sequence of a target rather than rely on a single frame. Tracking a target in a video stream is also the basis of some higher-order vision problems, such as behavior analysis and behavior prediction.
Current deep-learning-based tracking techniques fall broadly into two categories. The first is Detection-Based Tracking (DBT), which relies mainly on object detection followed by target matching through a Siamese network. Its advantage is accurate tracking at low frame rates, but it is relatively time-consuming, cannot meet real-time requirements, and has relatively high hardware costs. The second is Detection-Free Tracking (DFT), which must be initialized in a specific way and then locates the target in subsequent video frames. Its advantages are that many such methods can run in real time and hardware costs are lower; however, these methods are demanding on video frame rate and often perform very poorly on low-frame-rate video. They also have an obvious shortcoming: they cannot directly judge whether the target has been lost, nor whether a new target has appeared.
However, practical application scenarios such as surveillance and mobile video share two characteristics: first, the video frame rate is low (camera frame rates are around 15 fps, at which DFT performs very poorly), and second, hardware resources are scarce (DBT is too time-consuming to run in real time). Neither existing DBT nor DFT algorithms can achieve target tracking and detection in such scenarios.
Therefore, how to achieve target tracking in low-frame-rate video under low hardware requirements has become an urgent problem to be solved.
Summary of the invention
In view of this, the target tracking method, apparatus, device, and storage medium provided by the embodiments of this application can achieve accurate real-time tracking of a target in a low-frame-rate video stream under relatively low hardware requirements.
In a first aspect, an embodiment of this application provides a target tracking method. The method includes: obtaining a first boundary region image in a previous frame image in a video stream, where the first boundary region image includes the bounding box image of the previous frame image, and the bounding box image of the previous frame image includes the image of the tracked target; obtaining, on the current frame image in the video stream, a second boundary region image equal in size to the first boundary region image, where the position of the second boundary region image in the current frame image is the same as the position of the first boundary region image in the previous frame image; performing correlation filtering on the first boundary region image and the second boundary region image to obtain a candidate coordinate, where the candidate coordinate is the position of highest correlation between the first boundary region image and the second boundary region image; determining, according to the candidate coordinate, a first candidate region image and a second candidate region image, each equal in size to the bounding box image, in the first boundary region image and the second boundary region image respectively; and determining, from the first candidate region image and the second candidate region image, whether the tracked target in the current frame image is the same target as the tracked target in the previous frame image.
In the above implementation, the application obtains the first boundary region image in the previous frame image of the video stream and a second boundary region image of equal size on the current frame image, so only one second boundary region image needs to be cropped from the current frame image rather than multiple images, which reduces hardware resource costs. Further, after the candidate coordinate is determined, a first candidate region image and a second candidate region image, each equal in size to the bounding box image, are determined in the first and second boundary region images according to the candidate coordinate, and these two candidate region images are used to determine whether the tracked target in the current frame image is the same target as the tracked target in the previous frame image. This achieves accurate real-time tracking of the target in a low-frame-rate video stream and makes it possible to accurately determine whether the tracked target has disappeared.
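The per-frame flow just described (crop two equal-sized regions, correlate their feature maps, take the peak as the candidate coordinate, compare the two candidate regions) can be sketched roughly as follows. This is only an illustrative skeleton: `extract`, `correlate`, and `match` are placeholder names standing in for the pre-trained CNN, the correlation filter, and the matching head, not terminology from the patent, and boundary handling at the feature-map edges is omitted.

```python
import numpy as np

def track_step(prev_region, cur_region, bbox_hw, extract, correlate, match):
    """One tracking step: prev_region / cur_region are the equal-sized
    boundary-region crops from the previous and current frames; bbox_hw is
    the bounding-box size (h, w). Returns whatever the matching head returns
    (e.g. whether the two candidate regions show the same target)."""
    f1, f2 = extract(prev_region), extract(cur_region)      # feature maps
    score = correlate(f1, f2)                               # score map
    y, x = np.unravel_index(np.argmax(score), score.shape)  # candidate coordinate
    h, w = bbox_hw
    c1 = f1[y:y + h, x:x + w]                               # first candidate region
    c2 = f2[y:y + h, x:x + w]                               # second candidate region
    return match(c1, c2)

# Tiny sanity run with trivial stand-ins for the learned components:
prev = np.array([[9.0, 1.0], [1.0, 1.0]])
cur = np.array([[9.0, 2.0], [2.0, 2.0]])
same = track_step(prev, cur, (1, 1),
                  extract=lambda r: r,
                  correlate=lambda a, b: a * b,
                  match=lambda c1, c2: c1.shape == c2.shape)
```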
With reference to the first aspect, an embodiment of this application provides a first possible implementation of the first aspect. The method further includes: performing correction processing on the second candidate region image to determine the bounding box image of the current frame image, where the bounding box image of the current frame image is used to determine target tracking in a subsequent frame image of the current frame image.
In the above implementation, correction processing is performed on the second candidate region image to determine the bounding box image of the current frame image, so that the position and bounding box size of the tracked target in the current frame image can be obtained accurately. This better handles application scenarios in which the target changes shape in the video stream data and improves tracking accuracy. Further, using the bounding box image of the current frame image to determine target tracking in the subsequent frame image enables the application to better cope with changes in target shape in the video stream data.
With reference to the first possible implementation of the first aspect, an embodiment of this application provides a second possible implementation of the first aspect. Performing correction processing on the second candidate region image to determine the bounding box image of the current frame image includes: performing bounding box correction processing on the second candidate region image to obtain correction parameters of the bounding box, where the correction parameters include the offset of the bounding box; and determining the bounding box image of the current frame image according to the correction parameters.
In the above implementation, bounding box correction processing is performed on the second candidate region image to obtain correction parameters of the bounding box, including its offset, and the bounding box image of the current frame image is determined according to these correction parameters. The accurate position of the tracked target can thus be obtained, achieving precise tracking. Further, using the bounding box image of the current frame image to determine the bounding box image for target tracking in the subsequent frame image enables the application to better cope with changes in target shape in the video stream data.
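The patent does not fix a parameterisation for the bounding-box correction beyond saying it includes an offset. A common R-CNN-style convention (center shift scaled by box size, log-scale size change), used here purely as an assumed illustration, would look like:

```python
import numpy as np

def apply_bbox_correction(bbox, offsets):
    """Apply regressed correction parameters to a bounding box.
    bbox = (cx, cy, w, h); offsets = (dx, dy, dw, dh) from a box-regression
    head. This parameterisation is an assumption, not the patent's spec."""
    cx, cy, w, h = bbox
    dx, dy, dw, dh = offsets
    return (cx + dx * w,      # shift center by a fraction of the width
            cy + dy * h,      # shift center by a fraction of the height
            w * np.exp(dw),   # rescale width on a log scale
            h * np.exp(dh))   # rescale height on a log scale
```

With zero size offsets, only the center moves; e.g. a (0.5, 0.5, 0, 0) offset on a 4x2 box at (10, 10) shifts its center to (12, 11).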
With reference to the first aspect, an embodiment of this application provides a third possible implementation of the first aspect. Performing correlation filtering on the first boundary region image and the second boundary region image to obtain a candidate coordinate includes: inputting the first boundary region image and the second boundary region image into the same preset convolutional neural network to obtain a first feature map and a second feature map, where the preset convolutional neural network is a pre-trained convolutional neural network; performing correlation filtering on the first feature map and the second feature map to obtain the position of maximum correlation; and taking the position of maximum correlation as the candidate coordinate.
In the above implementation, the first and second boundary region images are input into the same pre-trained convolutional neural network to obtain a first feature map and a second feature map, correlation filtering is performed on the two feature maps to obtain the position of maximum correlation, and this position is taken as the candidate coordinate. Because the position of maximum correlation between the first and second feature maps is taken as the candidate coordinate, the most correlated candidate coordinate between the two frame images is obtained more accurately, facilitating accurate real-time tracking of the tracked target.
With reference to the first aspect, an embodiment of this application provides a fourth possible implementation of the first aspect. Determining, from the first candidate region image and the second candidate region image, whether the tracked target in the current frame image is the same target as the tracked target in the previous frame image includes: concatenating the first candidate region image and the second candidate region image to obtain a concatenated candidate feature map; determining the confidence corresponding to the candidate feature map; and determining, according to the confidence, whether the tracked target in the current frame image is the same target as the tracked target in the previous frame image.
In the above implementation, the first and second candidate region images are concatenated to obtain a concatenated candidate feature map, the confidence corresponding to the candidate feature map is determined, and whether the tracked target in the current frame image is the same target as the tracked target in the previous frame image is determined according to the confidence. It can thus be quickly judged whether the two are the same target, and in turn whether the tracked target has disappeared.
In a second aspect, an embodiment of this application provides a target tracking apparatus, including: a first obtaining unit, configured to obtain a first boundary region image in a previous frame image in a video stream, where the first boundary region image includes the bounding box image of the previous frame image and the bounding box image of the previous frame image includes the image of the tracked target; a second obtaining unit, configured to obtain, on the current frame image in the video stream, a second boundary region image equal in size to the first boundary region image, where the position of the second boundary region image in the current frame image is the same as the position of the first boundary region image in the previous frame image; a first processing unit, configured to perform correlation filtering on the first boundary region image and the second boundary region image to obtain a candidate coordinate, where the candidate coordinate is the position of highest correlation between the first boundary region image and the second boundary region image; a second processing unit, configured to determine, according to the candidate coordinate, a first candidate region image and a second candidate region image, each equal in size to the bounding box image, in the first boundary region image and the second boundary region image respectively; and a target tracking unit, configured to determine, from the first candidate region image and the second candidate region image, whether the tracked target in the current frame image is the same target as the tracked target in the previous frame image.
With reference to the second aspect, an embodiment of this application provides a first possible implementation of the second aspect. The apparatus further includes: a position processing unit, configured to perform correction processing on the second candidate region image to determine the bounding box image of the current frame image, where the bounding box image of the current frame image is used to determine target tracking in a subsequent frame image of the current frame image.
With reference to the first possible implementation of the second aspect, an embodiment of this application provides a second possible implementation of the second aspect. The position processing unit is further configured to: perform bounding box correction processing on the second candidate region image to obtain correction parameters of the bounding box, where the correction parameters include the offset of the bounding box; and determine the bounding box image of the current frame image according to the correction parameters.
With reference to the second aspect, an embodiment of this application provides a third possible implementation of the second aspect. The first processing unit is further configured to: input the first boundary region image and the second boundary region image into the same preset convolutional neural network to obtain a first feature map and a second feature map, where the preset convolutional neural network is a pre-trained convolutional neural network; perform correlation filtering on the first feature map and the second feature map to obtain the position of maximum correlation; and take the position of maximum correlation as the candidate coordinate.
With reference to the second aspect, an embodiment of this application provides a fourth possible implementation of the second aspect. The target tracking unit is further configured to: concatenate the first candidate region image and the second candidate region image to obtain a concatenated candidate feature map; determine the confidence corresponding to the candidate feature map; and determine, according to the confidence, whether the tracked target in the current frame image is the same target as the tracked target in the previous frame image.
In a third aspect, an embodiment of this application provides an electronic device, including: a memory, a processor, and a computer program stored in the memory and executable on the processor, where the processor, when executing the computer program, implements the steps of the target tracking method according to any implementation of the first aspect.
In a fourth aspect, an embodiment of this application provides a storage medium storing instructions that, when run on a computer, cause the computer to execute the target tracking method according to any implementation of the first aspect.
In a fifth aspect, an embodiment of this application provides a computer program product that, when run on a computer, causes the computer to execute the target tracking method according to any implementation of the first aspect.
Other features and advantages of the disclosure will be set forth in the following description; alternatively, some features and advantages can be deduced from, or unambiguously determined by, the specification, or learned by implementing the above techniques of the disclosure.
To make the above objects, features, and advantages of this application clearer and more comprehensible, preferred embodiments are described in detail below with reference to the appended drawings.
Detailed description of the invention
To more clearly illustrate the technical solutions in the embodiments of this application, the drawings needed in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of this application and therefore should not be regarded as limiting its scope. Those of ordinary skill in the art can obtain other relevant drawings from these drawings without creative effort.
Fig. 1 is a flow chart of a target tracking method provided by an embodiment of this application;
Fig. 2 is a structural schematic diagram of a network implementing the target tracking method shown in Fig. 1;
Fig. 3 is a flow chart of another target tracking method provided by an embodiment of this application;
Fig. 4 is a structural schematic diagram of a target tracking apparatus provided by an embodiment of this application;
Fig. 5 is a structural schematic diagram of an electronic device provided by an embodiment of this application.
Specific embodiment
The above drawbacks in the prior art are results the applicant obtained after practice and careful study. Therefore, the discovery process of the above problems and the solutions proposed below in the embodiments of this application should all be regarded as contributions made by the applicant to this application.
To make the purposes, technical solutions, and advantages of the embodiments of this application clearer, the technical solutions in the embodiments are described below with reference to the drawings.
Some embodiments of this application are elaborated below with reference to the drawings. In the absence of conflict, the features in the following embodiments can be combined with one another.
Referring to Fig. 1, a flow chart of the target tracking method provided by an embodiment of this application, it should be appreciated that the method shown in Fig. 1 can be executed by a target tracking apparatus, which may correspond to the electronic device shown in Fig. 5 below. The electronic device can be any of various devices capable of executing the method, for example a personal computer, a smartphone, or a server; the embodiments of this application are not limited in this respect. The method specifically includes the following steps:
Step S101: obtain a first boundary region image in a previous frame image in a video stream.
Optionally, the video stream is a low-frame-rate video stream; for example, a video stream whose frame rate is lower than 48 frames per second is a low-frame-rate video stream, such as a video stream of 15 frames per second or 5 frames per second.
Optionally, the previous frame image can be, but need not be, the frame immediately preceding the current frame image in time. For example, if the current frame is the t-th frame in the video stream, the previous frame can be any frame from the 1st frame to the (t-1)-th frame, for example the (t-1)-th frame or the (t-2)-th frame, where t is a positive integer.
Optionally, the first boundary region image includes the bounding box image of the previous frame image, and the bounding box image of the previous frame image includes the image of the tracked target.
Optionally, the shape of the bounding box image is a rectangle.
Optionally, the tracked target can be a face, or it can be another article, for example a controlled item such as a knife or a steel pipe. No specific limitation is made here.
Of course, in actual use, the tracked target can also be an animal, such as a dog or a cat.
Optionally, the tracked target is pre-specified; for example, it may be input by a background administrator, or defined in real time. No specific limitation is made here.
Optionally, the area of the first boundary region image can be 9 times that of the bounding box image, i.e., the width and height of the first boundary region image are each 3 times the width and height of the bounding box image.
Of course, in actual use, the first boundary region image can also be 1, 2, or 4 times the bounding box image, etc. No specific limitation is made here.
In the above implementation, cropping a region 3 times the size of the bounding box image of the previous frame image avoids the target being cropped incompletely due to the relatively large amplitude of target motion in low-frame-rate video, so that the tracked target in the resulting first boundary region image is complete, which in turn improves tracking accuracy.
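The 3x search-region crop described above might look like the following sketch. The zero-padding at frame borders is an assumption the patent does not specify; the `(x, y, w, h)` top-left bbox convention is likewise illustrative.

```python
import numpy as np

def crop_boundary_region(frame, bbox, scale=3):
    """Crop a region `scale` times the bounding box, centered on the bbox.
    frame: H x W (or H x W x C) array; bbox = (x, y, w, h), top-left + size.
    Out-of-frame parts are zero-padded so the crop always has size
    (scale*h, scale*w)."""
    x, y, w, h = bbox
    cx, cy = x + w // 2, y + h // 2                 # bbox center
    rw, rh = scale * w, scale * h                   # region size
    x0, y0 = cx - rw // 2, cy - rh // 2             # region top-left
    out = np.zeros((rh, rw) + frame.shape[2:], dtype=frame.dtype)
    sx0, sy0 = max(x0, 0), max(y0, 0)               # clip to the frame
    sx1 = min(x0 + rw, frame.shape[1])
    sy1 = min(y0 + rh, frame.shape[0])
    out[sy0 - y0:sy1 - y0, sx0 - x0:sx1 - x0] = frame[sy0:sy1, sx0:sx1]
    return out
```

For a 2x2 bbox at (4, 4) in a 10x10 frame, this yields a 6x6 region fully inside the frame.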
Step S102: obtain, on the current frame image in the video stream, a second boundary region image equal in size to the first boundary region image.
Optionally, the position of the second boundary region image in the current frame image is the same as the position of the first boundary region image in the previous frame image. That is, the region of the current frame image at the same position as the first boundary region image in the previous frame image is cropped out to obtain the second boundary region image.
In the above implementation, only the region of the current frame image at the same position as the first boundary region image in the previous frame image is cropped out, so only one image needs to be cropped, which reduces hardware resource costs. Furthermore, the application overcomes the prior-art approach of selecting, from bounding boxes of three fixed proportions, the one with the highest score as the bounding box of the next frame, an approach that cannot cope with application scenarios involving various scale changes. The application thus achieves the technical effect of being suitable for application scenarios with various scale changes of the target in video.
Step S103: perform correlation filtering on the first boundary region image and the second boundary region image to obtain a candidate coordinate.
Optionally, the candidate coordinate is the position of highest correlation between the first boundary region image and the second boundary region image.
As one implementation, step S103 includes: inputting the first boundary region image and the second boundary region image into the same preset convolutional neural network to obtain a first feature map and a second feature map, where the preset convolutional neural network is a pre-trained convolutional neural network; performing correlation filtering on the first feature map and the second feature map to obtain the position of maximum correlation; and taking the position of maximum correlation as the candidate coordinate. Specifically, the first boundary region image is input into a preset convolutional neural network (CNN) to obtain a first feature map (feature map1); the second boundary region image is input into the same CNN to obtain a second feature map (feature map2); correlation filtering is applied to the first and second feature maps to obtain a score map; and the position with the maximum value in the score map is taken as the position of the candidate region, i.e., the candidate coordinate.
Optionally, when displayed in data format, the first feature map and the second feature map are displayed in the form of multi-dimensional vectors/matrices.
For example, the first and second feature maps can each be characterized by a 5*5 vector/matrix.
It should be understood that the above example is merely illustrative and non-limiting.
Optionally, correlation filtering can be performed on the first feature map and the second feature map using a correlation filter, for example a MOSSE (Minimum Output Sum of Squared Error) filter or an ASEF (Average of Synthetic Exact Filters) correlation filter, to obtain the position of maximum correlation between the first and second feature maps.
In the above implementation, the first and second boundary region images are input into the same pre-trained convolutional neural network to obtain a first feature map and a second feature map, correlation filtering is performed on the two feature maps, and the position of maximum correlation is taken as the candidate coordinate. Because the position of maximum correlation between the first and second feature maps is taken as the candidate coordinate, the most correlated candidate coordinate between the two frame images is obtained more accurately, facilitating accurate real-time tracking of the tracked target.
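The "correlate the two feature maps and take the arg-max of the score map" step can be sketched with a plain frequency-domain cross-correlation. This is a deliberate simplification: MOSSE/ASEF learn regularised filters updated online, so only the score-map-to-candidate-coordinate logic is shown here, checked with synthetic single-impulse feature maps.

```python
import numpy as np

def candidate_coordinate(f1, f2):
    """Circularly cross-correlate two equal-sized feature maps in the
    frequency domain and return the (row, col) of the correlation peak."""
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    score = np.real(np.fft.ifft2(F1 * np.conj(F2)))   # score map
    return np.unravel_index(np.argmax(score), score.shape)

# Impulse at (2, 3) in f1 vs (5, 6) in f2 -> peak at the circular shift
# between them, i.e. ((2-5) % 8, (3-6) % 8) = (5, 5).
f1 = np.zeros((8, 8)); f1[2, 3] = 1.0
f2 = np.zeros((8, 8)); f2[5, 6] = 1.0
peak = candidate_coordinate(f1, f2)
```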
Step S104: determine, according to the candidate coordinate, a first candidate region image and a second candidate region image, each equal in size to the bounding box image, in the first boundary region image and the second boundary region image respectively.
As one implementation, a candidate region is determined centered on the candidate coordinate and sized to the bounding box image; the candidate region is cropped from the first feature map to obtain the first candidate region image (feature1), and cropped from the second feature map to obtain the second candidate region image (feature2).
Optionally, the candidate region can also be referred to as a region of interest (ROI).
Optionally, cropping the candidate region from the first feature map to obtain the first candidate region image includes: cropping, centered on the candidate coordinate on the first feature map, a first image of the same size as the bounding box image, and resizing the first image to a preset size to obtain the first candidate region image.
Here, the preset size refers to a preset picture size.
Optionally, the preset size can be configured according to user demand; no specific limitation is made here.
As an example, assuming the candidate coordinate is (x, y) and the size of the bounding box image is 3*3, the point with coordinate (x, y) is first found on the first feature map, then a region of 3*3 size centered on that point is cropped out and taken as the first candidate region image.
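The 3*3-around-(x, y) crop in this example can be written directly as a slice. A minimal sketch, assuming the window stays inside the feature map (the patent's resize-to-preset-size step and any boundary handling are omitted):

```python
import numpy as np

def crop_candidate(feature_map, center, size):
    """Cut a size x size candidate region centered on the candidate
    coordinate `center` = (y, x). Assumes the window lies fully inside
    the feature map."""
    y, x = center
    r = size // 2
    return feature_map[y - r:y + size - r, x - r:x + size - r]

fm = np.arange(49).reshape(7, 7)
patch = crop_candidate(fm, (3, 3), 3)   # 3x3 region centered at (3, 3)
```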
The specific process of obtaining the second candidate region image is the same as the above process of obtaining the first candidate region image, and is not repeated here.
Optionally, when displayed in data format, the first candidate region image and the second candidate region image are displayed in the form of one-dimensional vectors/matrices.
For example, the first and second candidate region images can each be characterized in the form of a 1*1024 vector/matrix.
It should be understood that the above example is merely illustrative and non-limiting.
Step S105: by means of the first candidate region image and the second candidate region image, determine whether the tracking target in the current frame image and the tracking target in the previous frame image are the same target.
As an implementation, step S105 includes: concatenating the first candidate region image and the second candidate region image to obtain a concatenated candidate feature map; determining a confidence corresponding to the candidate feature map; and determining, according to the confidence, whether the tracking target in the current frame image and the tracking target in the previous frame image are the same target.
Optionally, concatenation refers to joining the first candidate region image and the second candidate region image end to end, i.e., appending the second candidate region image after the first candidate region image.
Optionally, determining the confidence corresponding to the candidate feature map includes: inputting the candidate feature map into a fully connected layer to obtain the confidence corresponding to the candidate feature map.
Optionally, the fully connected layer is trained in advance and is used to compute the confidence of the input candidate feature map.
Of course, in actual use, the confidence corresponding to the candidate feature map may also be obtained based on global average pooling, which is not specifically limited here.
Continuing the above example, assume the first candidate region image and the second candidate region image are both 1×1024 vectors/matrices. Concatenating them then means appending the 1024 entries of the second candidate region image after the last entry of the first candidate region image, yielding a 1×2048 vector/matrix. For example, if the first candidate region image is (a0, a1, a2, …, a1023) and the second candidate region image is (b0, b1, b2, …, b1023), the concatenated candidate feature map obtained is (a0, a1, a2, …, a1023, b0, b1, b2, …, b1023).
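A minimal sketch of the concatenation and confidence step, assuming 1×1024 feature vectors. The uniform weight vector below is only a stand-in for the pre-trained fully connected layer, whose actual weights the patent does not disclose:

```python
import numpy as np

# The two candidate region images represented as 1x1024 feature vectors.
feature1 = np.arange(1024, dtype=np.float64)
feature2 = np.arange(1024, 2048, dtype=np.float64)

# End-to-end concatenation: feature2 is appended after feature1.
candidate = np.concatenate([feature1, feature2])
print(candidate.shape)   # (2048,)

# Stand-in for the pre-trained fully connected layer: a single weight row
# plus a sigmoid maps the 2048-dim vector to a confidence in (0, 1).
w = np.full(2048, 1e-7)  # placeholder weights, not learned
confidence = 1.0 / (1.0 + np.exp(-candidate @ w))
```

The resulting `confidence` is then compared against the preset threshold described below.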
It should be understood that the above is merely an example and non-limiting.
Optionally, determining according to the confidence whether the tracking target in the current frame image and the tracking target in the previous frame image are the same target includes: comparing the confidence with a preset threshold; when the confidence is greater than or equal to the preset threshold, determining that the tracking target in the current frame image and the tracking target in the previous frame image are the same target; conversely, when the confidence is less than the preset threshold, determining that the tracking target in the current frame image and the tracking target in the previous frame image are not the same target.
Optionally, the preset threshold may be set according to user demand and is not specifically limited here.
For example, the preset threshold may be a decimal, such as 0.8 or 0.9; of course, it may also be a percentage, such as 90%.
It should be understood that the above is merely an example and non-limiting.
In the above implementation, the first candidate region image and the second candidate region image are concatenated to obtain the concatenated candidate feature map; the confidence corresponding to the candidate feature map is determined; and whether the tracking target in the current frame image and the tracking target in the previous frame image are the same target is determined according to the confidence. Whether the two are the same target can thus be judged quickly, and in turn whether the tracking target has disappeared.
Of course, in actual use, the first candidate region image and the second candidate region image need not be concatenated. Specifically, a first confidence and a second confidence corresponding to the first candidate region image and the second candidate region image respectively are determined, and whether the tracking target in the current frame image and the tracking target in the previous frame image are the same target is determined according to the first confidence and the second confidence.
In a possible embodiment, the method further includes: sending feedback information after determining that the tracking target in the current frame image and the tracking target in the previous frame image are not the same target.
Optionally, the feedback information includes description information indicating that the tracking target has disappeared.
Of course, in actual use, the feedback information may also include description information indicating the specific reason why the tracking target disappeared, which is not specifically limited here.
In the above implementation, by sending feedback information, corresponding feedback can be provided after the target disappears, so that the user quickly learns that the tracking target has disappeared.
In a possible embodiment, the method further includes: performing correction processing on the second candidate region image to determine the bounding box image of the current frame image, the bounding box image of the current frame image being used to determine the target tracking in a subsequent frame image of the current frame image.
Optionally, the size of the corrected bounding box image of the current frame image may be equal to the size of the bounding box image of the previous frame image, or may be smaller than the size of the bounding box image of the previous frame image.
Of course, in actual use, the size of the bounding box image of the current frame image may also be larger than the size of the bounding box image of the previous frame image, which is not specifically limited here.
Optionally, the subsequent frame refers to a frame after the current frame in time; it need not be the immediately following frame.
Continuing the above example, assume the current frame is the t-th frame in the video stream. The subsequent frame may then be any frame after the t-th frame in the video stream; for example, it may be the (t+1)-th frame, or the (t+2)-th frame, etc.
In the above implementation, correction processing is performed on the second candidate region image to determine the bounding box image of the current frame image, so that the position and bounding box size of the tracking target in the current frame image can be obtained accurately. Application scenarios in which the target's form changes in the video stream data can thus be handled better, and the accuracy of tracking the tracking target is improved. Further, the bounding box image of the current frame image is used to determine the target tracking in the subsequent frame image of the current frame image.
Optionally, performing correction processing on the second candidate region image to determine the bounding box image of the current frame image includes: performing bounding box correction processing on the second candidate region image to obtain correction parameters of the bounding box, the correction parameters including offsets of the bounding box; and determining the bounding box image of the current frame image according to the correction parameters.
Optionally, bounding box correction processing may be performed on the second candidate region image based on a fully connected layer trained in advance to obtain the correction parameters of the bounding box. Specifically, the second candidate region image is input into the fully connected layer trained in advance, which outputs the correction parameters of the bounding box.
As an example, assume the bounding box is a rectangle, the coordinate of the top-left corner of the bounding box in the previous frame image is (x1, y1), and the coordinate of the bottom-right corner is (x2, y2). The second candidate region image is input into the fully connected layer trained in advance, which outputs the correction parameters of the bounding box: d_x1, d_y1, d_x2, d_y2. The position of the bounding box of the second candidate region image in the current frame is then corrected according to these offsets, yielding a corrected bounding box image of the current frame image whose top-left corner coordinate is (x1+d_x1, y1+d_y1) and whose bottom-right corner coordinate is (x2+d_x2, y2+d_y2).
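The offset arithmetic in this example can be sketched as follows; the function name and the sample numbers are illustrative only, not from the patent:

```python
def apply_bbox_correction(box, deltas):
    # box: (x1, y1, x2, y2) corners of the previous frame's bounding box.
    # deltas: (d_x1, d_y1, d_x2, d_y2) offsets predicted by the FC layer.
    x1, y1, x2, y2 = box
    d_x1, d_y1, d_x2, d_y2 = deltas
    return (x1 + d_x1, y1 + d_y1, x2 + d_x2, y2 + d_y2)

corrected = apply_bbox_correction((10, 20, 40, 60), (1, -2, 3, 4))
print(corrected)   # (11, 18, 43, 64)
```

The corrected corners become the bounding box image of the current frame, to be reused when tracking the subsequent frame.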
In the above implementation, bounding box correction processing is performed on the second candidate region image to obtain the correction parameters of the bounding box, the correction parameters including the offsets of the bounding box; the bounding box image of the current frame image is determined according to the correction parameters, so that the accurate position of the tracking target can be obtained and precise tracking achieved. Further, using the bounding box image of the current frame image to determine the bounding box image for target tracking in the subsequent frame image of the current frame image enables the application to better handle scenarios in which the target's form changes in the video stream data.
For example, at the current moment, the current frame is the frame being analyzed for tracking the target; at the next moment, the current frame becomes the previous frame, and the subsequent frame becomes the new current frame. Which frame is the previous frame, which is the current frame, and which is the subsequent frame are therefore determined by the time node; and as time passes, the bounding box in each frame may change.
In the above implementation, by obtaining the corrected bounding box image, continuous tracking can use the bounding box image of the current frame as the bounding box image of the previous frame for target tracking in the subsequent frame, so that an accurate first boundary region image can be obtained. Target tracking for all frames in the video stream is thus completed by cyclically executing steps S101 to S105; during tracking, target tracking is completed using the bounding box image of the previous frame, and when tracking completes, the position of the bounding box image of the previous frame is corrected, making target tracking more accurate and improving the real-time performance of tracking.
As an implementation, multiple grids (anchors) of fixed size are generated around the candidate region, the confidence (faceness) corresponding to each anchor is determined, and the position of the grid with the highest confidence among the multiple confidences is taken as the position of the bounding box in the current frame image.
Optionally, the size of the grids may be configured according to user demand or the size of the tracking target, which is not specifically limited here.
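A minimal sketch of the anchor selection described above, under the assumption that anchors are enumerated at fixed offsets around the candidate coordinate and scored by some faceness function; the offsets and the scorer below are placeholders, since the patent specifies neither:

```python
import numpy as np

def best_anchor(candidate_xy, anchor_size, offsets, score_fn):
    # Generate fixed-size anchors displaced from the candidate coordinate
    # and keep the one whose confidence (faceness) score is highest.
    cx, cy = candidate_xy
    w, h = anchor_size
    anchors = [(cx + dx - w // 2, cy + dy - h // 2, w, h) for dx, dy in offsets]
    scores = [score_fn(a) for a in anchors]
    return anchors[int(np.argmax(scores))]

# Dummy scorer: prefers anchors whose left edge is closest to x = 10.
score = lambda a: -abs(a[0] - 10)
offsets = [(-2, 0), (0, 0), (2, 0)]
print(best_anchor((12, 12), (4, 4), offsets, score))  # (10, 10, 4, 4)
```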
The target tracking method in the embodiment of the present application has been described above with reference to Fig. 1. In the following, a face is taken as the tracking target, as a non-limiting example, and the target tracking method in the embodiment of the present application is described in detail with reference to Fig. 2 and Fig. 3. The method shown in Fig. 3 includes:
Step S201: detect faces periodically.
Optionally, the tracking target is determined by performing face detection on the previous frame image.
For example, the tracking target may be determined by performing face detection based on the DPM (Deformable Parts Model) object detection algorithm, or based on a convolutional neural network.
It should be understood that the above is merely an example and non-limiting.
Optionally, after the tracking target is determined, the first boundary region image including the tracking target is cropped out. The specific process can refer to step S101 and is not specifically limited here.
Step S202: extract features and update the correlation filtering template.
Optionally, the first feature map and the second feature map are extracted from the detected face with a CNN.
Optionally, if the first feature map and the second feature map are obtained when face detection is performed for the first time on the low-frame-rate video stream, the correlation filter (Correlation Filter, CF) template is initialized with the first feature map and the second feature map.
Optionally, if the first feature map and the second feature map are those returned when judging whether the tracking target in the (t+1)-th frame image and the tracking target in the t-th frame image are the same target, the correlation filtering template is updated with the returned first feature map and second feature map.
Optionally, the first feature map and the second feature map are extracted by a CNN from the input first boundary region image and second boundary region image, respectively.
For example, as shown in Fig. 2, the target region of the t-th frame (i.e., the first boundary region image above) and the target region of the (t+1)-th frame (i.e., the second boundary region image above) are each input into the same CNN, yielding the first feature map and the second feature map respectively.
Step S203: obtain the candidate region.
For example, as shown in Fig. 2, the first feature map obtained from the target region of the t-th frame and the second feature map obtained from the target region of the (t+1)-th frame are input into the CF for correlation filtering, yielding a score map (ScoresMap). The position with the maximum score in the score map is taken as the position of the candidate region, i.e., the candidate coordinate; centered on the candidate coordinate, the size of the candidate region is determined by the size of the bounding box image in the previous frame image.
That is, CNN feature extraction combined with correlation filtering is used to relocate the tracking target in the subsequent video frame and obtain the candidate region of the tracking target.
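The score-map step can be sketched as a naive cross-correlation followed by an argmax. Practical correlation filters work in the Fourier domain for speed, but the brute-force sketch below, with made-up toy data, shows the same computation:

```python
import numpy as np

def score_map(template, search):
    # Naive sliding-window cross-correlation of a template feature patch
    # over a search feature map (the CF step, without the Fourier tricks).
    th, tw = template.shape
    out = np.empty((search.shape[0] - th + 1, search.shape[1] - tw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(template * search[i:i + th, j:j + tw])
    return out

def candidate_coordinate(scores):
    # The candidate coordinate is the position of the maximum score.
    y, x = np.unravel_index(np.argmax(scores), scores.shape)
    return int(x), int(y)

template = np.ones((2, 2))
search = np.zeros((6, 6))
search[3:5, 2:4] = 1.0   # the "target" sits at rows 3-4, cols 2-3
scores = score_map(template, search)
print(candidate_coordinate(scores))   # (2, 3)
```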
Step S204: correct the bounding box image.
Optionally, the bounding box image is corrected using correction parameters obtained with a CNN from the correlation filtering result. The specific process can refer to the correction processing performed on the second candidate region image described above and is not specifically limited here.
Continuing the above example, as shown in Fig. 2: on the first feature map, centered on the candidate coordinate, a first image identical in size to the bounding box image is cropped out and resized to the preset size to obtain the first candidate region image; on the second feature map, centered on the candidate coordinate, a second image identical in size to the bounding box image is cropped out and resized (ROIpooling, region-of-interest pooling), i.e., the second image is resized to the preset size, to obtain the second candidate region image. A fully connected layer (Full Connection Layer, FC) is attached after the first candidate region image to perform bounding box correction, so as to determine the bounding box image of the (t+1)-th frame image. The first candidate region image and the second candidate region image are concatenated and then fed into an FC to obtain the confidence; according to the confidence, it is judged whether the tracking target in the (t+1)-th frame image and the tracking target in the t-th frame image are the same target.
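The ROI-pooling resize mentioned above can be sketched as max-pooling over a grid of bins. A minimal sketch that assumes the patch is at least the preset size in each dimension (degenerate bins are not handled):

```python
import numpy as np

def roi_pool(patch, out_size):
    # Resize an arbitrary-sized candidate patch to a fixed preset size by
    # max-pooling over a grid of roughly equal bins (ROI pooling).
    oh, ow = out_size
    ys = np.linspace(0, patch.shape[0], oh + 1).astype(int)
    xs = np.linspace(0, patch.shape[1], ow + 1).astype(int)
    out = np.empty((oh, ow))
    for i in range(oh):
        for j in range(ow):
            out[i, j] = patch[ys[i]:ys[i + 1], xs[j]:xs[j + 1]].max()
    return out

patch = np.arange(16, dtype=float).reshape(4, 4)
pooled = roi_pool(patch, (2, 2))
print(pooled)   # [[ 5.  7.] [13. 15.]]
```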
Step S205: judge whether it is the tracking target.
Optionally, the confidence is compared with the preset threshold. When the confidence is greater than or equal to the preset threshold, it is determined that the tracking target in the current frame image and the tracking target in the previous frame image are the same target; conversely, when the confidence is less than the preset threshold, it is determined that the tracking target in the current frame image and the tracking target in the previous frame image are not the same target.
The specific process can refer to the description above and is not repeated here.
Optionally, if the judgment result is that it is the tracking target, that is, the tracking target in the current frame image and the tracking target in the previous frame image are the same target, step S202 is re-executed: the target's features are extracted from the second candidate region image with the corrected bounding box to update the correlation filtering template; updating the template enables correlation filtering to better cope with changes of the target's form in the video stream data. If the judgment result is that it is not the tracking target, the target has disappeared.
Steps S203 to S205 are repeated; when the tracking target disappears from the video, the flow jumps back to step S202.
Step S206: judge whether the video has ended.
Optionally, when the video has not ended, steps S201 to S205 are repeated until the video ends.
Optionally, the video ending means that monitoring is no longer performed, or that monitoring is complete, which is not specifically limited here.
In the above implementation, the network model used by the target tracking method provided by the embodiment of the present application is very small and places very low demands on hardware; it is therefore easy to port onto resource-scarce electronic devices.
According to the target tracking method provided by the embodiment of the present application, a first boundary region image of a previous frame image in a video stream is obtained, the first boundary region image including the bounding box image of the previous frame image, and the bounding box image of the previous frame image including an image of the tracking target; a second boundary region image equal in size to the first boundary region image is obtained on the current frame image in the video stream, the position of the second boundary region image in the current frame image being identical to the position of the first boundary region image in the previous frame image; correlation filtering is performed on the first boundary region image and the second boundary region image to obtain a candidate coordinate, the candidate coordinate being the position where the correlation between the first boundary region image and the second boundary region is highest; according to the candidate coordinate, a first candidate region image and a second candidate region image each equal in size to the bounding box image are determined in the first boundary region image and the second boundary region image respectively; and by means of the first candidate region image and the second candidate region image, it is determined whether the tracking target in the current frame image and the tracking target in the previous frame image are the same target. The present application can thus achieve the technical effect of accurate, real-time tracking of a target in a low-frame-rate video stream while reducing hardware resource cost.
Referring to Fig. 4, Fig. 4 shows a target tracking apparatus corresponding one-to-one to the target tracking method shown in Fig. 1. It should be understood that the apparatus 300 corresponds to the method embodiments of Figs. 1 to 3 above and can perform each step involved in the above method embodiments. For the specific functions of the apparatus 300, refer to the description above; detailed description is omitted here as appropriate to avoid repetition. The apparatus 300 includes at least one software function module that can be stored in the memory in the form of software or firmware or solidified in the operating system (operating system, OS) of the apparatus 300. Specifically, the apparatus includes:
a first acquisition unit 310, configured to obtain a first boundary region image of a previous frame image in a video stream, the first boundary region image including the bounding box image of the previous frame image, and the bounding box image of the previous frame image including an image of the tracking target;
a second acquisition unit 320, configured to obtain, on the current frame image in the video stream, a second boundary region image equal in size to the first boundary region image, the position of the second boundary region image in the current frame image being identical to the position of the first boundary region image in the previous frame image;
a first processing unit 330, configured to perform correlation filtering on the first boundary region image and the second boundary region image to obtain a candidate coordinate, the candidate coordinate being the position where the correlation between the first boundary region image and the second boundary region is highest;
a second processing unit 340, configured to determine, according to the candidate coordinate, in the first boundary region image and the second boundary region image respectively, a first candidate region image and a second candidate region image each equal in size to the bounding box image; and
a target tracking unit 350, configured to determine, by means of the first candidate region image and the second candidate region image, whether the tracking target in the current frame image and the tracking target in the previous frame image are the same target.
In a possible embodiment, the apparatus 300 further includes: a position processing unit, configured to perform correction processing on the second candidate region image to determine the bounding box image of the current frame image, the bounding box image of the current frame image being used to determine the target tracking in the subsequent frame image of the current frame image.
Optionally, the position processing unit is further configured to: perform bounding box correction processing on the second candidate region image to obtain correction parameters of the bounding box, the correction parameters including offsets of the bounding box; and determine the bounding box image of the current frame image according to the correction parameters.
Optionally, the first processing unit 330 is further configured to: input the first boundary region image and the second boundary region image into the same preset convolutional neural network respectively to obtain the first feature map and the second feature map, the preset convolutional neural network being a convolutional neural network trained in advance; perform correlation filtering on the first feature map and the second feature map to obtain the position with the maximum correlation; and take the position with the maximum correlation as the candidate coordinate.
Optionally, the target tracking unit 350 is further configured to: concatenate the first candidate region image and the second candidate region image to obtain a concatenated candidate feature map; determine the confidence corresponding to the candidate feature map; and determine, according to the confidence, whether the tracking target in the current frame image and the tracking target in the previous frame image are the same target.
The present application further provides an electronic device. Fig. 5 is a structural block diagram of the electronic device 400 in the embodiment of the present application. As shown in Fig. 5, the electronic device 400 may include a processor 410, a communication interface 420, a memory 430, and at least one communication bus 440. The communication bus 440 is used to implement direct connection communication among these components. The communication interface 420 of the device in the embodiment of the present application is used to communicate signaling or data with other node devices. The processor 410 may be an integrated circuit chip with signal processing capability.
The above processor 410 may be a general-purpose processor, including a central processing unit (Central Processing Unit, CPU for short), a network processor (Network Processor, NP for short), etc.; it may also be a digital signal processor (DSP), an application-specific integrated circuit (ASIC), a field-programmable gate array (FPGA) or another programmable logic device, a discrete gate or transistor logic device, or a discrete hardware component. It can implement or execute the methods, steps, and logical block diagrams disclosed in the embodiments of the present application. The general-purpose processor may be a microprocessor, or the processor 410 may also be any conventional processor, etc.
The memory 430 may be, but is not limited to, random access memory (Random Access Memory, RAM), read-only memory (Read Only Memory, ROM), programmable read-only memory (Programmable Read-Only Memory, PROM), erasable programmable read-only memory (Erasable Programmable Read-Only Memory, EPROM), electrically erasable programmable read-only memory (Electric Erasable Programmable Read-Only Memory, EEPROM), etc. Computer-readable instructions are stored in the memory 430, and when those computer-readable instructions are executed by the processor 410, the electronic device 400 can perform each step involved in the method embodiments of Figs. 1 to 3 above.
The memory 430 and the processor 410 are electrically connected with each other directly or indirectly to implement data transmission or interaction. For example, these elements may be electrically connected with each other through one or more communication buses 440. The processor 410 is configured to execute the executable modules stored in the memory 430, such as the software function modules or computer programs included in the apparatus 300. Furthermore, the apparatus 300 is configured to perform the following method: obtaining a first boundary region image of a previous frame image in a video stream, the first boundary region image including the bounding box image of the previous frame image, and the bounding box image of the previous frame image including an image of the tracking target; obtaining, on the current frame image in the video stream, a second boundary region image equal in size to the first boundary region image, the position of the second boundary region image in the current frame image being identical to the position of the first boundary region image in the previous frame image; performing correlation filtering on the first boundary region image and the second boundary region image to obtain a candidate coordinate, the candidate coordinate being the position where the correlation between the first boundary region image and the second boundary region is highest; determining, according to the candidate coordinate, in the first boundary region image and the second boundary region image respectively, a first candidate region image and a second candidate region image each equal in size to the bounding box image; and determining, by means of the first candidate region image and the second candidate region image, whether the tracking target in the current frame image and the tracking target in the previous frame image are the same target.
Optionally, the electronic device 400 may be a personal computer, a smartphone, a server, or a similar computing device.
It can be understood that the structure shown in Fig. 5 is merely illustrative; the electronic device 400 may also include more or fewer components than shown in Fig. 5, or have a configuration different from that shown in Fig. 5. Each component shown in Fig. 5 may be implemented in hardware, software, or a combination thereof.
The embodiment of the present application further provides a storage medium on which instructions are stored. When the instructions are run on a computer, the method described in the method embodiments is implemented; to avoid repetition, details are not described here again.
The present application further provides a computer program product which, when run on a computer, causes the computer to perform the method described in the method embodiments.
Through the above description of the embodiments, those skilled in the art can clearly understand that the present application may be implemented in hardware, or in software plus a necessary general-purpose hardware platform. Based on this understanding, the technical solution of the present application may be embodied in the form of a software product, which may be stored in a non-volatile storage medium (which may be a CD-ROM, a USB flash drive, a removable hard disk, etc.) and includes several instructions for causing a computer device (which may be a personal computer, a server, a network device, etc.) to perform the method of each implementation scenario of the present application.
The foregoing are merely preferred embodiments of the present application and are not intended to limit the present application; for those skilled in the art, various changes and variations of the present application are possible. Any modification, equivalent replacement, improvement, etc. made within the spirit and principles of the present application shall be included within the scope of protection of the present application. It should also be noted that similar reference numerals and letters denote similar items in the following figures; therefore, once an item is defined in one figure, it need not be further defined and explained in subsequent figures.

Claims (10)

1. A target tracking method, characterized in that the method comprises:
obtaining a first boundary region image of a previous frame image in a video stream, the first boundary region image comprising the bounding box image of the previous frame image, and the bounding box image of the previous frame image comprising an image of a tracking target;
obtaining, on a current frame image in the video stream, a second boundary region image equal in size to the first boundary region image, a position of the second boundary region image in the current frame image being identical to a position of the first boundary region image in the previous frame image;
performing correlation filtering on the first boundary region image and the second boundary region image to obtain a candidate coordinate, the candidate coordinate being a position where the correlation between the first boundary region image and the second boundary region is highest;
determining, according to the candidate coordinate, in the first boundary region image and the second boundary region image respectively, a first candidate region image and a second candidate region image each equal in size to the bounding box image; and
determining, by means of the first candidate region image and the second candidate region image, whether the tracking target in the current frame image and the tracking target in the previous frame image are the same target.
2. The method according to claim 1, characterized in that the method further comprises:
performing correction processing on the second candidate region image to determine the bounding box image of the current frame image, the bounding box image of the current frame image being used to determine target tracking in a subsequent frame image of the current frame image.
3. The method according to claim 2, characterized in that performing correction processing on the second candidate region image to determine the bounding box image of the current frame image comprises:
performing bounding box correction processing on the second candidate region image to obtain correction parameters of the bounding box, the correction parameters comprising offsets of the bounding box; and
determining the bounding box image of the current frame image according to the correction parameters.
4. The method according to claim 1, wherein the performing correlation filtering on the first boundary region image and the second boundary region image to obtain the candidate coordinates comprises:
inputting the first boundary region image and the second boundary region image separately into a same preset convolutional neural network to obtain a first feature map and a second feature map, the preset convolutional neural network being a pre-trained convolutional neural network;
performing correlation filtering on the first feature map and the second feature map to obtain a position of maximum correlation; and
taking the position of maximum correlation as the candidate coordinates.
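The correlation-filtering step of claim 4 can be illustrated with a plain FFT-based circular cross-correlation between two equal-sized maps. In the claim the maps would be CNN feature maps from the pre-trained network; here ordinary 2-D arrays stand in for them, and the function name is invented for the example:

```python
import numpy as np

def correlation_peak(feat1: np.ndarray, feat2: np.ndarray) -> tuple:
    """Circularly cross-correlate two equal-sized feature maps in the
    Fourier domain and return the (row, col) position of maximum
    correlation, i.e. the candidate coordinates of the claim."""
    f1 = np.fft.fft2(feat1)
    f2 = np.fft.fft2(feat2)
    # Cross-correlation via the convolution theorem
    corr = np.real(np.fft.ifft2(f1 * np.conj(f2)))
    return np.unravel_index(np.argmax(corr), corr.shape)
```

For a map correlated with itself, the peak sits at zero shift, which matches the intuition that the candidate coordinates mark where the two regions agree best.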
5. The method according to claim 1, wherein the determining, from the first candidate region image and the second candidate region image, whether the tracking target in the current frame image and the tracking target in the previous frame image are the same target comprises:
concatenating the first candidate region image and the second candidate region image to obtain a concatenated candidate feature map;
determining a confidence corresponding to the candidate feature map; and
determining, according to the confidence, whether the tracking target in the current frame image and the tracking target in the previous frame image are the same target.
6. A target tracking apparatus, comprising:
a first acquisition unit configured to acquire a first boundary region image in a previous frame image of a video stream, the first boundary region image containing a bounding box image of the previous frame image, and the bounding box image of the previous frame image containing an image of a tracking target;
a second acquisition unit configured to acquire, in a current frame image of the video stream, a second boundary region image equal in size to the first boundary region image, a position of the second boundary region image in the current frame image being the same as a position of the first boundary region image in the previous frame image;
a first processing unit configured to perform correlation filtering on the first boundary region image and the second boundary region image to obtain candidate coordinates, the candidate coordinates being the position of highest correlation between the first boundary region image and the second boundary region image;
a second processing unit configured to determine, according to the candidate coordinates, a first candidate region image in the first boundary region image and a second candidate region image in the second boundary region image, each equal in size to the bounding box image; and
a target tracking unit configured to determine, from the first candidate region image and the second candidate region image, whether the tracking target in the current frame image and the tracking target in the previous frame image are the same target.
7. The apparatus according to claim 6, further comprising:
a position processing unit configured to perform correction processing on the second candidate region image to determine a bounding box image of the current frame image, the bounding box image of the current frame image being used for target tracking in a subsequent frame image of the current frame image.
8. The apparatus according to claim 6, wherein the first processing unit is further configured to:
input the first boundary region image and the second boundary region image separately into a same preset convolutional neural network to obtain a first feature map and a second feature map, the preset convolutional neural network being a pre-trained convolutional neural network;
perform correlation filtering on the first feature map and the second feature map to obtain a position of maximum correlation; and
take the position of maximum correlation as the candidate coordinates.
9. An electronic device, comprising: a memory, a processor, and a computer program stored in the memory and executable on the processor, wherein the processor, when executing the computer program, implements the steps of the target tracking method according to any one of claims 1 to 5.
10. A storage medium storing instructions which, when run on a computer, cause the computer to execute the target tracking method according to any one of claims 1 to 5.
CN201910776111.4A 2019-08-21 2019-08-21 Target tracking method, device, equipment and storage medium Active CN110473227B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910776111.4A CN110473227B (en) 2019-08-21 2019-08-21 Target tracking method, device, equipment and storage medium

Publications (2)

Publication Number Publication Date
CN110473227A true CN110473227A (en) 2019-11-19
CN110473227B CN110473227B (en) 2022-03-04

Family

ID=68512684

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910776111.4A Active CN110473227B (en) 2019-08-21 2019-08-21 Target tracking method, device, equipment and storage medium

Country Status (1)

Country Link
CN (1) CN110473227B (en)

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120134541A1 (en) * 2010-11-29 2012-05-31 Canon Kabushiki Kaisha Object tracking device capable of detecting intruding object, method of tracking object, and storage medium
CN103325126A (en) * 2013-07-09 2013-09-25 中国石油大学(华东) Video target tracking method under circumstance of scale change and shielding
CN103793926A (en) * 2014-02-27 2014-05-14 西安电子科技大学 Target tracking method based on sample reselecting
CN106991396A (en) * 2017-04-01 2017-07-28 南京云创大数据科技股份有限公司 A kind of target relay track algorithm based on wisdom street lamp companion
CN107748873A (en) * 2017-10-31 2018-03-02 河北工业大学 A kind of multimodal method for tracking target for merging background information
CN108280845A (en) * 2017-12-26 2018-07-13 浙江工业大学 A kind of dimension self-adaption method for tracking target for complex background
CN108596946A (en) * 2018-03-21 2018-09-28 中国航空工业集团公司洛阳电光设备研究所 A kind of moving target real-time detection method and system
CN109255304A (en) * 2018-08-17 2019-01-22 西安电子科技大学 Method for tracking target based on distribution field feature
CN109344789A (en) * 2018-10-16 2019-02-15 北京旷视科技有限公司 Face tracking method and device
CN110009663A (en) * 2019-04-10 2019-07-12 苏州大学 A kind of method for tracking target, device, equipment and computer readable storage medium

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
LIYAN XIE et al.: "An online learning target tracking method based on extreme learning machine", 2016 12th World Congress on Intelligent Control and Automation (WCICA) *
HAO Shaohua: "Kernel-correlation target tracking algorithm based on candidate region detection", Video Engineering (《电视技术》) *

Cited By (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111652541A (en) * 2020-05-07 2020-09-11 美的集团股份有限公司 Industrial production monitoring method, system and computer readable storage medium
CN111652541B (en) * 2020-05-07 2022-11-01 美的集团股份有限公司 Industrial production monitoring method, system and computer readable storage medium
CN112215205A (en) * 2020-11-06 2021-01-12 腾讯科技(深圳)有限公司 Target identification method and device, computer equipment and storage medium
CN112215205B (en) * 2020-11-06 2022-10-18 腾讯科技(深圳)有限公司 Target identification method and device, computer equipment and storage medium
CN112529943A (en) * 2020-12-22 2021-03-19 深圳市优必选科技股份有限公司 Object detection method, object detection device and intelligent equipment
CN112529943B (en) * 2020-12-22 2024-01-16 深圳市优必选科技股份有限公司 Object detection method, object detection device and intelligent equipment
CN112819694A (en) * 2021-01-18 2021-05-18 中国工商银行股份有限公司 Video image splicing method and device
CN112819694B (en) * 2021-01-18 2024-06-21 中国工商银行股份有限公司 Video image stitching method and device
CN113808162A (en) * 2021-08-26 2021-12-17 中国人民解放军军事科学院军事医学研究院 Target tracking method and device, electronic equipment and storage medium
CN113808162B (en) * 2021-08-26 2024-01-23 中国人民解放军军事科学院军事医学研究院 Target tracking method, device, electronic equipment and storage medium

Similar Documents

Publication Publication Date Title
CN110473227A (en) Method for tracking target, device, equipment and storage medium
US11164323B2 (en) Method for obtaining image tracking points and device and storage medium thereof
US10474921B2 (en) Tracker assisted image capture
JP7179695B2 (en) Lane tracking method and device
KR20180105876A (en) Method for tracking image in real time considering both color and shape at the same time and apparatus therefor
US9721387B2 (en) Systems and methods for implementing augmented reality
US20150104067A1 (en) Method and apparatus for tracking object, and method for selecting tracking feature
CN109598744A (en) Video tracking method, apparatus, device and storage medium
CN115423846A (en) Multi-target track tracking method and device
CN113112542A (en) Visual positioning method and device, electronic equipment and storage medium
CN110782469A (en) Video frame image segmentation method and device, electronic equipment and storage medium
US11354923B2 (en) Human body recognition method and apparatus, and storage medium
CN113850136A (en) Yolov5 and BCNN-based vehicle orientation identification method and system
CN111798422B (en) Checkerboard corner recognition method, device, equipment and storage medium
WO2022093283A1 (en) Motion-based pixel propagation for video inpainting
CN112183529A (en) Quadrilateral object detection method, quadrilateral object model training method, quadrilateral object detection device, quadrilateral object model training device and storage medium
CN115326051A (en) Positioning method and device based on dynamic scene, robot and medium
CN111738085B (en) System construction method and device for realizing automatic driving simultaneous positioning and mapping
CN112614154A (en) Target tracking track obtaining method and device and computer equipment
CN111598005A (en) Dynamic capture data processing method and device, electronic equipment and computer storage medium
CN111507999B (en) Target tracking method and device based on FDSST algorithm
CN116363628A (en) Mark detection method and device, nonvolatile storage medium and computer equipment
CN110097061A (en) Image display method and apparatus
CN115690180A (en) Vector map registration method, registration system, electronic device and storage medium
CN114511897A (en) Identity recognition method, system, storage medium and server

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant