CN106709472A - Video target detecting and tracking method based on optical flow features - Google Patents

Video target detecting and tracking method based on optical flow features

Info

Publication number
CN106709472A
CN106709472A
Authority
CN
China
Prior art keywords
target
background
pixel
saliency
color
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201710034789.6A
Other languages
Chinese (zh)
Inventor
向北海
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Hunan Youxiang Technology Co Ltd
Original Assignee
Hunan Youxiang Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Hunan Youxiang Technology Co Ltd filed Critical Hunan Youxiang Technology Co Ltd
Priority to CN201710034789.6A priority Critical patent/CN106709472A/en
Publication of CN106709472A publication Critical patent/CN106709472A/en
Pending legal-status Critical Current


Classifications

    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/41 - Higher-level, semantic clustering, classification or understanding of video scenes, e.g. detection, labelling or Markovian modelling of sport events or news items
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/10 - Terrestrial scenes
    • G06V20/13 - Satellite images
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06V - IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 - Scenes; Scene-specific elements
    • G06V20/40 - Scenes; Scene-specific elements in video content
    • G06V20/46 - Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Abstract

The invention provides a video object detection and tracking method based on optical flow features. In the first step, the input image frame sequence is subjected to background sampling and the optical flow vector of each sampled pixel is calculated; the background motion is estimated with the Mean Shift algorithm, the overall saliency of the target is then estimated, and finally a threshold is set on the saliency detection result to separate the target region from the background region. In the second step, the video target is tracked: the target region is selected as the positive sample and the background region as the negative sample; the target is described with Haar features and a global color feature of the target, the original features are sampled and compressed with a random matrix, the similarity between the current target and the target of the previous frame is judged with the Bayesian criterion, and the target is continuously tracked with a particle filter algorithm. By fusing multiple features such as target motion saliency, color and texture, the method improves the success rate of target detection and achieves fast, effective and continuous tracking.

Description

Video object detection and tracking method based on optical flow features
Technical field
The invention belongs to the field of visual image processing, relates to object detection and tracking methods, and in particular to a video object detection and tracking method based on optical flow features.
Background art
Detection and tracking of moving targets is one of the hot and difficult topics in computer vision research, with broad application prospects in intelligent transportation, security surveillance and the military field. Unmanned aerial vehicles (UAVs) offer high maneuverability, high resolution, good concealment and flexible operation, so tracking and analyzing ground moving targets with a UAV-borne video sensor has important practical significance and theoretical value.
However, because the video sensor moves with the high-speed motion of the UAV, and because the background in the video sequence is complex and the moving-target information is diverse, the target detection and tracking problem becomes more difficult.
Moving target detection is the basis of target tracking and motion analysis. Depending on the camera motion, moving target detection can be divided into target detection under a static background and target detection under a moving background.
In recent years, researchers at home and abroad have proposed many moving target detection methods, mainly including the frame-difference method, background subtraction, expectation maximization and graph-theoretic methods under a static background, and the optical flow method, statistical model methods and level-set methods under a dynamic background. The frame-difference method subtracts the gray values of corresponding pixels in two consecutive frames and uses the magnitude of the difference to decide whether a pixel belongs to the background or to a moving target; these pixel regions are then used to compute the target position in the image. The method is simple and fast to compute, but differences in target size and background brightness easily degrade the detection result. Optical flow techniques have been introduced into moving target detection and tracking research; they require no prior knowledge of the scene, analyze the image dynamically according to the velocity feature of each pixel, and can detect independently moving targets.
Moving target tracking further analyzes the motion state of the target on the basis of detection and can be broadly divided into two classes. The first relies on prior knowledge of the moving target: the moving target is first detected, extracted from the image and modeled, and the corresponding target region in the next frame is then found with strategies such as template matching to realize tracking. The second does not rely on prior knowledge: moving targets are detected directly and the target of interest is tracked in the video sequence. Tracking algorithms are closely related to an effective representation of the target; description modes mainly include the centroid method, region method, feature-point method and edge-contour method, and can roughly be summarized into four classes: feature-based tracking, region-based tracking, contour-based tracking and model-based tracking. Feature-based tracking extracts salient features of the moving target; it is insensitive to target deformation and illumination changes and can still track under partial occlusion, but how to select and describe the target features so as to realize continuous tracking is the difficult point. Region-based tracking first segments the region containing the moving target with a detection method, represents it with a rectangular or elliptical template, and then matches the next frame with gray-level or color correlation algorithms; it achieves stable tracking when the target is not occluded, but the accuracy drops when the target deforms or the occluded area is large. Contour-based tracking processes the boundary contour of the target and obtains a closed curve from the global information of the contour; it requires no prior knowledge of the target, but needs manual initialization and is computationally expensive. Model-based tracking first establishes a shape model and a motion model of the target and then determines the model and motion parameters from the actual video sequence; however, obtaining an accurate true model of the target in a real scene is extremely difficult.
Summary of the invention
The object of the invention is to propose a video object detection and tracking method based on optical flow features. Its novelty lies in overcoming the hole effect of the traditional frame-difference and background-compensation methods when detecting targets under a moving background, avoiding the drift of the LK optical flow method at sparse-texture feature points by a local-variance non-uniform sampling method, and fusing multiple features such as target motion saliency, color and texture, thereby improving the success rate of target detection and realizing fast, effective and continuous tracking.
The technical scheme of the invention is as follows.
A video object detection and tracking method based on optical flow features is divided into two major steps. The first step is video object detection: background sampling is first applied to the input image frame sequence with an equal-interval sampling method based on the local neighborhood variance of the background, the optical flow vector of each sampled pixel is calculated, and the background motion is estimated with the Mean Shift algorithm; then, with target motion saliency as the primary cue and color saliency as the supplementary cue, the overall saliency of the target is estimated jointly by the optical flow method and a color saliency algorithm, and finally a threshold is set on the saliency detection result to separate the target region from the background region. The second step is video object tracking: the target region obtained by detection in the first step is chosen as the positive sample and the background region as the negative sample; the target is described with Haar features and a global color feature of the target, and the original features are sampled and compressed with a random matrix; the similarity between the current target and the target of the previous frame is judged with the Bayesian criterion, and finally the target is continuously tracked with a particle filter algorithm.
Specifically, the video object detection and tracking method based on optical flow features of the invention comprises the following steps:
S1. Video object detection
S11. A UAV video sequence is input, and background sampling is applied to the input image frame sequence with an equal-interval sampling method based on the local neighborhood variance of the background. The local neighborhood variance of each pixel is computed with a box-filter method (e.g. over a 5*5 local window); a threshold is set on the local variance distribution map of the background image, the image is sampled region by region, part of the sampling points unfavorable to LK matching are filtered out, and the sampling points at the target position are retained.
Let the width of the current input image be W and its height H, and define a box rectangle template of width m and height n inside the image region. Black squares represent the pixels of the image region, each black square being one pixel; the 1# rectangle denotes the box rectangle template, the 2# rectangle denotes the region formed by the input image width W and the box template height n, and the 3# rectangle is a 1*m rectangle. The grey squares in the 3# rectangle are denoted buff and store intermediate variables of the computation. Starting from the upper-left corner (x, y) = (0, 0) of the box rectangle template, the template slides to the right pixel by pixel and moves to the start of the next row (0, 1) when the end of a row is reached, and so on. The pixels of each column inside the 2# rectangle are summed and each column sum is stored in a grey square buff; the pixels inside the 3# rectangle are then summed, the result being the sum of the pixels inside the box rectangle template (the 1# rectangle), which is stored in a pre-initialized array A, completing the first summation. The pixel sum Sum[i] at the current position of the 3# rectangle equals the pixel sum Sum[i-1] before the rightward move, minus the first grey square buff[x-1] of the 3# rectangle, plus the grey square buff[x+m-1] at its right edge; that is, the sum of all pixels inside the sliding box is obtained indirectly by updating buff, as expressed below:
buff[x] = Σ_{y=0}^{n-1} I(x, y), x = 0, 1, ..., W-1; Sum[i] = Sum[i-1] - buff[x-1] + buff[x+m-1]   (1)
where the initial value of i is 0. Similarly, SquareSum is defined as the sum of squared pixels inside the sliding box: the squared-pixel sum SquareSum[i] at the current position of the 3# rectangle equals the squared-pixel sum SquareSum[i-1] before the rightward move, minus the first grey square squared-pixel sum SquareBuff[x-1] of the 3# rectangle, plus the grey square squared-pixel sum SquareBuff[x+m-1] at its right edge, so that:
SquareBuff[x] = Σ_{y=0}^{n-1} I(x, y)·I(x, y), x = 0, 1, ..., W-1; SquareSum[i] = SquareSum[i-1] - SquareBuff[x-1] + SquareBuff[x+m-1]   (2)
Combining expressions (1) and (2), the local neighborhood variance Var(x, y) of the background image is obtained as shown in expression (3):
Var(x, y) = SquareSum(x, y) - [Sum(x, y)/(m*n)]^2   (3)
After the local neighborhood variance distribution map of the background image is computed, a variance threshold is set. Where the local neighborhood variance of the background exceeds the set variance threshold, a smaller interval threshold is set (e.g. a sampling interval of 2), i.e. the sampling density is large and more sampling points are obtained; where the local neighborhood variance is below the set variance threshold, a larger interval threshold is set (e.g. a sampling interval of 10), i.e. the sampling density is small and fewer sampling points are obtained. This filters out the sampling points unfavorable to LK matching while retaining the sampling points at the target position. Regions with too small a variance are unfavorable to the LK algorithm as match points, but in order to retain the relative position of the target in the image the low-variance regions are not filtered out completely; instead, the high-variance regions are sampled more, because their sampling points favor LK matching, and the low-variance regions are sampled less, only to retain the relative position of the target in the image.
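As an illustration of step S11 (not part of the patent text), the sliding-window sums of expressions (1)-(3) amount to an unnormalized box filter, so the variance map and the non-uniform sampling can be sketched in Python/OpenCV as follows; the 5*5 window and the sampling intervals 2 and 10 follow the values above, while the function names and the variance threshold `var_thresh` are assumptions.

```python
import numpy as np
import cv2

def local_variance_map(gray, m=5, n=5):
    """Local neighborhood variance via box filtering, following expressions (1)-(3).
    As in expression (3) of the text, the squared sum is left unnormalized."""
    gray = gray.astype(np.float64)
    sum_box = cv2.boxFilter(gray, -1, (m, n), normalize=False)            # Sum(x, y)
    sqsum_box = cv2.boxFilter(gray * gray, -1, (m, n), normalize=False)   # SquareSum(x, y)
    return sqsum_box - (sum_box / (m * n)) ** 2                           # Var(x, y)

def nonuniform_sample_points(gray, var_thresh, dense_step=2, sparse_step=10):
    """Sample densely (interval 2) where the local variance exceeds the threshold and
    sparsely (interval 10) elsewhere, so that textured regions favorable to LK matching
    contribute more points while the target position is still retained."""
    var_map = local_variance_map(gray)
    h, w = gray.shape
    points = []
    for y in range(0, h, dense_step):
        for x in range(0, w, dense_step):
            step = dense_step if var_map[y, x] > var_thresh else sparse_step
            if x % step == 0 and y % step == 0:
                points.append((x, y))
    return np.array(points, dtype=np.float32)
```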
S12. From the pixel point set obtained by the sampling of step S11, the optical flow vector of each pixel in the set is calculated with the LK optical flow method, and the motion vector of the background is estimated with the Mean Shift algorithm.
The optical flow vector of each pixel in the sampled pixel point set S is calculated with the LK optical flow method. The j-th sampled point is expressed as S_j = {C_j, V_j}, where C_j is the coordinate of the sampled point and V_j is its displacement. All sampled points are mapped onto polar coordinates (r, θ); with N the number of sampled points, the mean of V_j is taken as the initial value of the Mean Shift iteration:
V_ms(r, θ) = (Σ_{j=0}^{N} V_j) / N   (4)
The position of maximum kernel density is found by the Mean Shift iteration, and the kernel center coordinate is the motion vector V_b of the background.
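For illustration only, the sparse LK flow of the sampled points and a simple mean-shift over the displacement vectors can be sketched as below; the kernel bandwidth `h`, the iteration count and the flat kernel are assumptions, and OpenCV's pyramidal LK stands in for the LK optical flow method of the text.

```python
import numpy as np
import cv2

def background_motion(prev_gray, next_gray, points, h=3.0, iters=20):
    """LK optical flow on the sampled points, then a mean-shift over the
    displacement vectors to locate the densest mode, taken as V_b."""
    p0 = points.reshape(-1, 1, 2)
    p1, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, p0, None)
    ok = status.ravel() == 1
    V = (p1 - p0).reshape(-1, 2)[ok]          # displacement V_j of each sampled point

    mode = V.mean(axis=0)                     # initial value, expression (4)
    for _ in range(iters):                    # flat-kernel mean-shift iteration
        in_kernel = np.linalg.norm(V - mode, axis=1) < h
        if not in_kernel.any():
            break
        new_mode = V[in_kernel].mean(axis=0)
        if np.linalg.norm(new_mode - mode) < 1e-3:
            break
        mode = new_mode
    return mode                               # background motion vector V_b
```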
S13. The overall target saliency is estimated, distinguishing the case of no moving target in the background from the case of a moving target in the background. The overall target saliency is estimated in the two cases as follows:
(1) When there is no moving target in the background, the overall saliency distribution is estimated as the motion saliency distribution of the background image, defined over the local neighborhood K of each sampled point C_i, where S_m denotes the set of motion-vector differences between each sampled point and the surrounding background. The difference between the background motion vector V_b and the displacement V_j of the j-th sampled point reflects the degree of difference between the sampled point and the background; the larger this difference, the larger the motion saliency of the background image.
(2) When there is a moving target in the background, the overall target saliency is estimated with the motion saliency distribution as the primary cue and the color saliency as the supplementary cue.
The computation of motion saliency has been given in case (1) for a background without a moving target. When there is a moving target in the background, the motion saliency S_m is calculated with the same method as in (1).
The color saliency is computed with the Center-Surround model based on color contrast, from the degree of feature difference of a pixel in its neighborhood space, and is defined as S_c = P_c * G, where S_c is the color saliency of a pixel, P_c is the probability value corresponding to that pixel in the pixel saliency probability distribution map of the neighborhood, and G is the Gaussian function over the neighborhood. The color saliency distribution map of the image is computed from the color saliency; the larger the brightness on the map, the more salient the pixel.
Then S = α·S_m + (1-α)·S_c is used to linearly weight the motion saliency S_m and the color saliency S_c and estimate the overall target saliency, with α = 0.6.
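A minimal sketch of the saliency fusion of step S13 and the thresholding of step S14, assuming the motion and color saliency maps are already normalized to [0, 1]; the weight 0.6 and the threshold 0.1 are the values stated in the text, while the center-surround color saliency shown here (local mean contrast smoothed by a Gaussian) is only one plausible reading of S_c = P_c * G, not the patent's reference implementation.

```python
import numpy as np
import cv2

def color_saliency(bgr, ksize=15, sigma=3.0):
    """Center-surround style color saliency: per-pixel contrast against the local
    neighborhood mean, smoothed with a Gaussian (one plausible reading of S_c = P_c * G)."""
    lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
    surround = cv2.blur(lab, (ksize, ksize))              # neighborhood mean
    contrast = np.linalg.norm(lab - surround, axis=2)     # center-surround difference
    sal = cv2.GaussianBlur(contrast, (0, 0), sigma)       # smooth with Gaussian G
    return sal / (sal.max() + 1e-6)

def overall_saliency(motion_sal, color_sal, alpha=0.6):
    """Linear fusion S = alpha*S_m + (1-alpha)*S_c as in step S13."""
    return alpha * motion_sal + (1.0 - alpha) * color_sal

def segment_target(saliency, thresh=0.1):
    """Step S14: pixels whose saliency exceeds the background level (about 0.1)
    are taken as the target region."""
    return (saliency > thresh).astype(np.uint8)
```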
S14. A segmentation threshold is set according to the overall target saliency distribution estimated in S13, and the target region is separated from the background region.
Analysis and comparison of a large amount of video data show that the background saliency is about 0.1; this value is therefore taken as the segmentation threshold, and the target region is separated from the background region.
S2. Video object tracking
S21. After the target region has been separated from the background region, the target edge is labeled and the target edge E is extracted. Taking E as the starting position and setting a radius r, positive sample images are sampled within the range E+r and negative sample images are sampled outside the range E-r, where the positive sample images contain the target and the negative samples are background images.
S22. The positive sample images obtained in S21 are used to describe the target: the target texture feature is described with Haar features, and the target color feature is described with a YUV local color histogram model.
The method of describing the target texture feature with Haar features is as follows. Let T_i and t_i be two classes of non-repeating random rectangle convolution windows, I the input image and g the characteristic function that defines the Haar feature, with j ∈ [1, 2], j ∈ N. A compare threshold for the responses of the two rectangle convolution windows T_i and t_i, and a difference threshold for the gap between them, are set. When both responses are smaller than the compare threshold and their difference is smaller than the difference threshold, i.e. when both values are small, g describes the edge feature of the target. When both responses are larger than the compare threshold and their difference is smaller than the difference threshold, i.e. when both values are large, g describes the global feature of the target. When one response is larger than the compare threshold, the other is smaller than the compare threshold and their difference exceeds the difference threshold, i.e. when one value is small, one is large and the gap between them is wide, g describes the local feature distribution of the target. When the two windows take different values, different image characteristics of the target are emphasized; the specific value conditions and threshold settings depend on the target feature to be extracted for tracking.
The YUV local color histogram model is developed on the basis of the gray-level model; it contains both the gray-level information and the color information of the target, describes the target color more accurately, and is one of the common methods of color description.
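For illustration, a YUV local color histogram of a target patch can be computed as below; the bin counts and the normalization are assumptions rather than values given in the patent.

```python
import numpy as np
import cv2

def yuv_color_histogram(bgr_patch, bins=(8, 8, 8)):
    """Local YUV color histogram of a target patch: keeps both the luminance (Y)
    and the chrominance (U, V) information, L1-normalized for comparison."""
    yuv = cv2.cvtColor(bgr_patch, cv2.COLOR_BGR2YUV)
    hist = cv2.calcHist([yuv], [0, 1, 2], None, list(bins),
                        [0, 256, 0, 256, 0, 256])
    hist = hist.flatten()
    return hist / (hist.sum() + 1e-6)
```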
S23. The Haar features obtained in S22 are sampled and compressed with a sparse-matrix compression method, the similarity between the current target and the target of the previous frame is judged with the Bayesian criterion, and the target is continuously tracked with a particle filter algorithm.
The target position and the Haar features are initialized; particles are delivered uniformly with equal weights around the target within a radius R, and the similarity of each particle to the target is calculated with the Bayesian criterion, as shown in formula (5):
H_i(v) = log( Π_{i=1}^{n} p(v_i|y=1)·p(y=1) / Π_{i=1}^{n} p(v_i|y=0)·p(y=0) ) = Σ_{i=1}^{n} log( p(v_i|y=1) / p(v_i|y=0) )   (5)
Here the weighted centroid of all particles is taken as the circle center, and the delivered particles are divided into n levels according to their radial distance from the center. p(y=1) denotes the probability that a particle belongs to the positive sample region and p(y=0) the probability that it belongs to the negative sample region; p(v_i|y=1) denotes the probability that a particle located in the i-th level belongs to the positive sample region, and p(v_i|y=0) the probability that a particle located in the i-th level belongs to the negative sample region.
According to probability theory and mathematical statistics, the probabilities of the positive sample p(v_i|y=1) and the negative sample p(v_i|y=0) can be approximated by the mean (expectation) and the standard deviation, i.e. p(v_i|y=1) ~ N(μ_1, δ_1) and p(v_i|y=0) ~ N(μ_0, δ_0), where N denotes a discrete random-variable function, μ_1 and δ_1 are the mean and standard deviation of the positive sample, and μ_0 and δ_0 are the mean and standard deviation of the negative sample. The final target position is obtained by weighting all particles, as shown in formula (6), where L_i denotes the position of each particle and L' denotes the final target position:
L' = Σ H_i(v)·L_i / Σ H_i(v)   (6)
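A compact sketch of the particle scoring of formula (5) and the weighted position estimate of formula (6), under the assumption that each particle's compressed feature vector is scored with naive-Bayes Gaussian models learned from the positive and negative samples; feature extraction and particle delivery are omitted, and the shift that keeps the weights non-negative is an implementation assumption rather than part of the formula.

```python
import numpy as np

def log_likelihood_ratio(v, mu1, sigma1, mu0, sigma0):
    """Formula (5): sum of per-feature Gaussian log-likelihood ratios between the
    positive-sample model (mu1, sigma1) and the negative-sample model (mu0, sigma0)."""
    def log_gauss(x, mu, sigma):
        sigma = np.maximum(sigma, 1e-6)
        return -0.5 * ((x - mu) / sigma) ** 2 - np.log(sigma)
    return np.sum(log_gauss(v, mu1, sigma1) - log_gauss(v, mu0, sigma0))

def estimate_position(particle_positions, particle_features, mu1, sigma1, mu0, sigma0):
    """Formula (6): weight each particle position L_i by its score H_i(v)."""
    scores = np.array([log_likelihood_ratio(v, mu1, sigma1, mu0, sigma0)
                       for v in particle_features])
    # Shift scores so the weights stay non-negative (an assumption; formula (6)
    # uses H_i(v) directly).
    weights = scores - scores.min() + 1e-6
    weights /= weights.sum()
    return (weights[:, None] * particle_positions).sum(axis=0)   # final position L'
```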
The beneficial effects of the invention are as follows:
The invention proposes a video object detection and tracking method based on optical flow features. Its novelty lies in overcoming the hole effect of the traditional frame-difference and background-compensation methods when detecting targets under a moving background, avoiding the drift of the LK optical flow method at sparse-texture feature points by a local-variance non-uniform sampling method, and fusing multiple features such as target motion saliency, color and texture, thereby improving the success rate of target detection and realizing fast, effective and continuous tracking.
Brief description of the drawings
Fig. 1 is a flow chart of the invention.
Fig. 2 is a schematic diagram of background sampling of an image frame.
Fig. 3 is a schematic diagram of estimating the background vector with Mean Shift.
Specific embodiments
To make the object, technical scheme and advantages of the invention clearer, the embodiments of the invention are described in further detail below with reference to the accompanying drawings.
By simulating the way humans perceive their surroundings, focusing attention on the target and ignoring the influence of the other background parts of the environment, the invention proposes a video object detection and tracking system based on optical flow features that realizes moving target detection and continuous tracking in a video sequence, so as to meet the requirements of real-time performance and stability.
As shown in Fig. 1, the flow chart of the invention, the first step is the video object detection stage.
First, a time sequence of scene images is captured with a UAV, and background sampling is applied to the input image frame sequence with the equal-interval sampling method based on the local neighborhood variance of the background: the local neighborhood variance of each pixel is computed with the box-filter method, a threshold is set on the local variance distribution map of the background image, and the image is sampled region by region, filtering out part of the sampling points unfavorable to LK matching while retaining the sampling points at the target position. Target detection is then performed on the sampled video sequence. The detection part separates background and target mainly on the basis of background motion estimation and target saliency estimation: the motion vector of the background is estimated by combining the LK optical flow method with the Mean Shift algorithm and the computation is optimized with GPU parallel processing. Estimating the overall target saliency is divided into the case of no moving target in the background and the case of a moving target in the background: when there is no moving target, the overall saliency distribution is estimated as the motion saliency distribution of the background image; when there is a moving target, motion saliency is taken as the primary cue and color saliency as the supplementary cue, and the two are fused to obtain the overall target saliency. Finally the detected target is tracked. The tracking part mainly includes feature extraction, target description, the similarity criterion and the choice of the search strategy: the positive and negative samples of the target are described with Haar features and a global color feature of the target, the original features are sampled and compressed with a random matrix, the similarity between the current target and the target of the previous frame is judged with the Bayesian criterion, and the target is tracked with a particle filter algorithm, which allows the target search to strike a balance between depth and breadth and avoids particle impoverishment.
Because good-quality optical flow points are easy to match and allow accurate subsequent computation, whereas poor-quality flow points introduce large matching errors and background noise, selecting flow points suited to LK matching is crucial. The invention computes the local neighborhood variance of each pixel with a box-filter method (e.g. over a 5*5 local window), selects the pixels with larger variance, and samples the selected pixels at equal intervals. Let the width of the current input image be W and its height H, and define a box template of width m and height n inside the image region, as shown in Fig. 2. Black squares represent the pixels of the image region, each black square being one pixel; the 1# rectangle denotes the 5*5 box rectangle template (m and n are 5), the 2# rectangle denotes the region formed by the input image width W and the box template height n, i.e. a W*5 rectangle, and the 3# rectangle is a 1*5 rectangle. The grey squares in the 3# rectangle are denoted buff and store intermediate variables of the computation. Starting from the upper-left corner (x, y) = (0, 0) of the box rectangle template, the template slides to the right pixel by pixel and moves to the start of the next row (0, 1) when the end of a row is reached, and so on. The pixels of each column inside the 2# rectangle are summed and each column sum is stored in a grey square buff; the pixels inside the 3# rectangle are then summed, the result being the sum of the pixels inside the box rectangle template (the 1# rectangle), which is stored in a pre-initialized array A, completing the first summation. The pixel sum Sum[i] at the current position of the 3# rectangle equals the pixel sum Sum[i-1] before the rightward move, minus the first grey square buff[x-1] of the 3# rectangle, plus the grey square buff[x+m-1] at its right edge; that is, the sum of all pixels inside the sliding box is obtained indirectly by updating buff, as expressed below:
buff[x] = Σ_{y=0}^{n-1} I(x, y), x = 0, 1, ..., W-1; Sum[i] = Sum[i-1] - buff[x-1] + buff[x+m-1]   (1)
where the initial value of i is 0. Similarly, SquareSum is defined as the sum of squared pixels inside the sliding box: the squared-pixel sum SquareSum[i] at the current position of the 3# rectangle equals the squared-pixel sum SquareSum[i-1] before the rightward move, minus the first grey square squared-pixel sum SquareBuff[x-1] of the 3# rectangle, plus the grey square squared-pixel sum SquareBuff[x+m-1] at its right edge, so that:
SquareBuff[x] = Σ_{y=0}^{n-1} I(x, y)·I(x, y), x = 0, 1, ..., W-1; SquareSum[i] = SquareSum[i-1] - SquareBuff[x-1] + SquareBuff[x+m-1]   (2)
Combining expressions (1) and (2), the local neighborhood variance Var(x, y) of the background image is obtained as shown in expression (3):
Var(x, y) = SquareSum(x, y) - [Sum(x, y)/(m*n)]^2   (3)
After the local neighborhood variance distribution map of the background image is computed, a variance threshold is set. Where the local neighborhood variance of the background exceeds the variance threshold, the sampling interval threshold is set to 2, i.e. the sampling density is large and more sampling points are obtained; where the local neighborhood variance is below the variance threshold, the sampling interval threshold is set to 10, i.e. the sampling density is small and fewer sampling points are obtained. This filters out the sampling points unfavorable to LK matching while retaining the sampling points at the target position. Regions with too small a variance are unfavorable to the LK algorithm as match points, but in order to retain the relative position of the target in the image the low-variance regions are not filtered out completely; instead, the high-variance regions are sampled more (their sampling points favor LK matching) and the low-variance regions are sampled less (only to retain the relative position of the target in the image).
The optical flow vector of each pixel in the pixel point set S sampled in S11 is calculated with the LK optical flow method. The j-th sampled point is expressed as S_j = {C_j, V_j}, where C_j is the coordinate of the sampled point and V_j is its displacement. All sampled points are mapped onto polar coordinates (r, θ); with N the number of sampled points, the mean of V_j is taken as the initial value of the Mean Shift iteration:
V_ms(r, θ) = (Σ_{j=0}^{N} V_j) / N   (4)
The position of maximum kernel density is found by the Mean Shift iteration, and the kernel center coordinate is the motion vector V_b of the background.
The Mean Shift algorithm is partially improved: the kernel radius is slightly adjusted according to the number of sample points inside the kernel. When the number of samples inside the Mean Shift kernel radius is detected to fall below a certain amount, the kernel radius is increased. The benefit is that a certain number of samples is always kept inside the kernel, which reduces the influence of an individual erroneous sample on the overall sample mean. Fig. 3 is a schematic diagram of Mean Shift computing the background vector: the arrows denote background motion vectors and the gray circles denote the convergence position and size of the kernel; through the mean-shift computation, the position of maximum density, i.e. the background motion vector, is eventually found.
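The adaptive kernel radius can be sketched as below; the minimum sample count, the growth factor and the stopping tolerance are assumptions, not values given in the patent.

```python
import numpy as np

def adaptive_mean_shift(V, h=3.0, min_samples=10, grow=1.5, iters=30):
    """Mean-shift over displacement vectors V with a kernel radius that grows
    whenever fewer than `min_samples` points fall inside it, so single erroneous
    samples have less influence on the kernel mean."""
    mode = V.mean(axis=0)
    for _ in range(iters):
        dist = np.linalg.norm(V - mode, axis=1)
        # Enlarge the kernel until it contains at least `min_samples` points
        while (dist < h).sum() < min_samples and h < dist.max():
            h *= grow
        in_kernel = dist < h
        if not in_kernel.any():
            break
        new_mode = V[in_kernel].mean(axis=0)
        if np.linalg.norm(new_mode - mode) < 1e-3:
            break
        mode = new_mode
    return mode   # densest displacement, i.e. the background motion vector
```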
The overall target saliency is estimated, distinguishing the case of no moving target in the background from the case of a moving target in the background. The overall target saliency is estimated in the two cases as follows:
(1) When there is no moving target in the background, the overall saliency distribution is estimated as the motion saliency distribution of the background image. The motion saliency distribution of the background image is defined, from the difference between the background motion vector V_b and the displacement V_j of the j-th sampled point, over the local neighborhood K of each sampled point C_i, where S_m denotes the set of motion-vector differences between each sampled point and the surrounding background. The difference between V_b and V_j reflects the degree of difference between the sampled point and the background; the larger this difference, the larger the motion saliency of the background image.
(2) When there is a moving target in the background, or the target motion is weak, the target motion saliency alone cannot fully describe the target of interest; in this case the color saliency distribution is used to compensate the moving-target saliency distribution.
First, the motion saliency S_m is calculated with the same method as in (1), i.e. as in the case of no moving target in the background. At the same time, the color saliency of the image is calculated with the Center-Surround model based on color contrast and optimized with GPU parallel processing, and the motion saliency and the color saliency of the target are fused to obtain the overall target saliency. The color saliency is computed from the degree of feature difference of a pixel in its neighborhood space and is defined as S_c = P_c * G, where S_c is the color saliency of a point, P_c is the probability value corresponding to that point in the pixel saliency probability distribution map of the neighborhood, and G is the Gaussian function over the neighborhood. The color saliency distribution map of the image is computed; the larger the brightness on the map, the more salient the pixel. Then S = α·S_m + (1-α)·S_c is used to linearly weight the motion saliency S_m and the color saliency S_c and estimate the overall target saliency, with α = 0.6.
Finally, a threshold is set on the overall target saliency distribution. Analysis and comparison of a large amount of video data show that the background saliency is about 0.1, so this value is taken as the segmentation threshold (i.e. the background saliency is 0.1), and the detected target region is separated from the background region.
The second step is the video object tracking stage.
In the video object tracking stage, the invention describes the target with a kind of Haar-like feature that has a certain global character, and adds a one-dimensional color feature so that the target feature is balanced between the whole and the parts: the local texture feature can capture slight changes and partial occlusion of the target and identify it, while the added global color feature can adapt to overall changes of the target such as rotation and scaling. Both the low-frequency and the high-frequency components of the feature space are taken into account, and the tracking effect is significantly improved. Finally the original features are sampled and compressed with a random matrix and the continuous tracking of the target is realized with a particle filter algorithm, which removes part of the redundant computation and significantly improves the running speed of the algorithm. The specific steps are:
(1) After the first step has separated the target region from the background region, the target edge is labeled and the target edge E is extracted. Taking E as the starting position and setting a radius r, positive sample images (sample images containing the target) are sampled within the range E+r, and negative sample images (background images) are sampled outside the range E-r.
(2) Target description is carried out on the positive sample images obtained in step (1): the target texture feature is described with Haar features, and the target color feature is described with the YUV local color histogram model.
The original Haar feature is defined with j ∈ [2, 4], j ∈ N, where t_i is a non-repeating random rectangle convolution window, I is the input image and g denotes the characteristic function. It can be seen from the definition that although the Haar feature can describe the local details of the target, the summation operation smooths the local texture of the target to a certain extent and lowers its local contrast; when the target deforms or is occluded, this definition cannot fully express the change of the target feature, and tracking loss or drift may occur, so the texture feature of the target cannot be fully described.
The invention takes two non-repeating random rectangle convolution windows T_i and t_i so that the feature contains the gradient information of the target and takes both the local edge and the global contrast information of the target into account, redefining the Haar feature with j ∈ [1, 2], j ∈ N. A compare threshold for the two window responses and a difference threshold for the gap between them are set. When both responses are smaller than the compare threshold and their difference is smaller than the difference threshold, g describes the edge feature of the target; when both responses are larger than the compare threshold and their difference is smaller than the difference threshold, g describes the global feature of the target; when one response is larger than the compare threshold, the other is smaller than the compare threshold and their difference exceeds the difference threshold, g describes the local feature distribution of the target. The method of the invention takes the high-frequency and low-frequency components of the target description into account and can analyze and describe the target at different frequencies.
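The two-window comparison can be illustrated with the following sketch, which computes rectangle sums with an integral image and sorts the response pair into the three cases described above; the window parameters and the two thresholds are placeholders, since the patent leaves their concrete values to the tracking task.

```python
import numpy as np

def integral_image(gray):
    """Summed-area table so that any rectangle sum is four lookups."""
    return np.cumsum(np.cumsum(gray.astype(np.float64), axis=0), axis=1)

def rect_sum(ii, x, y, w, h):
    """Sum of pixels in the rectangle with top-left corner (x, y), width w, height h."""
    a = ii[y + h - 1, x + w - 1]
    b = ii[y - 1, x + w - 1] if y > 0 else 0.0
    c = ii[y + h - 1, x - 1] if x > 0 else 0.0
    d = ii[y - 1, x - 1] if x > 0 and y > 0 else 0.0
    return a - b - c + d

def two_window_feature(ii, win_T, win_t, cmp_thresh, diff_thresh):
    """Responses of two random rectangle windows T_i and t_i, categorized into the
    edge / global / local cases of the redefined Haar feature."""
    rT = rect_sum(ii, *win_T)
    rt = rect_sum(ii, *win_t)
    diff = abs(rT - rt)
    if rT < cmp_thresh and rt < cmp_thresh and diff < diff_thresh:
        case = "edge"      # both responses small and similar: edge feature
    elif rT > cmp_thresh and rt > cmp_thresh and diff < diff_thresh:
        case = "global"    # both responses large and similar: global feature
    elif diff > diff_thresh and (rT > cmp_thresh) != (rt > cmp_thresh):
        case = "local"     # one small, one large, far apart: local feature
    else:
        case = "other"
    return rT, rt, case
```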
To describe the global characteristics of the target more accurately, a one-dimensional color feature is added: the distribution of the target's global color is described with the YUV local color histogram model. The YUV model is developed on the basis of the gray-level model; it contains both the gray-level information and the color information of the target, describes the target color more accurately, and is one of the common methods of color description. It is simple to compute and has strong descriptive power; it can quickly capture the overall movement of the target and is not disturbed by target rotation or deformation.
(3) The Haar features obtained in step (2) are sampled and compressed with a sparse-matrix compression method, the Bayesian criterion is used as the similarity measure between the current target position and the target position of the previous frame, and the target is tracked with a particle filter algorithm.
Because the number of Haar features is very large, far exceeding the real-time requirement of the tracking computation, these features are usually screened or compressed when Haar features are used. With the sparse-matrix compression, matching is computed directly on the compressed Haar features, which on the one hand guarantees the real-time performance of the computation and on the other hand allows the original features to be fully recovered from the compressed ones, guaranteeing the accuracy of the tracking algorithm. Since the number of color features required is small, the color feature does not need to be compressed.
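The sparse-matrix compression can be realized, for example, with a very sparse random projection in the style of compressive tracking; the sparsity parameter s and the compressed dimension are assumptions rather than values specified by the patent.

```python
import numpy as np

def sparse_random_matrix(n_compressed, n_features, s=3, seed=0):
    """Very sparse random projection R: each entry is +sqrt(s) or -sqrt(s) with
    probability 1/(2s) each, and 0 otherwise, so most entries are zero."""
    rng = np.random.default_rng(seed)
    p = rng.random((n_compressed, n_features))
    R = np.zeros((n_compressed, n_features))
    R[p < 1.0 / (2 * s)] = np.sqrt(s)
    R[p > 1.0 - 1.0 / (2 * s)] = -np.sqrt(s)
    return R

def compress_features(R, haar_features):
    """Project the high-dimensional Haar feature vector to a low-dimensional one."""
    return R @ haar_features
```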
The tracking process finds, by iteration in the image sequence, the candidate region most similar to the standard target feature; candidate regions are selected by traversing the neighborhood of the target region of the previous frame. The target position and the Haar features are initialized; particles are delivered uniformly with equal weights around the target within a radius R, and the similarity of each particle to the target is calculated with the Bayesian criterion, as shown in formula (5):
H_i(v) = log( Π_{i=1}^{n} p(v_i|y=1)·p(y=1) / Π_{i=1}^{n} p(v_i|y=0)·p(y=0) ) = Σ_{i=1}^{n} log( p(v_i|y=1) / p(v_i|y=0) )   (5)
Here the weighted centroid of all particles is taken as the circle center, and the delivered particles are divided into n levels according to their radial distance from the center. p(y=1) denotes the probability that a particle belongs to the positive sample region and p(y=0) the probability that it belongs to the negative sample region; p(v_i|y=1) denotes the probability that a particle located in the i-th level belongs to the positive sample region, and p(v_i|y=0) the probability that a particle located in the i-th level belongs to the negative sample region.
According to probability theory and mathematical statistics, the probabilities of the positive sample p(v_i|y=1) and the negative sample p(v_i|y=0) can be approximated by the mean (expectation) and the standard deviation, i.e. p(v_i|y=1) ~ N(μ_1, δ_1) and p(v_i|y=0) ~ N(μ_0, δ_0), where N denotes a discrete random-variable function, μ_1 and δ_1 are the mean and standard deviation of the target sample, and μ_0 and δ_0 are the mean and standard deviation of the background sample. The final target position is obtained by weighting all particles, as shown in formula (6), where L_i denotes the position of each particle and L' denotes the final target position:
L' = Σ H_i(v)·L_i / Σ H_i(v)   (6)
The above contains the description of the preferred embodiments of the invention in order to describe its technical features in detail; it is not intended to limit the content of the invention to the concrete forms described in the embodiments, and other modifications and variations made according to the gist of the invention are also protected by this patent. The gist of the invention is defined by the claims rather than by the specific description of the embodiments.

Claims (7)

1. A video object detection and tracking method based on optical flow features, characterized by comprising the following steps:
S1. Video object detection
S11. A UAV video sequence is input, and background sampling is applied to the input image frame sequence with an equal-interval sampling method based on the local neighborhood variance of the background; the local neighborhood variance of each pixel is computed with a box-filter method, a threshold is set on the local variance distribution map of the background image, the image is sampled region by region, part of the sampling points unfavorable to LK matching are filtered out, and the sampling points at the target position are retained;
S12. From the pixel point set obtained by the sampling of step S11, the optical flow vector of each pixel in the set is calculated with the LK optical flow method, and the motion vector of the background is estimated with the Mean Shift algorithm;
S13. The overall target saliency is estimated, distinguishing the case of no moving target in the background from the case of a moving target in the background; the overall target saliency is estimated in the two cases as follows:
(1) when there is no moving target in the background, the overall saliency distribution is estimated as the motion saliency distribution of the background image, defined over the local neighborhood K of each sampled point C_i, where S_m denotes the set of motion-vector differences between each sampled point and the surrounding background;
(2) when there is a moving target in the background, the overall target saliency is estimated with the motion saliency distribution as the primary cue and the color saliency as the supplementary cue: the motion saliency S_m is first calculated with the same method as in (1), the color saliency S_c is calculated with the Center-Surround model based on color contrast from the degree of feature difference of a pixel in its neighborhood space, and the motion saliency S_m and the color saliency S_c are linearly weighted to estimate the overall target saliency;
S14. A segmentation threshold is set according to the overall target saliency distribution estimated in step S13, and the target region is separated from the background region;
S2. Video object tracking
S21. After the target region has been separated from the background region, the target edge is labeled and the target edge E is extracted; taking E as the starting position and setting a radius r, positive sample images are sampled within the range E+r and negative sample images are sampled outside the range E-r, where the positive sample images contain the target and the negative samples are background images;
S22. The positive sample images obtained in S21 are used to describe the target: the target texture feature is described with Haar features, and the target color feature is described with a YUV local color histogram model;
S23. The Haar features obtained in S22 are sampled and compressed with a sparse-matrix compression method, the similarity between the current target and the target of the previous frame is judged with the Bayesian criterion, and the target is continuously tracked with a particle filter algorithm.
2. The video object detection and tracking method based on optical flow features according to claim 1, characterized in that step S11 is implemented as follows:
Let the width of the current input image be W and its height H, and define a box rectangle template of width m and height n inside the image region. Black squares represent the pixels of the image region, each black square being one pixel; the 1# rectangle denotes the box rectangle template, the 2# rectangle denotes the region formed by the input image width W and the box template height n, and the 3# rectangle is a 1*m rectangle; the grey squares in the 3# rectangle are denoted buff and store intermediate variables of the computation. Starting from the upper-left corner (x, y) = (0, 0) of the box rectangle template, the template slides to the right pixel by pixel and moves to the start of the next row (0, 1) when the end of a row is reached, and so on; the pixels of each column inside the 2# rectangle are summed and each column sum is stored in a grey square buff; the pixels inside the 3# rectangle are then summed, the result being the sum of the pixels inside the box rectangle template (the 1# rectangle), which is stored in a pre-initialized array A, completing the first summation; the pixel sum Sum[i] at the current position of the 3# rectangle equals the pixel sum Sum[i-1] before the rightward move, minus the first grey square buff[x-1] of the 3# rectangle, plus the grey square buff[x+m-1] at its right edge, i.e. the sum of all pixels inside the sliding box is obtained indirectly by updating buff, with the expression as follows:
buff[x] = Σ_{y=0}^{n-1} I(x, y), x = 0, 1, ..., W-1; Sum[i] = Sum[i-1] - buff[x-1] + buff[x+m-1]   (1)
where the initial value of i is 0; similarly, SquareSum is defined as the sum of squared pixels inside the sliding box, and the squared-pixel sum SquareSum[i] at the current position of the 3# rectangle equals the squared-pixel sum SquareSum[i-1] before the rightward move, minus the first grey square squared-pixel sum SquareBuff[x-1] of the 3# rectangle, plus the grey square squared-pixel sum SquareBuff[x+m-1] at its right edge, so that:
SquareBuff[x] = Σ_{y=0}^{n-1} I(x, y)·I(x, y), x = 0, 1, ..., W-1; SquareSum[i] = SquareSum[i-1] - SquareBuff[x-1] + SquareBuff[x+m-1]   (2)
Combining expressions (1) and (2), the local neighborhood variance Var(x, y) of the background image is obtained as shown in expression (3):
Var(x, y) = SquareSum(x, y) - [Sum(x, y)/(m*n)]^2   (3)
After the local neighborhood variance distribution map of the background image is computed, a variance threshold is set; where the local neighborhood variance of the background exceeds the set variance threshold, the sampling interval threshold is set to 2, i.e. the sampling density is large and more sampling points are obtained; where the local neighborhood variance of the background is below the set variance threshold, the sampling interval threshold is set to 10, i.e. the sampling density is small and fewer sampling points are obtained; the sampling points unfavorable to LK matching are filtered out while the sampling points at the target position are retained.
3. The video object detection and tracking method based on optical flow features according to claim 1, characterized in that step S12 is implemented as follows:
The optical flow vector of each pixel in the pixel point set S sampled in S11 is calculated with the LK optical flow method; the j-th sampled point is expressed as S_j = {C_j, V_j}, where C_j is the coordinate of the sampled point and V_j is its displacement; all sampled points are mapped onto polar coordinates (r, θ), N is the number of sampled points, and the mean of V_j is taken as the initial value of the Mean Shift iteration:
V_ms(r, θ) = (Σ_{j=0}^{N} V_j) / N   (4)
The position of maximum kernel density is found by the Mean Shift iteration, and the kernel center coordinate is the motion vector V_b of the background.
4. The video object detection and tracking method based on optical flow features according to claim 1, characterized in that in case (2) of step S13, the Center-Surround model based on color contrast computes the color saliency from the degree of feature difference of a pixel in its neighborhood space, defined as S_c = P_c * G, where S_c is the color saliency of a pixel, P_c is the probability value corresponding to that pixel in the pixel saliency probability distribution map of the neighborhood, and G is the Gaussian function over the neighborhood;
The color saliency distribution map of the image is computed from the color saliency; the larger the brightness on the color saliency distribution map, the more salient the pixel.
5. The video object detection and tracking method based on optical flow features according to claim 1, characterized in that in step S14, the background saliency of 0.1 is taken as the segmentation threshold, and the target region is separated from the background region.
6. The video object detection and tracking method based on optical flow features according to claim 1, characterized in that in step S22, the method of describing the target texture feature with Haar features is: let T_i and t_i be two non-repeating random rectangle convolution windows, I the input image and g the characteristic function that defines the Haar feature; a compare threshold for the two window responses and a difference threshold for the gap between them are set; when both responses are smaller than the compare threshold and their difference is smaller than the difference threshold, g describes the edge feature of the target; when both responses are larger than the compare threshold and their difference is smaller than the difference threshold, g describes the global feature of the target; when one response is larger than the compare threshold, the other is smaller than the compare threshold and their difference exceeds the difference threshold, g describes the local feature distribution of the target.
7. The video object detection and tracking method based on optical flow features according to claim 6, characterized in that step S23 is implemented as follows:
The target position and the Haar features are initialized; particles are delivered uniformly with equal weights around the target within a radius R, and the similarity of each particle to the target is calculated with the Bayesian criterion, as shown in formula (5):
H_i(v) = log( Π_{i=1}^{n} p(v_i|y=1)·p(y=1) / Π_{i=1}^{n} p(v_i|y=0)·p(y=0) ) = Σ_{i=1}^{n} log( p(v_i|y=1) / p(v_i|y=0) )   (5)
where the weighted centroid of all particles is taken as the circle center and the delivered particles are divided into n levels according to their radial distance from the center; p(y=1) denotes the probability that a particle belongs to the positive sample region and p(y=0) the probability that it belongs to the negative sample region; p(v_i|y=1) denotes the probability that a particle located in the i-th level belongs to the positive sample region, and p(v_i|y=0) the probability that a particle located in the i-th level belongs to the negative sample region;
According to probability theory and mathematical statistics, the probabilities of the positive sample p(v_i|y=1) and the negative sample p(v_i|y=0) can be approximated by the mean and the standard deviation, i.e. p(v_i|y=1) ~ N(μ_1, δ_1) and p(v_i|y=0) ~ N(μ_0, δ_0), where N denotes a discrete random-variable function, μ_1 and δ_1 are the mean and standard deviation of the target sample, and μ_0 and δ_0 are the mean and standard deviation of the background sample; the final target position is obtained by weighting all particles, as shown in formula (6), where L_i denotes the position of each particle and L' denotes the final target position;
L' = Σ H_i(v)·L_i / Σ H_i(v)   (6)
CN201710034789.6A 2017-01-17 2017-01-17 Video target detecting and tracking method based on optical flow features Pending CN106709472A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201710034789.6A CN106709472A (en) 2017-01-17 2017-01-17 Video target detecting and tracking method based on optical flow features

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201710034789.6A CN106709472A (en) 2017-01-17 2017-01-17 Video target detecting and tracking method based on optical flow features

Publications (1)

Publication Number Publication Date
CN106709472A true CN106709472A (en) 2017-05-24

Family

ID=58906915

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710034789.6A Pending CN106709472A (en) 2017-01-17 2017-01-17 Video target detecting and tracking method based on optical flow features

Country Status (1)

Country Link
CN (1) CN106709472A (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN202306549U (en) * 2011-11-03 2012-07-04 北京电子科技学院 Video retrieval system based on optical flow method
CN104598892A (en) * 2015-01-30 2015-05-06 广东威创视讯科技股份有限公司 Dangerous driving behavior alarming method and system
CN104952083A (en) * 2015-06-26 2015-09-30 兰州理工大学 Video saliency detection algorithm based on saliency target background modeling

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
沈翀: "Research on Moving Target Detection and Tracking Algorithms in a Complex Video Surveillance Environment", China Master's Theses Full-text Database, Information Science and Technology Series *
秦岳: "Research on Moving Target Detection and Tracking Algorithms Based on Optical Flow Features", China Master's Theses Full-text Database, Information Science and Technology Series *

Cited By (33)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107292284B (en) * 2017-07-14 2020-02-28 成都通甲优博科技有限责任公司 Target re-detection method and device and unmanned aerial vehicle
CN107578426A (en) * 2017-07-26 2018-01-12 浙江工业大学 A kind of real-time optical flow analysis tracking towards serious degraded video
CN107463912A (en) * 2017-08-10 2017-12-12 武汉大学深圳研究院 Video human Activity recognition method based on motion conspicuousness
CN107644429B (en) * 2017-09-30 2020-05-19 华中科技大学 Video segmentation method based on strong target constraint video saliency
CN107644429A (en) * 2017-09-30 2018-01-30 华中科技大学 A kind of methods of video segmentation based on strong goal constraint saliency
CN108242062A (en) * 2017-12-27 2018-07-03 北京纵目安驰智能科技有限公司 Method for tracking target, system, terminal and medium based on depth characteristic stream
CN108242062B (en) * 2017-12-27 2023-06-30 北京纵目安驰智能科技有限公司 Target tracking method, system, terminal and medium based on depth feature flow
CN108460786A (en) * 2018-01-30 2018-08-28 中国航天电子技术研究院 A kind of high speed tracking of unmanned plane spot
CN108615021A (en) * 2018-05-01 2018-10-02 孙智文 A kind of man-machine interactive Approach for road detection based on aerial images
CN109035293A (en) * 2018-05-22 2018-12-18 安徽大学 The method of significant human body example segmentation suitable for video image
CN108871290A (en) * 2018-06-07 2018-11-23 华南理工大学 A kind of visible light dynamic positioning method based on optical flow method detection and Bayesian forecasting
CN109271854A (en) * 2018-08-07 2019-01-25 北京市商汤科技开发有限公司 Based on method for processing video frequency and device, video equipment and storage medium
CN109239083A (en) * 2018-09-26 2019-01-18 深圳源广安智能科技有限公司 A kind of cable surface defects detection system based on unmanned plane
CN109740613B (en) * 2018-11-08 2023-05-23 深圳市华成工业控制股份有限公司 Visual servo control method based on Feature-Shift and prediction
CN109740613A (en) * 2018-11-08 2019-05-10 深圳市华成工业控制有限公司 A kind of Visual servoing control method based on Feature-Shift and prediction
CN109784183A (en) * 2018-12-17 2019-05-21 西北工业大学 Saliency object detection method based on concatenated convolutional network and light stream
CN109784183B (en) * 2018-12-17 2022-07-19 西北工业大学 Video saliency target detection method based on cascade convolution network and optical flow
CN109740558A (en) * 2019-01-10 2019-05-10 吉林大学 A kind of Detection of Moving Objects based on improvement optical flow method
CN110148156A (en) * 2019-04-29 2019-08-20 惠州市德赛西威智能交通技术研究院有限公司 A kind of symmetric targets image tracking method based on local light stream
CN110111364A (en) * 2019-04-30 2019-08-09 腾讯科技(深圳)有限公司 Method for testing motion, device, electronic equipment and storage medium
CN110111364B (en) * 2019-04-30 2022-12-27 腾讯科技(深圳)有限公司 Motion detection method and device, electronic equipment and storage medium
CN110176027A (en) * 2019-05-27 2019-08-27 腾讯科技(深圳)有限公司 Video target tracking method, device, equipment and storage medium
CN110176027B (en) * 2019-05-27 2023-03-14 腾讯科技(深圳)有限公司 Video target tracking method, device, equipment and storage medium
CN110705589A (en) * 2019-09-02 2020-01-17 贝壳技术有限公司 Weight optimization processing method and device for sample characteristics
CN111079546B (en) * 2019-11-22 2022-06-07 重庆师范大学 Unmanned aerial vehicle pest detection method
CN111079546A (en) * 2019-11-22 2020-04-28 重庆师范大学 Unmanned aerial vehicle pest detection method
CN111160229A (en) * 2019-12-26 2020-05-15 北京工业大学 Video target detection method and device based on SSD (solid State disk) network
CN111160229B (en) * 2019-12-26 2024-04-02 北京工业大学 SSD network-based video target detection method and device
CN112307872A (en) * 2020-06-12 2021-02-02 北京京东尚科信息技术有限公司 Method and device for detecting target object
CN111667511A (en) * 2020-06-19 2020-09-15 南京信息工程大学 Method, device and system for extracting background from dynamic video
CN111667511B (en) * 2020-06-19 2024-02-02 南京信息工程大学 Method, device and system for extracting background in dynamic video
CN112329729B (en) * 2020-11-27 2021-11-23 珠海大横琴科技发展有限公司 Small target ship detection method and device and electronic equipment
CN112329729A (en) * 2020-11-27 2021-02-05 珠海大横琴科技发展有限公司 Small target ship detection method and device and electronic equipment

Similar Documents

Publication Publication Date Title
CN106709472A (en) Video target detecting and tracking method based on optical flow features
CN104008370B (en) A kind of video face identification method
Zhou et al. Efficient road detection and tracking for unmanned aerial vehicle
CN109685045B (en) Moving target video tracking method and system
Prisacariu et al. Nonlinear shape manifolds as shape priors in level set segmentation and tracking
CN108446634B (en) Aircraft continuous tracking method based on combination of video analysis and positioning information
CN107767400B (en) Remote sensing image sequence moving target detection method based on hierarchical significance analysis
CN107273905B (en) Target active contour tracking method combined with motion information
CN110490907B (en) Moving target tracking method based on multi-target feature and improved correlation filter
CN109961417B (en) Image processing method, image processing apparatus, and mobile apparatus control method
CN102156995A (en) Video movement foreground dividing method in moving camera
CN107194929B (en) Method for tracking region of interest of lung CT image
CN107609571B (en) Adaptive target tracking method based on LARK features
CN109359549A (en) A kind of pedestrian detection method based on mixed Gaussian and HOG_LBP
CN113592894B (en) Image segmentation method based on boundary box and co-occurrence feature prediction
CN107507223A (en) Method for tracking target based on multi-characters clusterl matching under dynamic environment
CN111680713A (en) Unmanned aerial vehicle ground target tracking and approaching method based on visual detection
CN111199245A (en) Rape pest identification method
CN106529441B (en) Depth motion figure Human bodys' response method based on smeared out boundary fragment
CN110991398A (en) Gait recognition method and system based on improved gait energy map
CN112329784A (en) Correlation filtering tracking method based on space-time perception and multimodal response
CN114998890B (en) Three-dimensional point cloud target detection algorithm based on graph neural network
Hirner et al. FC-DCNN: A densely connected neural network for stereo estimation
CN109241981B (en) Feature detection method based on sparse coding
CN113627481A (en) Multi-model combined unmanned aerial vehicle garbage classification method for smart gardens

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
RJ01 Rejection of invention patent application after publication

Application publication date: 20170524