CN1738426A - Video moving-target segmentation and tracking method - Google Patents

Video moving-target segmentation and tracking method

Info

Publication number
CN1738426A
CN1738426A, CN200510094306A
Authority
CN
China
Prior art keywords
target
template
image
tracking
matching
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN 200510094306
Other languages
Chinese (zh)
Inventor
吴金勇
马国强
潘红兵
徐健键
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Nanjing University
Original Assignee
Nanjing University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Nanjing University
Priority to CN200510094306A
Publication of CN1738426A
Legal status: Pending

Abstract

The invention discloses a video moving-target segmentation and tracking method that segments moving targets by background subtraction and then extracts and recognizes target features to achieve stable tracking. The background model is refreshed in real time, and the first-frame background is generated adaptively by combining video motion information with a scene segmentation algorithm based on a self-organizing feature map (SOFM) network and small-region label merging. Features are represented in two forms: parameter features and pixel-level spatial features. During tracking, a progressively refined coarse-to-fine matching strategy is used, and target recognition and tracking are achieved through a coarse match followed by a fine match. Experimental results show that the method correctly segments and tracks rigid and non-rigid moving targets in video sequences under complex scenes with multiple moving targets.

Description

A video moving-target segmentation and tracking method
1. Technical Field
The present invention relates to video image processing and control techniques, and in particular to an image-information processing method for segmenting and tracking moving targets in video.
2. Background Art
With the rapid development of computer technology, image processing, and control technology, video is being applied ever more widely across society, and the analysis and processing of video has become a research focus in the image processing field. In the various application systems built on video processing, such as robot visual guidance, video surveillance, and object-based video coding and transmission, the key video processing technique is the segmentation and tracking of moving targets or objects in a video sequence. Many companies, research institutions, and academic groups at home and abroad have therefore paid close attention to, and extensively studied, video moving-target segmentation and tracking. Although a great deal of research has been done on moving-target detection, segmentation, and tracking, and many kinds of algorithms have been proposed, the proposed algorithms are essentially tailored to specific scenes, and there is still no general, efficient theory of segmentation and tracking.
For target segmentation, the classical moving-target detection methods are the two-frame or three-frame temporal difference method, the background subtraction method, and the optical flow method. The temporal difference method detects only the parts of a target that move between frames and cannot segment a complete moving target, especially when the target's color is uniform. The background subtraction method can segment a complete moving target fairly well, but it is very sensitive to illumination changes and to background objects moving in or out; the real-time generation of a reference background is therefore the main focus of research on background subtraction. The optical flow method is a moving-target detection method based on the assumption of an essentially constant gray-level gradient or constant brightness; it segments targets through the optical flow field, is rather sensitive to changes in ambient illumination, and is computationally expensive, generally requiring a hardware implementation, so it is unsuitable for real-time applications.
For target tracking, the commonly used methods fall roughly into four classes: model-based, region-based, feature-based, and active-contour-based tracking. Model-based tracking represents the object as a 2-D contour or a 3-D model. The drawback of the 2-D model is its dependence on the camera viewing angle: when the camera angle changes or the object rotates or is occluded, the 2-D representation changes with it. The 3-D model largely solves this problem, but its computation is complex and it is not commonly used. Region-based tracking correlates color, texture, and similar information between the moving regions of the current frame and the previous frame to find the regions associated with each motion, thereby recognizing and tracking targets. Feature-based tracking uses distinguishable points and lines on the target as sub-features; because it relies on feature-point information, the system can still track stably under partial occlusion as long as the feature points can be obtained correctly. Active-contour (Snake) tracking obtains the target's contour directly and tracks by dynamically modifying the contour points; this method requires the contour to be initialized, and the initial contour must lie near the target's edge, otherwise correct tracking is difficult.
In terms of acquisition, video is captured in two ways: with a fixed camera or with a moving camera. The acquisition mode affects how targets are segmented. With a fixed camera, the background of the field of view does not move; with a moving camera, the target moves within the view while the background also changes, so camera motion compensation must be performed before target segmentation. The moving-target segmentation and tracking of the present invention is mainly directed at segmenting and tracking moving targets in video sequences captured with a fixed camera.
State of research: target segmentation and tracking is an important research topic in video processing and a focus of current video research. Although the various segmentation and tracking methods proposed so far can achieve a certain effect under specific conditions, accurate, fast, and stable segmentation and tracking remains a very challenging problem in video analysis. Segmentation and tracking methods are generally designed for specific scenes; general automatic target segmentation and tracking, especially fast, automatic, and stable tracking of multiple rigid or non-rigid targets in complex scenes, still faces many open problems. See: [1] Sun Yang, Luo Yu, et al. A split-and-merge region segmentation algorithm for traditional Chinese medicine tongue images and its implementation [J]. Journal of Image and Graphics. [2] Rosin P L, Ellis T. Image Difference Threshold Strategies and Shadow Detection [C]. Proc. of the 6th British Machine Vision Conference, 1994: 347-356. [3] Cucchiara R, Grana C, Piccardi M, et al. Improving shadow suppression in moving object detection with HSV color information [A]. Proc. of IEEE Int'l Conference on Intelligent Transportation Systems [C]. Oakland: IEEE, 2001: 334-339. See also www.cs.unc.edu/~welch.
3. Summary of the Invention
The object of the present invention is to propose a novel video moving-target segmentation and tracking method that can adapt to complex scenes with multiple moving objects and correctly segment and track rigid or non-rigid moving targets in a video sequence.
The object of the present invention is achieved as follows: a video moving-target segmentation and tracking method that segments moving targets by background subtraction and then extracts and recognizes target features to achieve stable tracking. The background model is updated in real time, and the first-frame background is generated adaptively by combining video motion information with a scene segmentation algorithm based on a self-organizing feature map (SOFM) network and small-region label merging. Features are represented in two forms: parameter features and pixel-level spatial features. During tracking, a progressively refined coarse-to-fine matching strategy is used; target recognition and tracking are achieved through a coarse match and a fine match. The present invention also handles target meeting and separation during simultaneous multi-target tracking, as well as targets entering the field of view and targets disappearing.
The way the scene segmentation algorithm based on the self-organizing feature map (SOFM) neural network and small-region label merging is combined with video motion information is as follows: the network is first given a relatively large number of output classes and trained, so that it over-segments the input image; a multi-metric fuzzy criterion on the labeled small regions is then applied iteratively to merge the split regions, yielding the final segmentation. The SOFM network consists of two layers, an input layer and a competition layer; the input layer has n neurons and the competition layer has m neurons, with adjustable weights connecting each input node to each output node. An input pattern of arbitrary dimension is mapped onto a one- or two-dimensional discrete map at the competition-layer output while its topological structure is preserved.
The SOFM network is trained with scene image data so that particular SOFM neurons become sensitive to particular gray-level features; when an image to be segmented is fed into the network, the data are clustered in a self-organized way according to their gray-level features, completing the region segmentation of the image.
The image output after SOFM neural-network clustering is hard-partitioned into a fixed number of regions, each pixel belonging to a specific cluster according to its gray-level feature. The image is first over-segmented, then each class is extracted, divided into blocks, and labeled, producing labeled small regions; the labeled small regions are then merged under the multi-metric fuzzy criterion.
A dynamic template update method is adopted: when the match measure between a tracked target and its template falls below a set threshold, the template parameters are resampled and updated. The template update condition is determined by the following factors: $T_{adaptive}$, $T_{track}$, and $C_k$, where $T_{adaptive}$ is the match-measure threshold below which the template requires updating, with $0 < T_{track} < T_{adaptive} < 1$; $T_{track}$ is the minimum acceptable value for a template-target match, and if the match measure between a target and a template is below this threshold, the target and the template are considered not to match; and $C_k$ is the centroid position of the target predicted with a Kalman filter [7];
After background subtraction yields the moving-target regions, the Kalman filter's prediction of a target's centroid position in the next frame is used to locate and search the moving regions near that point; if the spatial-domain pixel correlation measure $t_a$ between the target and its spatial-domain template satisfies $T_{track} < t_a < T_{adaptive}$, the target's template and its template-feature representation are extracted anew, achieving automatic template updating;
The current frame is subtracted from the background frame to extract the moving targets, and each target's feature vector is computed and compared with the feature vectors of the existing templates. If a target matches none of the existing templates, a new moving target is considered to have entered the field of view; it is given a new label, its template is extracted, and tracking begins. If a moving target fails to reappear within a set time, it is considered to have disappeared, and its template feature vector and spatial-domain template image are deleted to free storage;
A progressively refined multi-feature fuzzy matching using the moving target's parameter features and pixel-level spatial features is adopted, consisting of two processes, coarse matching and fine matching. Coarse matching correlates the target's pixel-level spatial features with the existing spatial-domain templates; several thresholds define the correlation ranges of a match, and the correlation value decides whether further fine matching is performed;
Fine matching discriminates similarity between the parameter feature vectors of the target and the template, thereby achieving tracking. While tracking a target, its centroid position is computed and the target is labeled at that position, and a Kalman filter predicts the target's centroid position in the next frame, which is used to handle "meetings" between targets.
Experimental results of the present invention show that the method correctly segments and tracks rigid and non-rigid moving targets in video sequences even under complex scenes with multiple moving objects.
4. Brief Description of the Drawings
Fig. 1 is the main flow diagram of the target segmentation and tracking algorithm of the present invention;
Fig. 2 is the segmentation result of frame 25 of the Hall Monitor sequence;
Fig. 3 is the segmentation result of frame 50 of the Hall Monitor sequence, with each sub-image and caption illustrating a stage of the process;
Fig. 4 is the result of moving-target tracking by the present invention in the Hall Monitor sequence;
Fig. 5 is the experimental result of the present invention on a video sequence captured in real time;
Fig. 6 is the flow chart of first-frame reference background generation;
Fig. 7 is the flow block diagram of background updating and moving-target segmentation;
Fig. 8 is the flow block diagram of new-target entry, template updating, and matching-based tracking.
5. Detailed Description of the Embodiments
1.1 Scene segmentation algorithm based on the SOFM network and small-region label merging
In order to obtain an automatic segmentation method for scene images that can adapt from simple (few-target) to complex (multi-target) scenes, the present invention proposes an automatic scene-image segmentation method based on the SOFM network and small-region label merging. The self-organizing feature map (SOFM) network [1] can cluster input patterns without supervision through competitive learning and is well suited to automatically clustering and segmenting scene images whose pixels have differing features. However, the number of output classes of a SOFM network is difficult to determine and is generally set empirically. To avoid the mis-segmentation caused by too many or too few output classes, the present invention adopts Kohonen's SOFM neural network, first sets a relatively large number of output classes and trains the network so that it over-segments the input image, and then iteratively merges the split regions with a multi-metric fuzzy criterion on the labeled small regions to obtain the final segmentation. Experiments show that this method overcomes the above shortcomings and can adaptively and automatically segment scene images from simple (few-target) to complex (multi-target) situations.
1.1.1 Structure and learning method of the self-organizing feature map network
The self-organizing feature map (SOFM) network [1] was proposed by Professor Kohonen, a neural network expert at the University of Helsinki, Finland. It is a neural network that learns by competition and clusters without a teacher; by simulating the self-organizing feature-mapping function of the cerebral nervous system, it performs self-organized, unsupervised learning. Kohonen's SOFM network consists of two layers, an input layer and a competition layer; the input layer has n neurons and the competition layer has m neurons, with adjustable weights connecting each input node to each output node. An input pattern of arbitrary dimension is mapped onto a one- or two-dimensional discrete map at the competition-layer output while its topological structure is preserved.
Let $X = (x_0, x_1, x_2, \ldots, x_n) \subset R^p$ be any finite data set, where $R^p$ is the p-dimensional real feature-vector space. For any input $x_i$, if some neuron's weight vector is close to the input value, that neuron and its neighbors are selectively optimized: the activated neuron's weights $w_{ij}(t)$ are reinforced while the weights of neurons outside its neighborhood are suppressed, so that particular neurons become sensitive to particular inputs while the other neurons remain insensitive to them. For image data, the gray-level distributions of different scenes all differ somewhat; training the SOFM network with scene image data makes particular SOFM neurons sensitive to particular gray-level features, and when an image to be segmented is fed into the network, the data are clustered in a self-organized way by their gray-level features, completing the region segmentation of the image. Based on this principle, the present invention uses a one-dimensional SOFM neural network to cluster the image data. The Kohonen learning algorithm [1] is as follows:
Step 1: Randomly choose small values $v_j(0)$ as the initial weights from the input neurons to the output neurons, and set $w_{ij}(0) = v_j(0)$, $j = 1, 2, \ldots, m$; $i = 1, 2, \ldots, n$. Initialize the neighborhood set NE(0) of each output neuron j, set the number of iterations T, initialize the iteration counter t = 0, and initialize the learning parameter $\eta(0)$, with $0 < \eta(t) < 1$.
Step 2: Present a new input pattern.
Step 3: Compute the Euclidean distance between the input and each output neuron's weight vector: $d_j = \sqrt{\sum_{i=1}^{n} [x_i(t) - w_{ij}(t)]^2}$, $j = 1, 2, \ldots, m$. Select the neuron node $j^*$ with the minimum Euclidean distance as the winner of the competition.
Step 4: Modify the weights of the winning neuron node and of the nodes in its neighborhood NE(t) according to formula (1):
$w_{ij}(t+1) = w_{ij}(t) + \eta(t)[x_i(t) - w_{ij}(t)]$  (1)
where $j \in NE_{j^*}(t)$, $NE_{j^*}(t)$ is the set of neurons neighboring the winning node $j^*$, $\eta(t)$ is the learning coefficient, and t is the iteration counter.
Step 5: If not all patterns have been presented, go to Step 2.
Step 6: Increment t and update $\eta(t)$ and NE(t); if t = T, stop, otherwise go to Step 2.
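As an illustration of this learning procedure, the following minimal sketch (ours, not the patent's; the learning-rate and neighborhood-shrinking schedules and all numeric defaults are assumptions the patent leaves open) trains a one-dimensional Kohonen SOFM on pixel gray levels and then assigns every pixel to its nearest cluster weight:

```python
import numpy as np

def train_sofm(samples, m=12, T=2000, eta0=0.5, seed=0):
    """Train a 1-D Kohonen SOFM on scalar gray values (input dimension 1).

    samples : 1-D array of pixel gray levels used for training.
    m       : number of competition-layer neurons, i.e. the (deliberately
              large) over-segmentation class count C.
    Returns one weight per output neuron, sorted ascending.
    """
    rng = np.random.default_rng(seed)
    # Step 1: small random initial weights, iteration counter t = 0.
    w = rng.uniform(samples.min(), samples.max(), size=m)
    radius0 = m // 2
    for t in range(T):
        x = samples[rng.integers(len(samples))]        # Step 2: present an input
        j_star = int(np.argmin((x - w) ** 2))          # Step 3: minimum-distance winner
        eta = eta0 * (1.0 - t / T)                     # Step 6: decaying learning rate
        radius = max(1, int(radius0 * (1.0 - t / T)))  # shrinking neighborhood NE(t)
        lo, hi = max(0, j_star - radius), min(m, j_star + radius + 1)
        w[lo:hi] += eta * (x - w[lo:hi])               # Step 4: update per eq. (1)
    return np.sort(w)

def sofm_classify(image, w):
    """Assign each pixel to the nearest cluster weight, giving the label image I_A."""
    d = np.abs(image.astype(float)[..., None] - w[None, None, :])
    return np.argmin(d, axis=-1)
```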
1.1.2 Multi-metric fuzzy region-merging criterion
The image output after SOFM neural-network clustering is hard-partitioned into a fixed number of regions, each pixel belonging to a specific cluster according to its gray-level feature. For a scene image with many targets, too few output neurons prevent the targets from being segmented correctly; with few targets, too many output-neuron clusters over-segment the image, splitting what should be a single target across two or more region classes. To adapt automatically to changes in scene complexity, the present invention exploits the characteristics of SOFM clustering: the image is first over-segmented, then each class is extracted, divided into blocks, and labeled, producing labeled small regions, and the labeled small regions are merged under a multi-metric fuzzy criterion, which yields a good segmentation. The merging criterion is described in detail below.
Let $M = \{\overline{m_l(1)}, \overline{m_l(2)}, \ldots, \overline{m_l(k)}\}$ be the set of labeled small image regions output for class l, where $m_l(i) \in M$ ($i = 1, 2, \ldots, k$; $l = 1, 2, \ldots, c$) denotes the i-th labeled small image region of class l and c is the number of region classes. The merge predicate H for two small image regions is decided jointly by the regions' gray-level uniformity, distance, and adjacency, so the multi-metric fuzzy merging criterion can be stated in the following fuzzy language: (1) if the gray-level uniformity of the merged region is good, merge the regions; (2) if the two regions are close, consider merging; if they are far apart, do not support merging; (3) if the two regions are adjacent, merging can be considered, otherwise do not support merging.
(1) Region gray-level uniformity criterion
For a region R of some class with N pixels, where f(i, j) is the image gray value at coordinates (i, j), the region's gray-level mean is
$\bar{g} = \frac{1}{N} \sum_{i,j \in R} f(i,j)$  (2)
and the uniformity of region R can be measured by
$\max_{i,j \in R} |f(i,j) - \bar{g}| < T$
where f(i, j) is the gray value and T is a threshold: when the difference between every pixel in the region and the region's mean gray level is below T, the region is considered uniform [2]. Since the present invention starts from a large number of segmentation classes, the pixels within each class have high similarity after SOFM clustering; the invention therefore measures the similarity of two regions by the difference between their sums of squared deviations from their respective means, and weighs the reasonableness of a merge by the sum of squared deviations of the merged region's gray values from its mean; see formulas (3) and (4).
$\varepsilon = \left| \sum_{i,j \in R_1} \left( f_1(i,j) - \overline{g_1} \right)^2 - \sum_{i,j \in R_2} \left( f_2(i,j) - \overline{g_2} \right)^2 \right|$  (3)
$\varepsilon_{1,2} = \sum_{i,j \in R_1 \cup R_2} \left( f_{1,2}(i,j) - \overline{g_{1,2}} \right)^2$  (4)
where $\overline{g_1}$ and $\overline{g_2}$ are the gray-level means of regions $R_1$ and $R_2$ respectively, $\overline{g_{1,2}}$ is the gray-level mean after the two regions are merged, and $f_X(i,j)$, $X \in \{1, 2, (1,2)\}$, is the gray value at (i, j) in the region corresponding to X. $\varepsilon$ expresses the gray-level dissimilarity of the two regions: the smaller $\varepsilon$, the closer their gray values. $\varepsilon_{1,2}$ measures the gray-level uniformity of the merged region.
If $\varepsilon$ and $\varepsilon_{1,2}$ both satisfy the set requirements, the regions are merged ($H_g$ = true); otherwise the merge is abandoned. Using this region-uniformity merging condition provides a feedback mechanism for the merged result, allowing merges to be corrected in real time and erroneous merges to be avoided.
(2) Region spatial-distance criterion and adjacency
The present invention uses the weight of each SOFM cluster as the inter-region distance. Let w(i) and w(j) ($i, j = 1, 2, \ldots, c$) be the weights of the output regions of classes i and j among the c cluster regions. If $|w(i) - w(j)| < \varepsilon_d$, the two regions are considered close ($H_d$ = true) and merging can be considered; $\varepsilon_d$ is a threshold determined by experiment.
For two region classes whose merging is under consideration, if pixels of both exist within a small region at the same labeled position, the two regions are considered adjacent ($H_a$ = true) and merging can be considered.
The merge predicate H is therefore $H = H_g \wedge H_d \wedge H_a$: if H is true, the regions are merged; otherwise they are not.
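The three criteria combine as sketched below (our illustration; the uniformity thresholds and the exact form of the $H_g$ test are assumptions, since the patent only requires that $\varepsilon$ and $\varepsilon_{1,2}$ "meet the set requirements"):

```python
import numpy as np

def merge_predicate(img, mask1, mask2, w1, w2, blocks1, blocks2,
                    eps_max=500.0, eps_d=10.0):
    """Evaluate H = Hg and Hd and Ha for two candidate regions R1, R2.

    img              : gray image; mask1, mask2 : boolean region masks.
    w1, w2           : SOFM cluster weights of the two classes (distance metric).
    blocks1, blocks2 : sets of 16x16 block indices labeled with each class.
    eps_max, eps_d   : experiment-chosen thresholds (values here are guesses).
    """
    f1 = img[mask1].astype(float)
    f2 = img[mask2].astype(float)
    s1 = np.sum((f1 - f1.mean()) ** 2)              # scatter of R1 about its mean
    s2 = np.sum((f2 - f2.mean()) ** 2)              # scatter of R2 about its mean
    merged = np.concatenate([f1, f2])
    eps = abs(s1 - s2)                              # eq. (3): gray-level dissimilarity
    eps12 = np.sum((merged - merged.mean()) ** 2)   # eq. (4): uniformity after merging
    Hg = eps < eps_max and eps12 < eps_max * len(merged)  # uniformity criterion
    Hd = abs(w1 - w2) < eps_d                       # cluster-weight distance criterion
    Ha = len(blocks1 & blocks2) > 0                 # adjacency: share a labeled block
    return Hg and Hd and Ha
```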
1.1.3 Scene segmentation algorithm
To adapt to the automatic segmentation of scene images from simple to complex, the automatic segmentation algorithm adopted by the present invention is completed in three major steps:
Step 1: SOFM classification.
(1) Flatten the M × N image I into one dimension, denoted $I_1(i)$, $i = 0, 1, \ldots, M \times N$.
(2) Construct a one-dimensional SOFM neural network with M × N input neurons, set the number of segmentation classes to C, and train the network for self-organized clustering on the data $I_1(i)$ using the Kohonen algorithm above.
(3) Restore the clustering result to two dimensions as an image the same size as the original, denoted $I_A$.
Step 2: Blocking and labeling.
(1) In ascending order of cluster weight, extract the corresponding regions from $I_A$; comparing each region with the original image, set the gray values of pixels outside the region to zero and let the pixels inside the region take the original image's gray values, yielding C images $I_b(i)$, $i = 0, 1, \ldots, C$.
(2) Divide each image $I_b(i)$ into blocks of 16 × 16 pixels and traverse all blocks; if a block contains pixels belonging to the region (nonzero gray value), label the block with i, otherwise leave it unlabeled.
Step 3: Multi-metric fuzzy merging.
(1) Take any two block images of adjacent weights from the $I_b(i)$ and traverse all 16 × 16 blocks; if both images are labeled within the same 16 × 16 block, compute the merge predicate for that block by the criterion of Section 1.1.2 and decide whether to merge. The processed image is denoted $I_s$.
(2) If the block images of all weights have been processed, finish; otherwise return to (1). $I_s$ is the final segmentation result.
1.2 Adaptive background generation
In ordinary surveillance imagery, obtaining at any moment a pure background frame completely free of moving targets is generally not easy, yet a background frame is indispensable for detecting and segmenting moving objects by background subtraction. The present invention rests on the following assumptions: the camera is fixed, and background changes are caused mainly by illumination changes, such as the slow illumination drift of outdoor scenes with weather or time of day, or the switching of lights in indoor scenes; the background generally has locally consistent texture, so when a target is present, the background it occludes is generally similar in color space and texture to the background around the target. The background of the occluded part can therefore be approximated by interpolation, and as time passes the background frame is continuously and dynamically adjusted, progressively yielding a background frame image closer to the real background.
1.2.1 Generation of the first-frame background
A static image segmentation method is first applied to the first frame, mainly to partition it into regions of similar features; motion information is then combined to determine the approximate region of the moving target, so that the background covered by the target can be filled in to construct the first reference background. The present invention first converts the image from the RGB color space to the HSV color space and extracts the luminance signal, then applies the scene segmentation algorithm of Section 1.1 to the luminance signal for an initial segmentation.
The adjacent-frame difference method is then used to detect the moving regions. Let $f_t(x, y)$ be the current frame, $f_t(x, y) = o_t(i_o, j_o) + b_t(i_b, j_b) + n_t(i_n, j_n)$, and $f_{t-1}(x, y)$ the previous frame, $f_{t-1}(x, y) = o_{t-1}(i_o, j_o) + b_{t-1}(i_b, j_b) + n_{t-1}(i_n, j_n)$, where $o_z(i_o, j_o)$, $b_z(i_b, j_b)$, and $n_z(i_n, j_n)$, $z = t, t-1$, denote the target pixels, background pixels, and noise of each frame. Subtracting the previous frame from the current frame gives the difference of the two frames; in a fixed-camera system this difference can be taken to arise mainly from target motion and noise, but successive frame differences cannot detect a complete moving target. The difference image is therefore projected onto the segmentation obtained above: if the density of motion pixels within a segmented region is high, that region is taken to be a moving target, and the remaining regions are marked as background, thereby separating the approximate moving-target region from the background region. The background pixels of the moving-target region are then filled in as follows.
The image is divided into 3 × 3 blocks and all blocks are searched. If a block contains target pixels, the target pixels are filled with the mean of the block's other pixels; if a block consists entirely of target pixels, its pixels are filled with the mean of the blocks above it and to its left; the pixel values of the other blocks are unchanged. After this processing, the first reference background has been generated.
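A sketch of this block-filling rule follows (our code; the handling of image-border blocks without an upper or left neighbor is an assumption):

```python
import numpy as np

def fill_first_background(frame, target_mask, block=3):
    """Synthesize the first reference background by filling target pixels.

    frame       : first luminance frame (2-D array).
    target_mask : boolean mask of the approximate moving-target region.
    """
    bg = frame.astype(float).copy()
    H, W = bg.shape
    for y in range(0, H, block):
        for x in range(0, W, block):
            m = target_mask[y:y + block, x:x + block]
            b = bg[y:y + block, x:x + block]
            if not m.any():
                continue                    # no target pixels: block unchanged
            if m.all():
                # all-target block: mean of the block above and the block left
                neigh = []
                if y >= block:
                    neigh.append(bg[y - block:y, x:x + block].mean())
                if x >= block:
                    neigh.append(bg[y:y + block, x - block:x].mean())
                if neigh:
                    b[:] = np.mean(neigh)
            else:
                b[m] = b[~m].mean()         # fill target pixels with mean of the rest
    return bg
```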
1.2.2 Dynamic background updating
Background changes are mainly due to illumination, and the brightness of a pixel changes gradually. The present invention applies luminance compensation to the previous frame's reference background according to the brightness change between adjacent frames, generating the current frame's reference background. First, the frame difference of adjacent frames is computed to detect the target motion region; at the same time, the previous frame is subtracted from its reference background to obtain the target's position at the previous instant, and the current frame is subtracted from the previous reference background, whose difference may be caused either by target motion or by illumination change. Since a moving target's position changes little over two consecutive frames, the union of the motion parts of the three difference results gives the target's moving region at the current instant. The current frame and the previous frame are then each divided into 3 × 3 blocks, and in the brightness (V) space each block of the two frames is searched; if background pixels form the majority of a block (more than 6 pixels), the means $\mu_t^i$ and $\mu_{t-1}^i$ of the block's background pixels are computed, where t and t-1 index the frames and i indexes the block. The current frame's reference background is generated by formula (5): in the target motion region, the current background keeps the value of the previous reference background at the same position, while the other regions are updated automatically from the previous background at the corresponding position by luminance compensation.
$B_t(x,y) = \begin{cases} B_{t-1}(x,y), & (x,y) \in \text{motion region} \\ \delta_i \, B_{t-1}(x,y), & \text{otherwise} \end{cases}$  (5)
$\delta_i = \dfrac{\mu_t^i}{\mu_{t-1}^i}$
where $B_t(x, y)$ is the generated reference background of the current frame, $B_{t-1}(x, y)$ is the reference background of the previous frame, and $\delta_i$ is the compensation coefficient, determined by the ratio of the mean background-pixel values of corresponding blocks in adjacent frames.
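A sketch of the compensated update of eq. (5) (our code, assuming 0-255 V values and our own handling of blocks whose previous mean is zero):

```python
import numpy as np

def update_background(frame_v, prev_bg_v, motion_mask, block=3, min_bg=6):
    """Luminance-compensated background update, a sketch of eq. (5).

    frame_v     : V (brightness) channel of the current frame.
    prev_bg_v   : previous reference background, same channel.
    motion_mask : union of the three frame-difference motion results.
    min_bg      : a block needs more than 6 background pixels to estimate
                  the means mu_t^i and mu_{t-1}^i.
    """
    bg = prev_bg_v.astype(float).copy()     # motion regions keep the old value
    H, W = bg.shape
    for y in range(0, H, block):
        for x in range(0, W, block):
            m = motion_mask[y:y + block, x:x + block]
            cur = frame_v[y:y + block, x:x + block].astype(float)
            old = prev_bg_v[y:y + block, x:x + block].astype(float)
            bg_pix = ~m
            if bg_pix.sum() > min_bg and old[bg_pix].mean() > 0:
                delta = cur[bg_pix].mean() / old[bg_pix].mean()   # delta_i
                bg[y:y + block, x:x + block][bg_pix] = delta * old[bg_pix]
    return bg
```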
1.3 Moving-target segmentation and extraction
1.3.1 Coarse extraction of the target template
After the current frame's reference background is dynamically generated, the current frame is subtracted from the background frame. Because the gray levels of the moving target differ from those of the background frame, the difference after subtraction is caused mainly by target motion and shadow, but also contains some noise. In the HSV color space of the image, only the luminance information is considered: given a threshold Th, if the difference at a point is greater than or equal to the threshold, the point is taken to be a target point or target shadow and its value is set to 255; otherwise it is taken to be noise and its value is set to 0. This forms the binary moving-region template M(x, y); see formula (6).
$M(x,y) = \begin{cases} 255, & |Dif(x,y)| \ge Th \\ 0, & \text{otherwise} \end{cases}$
$Th = \left( \max(|Dif(x,y)|) - \min(|Dif(x,y)|) \right) \times 0.15$  (6)
where Dif(x, y) is the difference between the current frame and the reference background frame at the point (x, y). The value of Th is experimental: the present invention takes 0.15 times the difference between the maximum and minimum absolute values in the difference image as Th, which gives good results.
The moving-region template M(x, y) contains the target and the target's shadow, as well as some residual small noise regions; moreover, if the gray value of some target point differs little from the reference background, the binarized template may contain holes. Subsequent processing, namely target shadow removal and morphological operations, is therefore needed before an accurate target template is obtained.
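A sketch of eq. (6), assuming the V channels of the current frame and the reference background as inputs:

```python
import numpy as np

def coarse_motion_template(frame_v, bg_v):
    """Binary moving-region template M(x, y) of eq. (6): threshold the absolute
    V-channel difference at 0.15 of its dynamic range."""
    dif = np.abs(frame_v.astype(float) - bg_v.astype(float))
    th = (dif.max() - dif.min()) * 0.15
    return np.where(dif >= th, 255, 0).astype(np.uint8)
```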
1.3.2 Target shadow removal
A target's shadow has motion features similar to the moving target itself, and the background subtraction above cannot distinguish shadow from target. The coarsely segmented target extracted by background subtraction therefore contains both the real moving target and its shadow, and to obtain an accurate target region the shadow must be removed. By its visual characteristics, a shadow region can be regarded as semi-transparent [3]: when a background point is covered by shadow, its brightness decreases while its chrominance remains essentially unchanged, whereas when it is covered by a moving target, its brightness may increase or decrease and its chrominance generally changes more. Based on these features, the present invention applies the following rule to the current-frame pixels covered by the target template and the corresponding reference background pixels to detect and remove shadow [4]:
$M_{sd} = \begin{cases} 0, & \text{if } \left( \alpha \le \dfrac{I_V(x,y)}{B_V(x,y)} \le \beta \right) \text{ and } \left( |I_S(x,y) - B_S(x,y)| \le \Gamma_S \right) \text{ and } \left( |I_H(x,y) - B_H(x,y)| \le \Gamma_H \right) \\ 255, & \text{otherwise} \end{cases}$
where $M_{sd}$ is the binary target template after shadow removal; $I_H$, $I_S$, $I_V$ are the three HSV components of the current-frame pixels covered by template M, and $B_H$, $B_S$, $B_V$ are the three HSV components of the reference background pixels corresponding to template M, with $(x, y) \in M$. The thresholds $\alpha$, $\beta$, $\Gamma_H$, $\Gamma_S$ are determined by experiment and generally take values less than 1.
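A sketch of this shadow test (our code; the threshold values and the 0-255 channel scaling are assumptions, since the patent only states that the thresholds are experimentally chosen and generally below 1):

```python
import numpy as np

def remove_shadow(mask, frame_hsv, bg_hsv,
                  alpha=0.4, beta=0.9, gamma_s=25.0, gamma_h=25.0):
    """Zero out shadow pixels inside the coarse template per the HSV rule above.

    frame_hsv, bg_hsv : HSV images with channels assumed scaled to 0-255.
    A pixel is shadow when its brightness ratio to the background lies in
    [alpha, beta] while hue and saturation change little.
    """
    Iv = frame_hsv[..., 2].astype(float)
    Bv = np.maximum(bg_hsv[..., 2].astype(float), 1e-6)
    ratio = Iv / Bv
    ds = np.abs(frame_hsv[..., 1].astype(float) - bg_hsv[..., 1].astype(float))
    dh = np.abs(frame_hsv[..., 0].astype(float) - bg_hsv[..., 0].astype(float))
    shadow = (alpha <= ratio) & (ratio <= beta) & (ds <= gamma_s) & (dh <= gamma_h)
    out = mask.copy()
    out[shadow] = 0          # M_sd = 0 on shadow pixels, 255 elsewhere in M
    return out
```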
1.3.3 Morphological processing
Because of noise, after coarse extraction and shadow processing the image may still contain isolated points and small noise-induced regions; to obtain an accurate moving target, these isolated points and small regions must be removed. The present invention applies conventional morphological processing to the processed image with the structuring element SE of formula (7).
$SE = \begin{pmatrix} 0 & 1 & 0 \\ 1 & 1 & 1 \\ 0 & 1 & 0 \end{pmatrix}$  (7)
Two erosion operations are performed first to remove isolated points and small regions in the image, and then two dilation operations with the same structuring element restore and smooth the eroded target edges, yielding the binary template of the moving target. Projecting the target template onto the original image extracts the moving target.
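A sketch of this cleanup with OpenCV (the two-erosion/two-dilation counts follow the text; the use of cv2 is our choice):

```python
import numpy as np
import cv2

# Cross-shaped structuring element SE of eq. (7).
SE = np.array([[0, 1, 0],
               [1, 1, 1],
               [0, 1, 0]], dtype=np.uint8)

def clean_template(mask):
    """Two erosions remove isolated points and small regions; two dilations
    with the same element restore and smooth the eroded target edges."""
    eroded = cv2.erode(mask, SE, iterations=2)
    return cv2.dilate(eroded, SE, iterations=2)
```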
1.4 Target feature extraction and representation (parameter features and pixel-level spatial features)
The selection and extraction of target features is critical to tracking: the quality of feature extraction directly affects the subsequent recognition and tracking of the target. To realize a progressively refined tracking strategy, the present invention represents the target with two different feature representation and extraction schemes, namely parameter features and pixel-level spatial features. Different targets normally differ in shape, color, and so on; target features are therefore drawn from the target's shape, color space, histograms of local pixel regions, and similar content, and features that are both general and discriminative are finally selected to express the target.
1.4.1 Extraction of parameter features
Boundary moment features of the target shape: boundary moments are invariant to translation, rotation, and scale [5]. Let the centralized boundary moments be $\mu_{pq} = \sum_{(x,y) \in C} (x - \bar{x})^p (y - \bar{y})^q$, where C is the object's boundary curve; the normalized boundary moments are then $\eta_{pq} = \mu_{pq} / \mu_{00}^{p+q+1}$, $p + q = 2, 3, \ldots$. Four invariant moment functions constructed from central moments of order not higher than three give four boundary features: $f_1 = \eta_{20} + \eta_{02}$, $f_2 = (\eta_{20} - \eta_{02})^2 + 4\eta_{11}^2$, $f_3 = (\eta_{30} - 3\eta_{12})^2 + (\eta_{03} - 3\eta_{21})^2$, $f_4 = (\eta_{30} + \eta_{12})^2 + (\eta_{03} + \eta_{21})^2$.
Perimeter-to-area ratio feature $f_{P/A}$: targets of different classes have different ranges of perimeter-to-area ratio, so computing this ratio yields a representative target feature. With perimeter P and area A, $f_{P/A} = P / A$.
Region aspect-ratio feature $f_{h/w}$: in the two-dimensional rectangular coordinate system, search the target region for its leftmost point $p(x_l, y_l)$, rightmost point $p(x_r, y_r)$, topmost point $p(x_t, y_t)$, and bottommost point $p(x_b, y_b)$; then
$f_{h/w} = \dfrac{|y_t - y_b|}{|x_r - x_l|}$.
Shape-simplicity feature $f_{sd}$ of the target region: shape simplicity [6] is defined as $C = 4\pi A / P^2$. By the isoperimetric inequality of integral geometry, for a closed curve of perimeter P enclosing a region of area A, $P^2 - 4\pi A \ge 0$, so C can be regarded as a parameter of compactness about the center. When C = 1 the region is a circle; when $C = \pi/4$ it is a square; when $C = \sqrt{3}\pi/9$ it is an equilateral triangle; when the region is an elongated strip or has a complex shape, the value of C is small.
Color-space features $f_H$, $f_S$, $f_V$: from the standpoint of human vision, a target's color embodies its individual character, and the human eye easily distinguishes different objects by differences in color. For machine vision, the colors of different targets generally differ, while the color features of a given target are normally constant over a period of time, so color features extracted in the target's color space can be used for target recognition. Because the R, G, B components commonly used for color images are closely correlated, to reduce the mutual influence of the feature components in the color feature space, the present invention transforms the RGB image into the HSV color space, which better fits human visual characteristics; H, S, and V denote the image's hue, saturation, and brightness respectively. The conversion formulas from the RGB color space to the HSV color space are:
$V = \dfrac{R + G + B}{3}$;  (8)
$S = 1 - \dfrac{3}{R + G + B} \min(R, G, B)$;  (9)
$H = \cos^{-1} \left[ \dfrac{(R - G) + (R - B)}{2 \sqrt{(R - G)^2 + (R - B)(G - B)}} \right]$, for $G \ne B$ or $R \ne B$.  (10)
In the HSV color space, the histograms $h_H(z)$, $h_S(z)$, $h_V(z)$ of the H, S, and V components are computed, and the color-space features are the means of these histograms:
$f_H = \dfrac{\sum z\, h_H(z)}{\sum h_H(z)}$; $f_S = \dfrac{\sum z\, h_S(z)}{\sum h_S(z)}$; $f_V = \dfrac{\sum z\, h_V(z)}{\sum h_V(z)}$.
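A sketch of extracting some of these parameter features (our code; the histogram bin count is an assumption, and the perimeter, area, and extreme points are assumed to be supplied by the segmentation stage):

```python
import numpy as np

def color_features(target_hsv_pixels, bins=64):
    """f_H, f_S, f_V: the mean of each HSV histogram over the target's pixels
    (an N x 3 array of HSV triples)."""
    feats = []
    for k in range(3):
        h, edges = np.histogram(target_hsv_pixels[:, k], bins=bins)
        z = 0.5 * (edges[:-1] + edges[1:])        # bin centers stand in for z
        feats.append(float(np.sum(z * h) / max(h.sum(), 1)))
    return feats

def shape_features(perimeter, area, x_l, x_r, y_t, y_b):
    """f_{P/A}, aspect ratio f_{h/w}, and shape simplicity f_sd = 4*pi*A/P^2."""
    f_pa = perimeter / area
    f_hw = abs(y_t - y_b) / max(abs(x_r - x_l), 1e-6)
    f_sd = 4.0 * np.pi * area / perimeter ** 2
    return f_pa, f_hw, f_sd
```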
1.4.2 Pixel-level spatial features
The pixel-level spatial features are the features used for coarse matching of the target. They differ from the parameter features in that the parameter features are computed, essentially time-invariant features of the target, whereas the pixel-level spatial features are pixels segmented directly from the image frame and need no complex computation. To reduce the computation of coarse matching, the present invention extracts, as the spatial-feature image, the rectangular region in the current frame centered on the target's centroid whose length and width are 60% of those of the target's bounding rectangle.
1.5 Matching templates and their feature representation
The matching template is the reference feature and image used for tracking and recognition. Templates are usually obtained in two ways: manually or automatically. Manual acquisition requires human participation: through a human-machine interface, the moving target to be tracked is delineated or cropped by hand to obtain the reference template; this approach cannot perform automatic moving-target tracking. Automatic acquisition analyzes the sequence frames automatically, finds the moving targets, and extracts their templates automatically; it can search the field of view for moving targets and extract templates without manual intervention, tracking targets as soon as they appear. The present invention extracts moving-target templates automatically, and the template's feature representation takes two forms, the parameterized template and the spatial-domain template. The parameterized template expresses the template's features as a feature vector, while the spatial-domain template is expressed by image pixels; both templates are extracted and expressed in the same way as the target features.
1.6 Template updating
As tracking proceeds, the target may rotate, change size, and so on; if the original template were still used for matching, the risk of tracking error or tracking loss would certainly increase. The present invention adopts a dynamic template update technique: when the match measure between the tracked target and its template falls below a set threshold, the template parameters are resampled and updated so that the template reflects the currently tracked target as closely as possible. The template update condition is determined by the following factors: $T_{adaptive}$, $T_{track}$, $C_k$.
Here $T_{adaptive}$ is the match-measure threshold below which the template requires updating, with $0 < T_{track} < T_{adaptive} < 1$; $T_{track}$ is the minimum acceptable value for a template-target match, and if the match measure between a target and a template is below this threshold, the target and the template are considered not to match; $C_k$ is the centroid position of the target predicted with a Kalman filter [7].
After background subtraction yields the moving-target regions, the Kalman filter's prediction of a target's centroid position in the next frame is used to locate and search the moving regions near that point; if the spatial-domain pixel correlation measure $t_a$ between the target and its spatial-domain template satisfies $T_{track} < t_a < T_{adaptive}$, the target's template and its template-feature representation are extracted anew, achieving automatic template updating.
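A sketch of this update rule (our code; the numeric defaults reuse the coarse-matching thresholds reported in Section 1.9, which is our assumption, and the two feature extractors are supplied by the caller):

```python
def maybe_update_template(template, t_a, extract_spatial, extract_params,
                          T_track=0.4, T_adaptive=0.95):
    """Dynamic template update decision.

    t_a is the spatial correlation between the target found near the
    Kalman-predicted centroid C_k and the stored spatial-domain template.
    Update only when the match is acceptable but degrading:
    T_track < t_a < T_adaptive.
    """
    if t_a <= T_track:
        return template, False       # below T_track: target and template do not match
    if t_a < T_adaptive:
        # Re-extract both template forms from the current target.
        template = (extract_spatial(), extract_params())
        return template, True
    return template, False           # match still tight: keep the old template
```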
1.7 Handling of new-target entry and target disappearance
The current frame is subtracted from the background frame to extract the moving targets, and each target's feature vector is computed and compared with the feature vectors of the existing templates. If a target matches none of the existing templates, a new moving target is considered to have entered the field of view; it is given a new label, its template is extracted, and tracking begins. If a moving target fails to reappear within a set time, it is considered to have disappeared, and its template feature vector and spatial-domain template image are deleted to free storage.
1.8 Handling of target meeting and separation
In a multi-target tracking system, meeting, occlusion, and separation between targets occur frequently, and if handled poorly they easily cause tracking failure. The present invention gives special treatment to target meeting and separation during tracking, as follows:
If a target region suddenly grows beyond a preset threshold, the targets are considered to have met: the extraction of target features is stopped, the point predicted by the Kalman filter is taken as the target's centroid, and the matching and template updating of the corresponding moving targets and their templates are suspended. Only when the area of the moving region near the predicted centroid returns to the normal range are the targets considered separated, and normal target-tracking processing resumes.
1.9 Target tracking and labeling
Target tracking is carried out on the basis of target detection and feature extraction and is a high-level computer vision problem. The commonly used tracking methods fall roughly into four classes: model-based, region-based, feature-based, and active-contour-based tracking. Model-based tracking represents the object as a 2-D contour or a 3-D model; the 2-D model suffers from dependence on the camera viewing angle, its representation changing when the camera angle changes or the object rotates or is occluded, while the 3-D model largely solves this problem but is computationally complex and not commonly used. Region-based tracking correlates color, texture, and similar information between the moving regions of the current and previous frames to find the regions associated with each motion, thereby recognizing and tracking targets. Feature-based tracking uses distinguishable points and lines on the target as sub-features; because it relies on feature-point information, the system can still track stably under partial occlusion as long as the feature points can be obtained correctly. Active-contour (Snake) tracking obtains the target's contour directly and tracks by dynamically modifying the contour points; it requires the contour to be initialized near the target's edge, otherwise correct tracking is difficult.
The present invention performs progressively refined multi-feature fuzzy matching with the moving target's parameter features and pixel-level spatial features, proposing a composite-feature tracking method of progressively refined fuzzy matching to track targets stably. The method consists mainly of two processes: coarse matching and fine matching. During tracking, if the coarse match already meets the target-recognition requirement, no fine match is needed; this greatly reduces the system's computation and shortens its tracking response time, with an especially evident effect when many targets in the scene must be tracked. Coarse matching correlates the target's pixel-level spatial features with the existing spatial-domain templates, defines the correlation ranges of a match with several thresholds, and decides from the correlation value whether to proceed to further fine matching. Fine matching discriminates similarity between the parameter feature vectors of the target and the template, thereby tracking the target. While tracking, the target's centroid position is computed, the target is labeled at that position, and a Kalman filter predicts the target's centroid position in the next frame for use in handling "meetings" between targets (see Section 1.8 on target meeting and separation).
Let $I_o$ be the pixel-level spatial-domain image segmented from a target, of size n × m × 3, and let $I_t^j$, j ∈ {1, 2, 3, …}, be the existing spatial-domain template images, with j indexing the templates. Then:
Step 1: Coarse matching.
1) Matching-image cropping. Taking the centroids of $I_o$ and $I_t^j$ (j ∈ {1, 2, 3, …}) as centers, crop two images of equal size, using the smaller of the two row counts and the smaller of the two column counts as the standard; denote them $I_{oseg}$ and $I_{tseg}^j$ (j ∈ {1, 2, 3, …}).
2) Correlation computation. Convert the cropped images from color to gray level and compute with formula (11).
$r_j = \dfrac{\sum_{k=1}^{K} \sum_{l=1}^{L} I_{oseg}(k,l)\, I_{tseg}^j(k,l)}{\sqrt{\sum_{k=1}^{K} \sum_{l=1}^{L} [I_{oseg}(k,l)]^2 \times \sum_{k=1}^{K} \sum_{l=1}^{L} [I_{tseg}^j(k,l)]^2}}$  (11)
In formula (11), K and L are the numbers of rows and columns of the cropped images. By the Cauchy-Schwarz inequality, $0 \le r_j \le 1$; the closer the correlation $r_j$ is to 1, the more similar the two images.
3) Match decision. If $r_j$ is greater than or equal to a set threshold $T_{track2}$ (the present invention sets $T_{track2} = 0.95$ from experiment), the target is considered identical to this template; no further fine matching is needed, the target is labeled directly with this template's label and tracked, and the coarse matching process exits. If $T_{track1} \le r_j < T_{track2}$ ($T_{track1} = 0.4$), the correlation of the target with all templates is computed in turn, and the three templates with the largest correlations (if j > 3) are selected for further fine matching against the target's parameter features. If $r_j$ is below $T_{track1}$ for all templates, the target is a newly entered one, and "new-target entry" processing is performed (see Section 1.7). A sketch of the coarse stage follows.
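Our code; eq. (11) and the thresholds $T_{track1} = 0.4$, $T_{track2} = 0.95$ follow the text, while the structure of the decision wrapper is ours:

```python
import numpy as np

def coarse_correlation(I_oseg, I_tseg):
    """Normalized correlation r_j of eq. (11) between two equal-size gray crops.
    By the Cauchy-Schwarz inequality, 0 <= r_j <= 1."""
    a = I_oseg.astype(float).ravel()
    b = I_tseg.astype(float).ravel()
    denom = np.sqrt(np.sum(a * a) * np.sum(b * b))
    return float(np.sum(a * b) / denom) if denom > 0 else 0.0

def coarse_decision(r, T_track1=0.4, T_track2=0.95):
    """Decision of the coarse stage over the correlations r with all templates."""
    r = np.asarray(r)
    best = int(np.argmax(r))
    if r[best] >= T_track2:
        return "same", [best]                   # track under this template's label
    if r[best] >= T_track1:
        top = np.argsort(r)[::-1][:3]           # up to three candidates go on
        return "fine", [int(i) for i in top]    # to fine matching
    return "new", []                            # new target entered the view
```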
Step 2: Fine matching.
Fine matching is carried out as needed, as decided by the result of coarse matching. For fine matching, the target's parameter features are first extracted (see Section 1.4.1 on parameter feature extraction and representation), and the distance between them and the corresponding template's parameter features is then measured to label and track the target. The specific processing is as follows:
1) Extract the target's parameter features $f_1$, $f_2$, $f_3$, $f_4$, $f_{P/A}$, $f_{h/w}$, $f_{sd}$, $f_H$, $f_S$, $f_V$.
2) Substitute the target's parameter feature vector and the template's parameterized vector into formula (12) to compute the fine-matching similarity between target and template.
$\eta = \sum_{n=1}^{10} |F_o^n - F_t^n|, \quad F_o^n, F_t^n \in \Omega$  (12)
where $F_o^n$ is the target's parameter feature vector and $F_t^n$ the template's; corresponding features of target and template are differenced. $\Omega$ denotes the feature-vector range, comprising the ten parameter features above.
3) Choose the template yielding the minimum η (in the multi-template case) as the target's matching template for label tracking.
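A sketch of eq. (12) (our code; the feature vectors are assumed already extracted and ordered consistently between target and templates):

```python
import numpy as np

def fine_match(target_feats, template_feats_list):
    """Fine matching by eq. (12): L1 distance over the ten parameter features
    (f1..f4, f_{P/A}, f_{h/w}, f_sd, f_H, f_S, f_V). Returns the index of the
    template with minimum eta, which supplies the target's label."""
    F_o = np.asarray(target_feats, dtype=float)
    etas = [float(np.sum(np.abs(F_o - np.asarray(F_t, dtype=float))))
            for F_t in template_feats_list]
    return int(np.argmin(etas)), min(etas)
```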
Experimental results:
In the experiments, the method of the present invention was applied on a PC to the Hall Monitor video sequence and to a video sequence captured in real time with a CCD camera, performing moving-target segmentation and tracking. Some of the results follow:
2.1 Experimental results on the Hall Monitor video sequence
Moving-target segmentation and tracking were performed on the Hall Monitor video sequence. Figs. 2 and 3 show the segmentation results for frames 25 and 50 of the sequence, where (a) is the luminance part of the image (the V component of HSV), (b) is the generated reference background, (c) is the difference between the current frame and its reference background frame, (d) is the binarization of (c), (e) is the target template after shadow removal and morphological processing, and (f) is the finally segmented moving target. Fig. 4 shows the result of moving-target tracking in the video sequence; "NO.1" and "NO.2" in each figure are the labels of the moving targets.
2.2 Experimental results on the video sequence captured in real time
An indoor video sequence was captured and processed with an ordinary CCD camera; Fig. 5 shows the moving-target segmentation and tracking results.
Moving-target segmentation and tracking, with which the present invention is concerned, is the core processing technology of many concrete video application systems and the basis of high-level semantic analysis of video. It can be widely applied in specific military and civilian fields such as missile guidance, visual navigation, video surveillance, object-based video coding, video retrieval, and video conferencing. With the further development of computer technology, control technology, software and hardware, and related supporting technologies, the continued rise in user demand, and the development of various video application systems, moving-target segmentation and tracking will have ever broader application prospects.

Claims (4)

1. A video moving-target segmentation and tracking method, characterized in that the background subtraction method is adopted to segment moving targets, target features are then extracted and identified, and stable tracking of the targets is realized; wherein the background model is a model updated in real time, and the first-frame background is generated adaptively by combining video motion information with a scene segmentation algorithm based on a self-organizing feature map (SOFM) network and small-region label merging; for feature expression, two forms are adopted: parameter features and pixel-level spatial features; during target tracking, a progressively refined coarse-to-fine matching strategy is adopted, and target identification and tracking are realized through coarse matching and fine matching;
The manner in which the scene segmentation algorithm based on the self-organizing feature map (SOFM) neural network and small-region label merging is combined with video motion information is: the network is first configured with a relatively large number of output classes and trained, so that it over-segments the input image; each segmented region is then iteratively merged using a multi-measure fuzzy criterion with small-region labeling, yielding the final segmentation result; the SOFM network consists of two layers, an input layer and a competition layer; the input layer has n neurons and the competition layer m neurons; variable weights connect each input node to the output nodes; an input pattern of arbitrary dimension is mapped to a one- or two-dimensional discrete map at the competition-layer output, with its topological structure kept unchanged;
The SOFM network is trained with scene image data so that particular neurons of the SOFM become sensitive to particular gray-level features; when an image to be segmented is input to the network, the data are clustered in a self-organizing manner according to their gray-level features, completing the region segmentation of the image;
The image output after SOFM neural-network clustering is hard-partitioned into a fixed number of regions, each pixel belonging to a specific cluster according to its gray-level feature; the image is first over-segmented, then partitioned and labeled class by class to obtain labeled small regions, and the labeled small regions are merged using the multi-measure fuzzy criterion;
A dynamic template update method is adopted: when the match measure between a tracked target and its template falls below a set threshold, the template is resampled and its parameters updated; the condition for template update is determined by the following quantities: T_adaptive, T_track, and C_k, where T_adaptive denotes the match-measure threshold at which the template requires updating, with 0 < T_track < T_adaptive < 1; T_track denotes the minimum acceptable value for a match between template and target: if the match measure between a target and a template is below this threshold, the target and the template are judged not to match; and C_k is the centroid position of the target predicted with a Kalman filter [7];
After background subtraction, the moving-target regions are obtained; the position of a given target in the next frame is located via the centroid position predicted by the Kalman filter, and the moving region near that point is searched; if the spatial-domain pixel correlation match measure t_a between the target and its spatial template satisfies T_track < t_a < T_adaptive, the template of the target is re-extracted and its template features re-expressed, realizing automatic template update (a sketch of this update rule follows this claim);
By subtracting the background frame image from the current frame image, the moving targets are extracted and their feature vectors computed; the feature vector is compared with the feature vectors of the existing templates, and if it matches none of them, a new moving target is considered to have entered the field of view; it is given a new number, its template is extracted, and tracking is carried out; if a given moving target has not reappeared after a set time, the target is considered to have disappeared, and its template feature vector and spatial template image are deleted to release storage space;
A progressively refined multi-feature fuzzy matching is carried out using the parameter features and pixel-level spatial features of the moving targets, consisting of two processes, coarse matching and fine matching: coarse matching performs a correlation operation between the pixel-level spatial features of the target and the existing spatial templates, several thresholds define the correlation ranges of the match, and the value of the correlation decides whether further fine matching is to be carried out;
The fine matching process uses the parameter feature vectors of target and template for similarity discrimination, thereby realizing tracking: while a target is tracked, its centroid position is computed, the target is labeled at the centroid position, and the centroid position of the target in the next frame is predicted with the Kalman filter, to be used in handling "meetings" between targets.
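The dynamic template update described in this claim reduces to acting on the band in which the spatial match measure t_a falls. A minimal Python sketch of that rule, assuming a template-extraction routine exists; the helper name and the value of T_adaptive are illustrative assumptions (the patent only requires 0 < T_track < T_adaptive < 1):

```python
T_TRACK, T_ADAPTIVE = 0.4, 0.8  # assumed values with 0 < T_TRACK < T_ADAPTIVE < 1

def maybe_update_template(template, target_region, t_a, extract_template):
    """Refresh the spatial template when the match has drifted but is still valid.

    t_a: spatial-domain correlation between the target (found near the
         Kalman-predicted centroid C_k) and its current spatial template.
    Returns the (possibly re-extracted) template, or None if no match.
    """
    if t_a <= T_TRACK:
        return None                       # below the minimum acceptable match
    if t_a < T_ADAPTIVE:
        # Acceptable but degraded: resample the template from the current region.
        return extract_template(target_region)
    return template                       # strong match: keep the existing template
```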
2. The video moving-target segmentation and tracking method according to claim 1, characterized in that the processing of coarse matching and fine matching is:
Let the pixel-level spatial-domain image segmented out for a given target be I_o, of size n × m × 3, and let the existing spatial template images be I_t^j, where j ∈ {1, 2, 3, …} denotes the template number; then:
First step: the processing of coarse matching
1) Matching-image cropping: with the centroid points of I_o and I_t^j (j ∈ {1, 2, 3, …}) as centers, two images of equal size are cropped, taking the smaller of the two row counts and the smaller of the two column counts as the standard; they are denoted I_oseg and I_tseg^j (j ∈ {1, 2, 3, …}), respectively;
2) Correlation computation: the cropped images are converted from color to grayscale, and the correlation is computed with formula (11):

$$r_j = \frac{\sum_{k=1}^{K} \sum_{l=1}^{L} I_{oseg}(k,l)\, I_{tseg}^{j}(k,l)}{\sqrt{\sum_{k=1}^{K} \sum_{l=1}^{L} \left[ I_{oseg}(k,l) \right]^2 \times \sum_{k=1}^{K} \sum_{l=1}^{L} \left[ I_{tseg}^{j}(k,l) \right]^2}} \tag{11}$$

In formula (11), K and L denote respectively the numbers of rows and columns of the cropped images; 0 ≤ r_j ≤ 1, and the closer the correlation r_j is to 1, the more similar the two images;
3) Match decision: if r_j is greater than or equal to the set threshold T_track2, the target is judged identical to the template; no further fine matching is needed, the target is directly given the label corresponding to the template and tracked, and coarse matching exits; if T_track1 ≤ r_j < T_track2 (T_track1 = 0.4), the correlation of the target with every template is computed in turn, and the parameter features of the three templates with the largest correlations (if j > 3) are selected for further fine matching against the target;
Second step: the fine matching process of the target
Fine matching is decided by the result of coarse matching; during fine matching, the parameter features of the target are first extracted, the distance between them and the parameter features of the corresponding templates is then measured, and the labeling and tracking of the target are realized:
1) Extract the parameter features of the target: f_1, f_2, f_3, f_4, f_{P/A}, f_{H/W}, f_{Sd}, f_H, f_S, f_V;
2) Substitute the parameter feature vector of the target and the parameter feature vector of the template into formula (12) to compute the similarity of the fine match between target and template;

$$\eta = \sum_{n=1}^{10} \left| F_o^n - F_t^n \right|, \quad \left( F_o^n, F_t^n \in \Omega \right) \tag{12}$$

where F_o^n denotes the parameter feature vector of the target and F_t^n that of the template, the difference being taken between mutually corresponding features of target and template; Ω denotes the feature-vector range, comprising the ten parameter features listed above;
3) Choose the template yielding the minimum η value (in the case of multiple target templates) as the matching template of the target and perform label tracking.
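The correlation of formula (11) is a standard normalized cross-correlation between two equal-size grayscale crops. A minimal Python sketch, assuming the crops are already centered on the respective centroids and cut to a common K × L size (the function name is illustrative):

```python
import numpy as np

def coarse_correlation(I_oseg, I_tseg):
    """Normalized cross-correlation r_j of formula (11).

    For non-negative grayscale crops of identical shape the result lies
    in [0, 1], with values near 1 meaning the two crops are similar.
    """
    a = np.asarray(I_oseg, dtype=float).ravel()
    b = np.asarray(I_tseg, dtype=float).ravel()
    denom = np.sqrt((a ** 2).sum() * (b ** 2).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0
```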
3. The video moving-target segmentation and tracking method according to claim 2, characterized in that the process of handling the meeting and separation of targets during tracking is as follows: if the area of a target region suddenly grows beyond a preset threshold, the targets are considered to have met; the extraction of the target's features is stopped, the point predicted by the Kalman filter is taken as the target's centroid, and the matching operations and template updates of the corresponding moving targets and their templates are suspended; when the area of the moving region near the predicted centroid is back within the normal range, the targets are considered to have separated, and the normal target-tracking procedure is resumed.
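This meet/separate rule behaves as a small state machine keyed on region area and the Kalman-predicted centroid. A minimal Python sketch under the assumption that per-frame areas and predictions are available; the two threshold constants are illustrative, as the claim leaves the exact thresholds open:

```python
AREA_MERGE_FACTOR = 1.8  # assumed: sudden growth beyond this factor flags a 'meet'
AREA_NORMAL_BAND = 0.3   # assumed: within +/-30% of nominal area counts as normal

def update_occlusion_state(state, area, nominal_area, area_near_prediction):
    """Track whether a target is currently merged with another target.

    state: 'tracking' or 'merged'.
    area: area of the region currently matched to this target.
    area_near_prediction: area of the moving region found around the
        Kalman-predicted centroid (used to detect separation).
    While 'merged', feature extraction, matching, and template updates
    for this target should be suspended.
    """
    if state == 'tracking' and area > AREA_MERGE_FACTOR * nominal_area:
        return 'merged'    # regions have fused: rely on the Kalman prediction
    if state == 'merged' and \
            abs(area_near_prediction - nominal_area) <= AREA_NORMAL_BAND * nominal_area:
        return 'tracking'  # region near the predicted centroid looks normal again
    return state
```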
4. The video moving-target segmentation and tracking method according to claim 2, characterized in that the method of merging the labeled small regions using the multi-measure fuzzy criterion is: (1) if the gray-level uniformity of the region after merging is better, the regions are merged; (2) if the two regions are close to each other, merging is considered; if they are far apart, merging is not supported; (3) if the two regions are adjacent, merging is considered; otherwise merging is not supported.
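The three merge criteria of claim 4 can be combined as a soft score over candidate region pairs. The patent does not fix the measures or weights numerically, so the uniformity measure, distance scale, and weights below are illustrative assumptions:

```python
import numpy as np

def should_merge(gray_a, gray_b, centroid_a, centroid_b, adjacent,
                 max_distance=30.0, merge_threshold=0.5):
    """Fuzzy decision on merging two labeled small regions.

    gray_a, gray_b: 1-D arrays of pixel gray levels in each region.
    adjacent: True if the two regions share a boundary (criterion 3).
    """
    if not adjacent:
        return False  # only adjacent regions are candidates for merging
    # Criterion 1: gray-level uniformity after merging (low spread -> near 1).
    merged = np.concatenate([np.ravel(gray_a), np.ravel(gray_b)])
    uniformity = 1.0 / (1.0 + float(np.std(merged)))
    # Criterion 2: closeness of the two region centroids (far apart -> near 0).
    dist = np.linalg.norm(np.asarray(centroid_a, float) - np.asarray(centroid_b, float))
    closeness = max(0.0, 1.0 - dist / max_distance)
    return 0.5 * uniformity + 0.5 * closeness > merge_threshold
```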