CN102999921B - Pixel label propagation method based on directional tracing windows - Google Patents


Info

Publication number
CN102999921B
CN102999921B (application CN201210452433.1A)
Authority
CN
China
Prior art keywords
pixel
tracking window
label
window
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related
Application number
CN201210452433.1A
Other languages
Chinese (zh)
Other versions
CN102999921A (en)
Inventor
钟凡
秦学英
彭群生
孟祥旭
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shandong University
Original Assignee
Shandong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shandong University filed Critical Shandong University
Priority to CN201210452433.1A priority Critical patent/CN102999921B/en
Publication of CN102999921A publication Critical patent/CN102999921A/en
Application granted granted Critical
Publication of CN102999921B publication Critical patent/CN102999921B/en
Expired - Fee Related legal-status Critical Current
Anticipated expiration legal-status Critical


Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a pixel label propagation method based on directional tracking windows. The method includes the steps of: first, determining the region to be labeled in the target image; second, arranging tracking windows in the target image along each specified direction so that the windows cover the region to be labeled; third, for each tracking window, building a Gaussian mixture model for each label, using the pixels of the input image covered by the window as samples; fourth, for each to-be-labeled pixel covered by a tracking window, computing the probability density of its belonging to each label; fifth, computing the window's confidence in the probability estimated for each to-be-labeled pixel; sixth, processing all tracking windows in all directions; and seventh, assigning each to-be-labeled pixel the label determined by the probabilities and confidences of the windows that cover it. The method effectively exploits spatial context and reduces the errors caused by ambiguous features.

Description

Pixel label propagation method based on directional tracking windows
Technical field
The present invention relates to a pixel label propagation method, and in particular to a pixel label propagation method based on directional tracking windows.
Background technology
Label propagation between video frames is common in video processing, especially in video editing. A label usually represents the result of some video-processing task, and label propagation can be understood as solving for the results of the other frames given the known result of one frame. For example, in region tracking and foreground segmentation, a user can obtain the result for one frame interactively and then use label propagation to obtain the results for the remaining frames. Label propagation usually adopts one of the following three kinds of methods:
1. Methods based on image matching
Methods based on image matching first register the input frame against the target frame, then copy the pixel labels of the input frame to the target frame according to the pixel correspondence. Label propagation of this kind is therefore equivalent to image matching. Image matching is a classical problem in computer vision and is generally performed with optical-flow tracking. Owing to occlusion, edge blur and similar effects, accurate image matching is hard to obtain. Optical-flow methods based on local features are unsuitable for flat image regions, while methods based on global optimization are very sensitive to video discontinuities caused by occlusion and the like. Hence, although label propagation is in theory equivalent to image matching, in practice such methods are rarely applied on their own and are usually used only to obtain an initial result.
2. Methods based on a global classifier
Methods based on a global classifier first extract a feature for each pixel, then propagate labels in feature space according to the distances and adjacency of pixels there. A global classifier means that all pixels of the target frame share the same classifier, independent of pixel position. A typical example of this class is video segmentation based on global color distributions: taking pixel color as the feature, the method first uses foreground and background pixels with known labels as samples to obtain the foreground and background distribution functions in color space, then classifies unknown pixels with these distribution functions. Because a global classifier ignores the spatial relations of pixels and propagates labels directly in feature space, it easily errs in regions whose features are ambiguous; for instance, where foreground and background colors are similar, segmentation based on global color distributions produces many mistakes. On the other hand, precisely because spatial relations are ignored and samples can be drawn from a wide range, methods based on global feature distributions handle temporal discontinuities in video (new regions caused by occlusion, topology change, rapid motion and the like) comparatively well.
3. Methods based on local classifiers
The Roto Brush tool introduced in Adobe After Effects CS5 uses local classifiers for label propagation, aiming to overcome the global classifier's tendency to err in feature-ambiguous regions. Unlike a global classifier, each local classifier covers a local region of the target image, and its training samples come from the corresponding region of the input image. This is in effect a way of exploiting the spatial positions of pixels. Moreover, because the region covered by a local classifier is much smaller than that of a global classifier, its feature distribution is also simpler, which further reduces the chance of error.
The technical problem solved by the present invention differs from ordinary visual tracking, such as the non-parametric-model visual tracking method of No. 200910080381.8, the real-time multi-target marking and centroid computation method for video objects of No. 200510047785.9, and the multi-feature-point tracking method for micro-sequence images of No. 201010516768.6. Visual tracking and target marking can both be reduced to marking regions, whereas pixel label propagation must mark every pixel and is therefore more closely related to video segmentation; the present invention can also be applied directly to video segmentation. Feature-point tracking belongs to the image-matching methods, but it handles only the small fraction of pixels in a video that are easy to track and cannot be used for pixel label propagation. The directional windows of the present invention serve mainly to exploit color distributions better, and thus differ essentially from feature tracking and image matching.
The key step in adopting local classifiers is defining the coverage region of each classifier, i.e. the tracking window. The larger the tracking window, the more complex the feature distribution inside it and the greater the chance of ambiguous features, which leads to problems similar to those of a global classifier. The smaller the tracking window, the more robust it is to ambiguous features, but also the more sensitive it becomes to local discontinuities between frames, erring more easily in rapidly moving and newly appearing regions. Existing local classifiers all adopt tracking windows of regular shape, i.e. square or circular windows, which struggle to give satisfactory results when ambiguous features and inter-frame discontinuities occur at the same time. The directional tracking window disclosed in this invention helps overcome this shortcoming of local classifiers.
Summary of the invention
The object of the present invention is to solve the above problems by providing a pixel label propagation method based on directional tracking windows, which effectively exploits spatial context and reduces the errors caused by ambiguous features.
To achieve this goal, the present invention adopts the following technical scheme:
A pixel label propagation method based on directional tracking windows, whose concrete steps are:
Step 1: dilate the region to be propagated in the input image by 30-70 pixels; the result serves as the region to be marked in the target image;
Step 2: for each specified direction, arrange tracking windows along that direction in the target image so that the windows in each direction completely cover the region to be marked;
Step 3: for each tracking window, using the pixels the window covers in the input frame as samples and pixel color as the feature, build for each label L a Gaussian mixture model p(x|L) representing its color distribution, where x is the color of a pixel to be marked; a label denotes the class a pixel is assigned to, each class being marked by its own label;
Step 4: for each tracking window, compute the probability that each to-be-marked pixel it covers belongs to each label;
Step 5: for each tracking window, compute its confidence in the probability estimated for each to-be-marked pixel;
Step 6: process all tracking windows in all directions in turn;
Step 7: for each to-be-marked pixel, record the probability and confidence computed by every tracking window covering it, and determine its label from the probability output by the window with the highest confidence.
The tracking window has a fixed width of W pixels and an adjustable length.
The concrete steps of Step 2 are:
(2-1) First arrange the horizontal tracking windows: scan top-down for the first row containing the region to be marked and denote it r₀; rows r₀ and r₀+W-1 are the top and bottom of the first tracking window. Compute the start and end columns of the to-be-marked pixels within these rows, i.e. scanning left to right, the column containing the first to-be-marked pixel is the start column and the column containing the last is the end column, and set them as the left and right ends of the first window. Row r₀+2W/3 is the starting row of the second tracking window, and row r₀+2(k-1)W/3 is the starting row of the k-th window (adjacent windows overlap by W/3); arrange subsequent windows in the same way until all to-be-marked pixels are covered, k being a natural number.
(2-2) For any other direction θ, first rotate the target image clockwise by θ degrees, arrange horizontal tracking windows as in step (2-1), then rotate the target image counterclockwise by θ degrees to obtain the tracking windows in direction θ.
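The window-placement rule of step (2-1) can be sketched as follows. This is an illustrative Python/NumPy reading of the rule, not an implementation from the patent: the function name, the boolean to-be-marked mask input, and the tuple return format are all assumptions.

```python
import numpy as np

def horizontal_windows(mask, W=15):
    """Lay horizontal tracking windows over a boolean to-be-marked mask.

    Each window is W rows tall; successive windows start 2W/3 rows apart,
    so adjacent windows overlap by W/3 rows, as in step (2-1). Each
    window's left/right ends are the first/last to-be-marked columns in
    its rows. Returns (top, bottom, left, right) boxes, bounds inclusive.
    """
    rows = np.flatnonzero(mask.any(axis=1))
    if rows.size == 0:
        return []
    r0, r_last = int(rows[0]), int(rows[-1])
    boxes = []
    top = r0
    while top <= r_last:                       # until all marked rows are covered
        bottom = min(top + W - 1, mask.shape[0] - 1)
        cols = np.flatnonzero(mask[top:bottom + 1].any(axis=0))
        if cols.size:                          # skip bands with nothing to mark
            boxes.append((top, bottom, int(cols[0]), int(cols[-1])))
        top += 2 * W // 3                      # stride 2W/3 -> overlap W/3
    return boxes
```

For another direction θ, step (2-2) amounts to rotating the mask by θ (for instance with `scipy.ndimage.rotate`), applying the same routine, and mapping the resulting windows back.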
The concrete form of the Gaussian mixture model p(x|L) of Step 3 is p(x|L) = Σ_{k=1}^{K} ω_k N(x; π_k, σ_k), where N is the normal distribution, π_k and σ_k are the mean and variance of the k-th component, ω_k is the weight of the k-th component, and K is the number of Gaussian components, generally taken between 3 and 5; the parameters π_k, σ_k, ω_k can be obtained with the Expectation-Maximization algorithm.
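The mixture density of Step 3 can be evaluated directly from its parameters. The sketch below assumes a diagonal-covariance mixture and hypothetical parameter arrays; fitting those parameters by Expectation-Maximization is omitted (a library routine such as scikit-learn's `GaussianMixture` could be used for that):

```python
import numpy as np

def gmm_density(x, weights, means, variances):
    """Evaluate p(x|L) = sum_k w_k * N(x; mu_k, sigma_k^2) for a
    diagonal-covariance Gaussian mixture, the form given in Step 3.
    x: (n, d) pixel colors; weights: (K,); means, variances: (K, d)."""
    x = np.atleast_2d(x).astype(float)
    weights = np.asarray(weights, dtype=float)
    means = np.atleast_2d(means).astype(float)
    variances = np.atleast_2d(variances).astype(float)
    dens = np.zeros(x.shape[0])
    for w, mu, var in zip(weights, means, variances):
        norm = np.prod(2.0 * np.pi * var) ** -0.5      # Gaussian normalizer
        expo = np.exp(-0.5 * np.sum((x - mu) ** 2 / var, axis=1))
        dens += w * norm * expo
    return dens
```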
The concrete steps of Step 4 are:
(4-1) Denote by p(x|L=l) the color distribution of label l inside the tracking window, computed from the Gaussian mixture model of Step 3;
(4-2) With M labels, the probability that to-be-marked pixel i belongs to label l is:
p_l(x_i) = p(x_i|L=l) / Σ_{j=1}^{M} p(x_i|L=j)
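Step 4's normalization over the M labels is a one-line NumPy operation; the (M, n) density-array layout (one row per label, one column per pixel) is an assumption made for illustration:

```python
import numpy as np

def label_probabilities(densities):
    """Normalize per-label densities into the Step 4 probabilities
    p_l(x_i) = p(x_i|L=l) / sum_j p(x_i|L=j).
    densities: (M, n) array, one row per label, one column per pixel."""
    d = np.asarray(densities, dtype=float)
    return d / d.sum(axis=0, keepdims=True)
```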
The concrete method of Step 5 is: the confidence of the probability estimated for each to-be-marked pixel i covered by the tracking window is:
c(x_i) = (p_max(x_i) - p_min(x_i)) / (p_max(x_i) + p_min(x_i) + ε)
where p_max(x_i) and p_min(x_i) are the maximum and minimum of p(x_i|L=j), j = 1, …, M, with j ranging over the labels in the tracking window; ε is a constant, typically 1e-3. When the maximum and minimum probability densities of pixel i are both very large or both very small, the pixel receives low confidence: densities that are all large correspond to labels with similar colors inside the window, while densities that are all small correspond to a temporal discontinuity region whose associated samples cannot be found in the input frame.
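The Step 5 confidence admits an equally small sketch (same assumed (M, n) density layout as above; ε = 1e-3 as in the text):

```python
import numpy as np

def window_confidence(densities, eps=1e-3):
    """Step 5 confidence c(x_i) = (p_max - p_min) / (p_max + p_min + eps)
    over one window's per-label densities ((M, n) array). Low when all
    densities are similar (ambiguous colors) or all tiny (temporal
    discontinuity: no matching samples in the input frame)."""
    d = np.asarray(densities, dtype=float)
    p_max, p_min = d.max(axis=0), d.min(axis=0)
    return (p_max - p_min) / (p_max + p_min + eps)
```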
The concrete method of Step 7 is: each tracking window outputs, for every pixel it covers, the probability p(x_i|L=l) of belonging to each label l and the confidence c(x_i). Denote by p′(x_i|L=l) the probabilities of the window with the highest confidence among all windows covering pixel i; the label of pixel i is then arg max_l p′(x_i|L=l), i.e. pixel i is marked with the label of highest probability.
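The Step 7 max-confidence fusion can be sketched as follows, assuming every window has been evaluated on a common pixel list so that probabilities stack into one (n_windows, M, n) array; that dense layout is an illustrative simplification, since in practice each window covers only some pixels:

```python
import numpy as np

def fuse_windows(probs, confs):
    """Step 7 fusion: per pixel, take the label distribution of the most
    confident covering window and return its arg-max label.
    probs: (n_windows, M, n) per-window per-label probabilities;
    confs: (n_windows, n) per-window confidences."""
    probs = np.asarray(probs, dtype=float)
    confs = np.asarray(confs, dtype=float)
    best = confs.argmax(axis=0)                          # most confident window per pixel
    chosen = probs[best, :, np.arange(probs.shape[2])]   # (n, M) selected distributions
    return chosen.argmax(axis=1)                         # arg max over labels
```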
Beneficial effects of the present invention: the invention adopts rectangular tracking windows, which maintain a relatively small coverage area while guaranteeing a sufficiently large span. The large span lets a window exploit image correlations at long range and thus handle inter-frame discontinuities; the small coverage area reduces ambiguous features and keeps the feature distribution relatively simple, lowering the classifier's error rate. Arranging tracking windows along different directions handles motion in different directions and to-be-marked regions of different shapes, exploits spatial context more effectively, and further reduces the errors caused by ambiguous features.
Description of the drawings
Fig. 1 is a schematic diagram of video foreground segmentation based on pixel label propagation, where the question mark denotes the segmentation result to be solved;
Fig. 2 is a schematic diagram of a traditional square tracking window;
Fig. 3(a) shows the horizontal directional tracking windows provided by the invention;
Fig. 3(b) shows the 45-degree directional tracking windows provided by the invention;
Fig. 3(c) shows tracking windows of different directions centered on the same pixel;
Fig. 4(a) illustrates how the invention uses long-range image correlation to handle inter-frame discontinuities;
Fig. 4(b) illustrates how direction handles different situations and shows the best tracking window at each position of the target image;
Fig. 5(a) shows the merged results of the horizontal tracking windows;
Fig. 5(b) shows the merged results of the 45° tracking windows;
Fig. 5(c) shows the merged results of the 90° tracking windows;
Fig. 5(d) shows the merged results of the 135° tracking windows;
Fig. 5(e) shows the final output merged from the tracking windows of all directions;
Fig. 5(f) shows the direction selected at each pixel;
Fig. 6(a) is the input frame;
Fig. 6(b) is the target frame;
Fig. 6(c) shows the result of square windows for video matting;
Fig. 6(d) shows the result of directional tracking windows for video matting.
Embodiment
The invention is further described below with reference to the accompanying drawings and an embodiment.
As shown in Fig. 1, the invention is described below taking video segmentation as the example, where the question mark denotes the result to be solved.
For video segmentation based on label propagation, the key problem is to segment the current frame (the target frame) using the segmentation result of the previous frame (the input frame). The segmentation result is represented as a binary image with foreground pixel value 255 and background pixel value 0. The main difficulty is the conflict between foreground-background color similarity and temporal discontinuity in the video. Foreground/background segmentation with directional tracking windows proceeds as follows:
1) Dilate the segmentation result of the input frame by 30-70 pixels (adjustable according to the foreground motion speed); the dilated foreground area is the region to be marked in the target image, and other regions are considered background;
2) Set the tracking window width to W pixels (W is generally 15) and first arrange the horizontal tracking windows: scan top-down for the first row containing the region to be marked and denote it r₀; rows r₀ and r₀+W-1 are the top and bottom of the first window; compute the start and end columns of the to-be-marked pixels in these rows and set them as the window's left and right ends; row r₀+2W/3 starts the second window and row r₀+2(k-1)W/3 starts the k-th window (adjacent windows overlap by W/3); arrange subsequent windows in the same way until all to-be-marked pixels are covered. The resulting horizontal windows are shown in Fig. 3(a);
3) Rotate the target image clockwise by 45 degrees, arrange horizontal windows as in 2), then rotate it counterclockwise by 45 degrees to obtain the windows in the 45-degree direction, as shown in Fig. 3(b);
4) Arrange the 90-degree and 135-degree windows in the manner of 3), as shown in Fig. 3(c);
5) For each tracking window, using the RGB colors of the foreground and background pixels it covers in the input frame as samples, train Gaussian mixture models (GMMs) p(x|F) and p(x|B) of the foreground and background color distributions, where x is the color of a pixel to be marked;
6) For each tracking window, compute for each to-be-marked pixel i it covers the probability densities p(x_i|F) and p(x_i|B) under the foreground and background GMMs, and from them the probability that the pixel belongs to the foreground:
p(x_i) = p(x_i|F) / (p(x_i|F) + p(x_i|B))
7) For each tracking window, compute its confidence in the probability estimated for each to-be-marked pixel i:
c(x_i) = |p(x_i|F) - p(x_i|B)| / (p(x_i|F) + p(x_i|B) + ε)
where ε is a constant, usually 1e-3. The formula expresses that when the probability densities of pixel i's color under both the foreground and background distributions are very large or very small, the pixel receives low confidence: densities that are both large correspond to similar foreground and background colors, while densities that are both small correspond to a temporal discontinuity region whose associated samples cannot be found in the input frame;
8) Process all tracking windows in all directions in turn;
9) Since each pixel may be covered by several tracking windows, each of which outputs a foreground probability p(x_i) and a confidence c(x_i) for it, one output must be selected as final, as shown in Fig. 5. Denote by p′(x_i) the probability of the window with the highest confidence among all windows covering pixel i; if p′(x_i) > 0.5, pixel i is marked foreground, otherwise background.
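The two-label pipeline of steps 6), 7) and 9) above collapses into a short sketch. The dense (n_windows, n) density arrays and the function name are illustrative assumptions; in practice each window only evaluates the pixels it covers.

```python
import numpy as np

def segment_pixels(pF, pB, eps=1e-3):
    """Two-label fusion used in the embodiment (steps 6, 7 and 9):
    per window, foreground probability p = pF/(pF+pB) and confidence
    c = |pF-pB|/(pF+pB+eps); per pixel, keep the most confident window
    and output 255 (foreground) when its p > 0.5, else 0 (background).
    pF, pB: (n_windows, n) densities under the foreground/background GMMs."""
    pF = np.asarray(pF, dtype=float)
    pB = np.asarray(pB, dtype=float)
    prob = pF / (pF + pB)
    conf = np.abs(pF - pB) / (pF + pB + eps)
    best = conf.argmax(axis=0)                    # most confident window per pixel
    p_best = prob[best, np.arange(prob.shape[1])]
    return np.where(p_best > 0.5, 255, 0).astype(np.uint8)
```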
Fig. 4(a) shows how rectangular tracking windows use long-range image correlation to handle inter-frame discontinuities, and Fig. 4(b) shows how direction handles different situations, giving the best tracking window at each position.
Figs. 5(a)-5(d) show the merged results of the tracking windows in each of the four directions; Fig. 5(e) shows the final output merged from all directions; and Fig. 5(f) shows the direction selected at each pixel.
Figs. 6(a)-6(d) compare rectangular and square windows for video matting: the rectangular windows correctly identify the newly appearing region between the two legs, whereas the square windows mistakenly identify that region as foreground.
Fig. 2 shows a traditional square tracking window and the problem it has when handling inter-frame discontinuities.
Although the specific embodiments of the invention have been described above with reference to the accompanying drawings, they do not limit the scope of the invention; those of ordinary skill in the art should understand that the various modifications or variations they can make without creative work on the basis of the technical scheme of the invention remain within its protection scope.

Claims (8)

1. A pixel label propagation method based on directional tracking windows, characterized in that its concrete steps are:
Step 1: dilate the region to be propagated in the input image by 30-70 pixels; the result serves as the region to be marked in the target image;
Step 2: for each specified direction, arrange tracking windows along that direction in the target image; the windows are rectangular, with no gaps and a certain overlap between adjacent windows in the same direction, so that the windows in each direction completely cover the region to be marked;
Step 3: for each tracking window, using the pixels the window covers in the input frame as samples and pixel color as the feature, build for each label a Gaussian mixture model p(x|L) representing its color distribution;
Step 4: for each tracking window, compute the probability that each to-be-marked pixel it covers belongs to each label;
Step 5: for each tracking window, compute its confidence in the probability estimated for each to-be-marked pixel;
Step 6: process all tracking windows in all directions in turn;
Step 7: for each to-be-marked pixel, record the probability and confidence computed by every tracking window covering it, and determine its label from the probability output by the window with the highest confidence.
2. The pixel label propagation method based on directional tracking windows of claim 1, characterized in that the tracking window is a directional window of fixed width and adjustable length, the width being W pixels.
3. The pixel label propagation method based on directional tracking windows of claim 1, characterized in that the concrete steps of Step 2 are:
(2-1) first arrange the horizontal tracking windows: scan top-down for the first row containing the region to be marked and denote it r₀; rows r₀ and r₀+W-1 are the top and bottom of the first tracking window; compute the start and end columns of the to-be-marked pixels within these rows, i.e. scanning left to right, the column containing the first to-be-marked pixel is the start column and the column containing the last is the end column, and set them as the left and right ends of the first window; row r₀+2W/3 is the starting row of the second tracking window, and row r₀+2(k-1)W/3 is the starting row of the k-th window, adjacent windows overlapping by W/3; arrange subsequent windows in the same way until all to-be-marked pixels are covered, k being a natural number;
(2-2) for any other direction θ, first rotate the target image clockwise by θ degrees, arrange horizontal tracking windows as in step (2-1), then rotate the target image counterclockwise by θ degrees to obtain the tracking windows in direction θ.
4. The pixel label propagation method based on directional tracking windows of claim 1, characterized in that the concrete form of the Gaussian mixture model p(x|L) of Step 3 is p(x|L) = Σ_{k=1}^{K} ω_k N(x; π_k, σ_k), where N is the normal distribution, π_k and σ_k are the mean and variance of the k-th component, ω_k is the weight of the k-th component, and K is the number of Gaussian components; the parameters π_k, σ_k, ω_k are all obtained with the Expectation-Maximization algorithm.
5. The pixel label propagation method based on directional tracking windows of claim 1, characterized in that the concrete steps of Step 4 are:
(4-1) denote by p(x|L=l) the Gaussian mixture model of the color distribution of label l inside the tracking window;
(4-2) with M labels, the probability that to-be-marked pixel i belongs to label l is p_l(x_i) = p(x_i|L=l) / Σ_{j=1}^{M} p(x_i|L=j).
6. The pixel label propagation method based on directional tracking windows of claim 1, characterized in that the concrete method of Step 5 is: the confidence of the probability estimated for each to-be-marked pixel i covered by the tracking window is c(x_i) = (p_max(x_i) - p_min(x_i)) / (p_max(x_i) + p_min(x_i) + ε),
where p_max(x_i) and p_min(x_i) are the maximum and minimum of p(x_i|L=j), j = 1, …, M; ε is a constant, typically 1e-3. When the maximum and minimum probability densities of pixel i are both very large or both very small, the pixel receives low confidence: densities that are all large correspond to labels with similar colors inside the window, while densities that are all small correspond to a temporal discontinuity region whose associated samples cannot be found in the input frame.
7. The pixel label propagation method based on directional tracking windows of claim 1, characterized in that the concrete method of Step 7 is: each tracking window outputs, for every pixel it covers, the probability p(x_i|L=l) of belonging to each label l and the confidence c(x_i); denoting by p′(x_i|L=l) the probabilities of the window with the highest confidence among all windows covering pixel i, the label of pixel i is arg max_l p′(x_i|L=l), i.e. pixel i is marked with the label of highest probability.
8. The pixel label propagation method based on directional tracking windows of claim 1, characterized in that tracking windows in the same direction are parallel to one another.
CN201210452433.1A 2012-11-09 2012-11-09 Pixel label propagation method based on directional tracing windows Expired - Fee Related CN102999921B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210452433.1A CN102999921B (en) 2012-11-09 2012-11-09 Pixel label propagation method based on directional tracing windows


Publications (2)

Publication Number Publication Date
CN102999921A CN102999921A (en) 2013-03-27
CN102999921B true CN102999921B (en) 2015-01-21

Family

ID=47928454

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210452433.1A Expired - Fee Related CN102999921B (en) 2012-11-09 2012-11-09 Pixel label propagation method based on directional tracing windows

Country Status (1)

Country Link
CN (1) CN102999921B (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN106874845B (en) * 2016-12-30 2021-03-26 东软集团股份有限公司 Image recognition method and device
CN111833398B (en) * 2019-04-16 2023-09-08 杭州海康威视数字技术股份有限公司 Pixel point marking method and device in image

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101142593A (en) * 2005-03-17 2008-03-12 英国电讯有限公司 Method of tracking objects in a video sequence
CN101216943A (en) * 2008-01-16 2008-07-09 湖北莲花山计算机视觉和信息科学研究院 A method for video moving object subdivision
CN101676953A (en) * 2008-08-22 2010-03-24 奥多比公司 Automatic video image segmentation


Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Dynamic Color Flow: A Motion-adaptive Color Model for Object Segmentation in Video; Xue Bai et al.; Proceedings of the 11th European Conference on Computer Vision (ECCV 2010); 2010-09-11; vol. 5; full text *
Real-time Post-processing for Online Video Segmentation; Zhong Fan et al.; Chinese Journal of Computers; 2009-02-15; vol. 32, no. 2; full text *

Also Published As

Publication number Publication date
CN102999921A (en) 2013-03-27


Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20150121

Termination date: 20151109

EXPY Termination of patent right or utility model