CN102509308A - Motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection - Google Patents

Motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection

Info

Publication number
CN102509308A
Authority
CN
China
Prior art keywords
saliency
dynamic texture
dynamic
motion segmentation
probability
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN201110344804XA
Other languages
Chinese (zh)
Inventor
周文明 (Zhou Wenming)
姚莉秀 (Yao Lixiu)
杨杰 (Yang Jie)
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shanghai Jiaotong University
Original Assignee
Shanghai Jiaotong University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shanghai Jiaotong University filed Critical Shanghai Jiaotong University
Priority to CN201110344804XA
Publication of CN102509308A
Legal status: Pending

Landscapes

  • Image Analysis (AREA)

Abstract

The invention discloses a motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection. The method comprises the following steps: 1, modeling the background with a mixture of dynamic textures; 2, defining a spatiotemporal saliency discriminant function using the Kullback-Leibler (KL) divergence and computing a saliency map; and 3, thresholding the saliency map to obtain the motion segmentation result. With this method, a moving object can be accurately segmented in complex environments with a highly dynamic background and a moving camera. Compared with conventional methods, the invention greatly improves complex-scene handling and noise suppression, has stronger robustness, and can be applied to various complex motion scenes.

Description

Motion segmentation method based on mixture-of-dynamic-textures spatiotemporal saliency detection
Technical field
The present invention relates to a moving-object segmentation method in the technical field of information processing, specifically a motion segmentation method based on spatiotemporal saliency detection with mixtures of dynamic textures (MDT).
Background art
The task of motion segmentation is to extract the moving targets from a video sequence as completely as possible. This task has two difficulties: (1) highly dynamic backgrounds: motion segmentation often encounters situations where the background itself is moving, such as swaying branches, rain and snow, pedestrians, and water waves; (2) camera motion: in many practical applications the camera is not fixed. Traditional methods usually simplify the problem with the following assumptions: (1) the camera is stationary; or (2) the camera motion parameters can be obtained or computed; or (3) the background satisfies a given model, such as the mixture-of-Gaussians background model. These assumptions reduce the complexity of the problem, but in real scenes they are not necessarily satisfied. When traditional methods are applied to dynamic backgrounds, the segmentation results are very noisy and heavily disturbed.
Saliency detection brings new ideas for solving these difficult problems. The task of saliency detection is to indicate which regions of a scene more easily become the focus of human visual attention. Unlike motion segmentation, saliency detection does not distinguish foreground from background and does not give a precise result; instead it produces a saliency map similar to a gray-scale image. Traditional saliency detection can roughly be divided into three classes. The first class, the most popular, treats the saliency problem as the detection of certain perceptual attributes; its drawback is poor generalization, since a detector for a specific attribute performs very differently in different scenes. The second class defines saliency as the complexity of the image. Its great advantage is flexibility: any low-level attribute can be chosen to compute the complexity. However, defining saliency as image complexity has not been verified biologically, so in many cases the detection results differ considerably from human judgments. The last class is based on models of biological vision; a common problem of this class is the lack of a criterion for evaluating the detection results, which makes improvement inconvenient.
To address the shortcomings of traditional saliency detection, some new methods have been proposed. D. Gao et al. proposed discriminant saliency; this method builds a discriminant function and introduces the classification problem into the visual saliency field, obtaining good results for saliency detection in complex environments, but how to define a good discriminant function is the difficulty of this method [D. Gao and N. Vasconcelos. Discriminant saliency for visual recognition from cluttered scenes. In Proc. NIPS, pages 481-488, 2004.]. To exploit the information between frames when processing video sequences, spatiotemporal saliency detection has been proposed. V. Mahadevan and Nuno Vasconcelos proposed a spatiotemporal saliency detection method; this method first models the background with a dynamic texture, and then computes saliency using mutual information under a center-surround framework [V. Mahadevan and N. Vasconcelos. Spatiotemporal Saliency in Dynamic Scenes. IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 32, no. 1, pp. 171-177, 2009.]. Experimental results show that the method of Mahadevan et al. greatly improves on traditional methods. However, this method is based on a single-mode dynamic texture model, while in real scenes a dynamic background is often not single-mode but multi-modal, so a single dynamic texture cannot accurately describe the real scene.
Summary of the invention
In view of the above shortcomings of the prior art, the present invention proposes a motion segmentation method based on mixture-of-dynamic-textures spatiotemporal saliency detection. The method can still segment moving targets fairly accurately in complex environments with highly dynamic backgrounds and camera motion; compared with traditional methods it is greatly improved in complex-scene handling and noise suppression, has stronger robustness, and can be adapted to various complex motion scenes.
The present invention is realized through the following technical scheme, which comprises three steps:
Step 1: model the background with a mixture of dynamic textures;
Step 2: define a spatiotemporal saliency discriminant function using the KL divergence and compute the saliency map;
Step 3: threshold the saliency map to obtain the motion segmentation result.
Each part of the method of the invention is explained in detail below:
1. Modeling the background with a mixture of dynamic textures
To handle the two difficulties of highly dynamic backgrounds and camera motion, the present invention models the background with a mixture-of-dynamic-textures model. A dynamic texture refers to certain stationarity properties that each frame of a video sequence exhibits in time and space. Common dynamic texture scenes include water waves, smoke, branches, beaches, crowds, and rain and snow. The dynamic texture model can model these complex scenes.
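For reference, each dynamic texture component Θ_i = {A, Q, C, R, μ, S} (the parameter set named in the embodiment below) is conventionally the linear dynamical system

$$x_{t+1} = A x_t + v_t, \qquad v_t \sim \mathcal{N}(0, Q)$$

$$y_t = C x_t + w_t, \qquad w_t \sim \mathcal{N}(0, R), \qquad x_1 \sim \mathcal{N}(\mu, S)$$

where x_t is the hidden state and y_t the observed frame at time t.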
Given a video sequence Y representing the observed frames (y_1, ..., y_τ), and K dynamic textures {Θ_1, ..., Θ_K} whose probabilities of generating Y are {α_1, ..., α_K} respectively, with

$$\sum_{j=1}^{K} \alpha_j = 1 \qquad (1)$$

the generative process of Y can be modeled as:

1) first, a dynamic texture component Θ_j is sampled from the distribution {α_1, ..., α_K}, with probability α_j;

2) the video sequence Y is then sampled from the dynamic texture component Θ_j, with probability expressed as the conditional probability p(Y | Θ_j). By the law of total probability,

$$p(Y) = \sum_{j=1}^{K} \alpha_j \, p(Y \mid \Theta_j) \qquad (2)$$

Formula (2) is the mixture-of-dynamic-textures model. In this model, the parameters to be determined are Θ = {Θ_1, ..., Θ_K, α_1, ..., α_K}.
To obtain the mixture-of-dynamic-textures parameters Θ, the maximum-likelihood parameter estimation method is adopted, and the parameters are learned from the video sequence Y. In the concrete implementation the expectation-maximization (EM) algorithm is chosen, which obtains the optimal parameter values iteratively.
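As an illustrative, non-limiting sketch (not part of the claimed method), the generative process of formula (2) can be written in Python as follows; the dictionary keys for the component parameters are an assumed convention:

```python
import numpy as np

def sample_dynamic_texture(theta, tau, rng):
    """Sample tau frames from one dynamic texture component, using the
    linear-dynamical-system parameters Theta = {A, Q, C, R, mu, S}
    named in the embodiment."""
    A, Q, C, R, mu, S = (theta[k] for k in ("A", "Q", "C", "R", "mu", "S"))
    x = rng.multivariate_normal(mu, S)          # initial state x_1 ~ N(mu, S)
    frames = []
    for _ in range(tau):
        # observation y_t = C x_t + w_t, w_t ~ N(0, R)
        y = C @ x + rng.multivariate_normal(np.zeros(R.shape[0]), R)
        frames.append(y)
        # state transition x_{t+1} = A x_t + v_t, v_t ~ N(0, Q)
        x = A @ x + rng.multivariate_normal(np.zeros(Q.shape[0]), Q)
    return np.stack(frames)

def sample_mdt(components, alphas, tau, rng=np.random.default_rng(0)):
    """Generative process of formula (2): pick component j with probability
    alpha_j, then sample the sequence from that component."""
    j = rng.choice(len(components), p=alphas)
    return sample_dynamic_texture(components[j], tau, rng)
```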
2. Defining the spatiotemporal saliency discriminant function using the KL divergence and computing the saliency map
To exploit temporal information, the video sequence V is treated as a three-dimensional matrix whose elements are indexed by l = (m, n, t) ∈ L = R^3, where (m, n) is the coordinate position within a video frame and t is the time coordinate in units of frames. Let x be the feature vector extracted at each l ∈ L (for example pixel values, Gabor-filter orientation features, or color features), and let the class label at l be C(l) ∈ {0, 1}, where 1 indicates a moving target at this location and 0 indicates background. A center-surround framework is adopted: two windows are set around the pixel to represent the center and the surround. Let W_l^1 denote the window of the center region, W_l^0 the window of the surround region, and define the total window as W_l = W_l^1 ∪ W_l^0. The features of the center region W_l^1 are described by the conditional probability p_{Y|C(l)}(y|1), and the features of the surround W_l^0 by p_{Y|C(l)}(y|0). The saliency S(l) is defined as

$$S(l) = \sum_{c=0}^{1} p_{C(l)}(c)\, \mathrm{KL}\!\left(p_{Y|C(l)}(y \mid c) \,\middle\|\, p_Y(y)\right) \qquad (3)$$

where KL(·‖·) denotes the KL divergence, also called the relative entropy, generally used to measure the difference between two probability distributions, defined as

$$\mathrm{KL}(p \,\|\, q) = \int_X p_X(x) \log \frac{p_X(x)}{q_X(x)}\, dx \qquad (4)$$
The definition in formula (3) shows that a larger saliency value indicates that the center and surround features at this location differ greatly, so that target and background can be distinguished with a small discrimination error; the class C(l) at the center is then likely to be a moving target rather than dynamic-texture background. In this way, the magnitude of a pixel's saliency value reflects the probability that the location belongs to a moving target.
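As an illustrative, non-limiting sketch: the exact divergences between the learned window models are computed following Mahadevan and Vasconcelos (see the embodiment below); the Python code below is a stand-in that approximates each window's feature distribution by a single multivariate Gaussian, for which formula (4) has a closed form, and then combines the prior-weighted divergences as in formula (3). The function names and the Gaussian approximation are assumptions, not part of the claimed method.

```python
import numpy as np

def kl_gaussian(mu_p, cov_p, mu_q, cov_q):
    """Closed form of formula (4) for two multivariate Gaussians p and q."""
    d = mu_p.shape[0]
    cov_q_inv = np.linalg.inv(cov_q)
    diff = mu_q - mu_p
    _, logdet_p = np.linalg.slogdet(cov_p)
    _, logdet_q = np.linalg.slogdet(cov_q)
    return 0.5 * (np.trace(cov_q_inv @ cov_p) + diff @ cov_q_inv @ diff
                  - d + logdet_q - logdet_p)

def saliency(center, surround, total, prior_center=0.5):
    """Formula (3): prior-weighted sum of the KL divergences of the
    class-conditional distributions (center c=1, surround c=0) from the
    total-window distribution. Each argument is a (mu, cov) pair fitted
    to the features of the corresponding window."""
    return (prior_center * kl_gaussian(*center, *total)
            + (1.0 - prior_center) * kl_gaussian(*surround, *total))
```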
3. Thresholding the saliency map to obtain the motion segmentation result
The saliency map S is a gray-level image. To obtain the motion segmentation result, it is thresholded: pixels below the threshold are taken as background, while pixels above the threshold retain their values, whose magnitudes reflect the probability of belonging to the moving target.
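A minimal sketch of this thresholding step; the default threshold of 40 is only an assumed value within the 30 to 60 range suggested in the embodiment:

```python
import numpy as np

def segment(saliency_map, thresh=40):
    """Threshold the saliency map S: pixels below thresh become background
    (0); pixels above keep their saliency value, which reflects the
    probability of belonging to the moving target."""
    return np.where(saliency_map >= thresh, saliency_map, 0)
```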
Compared with the prior art, the remarkable effect of the present invention is: a spatiotemporal saliency detection method based on the dynamic texture model is proposed and its saliency result is used for motion segmentation, realizing the segmentation of moving targets in complex environments with highly dynamic backgrounds and camera motion. In comparison with conventional motion segmentation methods, the proposed method introduces visual saliency into segmentation. Because visual saliency is based on principles of the biological visual system, whose key characteristic is the center-surround framework, the saliency map is obtained by computing local feature differences and is therefore insensitive to dynamic backgrounds and camera motion. Moreover, this saliency method combines two newer approaches, spatiotemporal saliency and discriminant saliency, which improve the precision of saliency detection and clearly suppress interference. The method of the invention basically realizes the task of segmenting moving targets in complex environments with dynamic backgrounds and camera motion.
Brief description of the drawings
Fig. 1 shows the Surfers test sequence in the embodiment of the invention;
Fig. 2 shows the Surf test sequence in the embodiment of the invention;
Fig. 3 shows the Cyclist test sequence in the embodiment of the invention.
Embodiment
An embodiment of the present invention is elaborated below. The present embodiment is implemented on the premise of the technical scheme of the present invention and gives a detailed operating process.
The present invention provides a motion segmentation method based on MDT spatiotemporal saliency detection. The flow of the method is:
Given a video sequence V whose elements are indexed by l = (m, n, t) ∈ L = R^3, the dimension of the state space x is n, the size of the center window is n_c pixels, the texture modeling window used to model the dynamic texture has size n_p, and the time window is τ frames;
For each pixel of each frame, perform the following operations:
Take the center window W_l^1 of size n_c × n_c × τ and the surround window W_l^0 of size 4n_c × 4n_c × τ;

Slide the texture modeling window of size n_p × n_p × τ over the center window W_l^1 to obtain texture samples of the center region, then estimate its texture parameters with the EM algorithm. Perform the same operations on the surround window W_l^0 and the total window w_l, learning the parameters of a separate mixture of dynamic textures for each;
After the mixture-of-dynamic-textures parameters are obtained, compute the conditional probability density functions and the KL divergences according to formula (3) to obtain the saliency S(l) at this location;
Threshold the saliency map: pixels below the threshold are taken as background, and pixels above the threshold keep their values as foreground.
For a better understanding of the technical scheme of the present invention, the method is further described below with reference to the embodiment.
1. Modeling the background with a mixture of dynamic textures
(1) Establishing the parameter model
Given a video sequence V containing a dynamic background and/or camera motion, assign each pixel location l a 3-dimensional coordinate l = (m, n, t) ∈ L = R^3, where (m, n) is the coordinate position of the pixel within the frame and t is the index of its frame. A frame y_t of the sequence is regarded as the observation variable, and the features extracted from this frame give its corresponding state variable x_t; the feature dimension n is generally about 10. Choose 2 to 4 dynamic texture components to build the mixture-of-dynamic-textures model; the parameters to be estimated are then the parameters Θ_i = {A, Q, C, R, μ, S} of each component and its corresponding occurrence probability α_i.
(2) Taking training samples
1) Selecting the windows
In this step 3 windows are chosen: the center window, the surround window, and the total window. The parameters of a mixture-of-dynamic-textures model are estimated for each window separately, i.e. each window is described by a different mixture of dynamic textures.
To compute the saliency value at a given pixel l, take the time window τ = 11, i.e. the frame containing the pixel plus 5 frames before and after it, 11 frames in total, forms the training time window. The size of the center window is n_c pixels, i.e. a window of size n_c × n_c centered at l is chosen in this frame and extended to the other frames of the time window, giving the center window W_l^1 of size n_c × n_c × τ. Considering the computational complexity, n_c is generally taken as 10 to 20; in this example n_c = 16, so the center window is 16 × 16 × 11.

The total window w_l is chosen in the same way, with size 64 × 64 × 11.

The difference between the total window w_l and the center window W_l^1 is the surround window.
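A minimal Python sketch of this window selection; clipping at the volume borders is an assumption, since the patent does not specify the border handling:

```python
import numpy as np

def extract_windows(V, m, n, t, nc=16, tau=11):
    """Cut the center (nc x nc x tau) and total (4nc x 4nc x tau) windows
    around pixel (m, n) of frame t from the video volume V of shape
    (H, W, T). The surround window is the total window minus the center."""
    h, w, T = V.shape
    half_t = tau // 2
    t0, t1 = max(0, t - half_t), min(T, t + half_t + 1)

    def spatial_slice(size):
        half = size // 2
        r0, r1 = max(0, m - half), min(h, m + half)
        c0, c1 = max(0, n - half), min(w, n + half)
        return V[r0:r1, c0:c1, t0:t1]

    center = spatial_slice(nc)       # e.g. 16 x 16 x 11
    total = spatial_slice(4 * nc)    # e.g. 64 x 64 x 11
    return center, total
```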
2) Choosing the samples
Select the texture modeling window, of size 8 × 8 × 11, and slide it over the center window W_l^1; each position gives one sample of the center-region texture. In this way 81 samples are obtained in total, and these samples are used to train the mixture-of-dynamic-textures parameters of the center window.

The samples corresponding to the surround window and the total window are selected in the same manner.
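A minimal Python sketch of this sampling step; on the 16 × 16 × 11 center window the slide yields 9 × 9 = 81 samples, matching the embodiment:

```python
def texture_samples(window, patch=8):
    """Slide an 8 x 8 x tau modeling window over a spatiotemporal window
    (a numpy array); every spatial position yields one training sample."""
    rows, cols, _ = window.shape
    return [window[i:i + patch, j:j + patch, :]
            for i in range(rows - patch + 1)
            for j in range(cols - patch + 1)]
```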
(3) Estimating the parameters
Adopt the maximum-likelihood parameter estimation method and build the likelihood function from the samples. The iterative EM algorithm is used to progressively obtain the optimal parameter values.
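As an illustrative, non-limiting sketch: a full EM implementation for mixtures of dynamic textures requires Kalman smoothing in the M-step to re-estimate each Θ_j; the minimal Python sketch below shows only the iterative EM update of the mixture weights α_j of formula (2), assuming the per-sample, per-component log-likelihoods log p(Y_i | Θ_j) have already been computed (e.g. by Kalman filtering each component):

```python
import numpy as np

def em_mixture_weights(loglik, n_iter=50):
    """Minimal EM sketch for the mixture weights alpha_j of formula (2).
    loglik has shape (N, K): log p(Y_i | Theta_j) for N samples, K components.
    Re-estimating the component parameters Theta_j themselves is omitted."""
    N, K = loglik.shape
    alpha = np.full(K, 1.0 / K)
    for _ in range(n_iter):
        # E-step: posterior responsibility of component j for sample i
        logw = loglik + np.log(alpha)
        logw -= logw.max(axis=1, keepdims=True)   # numerical stability
        resp = np.exp(logw)
        resp /= resp.sum(axis=1, keepdims=True)
        # M-step: mixture weights are the mean responsibilities
        alpha = resp.mean(axis=0)
    return alpha
```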
2. Defining the spatiotemporal saliency discriminant function using the KL divergence and computing the saliency map
After the mixture-of-dynamic-textures parameters of the 3 windows have been obtained separately, compute the KL divergence between the center window and the total window, and the KL divergence between the surround window and the total window; the KL divergences are computed following the method of V. Mahadevan and N. Vasconcelos in "Spatiotemporal Saliency in Dynamic Scenes", and finally the saliency map is obtained by combining them according to formula (3).
3. Thresholding the saliency map to obtain the motion segmentation result
Pixels below the threshold are taken as background, and pixels above the threshold keep their values as foreground. Because this method clearly suppresses interference, a relatively low threshold can be chosen; a threshold generally between 30 and 60 already yields a good segmentation result after simple thresholding.
Figs. 1, 2 and 3 show the test results on 3 different sequences respectively, contrasting the motion segmentation of the traditional method with that of the method of the invention. From the comparison it can be seen that the traditional classical methods cannot describe well the complex situations combining highly dynamic backgrounds and camera motion; their segmentation results are poor, and some cannot give a meaningful segmentation result at all. The method of the present invention, by choosing the MDT model as the background model and combining spatiotemporal saliency with discriminant saliency, can more strongly suppress interference from complex environments and achieves good performance in all the test scenes.

Claims (6)

1. A motion segmentation method based on mixture-of-dynamic-textures spatiotemporal saliency detection, characterized by comprising three steps:
Step 1: model the background with a mixture of dynamic textures;
Step 2: define a spatiotemporal saliency discriminant function using the KL divergence and compute the saliency map;
Step 3: threshold the saliency map to obtain the motion segmentation result;
wherein said modeling of the background with a mixture of dynamic textures is specifically:
given a video sequence Y representing the observed frames (y_1, ..., y_τ), and K dynamic textures {Θ_1, ..., Θ_K} whose probabilities of generating Y are {α_1, ..., α_K} respectively, with

$$\sum_{j=1}^{K} \alpha_j = 1 \qquad (1)$$

the generative process of Y is modeled as:
1) first, a dynamic texture component Θ_j is sampled from the distribution {α_1, ..., α_K}, with probability α_j;
2) the video sequence Y is then sampled from the dynamic texture component Θ_j, with probability expressed as the conditional probability p(Y | Θ_j); by the law of total probability,

$$p(Y) = \sum_{j=1}^{K} \alpha_j \, p(Y \mid \Theta_j) \qquad (2)$$

Formula (2) is the mixture-of-dynamic-textures model; in this model, the parameters to be determined are Θ = {Θ_1, ..., Θ_K, α_1, ..., α_K}.
2. The motion segmentation method based on mixture-of-dynamic-textures spatiotemporal saliency detection according to claim 1, characterized in that for said mixture-of-dynamic-textures parameters Θ, the maximum-likelihood parameter estimation method is adopted and the parameters are learned from the video sequence Y; in the concrete implementation the expectation-maximization algorithm is chosen, which obtains the optimal parameter values iteratively.
3. The motion segmentation method based on mixture-of-dynamic-textures spatiotemporal saliency detection according to claim 1, characterized in that said dynamic texture refers to certain stationarity properties exhibited by each frame of a video sequence in time and space; dynamic texture scenes include water waves, smoke, branches, beaches, crowds, and rain and snow, and the dynamic texture model can model these complex scenes.
4. The motion segmentation method based on mixture-of-dynamic-textures spatiotemporal saliency detection according to claim 1, characterized in that said defining of the spatiotemporal saliency discriminant function using the KL divergence and computing of the saliency map is specifically:
to exploit temporal information, the video sequence V is treated as a three-dimensional matrix whose elements are indexed by l = (m, n, t) ∈ L = R^3, where (m, n) is the coordinate position within a video frame and t is the time coordinate in units of frames; let x be the feature vector extracted at each l ∈ L, and let the class label at l be C(l) ∈ {0, 1}, where 1 indicates a moving target at this location and 0 indicates background; a center-surround framework is adopted, two windows being set around the pixel to represent the center and the surround: W_l^1 denotes the window of the center region, W_l^0 the window of the surround region, and the total window is defined as W_l = W_l^1 ∪ W_l^0; the features of the center region W_l^1 are described by the conditional probability p_{Y|C(l)}(y|1), and the features of the surround W_l^0 by p_{Y|C(l)}(y|0); the saliency S(l) is defined as

$$S(l) = \sum_{c=0}^{1} p_{C(l)}(c)\, \mathrm{KL}\!\left(p_{Y|C(l)}(y \mid c) \,\middle\|\, p_Y(y)\right) \qquad (3)$$

where KL(·‖·) denotes the KL divergence, used to measure the difference between two probability distributions, defined as

$$\mathrm{KL}(p \,\|\, q) = \int_X p_X(x) \log \frac{p_X(x)}{q_X(x)}\, dx \qquad (4)$$

the definition of formula (3) shows that a larger saliency value indicates that the center and surround features at this location differ greatly, so that target and background can be distinguished with a small discrimination error; the class C(l) at the center is then likely to be a moving target rather than dynamic-texture background, and the magnitude of a pixel's saliency value thus reflects the probability that the location belongs to a moving target.
5. The motion segmentation method based on mixture-of-dynamic-textures spatiotemporal saliency detection according to claim 1, characterized in that said thresholding of the saliency map to obtain the motion segmentation result is specifically: the saliency map S is a gray-level image; to obtain the motion segmentation result it is thresholded, that is, pixels below the threshold are taken as background, while pixels above the threshold retain their values, whose magnitudes reflect the probability of belonging to the moving target.
6. The motion segmentation method based on mixture-of-dynamic-textures spatiotemporal saliency detection according to claim 1 or 5, characterized in that said threshold is chosen between 30 and 60.
CN201110344804XA 2011-08-18 2011-11-04 Motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection Pending CN102509308A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201110344804XA CN102509308A (en) 2011-08-18 2011-11-04 Motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
CN201110237140 2011-08-18
CN201110237140.7 2011-08-18
CN201110344804XA CN102509308A (en) 2011-08-18 2011-11-04 Motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection

Publications (1)

Publication Number Publication Date
CN102509308A true CN102509308A (en) 2012-06-20

Family

ID=46221386

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201110344804XA Pending CN102509308A (en) 2011-08-18 2011-11-04 Motion segmentation method based on mixtures-of-dynamic-textures-based spatiotemporal saliency detection

Country Status (1)

Country Link
CN (1) CN102509308A (en)



Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101246547A (en) * 2008-03-03 2008-08-20 北京航空航天大学 Method for detecting moving objects in video according to scene variation characteristic
CN101996410A (en) * 2010-12-07 2011-03-30 北京交通大学 Method and system of detecting moving object under dynamic background

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
VIJAY MAHADEVAN ET AL.: "Spatiotemporal Saliency in Dynamic Scenes", IEEE Transactions on Pattern Analysis and Machine Intelligence *
周文明等 (Zhou Wenming et al.): "MDT-based spatiotemporal saliency detection and its application in motion segmentation" (基于MDT的空时显著性检测及其在运动分割中的应用), Microcomputer Applications (微型电脑应用) *

Cited By (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN110909825A (en) * 2012-10-11 2020-03-24 开文公司 Detecting objects in visual data using a probabilistic model
CN110909825B (en) * 2012-10-11 2024-05-28 开文公司 Detecting objects in visual data using probabilistic models
CN102915451A (en) * 2012-10-18 2013-02-06 上海交通大学 Dynamic texture identification method based on chaos invariant
CN103150374A (en) * 2013-03-11 2013-06-12 中国科学院信息工程研究所 Method and system for identifying abnormal microblog users
CN103390279A (en) * 2013-07-25 2013-11-13 中国科学院自动化研究所 Target prospect collaborative segmentation method combining significant detection and discriminant study
CN105488812A (en) * 2015-11-24 2016-04-13 江南大学 Motion-feature-fused space-time significance detection method
WO2017101626A1 (en) * 2015-12-15 2017-06-22 努比亚技术有限公司 Method and apparatus for implementing image processing


Legal Events

Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
WD01 Invention patent application deemed withdrawn after publication (application publication date: 20120620)