CN103065312A - Foreground extraction method in gesture tracking process - Google Patents

Foreground extraction method in gesture tracking process

Info

Publication number
CN103065312A
Authority
CN
China
Prior art keywords
pixel
palm
foreground picture
current frame
foreground
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN2012105751987A
Other languages
Chinese (zh)
Other versions
CN103065312B (en)
Inventor
邹雪梅
许志谦
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan Hongwei Technology Co Ltd
Original Assignee
Sichuan Hongwei Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan Hongwei Technology Co Ltd filed Critical Sichuan Hongwei Technology Co Ltd
Priority to CN201210575198.7A priority Critical patent/CN103065312B/en
Publication of CN103065312A publication Critical patent/CN103065312A/en
Application granted granted Critical
Publication of CN103065312B publication Critical patent/CN103065312B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Abstract

The invention discloses a foreground extraction method for use in gesture tracking. The method comprises the following steps: building and updating a background; subtracting the background image from the current frame image to obtain a preliminary foreground image; binarizing it; removing non-skin-color pixels from the foreground; removing pixels whose values differ too much from the tracked palm's mean values; and finally removing stray points with an erosion-dilation algorithm. The method makes full use of the characteristics of the tracked object, the palm. While keeping the algorithm's computation and data volume small, it not only extracts the foreground accurately whether the object is static or moving, but also removes the interference that skin-colored objects cause to palm tracking during gesture control, providing a clean and effective input, namely the foreground image, for the subsequent tracking and matching process.

Description

Foreground extraction method in a gesture tracking process
Technical field
The invention belongs to the field of digital image processing and, more specifically, relates to a foreground extraction method used during gesture tracking in hand-based human-computer interaction.
Background technology
Because the information collected by a 2D camera is limited, depth information cannot be recognized while tracking a palm. A face behind the palm, skin-colored furniture, or other people walking by, seen from a planar viewpoint, cannot be accurately separated from the palm, and this easily causes the tracking algorithm to misjudge.
Background modeling methods in common use include the Gaussian model, the Gaussian mixture model, and the codebook model, but in practical engineering they all run into problems such as heavy computation, large data volume, or models that cannot be updated in time. Overly simple modeling schemes, on the other hand, may learn the tracked object into the background, creating new problems such as loss of tracking.
In fact, the ultimate purpose of building a good background image is to detect a clean foreground. The vision field usually involves the two problems of motion detection and motion tracking, and both may use background modeling or foreground extraction. Foreground extraction in motion detection serves to detect objects, whereas foreground extraction in motion tracking serves to remove interference from the limited information a 2D image provides, so that the subsequent tracking can proceed more accurately.
Summary of the invention
The object of the invention is to overcome the deficiencies of the prior art by providing a foreground extraction method for gesture tracking that obtains a relatively clean and accurate foreground image, so that the subsequent tracking module can very easily find the tracking target.
To achieve the above object, the foreground extraction method in the gesture tracking process of the present invention is characterized by comprising the following steps:
(1) Building the background
Detect the palm with a motion detection algorithm, then initialize the background image B from the current frame image A1 in which the palm was detected: wherever A1 contains no palm, copy the pixel value into the corresponding position of the background image B; wherever A1 contains the palm, initialize the corresponding position of B to zero;
(2) Introducing a search region and binarizing the foreground
If the current frame tracks the palm, center the search box on the palm center tracked in the previous frame, and adopt a region 3 palm-widths wide and 2 palm-heights tall as the region within which the palm may move; this region is the search region;
2.1) Obtain a preliminary foreground image C as follows: set the pixels of C outside the search region to C[i]=0 and the pixels of C inside the search region to C[i]=abs(A2[i]-B[i]), where abs() denotes the absolute-value function, A2[i] is the value of the i-th pixel of the tracked current frame image A2, B[i] is the value of the i-th pixel of the background image B, and C[i] is the value of the i-th pixel of the preliminary foreground image C;
2.2) Binarize the preliminary foreground image C once, placing the result in the binarized foreground image D:
Traverse the pixels of C inside the search region; if all three RGB channels exceed the set threshold th1, assign 255, otherwise assign 0, that is: if Cr[i] > th1 and Cg[i] > th1 and Cb[i] > th1, then D[i]=255; otherwise D[i]=0; where Cr[i], Cg[i], and Cb[i] are the R, G, and B channel values of the i-th pixel of C, and D[i] is the value of the i-th pixel of the binarized foreground image D;
If the current frame fails to track the palm, return to step (1);
(3) Target feature update
Record the mean values avgR, avgG, and avgB of the previous frame's tracked object, the palm, on the three RGB channels;
(4) Secondary screening of the foreground image D
After obtaining the binarized foreground image D output by step (2), the foreground points must also be screened from two aspects to see whether they match the tracking characteristics;
Based on the current frame image A2, perform skin-color detection on all valid points in D; remove any point that is a non-skin-color point in A2, that is: if D[i] equals 255 and A2[i] is a non-skin-color point, set D[i] to 0, where D[i] is the i-th pixel of the binarized foreground image D;
Based on the current frame image A2, examine all valid points in D; if, in A2, any channel of a point differs from the previous frame's tracked palm mean by more than the set threshold th2, do not treat the point as foreground, that is: if abs(A2R[i]-avgR) > th2 or abs(A2G[i]-avgG) > th2 or abs(A2B[i]-avgB) > th2, then D[i]=0; where A2R[i], A2G[i], and A2B[i] are the R, G, and B channel values of the i-th pixel of A2;
(5) Stray-point removal
First apply one erosion operation to the screened binarized foreground image D with a 3x3 operator: if a pixel of D is 0, set all 8 of its neighboring pixels to 0; then apply one dilation operation with a 3x3 operator: whenever a pixel of D is 255, set all 8 of its neighboring pixels to 255, restoring the palm's outer contour and yielding a relatively clean and accurate foreground image D;
(6) Background update
Update the background image B according to the current frame image A2: wherever A2 contains no palm, copy the pixel value into the corresponding position of B; wherever A2 contains the palm, keep the corresponding pixel value of B unchanged; the updated background image B is used for extracting the next frame's foreground; return to step (2).
The goal of the invention is achieved as follows:
In the foreground extraction method of the present invention, the background is built and updated to obtain the background image from which the preliminary foreground is extracted; the background image is subtracted from the current frame image to obtain a preliminary foreground image, which is binarized; non-skin-color pixels and pixels differing too much from the tracked palm's mean values are removed from the foreground; finally, an erosion-dilation algorithm removes stray points. The present invention makes full use of the fact that the tracked object is a palm. While keeping the algorithm's computation and data volume small, it not only extracts the foreground accurately whether the object is static or moving, but also removes the interference that skin-colored objects cause to palm tracking during gesture control, providing a clean and effective input, namely the foreground image, for the subsequent tracking and matching process.
Description of drawings
Fig. 1 is a flowchart of one embodiment of the foreground extraction method in the gesture tracking process of the present invention;
Fig. 2 is an example of the background image obtained by initialization;
Fig. 3 is the background image of Fig. 2 after updating during normal tracking;
Fig. 4 is an example of the binarized foreground image;
Fig. 5 is the foreground image of Fig. 4 after secondary screening and stray-point removal.
Embodiment
Specific embodiments of the present invention are described below in conjunction with the accompanying drawings so that those skilled in the art may better understand the present invention. It should be particularly noted that, in the following description, detailed descriptions of known functions and designs are omitted where they might dilute the main content of the invention.
Fig. 1 is the foreground extracting method one embodiment process flow diagram in the gesture tracing process of the present invention.
In the present embodiment, as shown in Fig. 1, the foreground extraction method in the gesture tracking process of the present invention comprises the following steps: building the background, foreground binarization, target feature update, secondary screening of the foreground image D, stray-point removal, and background update. Each step is detailed below in conjunction with the embodiment.
1. Building the background
For tracking to be effective, we naturally want every object except the palm to be treated as background.
After the motion detection algorithm detects the palm, it passes the palm's position and pixel-value information to the motion tracking module, which amounts to giving the subsequent tracking process a starting point.
At this point, we initialize the background image B from the current frame image A1 in which the palm was detected: wherever A1 contains no palm, copy the pixel value into the corresponding position of B; wherever A1 contains the palm, initialize the corresponding position of B to zero. The resulting background image B is shown in Fig. 2. Note that all background analysis in this embodiment is based on the RGB color space.
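A minimal sketch of this initialization in Python with NumPy (the function name and the `palm_box` bounding box passed in from the assumed motion detector are hypothetical; the patent specifies only the copy-and-zero rule):

```python
import numpy as np

def init_background(frame_a1: np.ndarray, palm_box: tuple) -> np.ndarray:
    """Initialize background B from frame A1: copy non-palm pixels, zero the palm area.

    frame_a1: H x W x 3 RGB frame in which the palm was detected.
    palm_box: (x, y, w, h) bounding box of the detected palm (assumed input).
    """
    background = frame_a1.copy()        # places without the palm: copy pixel values
    x, y, w, h = palm_box
    background[y:y + h, x:x + w] = 0    # places with the palm: initialize to zero
    return background
```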
2, introduce the region of search prospect is carried out binaryzation
Most cameras run at 25 frames per second or more, and a person operating by hand is usually 1.5 to 3 meters from the camera. Experiments show that whether the palm moves slowly or rapidly, the distance the hand moves between two adjacent captured frames is never excessive. In the present invention, the search box is centered on the palm center tracked in the previous frame, and a region 3 palm-widths wide and 2 palm-heights tall is adopted as the region within which the palm may move; the region is continuously updated as the palm moves. Experiments show that this region is sufficient for arbitrary palm motion during operation.
Restricting attention to this region not only excludes interference from parts of the image far from the palm, but also reduces computation and data volume, improving the algorithm's performance.
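A sketch of the search-region computation under these proportions (all names hypothetical; `center` is the palm center tracked in the previous frame, `palm_w` and `palm_h` the tracked palm's width and height):

```python
def search_region(center, palm_w, palm_h, img_w, img_h):
    """Box 3 palm-widths wide and 2 palm-heights tall, centered on the tracked
    palm center and clipped to the image bounds."""
    cx, cy = center
    w, h = 3 * palm_w, 2 * palm_h
    x0, y0 = max(0, cx - w // 2), max(0, cy - h // 2)
    x1, y1 = min(img_w, cx + w // 2), min(img_h, cy + h // 2)
    return x0, y0, x1, y1
```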
2.1) Set the pixels of the preliminary foreground image C outside the search region to C[i]=0 and the pixels inside the search region to C[i]=abs(A2[i]-B[i]), where abs() denotes the absolute-value function, A2[i] is the value of the i-th pixel of the tracked current frame image A2, B[i] is the value of the i-th pixel of the background image B, and C[i] is the value of the i-th pixel of the preliminary foreground image C;
2.2) Binarize the preliminary foreground image C once, placing the result in the binarized foreground image D:
Traverse the pixels of C inside the search region; if all three RGB channels exceed the set threshold th1 (th1 is 5 in this embodiment), assign 255, otherwise assign 0, that is: if Cr[i] > 5 and Cg[i] > 5 and Cb[i] > 5, then D[i]=255; otherwise D[i]=0; where Cr[i], Cg[i], and Cb[i] are the R, G, and B channel values of the i-th pixel of C, and D[i] is the value of the i-th pixel of the binarized foreground image D. The resulting binarized foreground image D is shown in Fig. 4.
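Steps 2.1 and 2.2 can be sketched together in vectorized NumPy (an implementation-style assumption; the patent describes a per-pixel traversal). `region` is the search box in the sense of the sketch above:

```python
import numpy as np

def binarize_foreground(frame_a2, background, region, th1=5):
    """C = |A2 - B| inside the search region (0 elsewhere); D = 255 only
    where all three RGB channels of C exceed th1."""
    x0, y0, x1, y1 = region
    c = np.zeros(frame_a2.shape, dtype=np.int16)
    c[y0:y1, x0:x1] = np.abs(frame_a2[y0:y1, x0:x1].astype(np.int16)
                             - background[y0:y1, x0:x1].astype(np.int16))
    d = np.zeros(frame_a2.shape[:2], dtype=np.uint8)
    d[(c > th1).all(axis=2)] = 255      # all channels must exceed th1
    return d
```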
3. Target feature update
The color of the palm is very similar to that of some furniture, walls, cartons, and books. Although we maintain an up-to-date background image B (in the background update step), camera instability, lighting changes, and similar factors mean that even a static object's pixel values can change considerably. Therefore, besides distinguishing the foreground between frames, information within the frame is also introduced. Experiments show that for most objects whose color is close to the palm's, the channel values are also close in the YUV color space, especially on the UV channels, yet all three channels differ noticeably in the RGB color space. In view of this, the present invention records the mean values avgR, avgG, and avgB of the previous frame's tracked object, the palm, on the three RGB channels, in preparation for the screening in step 4.
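Recording the palm means could look like this (a sketch; `palm_mask`, a boolean image marking the previous frame's tracked palm pixels, is an assumed input):

```python
import numpy as np

def palm_channel_means(prev_frame, palm_mask):
    """Mean R, G, B over the pixels the tracker labeled as palm in the previous frame."""
    palm_pixels = prev_frame[palm_mask]             # N x 3 array of palm pixels
    avg_r, avg_g, avg_b = palm_pixels.mean(axis=0)
    return avg_r, avg_g, avg_b
```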
4. Secondary screening of the foreground image D
After obtaining the binarized foreground image D output by step (2), the foreground points must also be screened from two aspects to see whether they match the tracking characteristics.
The first aspect is skin color. The present invention applies only to the tracking of skin-colored objects; for non-skin-colored objects this pass can be skipped. Based on the current frame image A2, perform skin-color detection on all valid points in D and remove any point that is a non-skin-color point in A2, that is: if D[i] equals 255 and A2[i] is a non-skin-color point, set D[i] to 0, where D[i] is the i-th pixel of the binarized foreground image D.
The second aspect uses the palm information of the tracked object recorded in step 3. Each point in the foreground is compared on the three RGB channels. Considering the randomness of palm movement during gesture operation, the threshold th2 is set to 60 in this embodiment.
Based on the current frame image A2, examine all valid points in D; if, in A2, any channel of a point differs from the previous frame's tracked palm mean by more than the set threshold of 60, do not treat the point as foreground, that is: if abs(A2R[i]-avgR) > 60 or abs(A2G[i]-avgG) > 60 or abs(A2B[i]-avgB) > 60, then D[i]=0; where A2R[i], A2G[i], and A2B[i] are the R, G, and B channel values of the i-th pixel of A2.
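Both screening passes in one sketch; the skin-color rule here is a common RGB heuristic standing in for whatever detector the patent leaves unspecified, so treat it as an assumption:

```python
import numpy as np

def screen_foreground(d, frame_a2, avg_rgb, th2=60):
    """Zero out foreground points that are non-skin-colored in A2 or whose
    channels stray more than th2 from the recorded palm means."""
    fg = d == 255
    r = frame_a2[..., 0].astype(np.int16)
    g = frame_a2[..., 1].astype(np.int16)
    b = frame_a2[..., 2].astype(np.int16)
    # Assumed skin heuristic (not fixed by the patent): R > G > B with R not too dark.
    skin = (r > g) & (g > b) & (r > 95)
    d[fg & ~skin] = 0
    avg_r, avg_g, avg_b = avg_rgb
    # Any channel more than th2 away from the palm mean -> not foreground.
    far = ((np.abs(r - avg_r) > th2) | (np.abs(g - avg_g) > th2)
           | (np.abs(b - avg_b) > th2))
    d[fg & far] = 0
    return d
```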
5. Stray-point removal
After the preceding removal operations, some stray points, i.e., isolated points, may remain in the foreground image D. Such points increase the computation and judgments required of the subsequent tracking algorithm, and we want to remove them without also removing information inside the palm. Therefore, the present invention first applies one erosion operation to the screened binarized foreground image D with a 3x3 operator: if a pixel of D is 0, all 8 of its neighboring pixels are set to 0. It then applies one dilation operation with a 3x3 operator: whenever a pixel of D is 255, all 8 of its neighboring pixels are set to 255, restoring the palm's outer contour and yielding a relatively clean and accurate foreground image D.
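The 3x3 erosion followed by 3x3 dilation described above is a standard morphological opening; a sketch with OpenCV:

```python
import cv2
import numpy as np

def remove_stray_points(d):
    """One 3x3 erosion then one 3x3 dilation to drop isolated points while
    restoring the palm's outer contour."""
    kernel = np.ones((3, 3), np.uint8)
    eroded = cv2.erode(d, kernel, iterations=1)
    return cv2.dilate(eroded, kernel, iterations=1)
```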
The relatively clean and accurate foreground image D thus obtained is shown in Fig. 5; from it, the subsequent tracking module can very easily find the tracked object, the palm.
6. Background update
During motion tracking, the background image B must be maintained and updated. This is done after each tracking pass and provides the background for the next frame's tracking. Because lighting varies, images closer on the time axis are more accurate references.
Update the background image B according to the current frame image A2: wherever A2 contains no palm, copy the pixel value into the corresponding position of B; wherever A2 contains the palm, keep the corresponding pixel value of B unchanged. The updated background image B is used for extracting the next frame's foreground; return to step (2).
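A sketch of the update rule (again with a hypothetical `palm_box` for the palm tracked in A2):

```python
def update_background(background, frame_a2, palm_box):
    """Copy A2's non-palm pixels into B; where the palm is, keep B unchanged."""
    x, y, w, h = palm_box
    updated = frame_a2.copy()                                   # non-palm: take A2's values
    updated[y:y + h, x:x + w] = background[y:y + h, x:x + w]    # palm: keep old B
    return updated
```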
The updated background image B is shown in Fig. 3. Comparing with Fig. 2, the background image B shows no palm; at the palm's position it retains the objects that the palm had been occluding.
Although illustrative embodiments of the present invention have been described above so that those skilled in the art may understand the present invention, it should be clear that the invention is not restricted to the scope of those embodiments. To those skilled in the art, various changes are obvious so long as they remain within the spirit and scope of the present invention as defined and determined by the appended claims, and all inventions and creations employing the concept of the present invention fall within the scope of protection.

Claims (2)

1. A foreground extraction method in a gesture tracking process, characterized by comprising the following steps:
(1) Building the background
Detect the palm with a motion detection algorithm, then initialize the background image B from the current frame image A1 in which the palm was detected: wherever A1 contains no palm, copy the pixel value into the corresponding position of the background image B; wherever A1 contains the palm, initialize the corresponding position of B to zero;
(2) Introducing a search region and binarizing the foreground
If the current frame tracks the palm, center the search box on the palm center tracked in the previous frame, and adopt a region 3 palm-widths wide and 2 palm-heights tall as the region within which the palm may move;
2.1) Obtain a preliminary foreground image C as follows: set the pixels of C outside the search region to C[i]=0 and the pixels of C inside the search region to C[i]=abs(A2[i]-B[i]), where abs() denotes the absolute-value function, A2[i] is the value of the i-th pixel of the tracked current frame image A2, B[i] is the value of the i-th pixel of the background image B, and C[i] is the value of the i-th pixel of the preliminary foreground image C;
2.2) Binarize the preliminary foreground image C once, placing the result in the binarized foreground image D:
Traverse the pixels of C inside the search region; if all three RGB channels exceed the set threshold th1, assign 255, otherwise assign 0, that is: if Cr[i] > th1 and Cg[i] > th1 and Cb[i] > th1, then D[i]=255; otherwise D[i]=0; where Cr[i], Cg[i], and Cb[i] are the R, G, and B channel values of the i-th pixel of C, and D[i] is the value of the i-th pixel of the binarized foreground image D;
If the current frame fails to track the palm, return to step (1);
(3) Target feature update
Record the mean values avgR, avgG, and avgB of the previous frame's tracked object, the palm, on the three RGB channels;
(4) Secondary screening of the foreground image D
After obtaining the binarized foreground image D output by step (2), screen the foreground points from two aspects to see whether they match the tracking characteristics;
Based on the current frame image A2, perform skin-color detection on all valid points in D; remove any point that is a non-skin-color point in A2, that is: if D[i] equals 255 and A2[i] is a non-skin-color point, set D[i] to 0, where D[i] is the i-th pixel of the binarized foreground image D;
Based on the current frame image A2, examine all valid points in D; if, in A2, any channel of a point differs from the previous frame's tracked palm mean by more than the set threshold th2, do not treat the point as foreground, that is: if abs(A2R[i]-avgR) > th2 or abs(A2G[i]-avgG) > th2 or abs(A2B[i]-avgB) > th2, then D[i]=0; where A2R[i], A2G[i], and A2B[i] are the R, G, and B channel values of the i-th pixel of A2;
(5) Stray-point removal
First apply one erosion operation to the screened binarized foreground image D with a 3x3 operator: if a pixel of D is 0, set all 8 of its neighboring pixels to 0; then apply one dilation operation with a 3x3 operator: whenever a pixel of D is 255, set all 8 of its neighboring pixels to 255, restoring the palm's outer contour and yielding a relatively clean and accurate foreground image D;
(6) Background update
Update the background image B according to the current frame image A2: wherever A2 contains no palm, copy the pixel value into the corresponding position of B; wherever A2 contains the palm, keep the corresponding pixel value of B unchanged; the updated background image B is used for extracting the next frame's foreground; return to step (2).
2. The foreground extraction method according to claim 1, characterized in that the threshold th1=5 and the threshold th2=60.
CN201210575198.7A 2012-12-26 2012-12-26 Foreground extraction method in gesture tracking process Active CN103065312B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201210575198.7A CN103065312B (en) 2012-12-26 2012-12-26 Foreground extraction method in gesture tracking process

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201210575198.7A CN103065312B (en) 2012-12-26 2012-12-26 Foreground extraction method in gesture tracking process

Publications (2)

Publication Number Publication Date
CN103065312A true CN103065312A (en) 2013-04-24
CN103065312B CN103065312B (en) 2015-05-13

Family

ID=48107929

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201210575198.7A Active CN103065312B (en) 2012-12-26 2012-12-26 Foreground extraction method in gesture tracking process

Country Status (1)

Country Link
CN (1) CN103065312B (en)

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103209321A (en) * 2013-04-03 2013-07-17 南京邮电大学 Method for quickly updating video background
CN107920213A (en) * 2017-11-20 2018-04-17 深圳市堇茹互动娱乐有限公司 Image synthesizing method, terminal and computer-readable recording medium
CN108960206A (en) * 2018-08-07 2018-12-07 北京字节跳动网络技术有限公司 Video frame treating method and apparatus
CN110189364A (en) * 2019-06-04 2019-08-30 北京字节跳动网络技术有限公司 For generating the method and apparatus and method for tracking target and device of information
CN111107261A (en) * 2018-10-25 2020-05-05 华勤通讯技术有限公司 Photo generation method and equipment

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6788809B1 (en) * 2000-06-30 2004-09-07 Intel Corporation System and method for gesture recognition in three dimensions using stereo imaging and color vision
CN101853071A (en) * 2010-05-13 2010-10-06 重庆大学 Gesture identification method and system based on visual sense
CN102799875A (en) * 2012-07-25 2012-11-28 华南理工大学 Tracing method of arbitrary hand-shaped human hand

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6788809B1 (en) * 2000-06-30 2004-09-07 Intel Corporation System and method for gesture recognition in three dimensions using stereo imaging and color vision
CN101853071A (en) * 2010-05-13 2010-10-06 重庆大学 Gesture identification method and system based on visual sense
CN102799875A (en) * 2012-07-25 2012-11-28 华南理工大学 Tracing method of arbitrary hand-shaped human hand

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
FENG-SHENG CHEN, ET AL.: "Hand gesture recognition using a real-time tracking method and hidden Markov models", IMAGE AND VISION COMPUTING *
郭北苑 et al.: "Dynamic segmentation of hand targets in gesture interaction" (手势交互中手部目标的动态分割), Journal of System Simulation (《系统仿真学报》) *

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN103209321A (en) * 2013-04-03 2013-07-17 南京邮电大学 Method for quickly updating video background
CN103209321B (en) * 2013-04-03 2016-04-13 南京邮电大学 A kind of video background Rapid Updating
CN107920213A (en) * 2017-11-20 2018-04-17 深圳市堇茹互动娱乐有限公司 Image synthesizing method, terminal and computer-readable recording medium
CN108960206A (en) * 2018-08-07 2018-12-07 北京字节跳动网络技术有限公司 Video frame treating method and apparatus
CN111107261A (en) * 2018-10-25 2020-05-05 华勤通讯技术有限公司 Photo generation method and equipment
CN110189364A (en) * 2019-06-04 2019-08-30 北京字节跳动网络技术有限公司 For generating the method and apparatus and method for tracking target and device of information

Also Published As

Publication number Publication date
CN103065312B (en) 2015-05-13

Similar Documents

Publication Publication Date Title
CN102184552B (en) Moving target detecting method based on differential fusion and image edge information
CN103065312B (en) Foreground extraction method in gesture tracking process
CN102915544B (en) Video image motion target extracting method based on pattern detection and color segmentation
CN103020991B (en) The method and system of moving target perception in a kind of video scene
CN103226701A (en) Modeling method of video semantic event
CN104299246A (en) Production line object part motion detection and tracking method based on videos
CN104392461A (en) Video tracking method based on texture features
CN105069816A (en) Method and system for counting inflow and outflow people
Van den Bergh et al. Real-time stereo and flow-based video segmentation with superpixels
CN103020980A (en) Moving target detection method based on improved double-layer code book model
CN104376580A (en) Processing method for non-interest area events in video summary
CN103971347A (en) Method and device for treating shadow in video image
CN113628202A (en) Determination method, cleaning robot and computer storage medium
CN103065145A (en) Vehicle movement shadow eliminating method
CN103793703A (en) Method and device for positioning face detection area in video
CN103996028A (en) Vehicle behavior recognition method
Dave et al. Statistical survey on object detection and tracking methodologies
CN202815785U (en) Optical touch screen
CN103049738B (en) Many Method of Vehicle Segmentations that in video, shade connects
Xu et al. Moving target tracking based on adaptive background subtraction and improved camshift algorithm
Wang et al. A real-time vision-based hand gesture interaction system for virtual EAST
CN103745486A (en) Method for eliminating noise interference by using moving track of object
Basset et al. Recovery of motion patterns and dominant paths in videos of crowded scenes
Bourja et al. Movits: Moroccan video intelligent transport system
Krishna et al. Automatic detection and tracking of moving objects in complex environments for video surveillance applications

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant