
System and method for moving object extraction using stereo vision

Info

Publication number: WO2014092549A2
Authority: WO
Grant status: Application
Application number: PCT/MY2013/000256
Priority date: 2012-12-13
Filing date: 2013-12-12
Publication date: 2014-06-19
Prior art keywords: pixel, image, moving object, motion map, moving
Other languages: French (fr)
Other versions: WO2014092549A3 (en), WO2014092549A8 (en)
Inventors: Binti Kadim ZULAIKHA, Hock Woon Hon, Binti Samudin NORSHUHADA
Original Assignee: Mimos Berhad

Classifications

    • G PHYSICS; G06 COMPUTING, CALCULATING, COUNTING; G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL (all codes below fall under G06T7/00 Image analysis or G06T2207/00 Indexing scheme for image analysis or image enhancement)
    • G06T7/285 Analysis of motion using a sequence of stereo image pairs
    • G06T7/174 Segmentation; Edge detection involving the use of two or more images
    • G06T7/194 Segmentation; Edge detection involving foreground-background segmentation
    • G06T7/215 Motion-based segmentation
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T7/254 Analysis of motion involving subtraction of images
    • G06T7/73 Determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10021 Stereoscopic video; Stereoscopic image sequence
    • G06T2207/20221 Image fusion; Image merging
    • G06T2207/30232 Surveillance
    • G06T2207/30244 Camera pose

Abstract

The present invention provides a system and method for processing a sequence of image frames captured through an imaging device that includes an accelerometer. The system and method comprise an adaptive temporal background subtraction unit and a dynamic foreground object estimation unit for evaluating whether each pixel is a moving object pixel or a non-moving object pixel based on the classification of the pixel between the first motion map and the second motion map generated thereby.

Description

System and Method for Moving Object Extraction Using Stereo Vision

Field of the Invention

[0001] The present invention relates to image processing. More particularly, the present invention relates to a system and method for extracting and detecting moving objects through stereo vision.

Background

[0002] Feature point extraction and classification for non-static cameras is difficult because a computer vision system cannot readily determine whether the movement of a feature point is caused by an actual moving object or by the motion of the camera.

[0003] It is recognized that feature points/vectors are widely used for detecting moving objects from a video stream. However, it is a challenge to provide a reliable system and method that processes these feature points in an effective and efficient way to extract the moving object. It is also desired that the system and method be adapted with adaptive capability to refine the detection results in real time.

Summary

[0004] It is an objective of the present invention to provide a system and method for extracting moving objects from sequences of images (i.e. videos) captured on moving stereo cameras in real time. The extracted and identified moving objects can then be further analysed by any surveillance application such as tracking, intrusion detection, etc. Through the system and method of the present invention, each of the captured images is segmented into moving objects and background.

[0005] In one aspect, the present invention integrates an area-based detection and a feature-based detection to extract moving objects from the images. In particular, the sequence of images, i.e. the video, is captured through a stereo camera that comprises two imaging devices. The stereo camera may also be equipped with an accelerometer for detecting movement. Such a system and method can also be implemented on any mobile device, such as a smart phone, having a camera and an accelerometer integrated thereon. Accordingly, the motion of the stereo camera or the mobile device can be adaptively used to determine the background from the images, which in turn provides a reference for extracting moving object(s) through the area-based method.

[0006] In another aspect of the present invention, there is provided a method for processing a sequence of image frames captured through an imaging device, the imaging device including an accelerometer. The method comprises extracting feature points from the image frames; classifying each feature point as belonging to either a moving object or a non-moving object; receiving input from the accelerometer to estimate the transformations between a current frame and previous frames; determining the relative position of the current frame with respect to previous frames; extracting previous frames that relate to the relative position of the current frame; selecting previous frames that overlap with the current frame; performing background subtraction between the current frame and the related previous frames; forming a first motion map by fusing the results of the background subtractions between the current frame and the respective extracted previous frames; updating the previous frames list and linkage; forming a second motion map that maps out pixels as moving object pixels and non-moving object pixels based on the feature points' classification; and examining each pixel of each image frame. When the examined pixel is located in an uncovered area of the first motion map, the pixel is given the same classification as defined on the second motion map; otherwise, the pixel is classified as belonging to a moving object when it is classified as a moving object in both the first motion map and the second motion map, or otherwise as belonging to a non-moving object.

[0007] In one embodiment, the imaging device is a stereo imaging device capturing the sequence of image frames, each image frame being formed by a stereo image of a left image and a right image; the feature points are respectively extracted from the left image and the right image. The classification of each feature point may be derived based on matching corresponding feature points between the left and the right image.

[0008] In another embodiment, the method further comprises identifying an uncovered area on the first motion map that is not covered by any of the related previous frames. The method may further segment the image frame based on the classified feature points to generate the second motion map.
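To make the ordering of the summarized steps concrete, the following is a minimal Python sketch of how the method might be arranged. Every helper name here is a hypothetical placeholder rather than an API defined by the patent; the later sketches in the detailed description flesh out some of them.

```python
# Hypothetical pipeline sketch of the summarized method. The helper functions
# are illustrative placeholders to be supplied by the surrounding system.
def process_frame(current_frame, previous_frames, accel_reading):
    # Feature-based path: sparse feature points labelled moving / non-moving.
    features, labels = extract_and_classify_features(current_frame)

    # Accelerometer input constrains the estimate of camera motion between frames.
    motion_model = estimate_camera_motion(current_frame, previous_frames, accel_reading)

    # Area-based path: subtract each overlapping previous frame, fuse the results
    # into the first motion map, and note the area no previous frame covers.
    overlapping = select_overlapping_frames(previous_frames, motion_model)
    motion_map_a, uncovered = adaptive_background_subtraction(
        current_frame, overlapping, motion_model)

    # Dense second motion map grown from the labelled feature points.
    motion_map_b = segment_from_features(current_frame, features, labels)

    # Per-pixel decision combining both maps.
    final_map = combine_motion_maps(motion_map_a, motion_map_b, uncovered)

    # Only non-moving pixels of the current frame update the stored previous frames.
    update_previous_frames(previous_frames, current_frame, final_map)
    return final_map
```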

[0009] In a further aspect of the present invention, there is also provided a surveillance system having an imaging device for capturing a sequence of image frames, wherein the imaging device has an accelerometer. The surveillance system is adapted to carry out the aforesaid method and comprises an adaptive temporal background subtraction unit for carrying out the background subtraction to generate the first motion map, wherein the first motion map maps out whether each pixel belongs to a moving object or a non-moving object; and a dynamic foreground object estimation unit for generating a second motion map that maps out whether each pixel belongs to a moving object or a non-moving object based on the feature points' classification. Each pixel is identified as either a moving object pixel or a non-moving object pixel based on the evaluation of the classification of the pixel between the first motion map and the second motion map.

Brief Description of the Drawings

[0010] Preferred embodiments according to the present invention will now be described with reference to the accompanying figures, in which like reference numerals denote like elements:

[0011] FIG. 1 illustrates a schematic block diagram of a surveillance system in accordance with one embodiment of the present invention;

[0012] FIG. 2A illustrates the process of adaptive temporal background subtraction in accordance with one embodiment of the present invention;

[0013] FIG. 2B exemplifies schematically the processing of a sequence of frames through the adaptive temporal background subtraction;

[0014] FIG. 3 illustrates a process of extracting a foreground object in accordance with one embodiment of the present invention; and

[0015] FIG. 4 illustrates a process for segmenting an image in accordance with one embodiment of the present invention.

Detailed Description

[0016] Embodiments of the present invention shall now be described in detail with reference to the attached drawings. It is to be understood that no limitation of the scope of the invention is thereby intended; such alterations and further modifications of the illustrated device, and such further applications of the principles of the invention as illustrated therein, are contemplated as would normally occur to one skilled in the art to which the invention relates.

[0017] The present invention provides a surveillance system and method to differentiate whether a pixel in an image captured on a non-static stereo camera is a moving object pixel or a static/background pixel. Accordingly, image segmentation can be realized through a plurality of cameras, or a stereo camera, having an accelerometer attached thereto. The method is based on an integration of area-based and feature-based analysis. Feature point detection and classification may further be integrated and performed to refine the final output of the area-based detection, and to remove errors due to misalignment of the current and background images.

[0018] FIG. 1 illustrates a block diagram of a surveillance system 100 in accordance with one embodiment of the present invention. The surveillance system 100 is adapted for processing surveillance videos captured through a pair of surveillance cameras 101. The pair of cameras is adapted for capturing stereo images/videos operationally; for the purpose of the description below, the pair of cameras shall hereinbelow be referred to as stereo cameras. Preferably, the surveillance cameras 101 are equipped or attached with accelerometers 102 for detecting the movements of the respective surveillance cameras 101. The surveillance system 100 comprises a video acquisition module 103, a database 112, an image-processing module 104, a display module 110, and a post detection module 111. The video acquisition module 103 is connected to the pair of cameras 101 to acquire the images or videos captured through the cameras. The acceleration information concerning the plurality of cameras 101 recorded through the accelerometers 102 is also transmitted to the video acquisition module 103. The captured videos/images and the acceleration information are stored on the database module 112 and are retrievable by the image-processing module 104 for processing at any time. The database module 112 also stores presets and user configuration information thereon. The image-processing module 104 is responsible for processing the input videos and acceleration information transmitted from the video acquisition module 103, and analyzing the same to extract and detect moving feature points from the images/videos. Once the feature points are detected and extracted, they are output to the display module 110 as visual outputs, and to the post detection module 111 for triggering further processing or control as necessary.

[0019] In the present embodiment, a feature-based detection method is adopted to provide sparse classification of feature points as belonging to either a moving object or a background/static object. Further, an area-based detection method gives a dense classification of pixels into moving or non-moving. Under these methods, the current image of the video sequence is compared with a previous image or a set of reference images to find the moving points or pixels within the image. In this case, the decision for the classification can be made on the pixels within the overlapping regions between the current and the reference images. To achieve the above, as shown in FIG. 1, the image-processing module 104 further comprises a feature point extraction and classification unit 105, a camera motion model estimation unit 106, an adaptive temporal background subtraction unit 107, a dynamic foreground object estimation unit 108, and an event rule unit 109.

[0020] The feature point extraction and classification unit 105 is adapted to process the images to classify feature points extracted from the images as belonging to either a moving object or a background/static object. The camera motion model estimation unit 106 is adapted to estimate the transformations between the current and previous frames by aligning the images with respect to a common coordinate system, which is required for later processing. The common coordinate system refers to a reference coordinate system that is common to both images being compared; the current image frame, for example, may serve as the common coordinate system. The camera motion model estimation unit 106 operationally performs image matching between the current and previous frames to determine the movement of the imaging device. The information regarding the movement estimation is supplied to the adaptive temporal background subtraction unit 107 accordingly. The adaptive temporal background subtraction unit 107 uses a reference image from the camera motion model estimation unit 106 to align with the current frame; background subtraction is then carried out on these two frames. The reference image is aligned to the current frame's coordinate system, i.e. the common coordinate system. Through the background subtraction, the pixels can be identified as belonging to either a moving object or a static background object, and each pixel is tagged with the result of the background subtraction. Subsequently, the dynamic foreground extraction unit 108 combines the outputs from the feature point extraction and classification unit 105 and the adaptive temporal background subtraction unit 107 to produce a final image segmentation result. The moving object detected through the segmentation is passed to the event rule unit 109 to analyze and determine whether the moving object fits any specific event that requires attention. The event analysis is carried out based on sets of preset rules; the rules may further define the necessary event to trigger as required. The display module 110 is provided to output the sequence of image frames captured by the cameras, or the processed images, or both in a superimposed manner for visual viewing. The post detection module 111 triggers an alarm for post-event action as required.

[0021] FIG. 2A illustrates the process of adaptive temporal background subtraction in accordance with one embodiment of the present invention. The process is implementable on the adaptive temporal background subtraction unit 107 of FIG. 1. Briefly, the process provides an area-based detection method to classify the image pixels into moving pixels and non-moving pixels. In one example, the area-based method includes a background subtraction, whereby a current image of the sequence is compared with a reference image to find the difference between the images; the results of the background subtraction, i.e. the differences, represent pixels of the moving object. The process starts by receiving the input from the accelerometer in step 201. The acceleration information from the accelerometer is used to determine the relative position of the current frame with respect to previous or reference frames in step 202.
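As an illustration of the alignment to a common coordinate system, the sketch below estimates a homography between a previous frame and the current frame from matched ORB features and warps the previous frame accordingly. This is only one plausible realization, written with OpenCV; the patent does not prescribe a particular feature type or transformation model.

```python
import cv2
import numpy as np

def align_to_current(prev_gray, curr_gray):
    """Warp prev_gray into the coordinate system of curr_gray.

    A sketch using ORB feature matching and a RANSAC homography; the patent
    does not mandate this particular motion model.
    """
    orb = cv2.ORB_create(nfeatures=1000)
    kp1, des1 = orb.detectAndCompute(prev_gray, None)
    kp2, des2 = orb.detectAndCompute(curr_gray, None)
    if des1 is None or des2 is None:
        return None, None

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)
    if len(matches) < 4:
        return None, None

    src = np.float32([kp1[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([kp2[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _ = cv2.findHomography(src, dst, cv2.RANSAC, 5.0)
    if H is None:
        return None, None

    h, w = curr_gray.shape[:2]
    warped = cv2.warpPerspective(prev_gray, H, (w, h))
    # Pixels the previous frame does not cover after warping; useful for the
    # "uncovered area" bookkeeping in the subtraction step.
    coverage = cv2.warpPerspective(np.ones_like(prev_gray) * 255, H, (w, h))
    return warped, coverage
```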

[0022] FIG. 2B exemplifies schematically the processing of a sequence of frames through the adaptive temporal background subtraction. The current frame, t, can appear either on the right of a previous frame, t-3, or on the left of a previous frame, t-2. Note that the right frame and the left frame of the previous frames are not to be confused with the left image and right image of a stereo image; the left frame and the right frame are determined by the physical location of the current camera position with respect to the previous camera positions. After a set of previous frames in relation to the current frame is determined and extracted in step 203, the camera motion can be estimated in step 204 by aligning the current frame with the extracted previous frames to determine overlapping regions between the frames. Only the previous frames that overlap with the current image are then selected in step 205 for background subtraction; all other non-overlapping image frames are eliminated. The overlapping frames are then warped with respect to the current frame in step 206, and background subtraction is carried out between each of these warped images and the current image in step 207. For example, if three previous frames overlap with the current frame, three background subtractions are performed, one against each previous frame. The resultant motion maps from the background subtraction steps are then fused in step 208 to derive a final motion map. To fuse the different maps, one may use rules such as: if the majority of the motion maps indicate that the current pixel is a moving pixel, it is denoted as a moving pixel in the final motion map. After the final motion map has been determined, the area within the motion map that is not covered by any previous frame is determined in step 209; this area is denoted as an uncovered area. Finally, the current image content is used to update the previous images in step 210. The overlapping regions found in step 204 are also used to determine which pixels in the previous frames are to be updated; only the current pixels that are not identified as moving objects are updated in the previous images. The output from this processing unit is the motion map, a binary image in which a value of '1' indicates a moving pixel and '0' indicates a background or static object. For easy reference, the output motion map is denoted herewith as motion map A.
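The per-frame subtraction, majority-vote fusion and uncovered-area bookkeeping of steps 206 to 209 could be sketched as follows. Simple absolute differencing against a fixed threshold is assumed for illustration only; the patent does not fix the subtraction rule beyond the majority vote described above.

```python
import numpy as np

def fuse_motion_maps(current_gray, warped_prevs, coverages, diff_thresh=25):
    """Area-based detection: subtract each warped previous frame from the
    current frame and fuse the per-frame results by majority vote.

    Returns motion map A (1 = moving pixel, 0 = static/background) and a mask
    of the uncovered area no previous frame overlaps. diff_thresh is an
    illustrative assumption, not a value taken from the patent.
    """
    votes = np.zeros(current_gray.shape, dtype=np.int32)
    cover_count = np.zeros(current_gray.shape, dtype=np.int32)

    for warped, coverage in zip(warped_prevs, coverages):
        overlap = coverage > 0
        diff = np.abs(current_gray.astype(np.int16) - warped.astype(np.int16))
        votes += ((diff > diff_thresh) & overlap).astype(np.int32)
        cover_count += overlap.astype(np.int32)

    # A majority of the overlapping maps must call the pixel "moving" (step 208).
    motion_map_a = (votes * 2 > cover_count).astype(np.uint8)
    # Pixels not seen by any previous frame form the uncovered area (step 209).
    uncovered_area = cover_count == 0
    return motion_map_a, uncovered_area
```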

[0023] FIG. 3 illustrates a process of extracting the foreground object in accordance with one embodiment of the present invention. The process is implementable on the dynamic foreground object extraction unit 108 of FIG. 1. In this process, another motion map, motion map B, is generated and integrated with the motion map A generated by the adaptive temporal background subtraction unit 107 to produce a final motion map. Motion map B is generated based on the classified feature points from the feature point extraction and classification unit 105. The process starts with receiving the feature points and their corresponding classification information from the feature point extraction and classification unit 105. These classification results are used to segment the image into moving pixels and non-moving pixels; the segmentation outputs motion map B. In step 303, the dynamic foreground object extraction unit 108 receives motion map A from the adaptive temporal background subtraction unit. Following that, each pixel in the image is examined to determine whether the pixel belongs to a moving object or a static/background object in step 304. In step 305, it is checked whether the pixel is within an uncovered area in motion map A. If the pixel is within the uncovered area in motion map A, the pixel is given the same classification (i.e. moving object or non-moving object) as in motion map B in step 308, and the classification result is output for the pixel. If the pixel is not within the uncovered area in motion map A, i.e. it is within a covered area, the process further determines in step 309 whether the pixel is classified as a moving pixel in both motion map A and motion map B. If the pixel is classified as a moving pixel in both motion map A and motion map B in step 309, the pixel is classified as belonging to a moving object at step 311; otherwise, the pixel is identified as a static/background pixel at step 310.
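A compact sketch of this per-pixel decision (steps 305 to 311) is given below. The vectorized numpy form is an assumption of convenience; the decision rule itself follows the figure.

```python
import numpy as np

def combine_motion_maps(motion_map_a, motion_map_b, uncovered_area):
    """Final segmentation following FIG. 3: inside the uncovered area of
    motion map A the label is taken from motion map B (step 308); elsewhere
    a pixel is moving only if both maps call it moving (steps 309-311)."""
    final_map = np.where(uncovered_area,
                         motion_map_b,
                         motion_map_a & motion_map_b)
    return final_map.astype(np.uint8)
```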

[0024] FIG. 4 illustrates a process for segmenting an image in accordance with one embodiment of the present invention. The segmentation is adopted in step 302 of FIG. 3 and is carried out based on the classified feature points. The process takes the feature points and their labels as input and outputs motion map B, in which each pixel indicates whether it belongs to a moving pixel or not. The process starts with receiving the feature points, or feature point clouds, and their respective classification information in step 401. In step 402, all classified feature points are mapped onto the current frame. All pixels corresponding to classified feature points are labelled in step 403; background or static feature points are labelled with zero (0). Each remaining pixel is examined for similarity with respect to its k-nearest feature point neighbours, classified as moving points or background points, in step 404 and labelled accordingly. Step 404 labels the remaining pixels that have not yet been labelled (i.e. as moving objects or background). To label these pixels, their respective k-nearest labelled feature points are examined: the similarity between the examined pixel and each of its k-nearest labelled feature points is computed, and the examined pixel is labelled the same as the most similar labelled feature point. Thus, if its most similar feature point is background, the examined pixel is labelled as background too, and vice versa.

[0025] The computation of the similarity is carried out recursively until all pixels are labelled; the labelling process starts with the pixels surrounding the feature points. For example, k may be equal to 5; the k-value can be preset on the system by the operator as appropriate. Then, for a pixel under consideration, the 5 nearest classified feature points are examined and the similarity between the pixel under consideration and these 5 neighbours is computed. Accordingly, the pixel under consideration is labelled the same as its most similar feature point. A hole-filling algorithm is then applied to the resultant map to fill the pixels with missing labels in step 406, and motion map B is output.
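The k-nearest-neighbour labelling of FIG. 4 could be sketched as follows. Here 'similarity' is taken to be intensity closeness between a pixel and its k nearest labelled feature points, which is an assumption since the patent leaves the similarity measure open; unlike the recursive propagation described above, the sketch labels every pixel directly from its k nearest labelled feature points, and the hole-filling step is approximated with a standard fill from scipy.

```python
import numpy as np
from scipy.spatial import cKDTree
from scipy.ndimage import binary_fill_holes

def segment_from_features(gray, feature_points, labels, k=5):
    """Grow motion map B from classified feature points (FIG. 4).

    gray: greyscale current frame.
    feature_points: (N, 2) integer array of (row, col) positions.
    labels: (N,) array, 1 = moving feature point, 0 = background.
    Similarity is intensity closeness here, an illustrative assumption.
    """
    h, w = gray.shape
    tree = cKDTree(feature_points)

    rows, cols = np.mgrid[0:h, 0:w]
    pixels = np.column_stack([rows.ravel(), cols.ravel()])

    # k nearest labelled feature points for every pixel (step 404).
    _, idx = tree.query(pixels, k=min(k, len(feature_points)))
    if idx.ndim == 1:
        idx = idx[:, None]

    neighbour_vals = gray[feature_points[idx, 0], feature_points[idx, 1]]
    pixel_vals = gray.ravel()[:, None]

    # The most similar labelled neighbour decides the label for each pixel.
    best = np.argmin(np.abs(neighbour_vals.astype(np.int16) - pixel_vals), axis=1)
    motion_map_b = labels[idx[np.arange(len(pixels)), best]].reshape(h, w)

    # Rough stand-in for the hole-filling step (step 406).
    motion_map_b = binary_fill_holes(motion_map_b.astype(bool)).astype(np.uint8)
    return motion_map_b
```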

[0026] While specific embodiments have been described and illustrated, it is understood that many changes, modifications, variations, and combinations thereof could be made to the present invention without departing from the scope of the invention.

Claims

1. A method for processing a sequence of image frames captured through an imaging device, the imaging device including an accelerometer, the method comprising:
extracting feature points from the image frames;
classifying each feature point as belonging to either a moving object or a non-moving object;
receiving input from the accelerometer to estimate the transformations between a current frame and previous frames;
determining the relative position of the current frame with respect to previous frames;
extracting previous frames that relate to the relative position of the current frame;
selecting previous frames that overlap with the current frame;
performing background subtraction between the current frame and the related previous frames;
forming a first motion map by fusing results of the background subtractions between the current frame and the respective extracted previous frames;
updating the previous frames list and linkage;
forming a second motion map that maps out pixels as moving object pixels and non-moving object pixels based on the feature points' classification;
examining each pixel of each image frame,
wherein, when the examined pixel is located in an uncovered area of the first motion map, classifying the pixel with the same classification as defined on the second motion map;
otherwise, classifying the pixel as belonging to a moving object when the pixel is classified as a moving object in both the first motion map and the second motion map; or otherwise, classifying the pixel as belonging to a non-moving object.
2. The method according to claim 1, wherein the imaging device is a stereo imaging device capturing the sequence of image frames, each image frame being formed by a stereo image of a left image and a right image, the feature points being respectively extracted from the left image and the right image, and wherein the classification of each feature point is derived based on matching corresponding feature points between the left and the right image.
3. The method according to claim 1, wherein examining each pixel of each image frame comprises:
computing similarity of k-nearest surrounding pixels of each classified feature point;
classifying each pixel as either moving pixel or non-moving pixel based on the computed similarity; and
examining all of the pixels recursively until all pixels are classified.
4. The method according to claim 1, further comprising identifying an uncovered area on the first motion map that is not covered by any of the related previous frames.
5. The method according to claim 1, further comprising segmenting the image frame based on the classified feature points to generate the second motion map.
6. A surveillance system having an imaging device for capturing a sequence of image frames, wherein the imaging device has an accelerometer, the surveillance system being adapted to carry out the method of claim 1, the surveillance system comprising:
an adaptive temporal background subtraction unit for carrying out the background subtraction to generate the first motion map, wherein the first motion map maps out whether each pixel belongs to a moving object or a non-moving object; and
a dynamic foreground object estimation unit for generating a second motion map that maps out whether each pixel belongs to a moving object or a non-moving object based on the feature points' classification,
wherein each pixel is identified as either a moving object pixel or a non-moving object pixel based on the evaluation of the classification of the pixel between the first motion map and the second motion map.
7. The surveillance system according to claim 6, wherein the imaging device is a stereo imaging device capturing the sequence of image frames, each image frame being formed by a stereo image of a left image and a right image, the feature points being respectively extracted from the left image and the right image, and wherein the classification of each feature point is derived based on matching corresponding feature points between the left and the right image.
8. The surveillance system according to claim 6, wherein the dynamic foreground object estimation unit examines each pixel of each image frame recursively, until all pixels are classified, by computing the similarity of the k-nearest surrounding pixels of each classified feature point and classifying each pixel as either a moving pixel or a non-moving pixel based on the computed similarity.
9. The surveillance system according to claim 6, wherein the adaptive temporal background subtraction unit is operable to identify an uncovered area on the first motion map that is not covered by any of the related previous frames.
10. The surveillance system according to claim 6, wherein the dynamic foreground object estimation unit is operable to segment the image frame based on the classified feature points to generate the second motion map.
PCT/MY2013/000256 (WO2014092549A8, en): System and method for moving object extraction using stereo vision; priority date 2012-12-13; filing date 2013-12-12.

Priority Applications (2)

Application Number: MYPI2012005403
Priority Date: 2012-12-13

Publications (3)

Publication Number    Publication Date
WO2014092549A2 (en)    2014-06-19
WO2014092549A3 (en)    2014-09-12
WO2014092549A8 (en)    2015-01-29

Family

ID=50179898

Family Applications (1)

Application Number: PCT/MY2013/000256 (WO2014092549A8)
Title: System and method for moving object extraction using stereo vision
Priority Date: 2012-12-13
Filing Date: 2013-12-12

Country Status (1)

Country Link
WO (1) WO2014092549A8 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6798897B1 (en) * 1999-09-05 2004-09-28 Protrack Ltd. Real time image registration, motion detection and background replacement using discrete local motion estimation
US20080043106A1 (en) * 2006-08-10 2008-02-21 Northrop Grumman Corporation Stereo camera intrusion detection system

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
EDUARDO MONARI ET AL: "A real-time image-to-panorama registration approach for background subtraction using pan-tilt-cameras", ADVANCED VIDEO AND SIGNAL-BASED SURVEILLANCE (AVSS), 2011 8TH IEEE INTERNATIONAL CONFERENCE ON, IEEE, 30 August 2011 (2011-08-30), pages 237-242, XP032053754, DOI: 10.1109/AVSS.2011.6027329 ISBN: 978-1-4577-0844-2 *
RAKESH KUMAR ET AL: "Aerial Video Surveillance and Exploitation", PROCEEDINGS OF THE IEEE, IEEE. NEW YORK, US, vol. 89, no. 10, 1 October 2001 (2001-10-01), XP011044566, ISSN: 0018-9219 *

Also Published As

Publication number Publication date Type
WO2014092549A8 (en) 2015-01-29 application
WO2014092549A3 (en) 2014-09-12 application

Similar Documents

Publication Publication Date Title
US20070250898A1 (en) Automatic extraction of secondary video streams
US20090087024A1 (en) Context processor for video analysis system
US20140333775A1 (en) System And Method For Object And Event Identification Using Multiple Cameras
Kim Real time object tracking based on dynamic feature grouping with background subtraction
US20060215903A1 (en) Image processing apparatus and method
Palaniappan et al. Efficient feature extraction and likelihood fusion for vehicle tracking in low frame rate airborne video
US20140347475A1 (en) Real-time object detection, tracking and occlusion reasoning
US20060067562A1 (en) Detection of moving objects in a video
US20100165112A1 (en) Automatic extraction of secondary video streams
Auvinet et al. Left-luggage detection using homographies and simple heuristics
Singla Motion detection based on frame difference method
Rodríguez-Canosa et al. A real-time method to detect and track moving objects (DATMO) from unmanned aerial vehicles (UAVs) using a single camera
Bayona et al. Comparative evaluation of stationary foreground object detection algorithms based on background subtraction techniques
Chauhan et al. Moving object tracking using gaussian mixture model and optical flow
US20060285724A1 (en) Salient motion detection system, method and program product therefor
US8744122B2 (en) System and method for object detection from a moving platform
US20130279758A1 (en) Method and system for robust tilt adjustment and cropping of license plate images
Fernández-Caballero et al. Real-time human segmentation in infrared videos
US20100034423A1 (en) System and method for detecting and tracking an object of interest in spatio-temporal space
US20110002509A1 (en) Moving object detection method and moving object detection apparatus
US20150131851A1 (en) System and method for using apparent size and orientation of an object to improve video-based tracking in regularized environments
Kong et al. Detecting abandoned objects with a moving camera
Cao et al. Vehicle detection and motion analysis in low-altitude airborne video under urban environment
CN102811343A (en) Intelligent video monitoring system based on behavior recognition
Hu et al. Moving object detection and tracking from video captured by moving camera

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 13831950

Country of ref document: EP

Kind code of ref document: A2

NENP Non-entry into the national phase in:

Ref country code: DE

122 EP: PCT application non-entry in European phase

Ref document number: 13831950

Country of ref document: EP

Kind code of ref document: A2