WO2011019266A1 - Object tracking system and method - Google Patents

Object tracking system and method

Info

Publication number
WO2011019266A1
WO2011019266A1 (PCT/MY2010/000139)
Authority
WO
WIPO (PCT)
Prior art keywords
motion
location
blob
processing means
tracking
Prior art date
Application number
PCT/MY2010/000139
Other languages
French (fr)
Inventor
Kadim Zulaikha
Mei Kuan Lim
Kim Meng Liang
Sze Ling Tang
Original Assignee
Mimos Berhad
Priority date
Filing date
Publication date
Application filed by Mimos Berhad filed Critical Mimos Berhad
Publication of WO2011019266A1 publication Critical patent/WO2011019266A1/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/20 Analysis of motion
    • G06T 7/254 Analysis of motion involving subtraction of images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence


Abstract

The present invention relates generally to an object tracking system and method which is able to track multiple objects by locating the objects in the current image sequence and assigning a consistent object identifier to each of the objects throughout the image sequences, said system and method integrating the bottom-up and top-down approaches (116).

Description

OBJECT TRACKING SYSTEM AND METHOD
1. TECHNICAL FIELD OF THE INVENTION
The present invention relates generally to an object tracking system and method which is able to track multiple objects by locating the objects in the current image sequence and assigning a consistent object identifier to each of the objects throughout the image sequences, said system and method integrating the bottom-up and top-down approaches.
2. BACKGROUND OF THE INVENTION
Object detection and tracking is important in many applications such as visual surveillance, video communication and human-computer interaction. Object detection and tracking can be defined as the process of locating a moving object of interest in the scene and assigning a consistent object label to the same object throughout the image sequences. This process is not a trivial task due to several difficulties: fast-moving objects, scene illumination changes, image appearance changes as the viewpoint changes, partial or full object-to-object and object-to-scene occlusion, non-rigid object shape as the object moves, and the real-time processing requirement. Methods of visual tracking can be divided into two major approaches: bottom-up and top-down. In a bottom-up approach, the image is generally segmented into objects which are then used for tracking. Some common bottom-up algorithms are blob tracking, contour tracking and visual feature matching. Said bottom-up approach mostly has low computational complexity but is not robust enough to track partially or fully occluded objects. On the other hand, the top-down approach is mostly associated with high computational complexity but is more robust in tracking occluded objects. This top-down approach basically generates object hypotheses and tries to verify them against the image. The object to be tracked is sometimes initialized manually by the user or determined automatically using some intelligent and complex object detection method. The said initialization stage is critical for this approach and will affect the performance of the tracking process. Some common tracking algorithms for said top-down approach are the Kalman filter and the particle filter.
It would hence be extremely advantageous if the above shortcomings were alleviated by having an object tracking system and method which is able to track multiple objects by locating the objects in the current image sequence and assigning a consistent object identifier to each of the objects throughout the image sequences, said system and method integrating the bottom-up and top-down approaches.
3. SUMMARY OF THE INVENTION
Accordingly, it is the primary aim of the present invention to provide an object tracking system and method which combines the advantages of both the top-down and bottom-up tracking approach to complement the limitations of both said approaches.
It is yet another object of the present invention to provide an object tracking system and method which has low computational complexity.
It is yet another object of the present invention to provide an object tracking system and method which is able to track partially or fully occluded objects.
Other and further objects of the invention will become apparent with an understanding of the following detailed description of the invention or upon employment of the invention in practice.
In a preferred embodiment the method of tracking objects using a tracking system comprises the steps of: i. a sensor and camera performing motion detection on the current image frame to obtain a current motion map in binary format indicating the location of motion and non-motion pixels; ii. any acceptable processing means performing pre-processing on said current motion map by eliminating noise and labelling said motion map; characterised in that iii. any acceptable processing means performing overlapping region analysis on the current and previous motion maps to determine the relationship between said current and previous motion pixel groups in the said motion maps; iv. any acceptable processing means invoking one of a plurality of blob events based on the result of said overlapping region analysis.
According to another embodiment of the present invention there is provided an object tracking system for detecting and tracking multiple objects by localizing said objects and assigning a consistent label to each object throughout a sequence of images, comprising:
i. any acceptable sensoring means;
ii. any acceptable processing means;
iii. any acceptable image capturing device; characterized in that
said any acceptable processing means being configured to process images from said image capturing device, characterized in that said processing means is configured to perform a method according to said preferred embodiment.
4. BRIEF DESCRIPTION OF THE DRAWINGS
Other aspects of the present invention and their advantages will be discerned after studying the Detailed Description in conjunction with the accompanying drawings, in which:
FIG 1 shows a flow chart of an object tracking method which comprises an integration of the bottom-up and top-down tracking approaches.
FIG 2 shows the rule of determining events based on the overlapping previous-current blobs test.
FIG 3 shows a flow chart of the sub-steps of the step of Perform Process New Object in FIG 1.
FIG 4 shows a flow chart of the sub-steps of the step of Perform Top-Down Tracking Approach in FIG 1.
FIG 5 shows a flow chart of the sub-steps of the sub-step of Alignment Process to Align Estimated Object Location and the Motion Blob in FIG 4.
5. DETAILED DESCRIPTION OF THE DRAWINGS
In the following detailed description, numerous specific details are set forth in order to provide a thorough understanding of the invention.
However, it will be understood by those of ordinary skill in the art that the invention may be practised without these specific details. In other instances, well-known methods, procedures and/or components have not been described in detail so as not to obscure the invention. The invention will be more clearly understood from the following description of the embodiments thereof, given by way of example only with reference to the accompanying drawings, which are not drawn to scale.
Referring to FIG 1, there is shown a flow chart of an object tracking method which comprises an integration of the bottom-up and top-down tracking approaches. In the bottom-up approach, a blob-based tracking method is proposed as the main tracking sub-system. The idea is to track each of the motion blobs appearing in the current frame by using at least one tracker. Each tracker stores information on object features such as colour and texture, object centroid and area, and the object identifier. Firstly, a motion map of the current frame is obtained by applying a motion detection process (102) to the current image frame (104). A motion map is a binary map which indicates the location of motion and non-motion pixels. A group of motion pixels is called a motion blob. Said motion map will be pre-processed (106) to eliminate noise, fill up holes and label each of the motion blobs. Next, said processed motion map from the current frame will be compared (108) with its equivalent motion map from the previous frame (110) to obtain the previous-current motion blob relationship. Each motion blob in the current motion map will be compared to the motion blobs from the previous motion map to determine one of a plurality of events comprising: new object (112), existing object (114), blob splitting (118) or blob merging (116). In this case, overlapping region analysis (108) is used as the comparison method.
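The motion-map and blob-labelling steps (102-106) above can be sketched in a few lines. This is an illustrative approximation only: the patent does not specify the motion detector, so simple frame differencing with an assumed threshold and 4-connected component labelling stand in for it.

```python
from collections import deque

def motion_map(prev_frame, curr_frame, threshold=25):
    """Binary motion map by absolute frame differencing (102).
    Frames are 2-D lists of grayscale intensities; the threshold
    value is an assumption."""
    return [[1 if abs(c - p) > threshold else 0
             for p, c in zip(prow, crow)]
            for prow, crow in zip(prev_frame, curr_frame)]

def label_blobs(mmap):
    """4-connected component labelling of motion pixels (106).
    Returns a label map (0 = background, 1..n = motion blobs)
    and the number of blobs found."""
    h, w = len(mmap), len(mmap[0])
    labels = [[0] * w for _ in range(h)]
    next_label = 0
    for y in range(h):
        for x in range(w):
            if mmap[y][x] and not labels[y][x]:
                next_label += 1          # start a new blob
                queue = deque([(y, x)])
                labels[y][x] = next_label
                while queue:             # flood-fill the blob
                    cy, cx = queue.popleft()
                    for ny, nx in ((cy-1,cx),(cy+1,cx),(cy,cx-1),(cy,cx+1)):
                        if 0 <= ny < h and 0 <= nx < w \
                                and mmap[ny][nx] and not labels[ny][nx]:
                            labels[ny][nx] = next_label
                            queue.append((ny, nx))
    return labels, next_label
```

Noise elimination and hole filling (morphological opening/closing in a typical implementation) are omitted here for brevity.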
Referring now to FIG 2, there is shown the rule for determining events based on the overlapping previous-current blobs test (108). Blob events comprise any of the events of new object (112), objects merging (116), objects splitting (118) and existing object (114). If one motion blob in the current motion map overlaps with only one motion blob from the previous motion map (122), the current motion blob will be given the same object label as the previous motion blob. As another example, if one motion blob in the current motion map overlaps with more than one motion blob in the previous motion map (124), it is interpreted that more than one motion blob from the previous motion map has merged into the current motion blob, and thus the event of object merging (116) is detected. In another case, if more than one motion blob in the current motion map overlaps with only one motion blob in the previous motion map (126), it is interpreted that the previous motion blob has split in the current motion map, and thus the event of objects splitting (118) is detected. Finally, if a motion blob in the current motion map does not overlap with any motion blob in the previous motion map (120), the motion blob in the current motion map is interpreted to be a new object, and thus the event of new object (112) is detected.
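The FIG 2 event rules reduce to counting overlaps between previous and current blobs. The sketch below assumes blobs are represented by axis-aligned bounding boxes and uses a simple box-overlap test; the patent leaves the precise overlap measure open.

```python
def boxes_overlap(a, b):
    """Axis-aligned overlap test; boxes are (x1, y1, x2, y2)."""
    return not (a[2] < b[0] or b[2] < a[0] or a[3] < b[1] or b[3] < a[1])

def classify_blob_events(prev_boxes, curr_boxes):
    """Assign each current blob one of the four blob events using
    the previous-current overlap rules of FIG 2."""
    # which current blobs each previous blob touches
    prev_hits = {j: [i for i, c in enumerate(curr_boxes) if boxes_overlap(p, c)]
                 for j, p in enumerate(prev_boxes)}
    events = {}
    for i, c in enumerate(curr_boxes):
        hits = [j for j, p in enumerate(prev_boxes) if boxes_overlap(p, c)]
        if not hits:
            events[i] = "new"        # no previous overlap (120)
        elif len(hits) > 1:
            events[i] = "merge"      # several previous blobs merged (124)
        elif len(prev_hits[hits[0]]) > 1:
            events[i] = "split"      # one previous blob split (126)
        else:
            events[i] = "existing"   # one-to-one match (122)
    return events
```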
Each of the detected events will invoke a different process of updating the tracker information and assigning the object identifier. The processes are described as follows: i. if one previous blob is related to only one current blob (122), then the current blob corresponds to an existing object being tracked and it will be continually tracked and labelled with the same object label assigned to its related previous blob; ii. if the current blob is not related to any previous blob (120), that blob corresponds to a new object to be tracked and the sub-step of Process New Object (112) will be invoked; iii. if one previous blob relates to more than one current blob (126), the sub-step of Splitting Event (118) is detected; iv. if more than one previous blob relates to only one current blob (124), the sub-step of Merging Event (116) takes place and the top-down tracking approach will be invoked.
Referring now to FIG 3, there is shown a flow chart of the sub-steps of the step of Perform Process New Object (112) in FIG 1. Initially, a ratio test is performed on the motion blob (302) identified as a new object to be tracked. The ratio test is done on the bounding box enclosing the said motion blob and comprises calculating the ratio of the height to the width of the said bounding box. This ratio test determines whether the blob contains a single object or multiple objects (304). If the blob contains multiple objects, a human detection algorithm is applied to the blob (306) to separate the multiple objects into single objects. Only a complete single object (track-able object) within the multiple-object blob will be tracked at this stage. Then, for each single object blob, its object properties will be extracted (308) together with its location information (centroid and enclosing bounding box coordinates). Colour and texture properties are used to represent the object as they are robust to partial object occlusion, rotation- and scale-invariant, and computationally inexpensive. After that, a tracker is initialized for each single track-able object within the blob (310). Each tracker will hold the information on object features and location properties. In order to assign an object label identifier, the object features first need to be matched against all objects in the memory of the said tracking system (312). Objects in said memory are objects that have been tracked before and have left the scene for not more than a certain number of frames. If the current object is found similar to any object in the memory (314), then the current object is assigned the same identifier as the object in the memory (316) and the object in said memory will be removed (318). If the current object is found to be different from every object in the memory, a new object identifier will be assigned to the current object (320).
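The ratio test and memory-matching logic of FIG 3 can be illustrated as follows. The ratio threshold, the similarity threshold, and the feature representation are all assumptions here; the patent only requires a height-by-width test (302/304) and a similarity match against remembered objects (312-320).

```python
def is_multi_object(bbox, min_ratio=1.2):
    """Height/width ratio test (302/304). The threshold is an assumption:
    an upright person is taller than wide, so a squat, wide blob is taken
    to contain several objects side by side."""
    x1, y1, x2, y2 = bbox
    return (y2 - y1) / max(x2 - x1, 1) < min_ratio

def assign_identifier(features, memory, similarity, new_id, min_sim=0.8):
    """Match a new object's features against remembered objects (312-320).
    `memory` maps object id -> stored features; `similarity` is any
    measure in [0, 1]. Returns the assigned id; a matched memory entry
    is consumed, mirroring step 318."""
    best_id, best_sim = None, min_sim
    for oid, feat in memory.items():
        s = similarity(features, feat)
        if s >= best_sim:
            best_id, best_sim = oid, s
    if best_id is not None:
        del memory[best_id]   # re-identified: remove from memory (318)
        return best_id        # reuse the remembered label (316)
    return new_id             # no match: assign a fresh identifier (320)
```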
Referring now to FIG 4, there is shown a flow chart of the sub-steps of the step of Perform Top-Down Tracking Approach (116) in FIG 1. In this top-down approach, particle filter based object tracking is employed. Initially, a particle filter tracker corresponding to each of the objects in the merged blob will be initialized (402). The step of initializing the particle filter object tracking method comprises said any acceptable processing means performing the following sub-steps: i. extracting the label and location of the object from the previous image frame; ii. extracting object properties from said object location identified in the previous image frame; iii. storing said object label, location and properties in an object tracker data structure.
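The object tracker data structure of sub-steps i-iii might look like the following sketch. The particle count and Gaussian spread are assumptions, since the patent does not fix how the particle hypotheses are seeded.

```python
from dataclasses import dataclass, field
import random

@dataclass
class ObjectTracker:
    """Per-object tracker state initialized from the previous frame (402)."""
    label: int         # object identifier carried across frames
    location: tuple    # (cx, cy) centroid from the previous frame
    features: dict     # colour/texture target model
    particles: list = field(default_factory=list)

    def init_particles(self, n=100, spread=5.0, seed=0):
        """Scatter n particle hypotheses around the last known centroid.
        Gaussian seeding with this spread is an assumption."""
        rng = random.Random(seed)
        cx, cy = self.location
        self.particles = [(cx + rng.gauss(0, spread),
                           cy + rng.gauss(0, spread)) for _ in range(n)]
```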
In the initialization process, target model features are generated using the object colour and texture information from the previous frame in which the object appeared as a single object, just before it merges in the current frame. During the localization process (404), the said tracker will find the most probable sample distribution in the current frame by comparing the target model with the current hypotheses of the particle filter using the Bhattacharyya coefficient, a popular similarity measure between two distributions. After all the objects within the merged blob have been localized by their own trackers, their predicted locations will then have to be aligned with the motion blob area (406). Referring now to FIG 5, there is shown a flow chart of the sub-steps of the sub-step of Alignment Process to Align Estimated Object Location and the Motion Blob (406) in FIG 4. The purpose of the alignment process is to fine-tune the object location in the current frame as predicted by said particle filter by incorporating the information from the motion blob. In this process, the percentage of overlapping area between each predicted object location, as first outlined by an enclosing bounding box, and its motion blob area is evaluated (502). This overlapping percentage determines which object locations need to be readjusted. If the overlapping percentage is less than a certain threshold value (504), then the estimated object location will be readjusted. Subsequently, for each object which requires adjustment, the possible moving directions that would move the predicted object bounding box to lie within the motion blob area by at least more than a certain threshold value are determined (506). These possible directions are restricted to the 8-neighbor directions.
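The Bhattacharyya coefficient used in the localization step (404) has a direct definition over normalized histograms; the sketch below shows the discrete case, where a value of 1.0 indicates identical distributions.

```python
from math import sqrt

def bhattacharyya(p, q):
    """Bhattacharyya coefficient between two normalized histograms:
    BC(p, q) = sum_i sqrt(p_i * q_i). Both inputs must sum to 1."""
    return sum(sqrt(pi * qi) for pi, qi in zip(p, q))
```

In a particle filter, each hypothesis extracts a colour/texture histogram at its candidate location and is weighted by this coefficient against the target model.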
The imaginary line connecting the predicted object centroid to the point to which the predicted object centroid has to move into the motion blob is denoted as a moving line. Along each possible moving line, the object features are extracted (508) and the similarity between the said extracted features and the object model features is calculated (510); the adjusted location of the object corresponds to the location along the moving line with the highest similarity value (512). While the preferred embodiment of the present invention and its advantages have been disclosed in the above Detailed Description, the invention is not limited thereto but only by the spirit and scope of the appended claims.
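The alignment search of FIG 5 can be sketched as a scan along each of the 8-neighbor directions. The step size, maximum search length, and overlap threshold are assumptions; the patent only requires selecting the location on the moving line with the highest feature similarity (508-512).

```python
DIRS_8 = [(-1, -1), (-1, 0), (-1, 1), (0, -1),
          (0, 1), (1, -1), (1, 0), (1, 1)]

def align_location(pred_box, overlap_with_blob, feature_sim,
                   step=2, max_steps=10, min_overlap=0.5):
    """Slide the predicted bounding box along each 8-neighbor moving
    line (506) until it overlaps the motion blob by at least
    min_overlap, then keep the candidate whose extracted features best
    match the target model (508-512). `overlap_with_blob` and
    `feature_sim` are caller-supplied callables (assumptions)."""
    x1, y1, x2, y2 = pred_box
    best_box, best_sim = pred_box, feature_sim(pred_box)
    for dx, dy in DIRS_8:
        for k in range(1, max_steps + 1):
            cand = (x1 + dx * k * step, y1 + dy * k * step,
                    x2 + dx * k * step, y2 + dy * k * step)
            if overlap_with_blob(cand) >= min_overlap:
                s = feature_sim(cand)
                if s > best_sim:
                    best_box, best_sim = cand, s
                break  # first sufficiently overlapping point on this line
    return best_box
```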

Claims

WHAT IS CLAIMED IS:
1. A method of tracking objects using a tracking system, comprising the steps of: i. sensor and camera performing motion detection on the current image frame to obtain current motion map in binary format indicating location of motion and non-motion pixels (102); ii. any acceptable processing means performing pre-processing on said current motion map by eliminating noises and labelling said motion map (106); characterised in that iii. any acceptable processing means performing overlapping region analysis on current and previous motion map to determine the relationship between said current and previous motion pixel groups in the said motion map (108); iv. any acceptable processing means invoking one of a plurality of blob events (112, 114, 116, 118) based on the result of said overlapping region analysis (120, 122, 124, 126).
2. A method of tracking objects using a detection system as in Claim 1 wherein said overlapping region analysis (120, 122, 124, 126) is done by comparing each of motion pixel group that exists in the current motion map to all motion pixel groups that exist in the previous image frame to determine the said one of a plurality of blob events for each motion pixel group in said current motion map.
3. A method of tracking objects using a detection system as in any of the preceding claims wherein said blob events (112, 114, 116, 118) are new object detected (112), existing object detected (114), object merging (116) and object splitting (118).
4. A method of tracking objects using a detection system as in any of the preceding claims wherein said blob event of new object detected (112) comprises of said any acceptable processing means performing the following steps: i. performing height by width ratio test on the box enclosing said blob (302); ii. extracting new object properties from said blob obtained from step (i) (308); characterized in that iii. performing similarity test between said extracted new object properties from step (ii) and all other object properties in the memory of said any acceptable processing means (312); iv. labelling said new object with the same label as the object in the memory of the said any acceptable processing means that has the highest similarity measure and deleting said object in the memory (316); v. assigning said new object with a new label if there is no similar object in the memory of said any acceptable processing means (320).
5. A method of tracking objects using a detection system as in any of the preceding claims wherein said blob event of object merging (116) comprises of said any acceptable processing means performing the following steps: i. initializing particle filter object tracking method on each new object of said group of motion pixels by using the object information from previous image frame (402); ii. determining predicted new location for said new object in current image frame using said particle filter object tracking method (404);
iii. aligning said predicted location with said object's location of group of motion pixel (406).
6. A method of tracking objects using a detection system as in any of the preceding claims wherein said step of initializing particle filter object tracking method (402) comprises of said any acceptable processing means performing the following sub-steps: i. extracting label and location for object from previous image frame;
ii. extracting object properties from said object location identified in previous image frame;
iii. storing said object label, location and properties in an object tracker data structure.
7. A method of tracking objects using a detection system as in any of the preceding claims wherein the step of aligning said predicted location with said object's location within the group of motion pixels (406) comprises said any acceptable processing means performing the following sub-steps:
i. calculating the percentage of overlapping area between said predicted object location and the motion pixel group (502);
ii. if said percentage value exceeds an acceptable threshold value (504), aligning said predicted object location with the location of said motion pixel group;
iii. determining the possible movement direction from the box enclosing said motion pixel group to said motion pixel group (506);
iv. extracting object properties within the box enclosing said motion pixel group along the possible moving lines (508);
v. performing similarity measures between the extracted image properties along the possible moving lines (510), whereby the location with the highest similarity value is the new location of the object (512).
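Sub-steps i–ii reduce to an axis-aligned box overlap test. A sketch assuming `(x, y, w, h)` boxes and an illustrative 50% threshold (the patent only says "an acceptable threshold value"); the search along moving lines in sub-steps iii–v is omitted:

```python
def overlap_percentage(box_a, box_b):
    """Percentage of box_a's area covered by box_b (sub-step i of claim 7).
    Boxes are (x, y, w, h) with axis-aligned edges."""
    ax, ay, aw, ah = box_a
    bx, by, bw, bh = box_b
    # intersection extents clamp to zero when the boxes do not overlap
    ix = max(0, min(ax + aw, bx + bw) - max(ax, bx))
    iy = max(0, min(ay + ah, by + bh) - max(ay, by))
    return 100.0 * ix * iy / (aw * ah)

def align_prediction(pred_box, motion_box, threshold=50.0):
    """Sub-steps i-ii of claim 7: snap the predicted box onto the
    motion-pixel group when they overlap enough, else keep the prediction."""
    if overlap_percentage(pred_box, motion_box) > threshold:
        return motion_box
    return pred_box
```

When the overlap is below the threshold, sub-steps iii–v would then slide the box along the candidate movement directions and pick the position whose extracted properties best match the tracked object.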
8. An object tracking system for detecting and tracking multiple objects by localizing said objects and assigning a consistent label to each object throughout a sequence of images, comprising:
any acceptable sensing means;
any acceptable processing means;
any acceptable image capturing device;
characterized in that
said any acceptable processing means is configured to process images from said image capturing device, and in that said processing means is configured to perform a method according to Claims 1 to 7.
PCT/MY2010/000139 2009-08-10 2010-08-09 Object tracking system and method WO2011019266A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
MYPI20093323A MY151478A (en) 2009-08-10 2009-08-10 Object tracking system and method
MYPI20093323 2009-08-10

Publications (1)

Publication Number Publication Date
WO2011019266A1 true WO2011019266A1 (en) 2011-02-17

Family

ID=43858376

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/MY2010/000139 WO2011019266A1 (en) 2009-08-10 2010-08-09 Object tracking system and method

Country Status (2)

Country Link
MY (1) MY151478A (en)
WO (1) WO2011019266A1 (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2012173465A1 (en) * 2011-06-17 2012-12-20 Mimos Berhad System and method of validation of object counting

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US4893182A (en) * 1988-03-18 1990-01-09 Micronyx, Inc. Video tracking and display system
US5999651A (en) * 1997-06-06 1999-12-07 Matsushita Electric Industrial Co., Ltd. Apparatus and method for tracking deformable objects
US6219048B1 (en) * 1991-11-12 2001-04-17 Apple Computer, Inc. Object selection using hit test tracks
US6263088B1 (en) * 1997-06-19 2001-07-17 Ncr Corporation System and method for tracking movement of objects in a scene
US20030185434A1 (en) * 2002-03-07 2003-10-02 Samsung Electronics Co., Ltd. Method and apparatus for video object tracking
US6795567B1 (en) * 1999-09-16 2004-09-21 Hewlett-Packard Development Company, L.P. Method for efficiently tracking object models in video sequences via dynamic ordering of features
WO2006082429A1 (en) * 2005-02-04 2006-08-10 British Telecommunications Public Limited Company Classifying an object in a video frame

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
"Roborealm", INTERNET WAYBACK MACHINE, 17 December 2007 (2007-12-17), Retrieved from the Internet <URL:http://web.archive.org/web/20071217205739/http://www.roborealm.com> [retrieved on 20101015] *
JIA, T. ET AL.: "Moving object detection based on blob analysis", PROCEEDINGS OF THE IEEE INTERNATIONAL CONFERENCE ON AUTOMATION AND LOGISTICS, 1 September 2008 (2008-09-01) - 3 September 2008 (2008-09-03), QINGDAO, CHINA, pages 322 - 325, XP031329660 *

Also Published As

Publication number Publication date
MY151478A (en) 2014-05-30

Similar Documents

Publication Publication Date Title
US11727661B2 (en) Method and system for determining at least one property related to at least part of a real environment
JP6095018B2 (en) Detection and tracking of moving objects
AU2013237718A1 (en) Method, apparatus and system for selecting a frame
Lin et al. Efficient detection and tracking of moving objects in geo-coordinates
WO2019057197A1 (en) Visual tracking method and apparatus for moving target, electronic device and storage medium
CN109086725B (en) Hand tracking method and machine-readable storage medium
Nallasivam et al. Moving human target detection and tracking in video frames
Jung et al. Object detection and tracking-based camera calibration for normalized human height estimation
Zoidi et al. Stereo object tracking with fusion of texture, color and disparity information
Xu et al. Real-time detection via homography mapping of foreground polygons from multiple cameras
Vasuhi et al. Target detection and tracking for video surveillance
US20200394802A1 (en) Real-time object detection method for multiple camera images using frame segmentation and intelligent detection pool
Amri et al. A robust framework for joint background/foreground segmentation of complex video scenes filmed with freely moving camera
WO2011019266A1 (en) Object tracking system and method
Muddamsetty et al. Spatio-temporal saliency detection in dynamic scenes using local binary patterns
US20180053321A1 (en) Image Target Relative Position Determining Method, Device, and System Thereof
US11373451B2 (en) Device and method for recognizing gesture
JP5838112B2 (en) Method, program and apparatus for separating a plurality of subject areas
Dan et al. A multi-object motion-tracking method for video surveillance
Qiu et al. A methodology review on multi-view pedestrian detection
Deb et al. A motion region detection and tracking method
KR20130056171A (en) Real-time object recognition and tracking method using representative feature, and apparatus thereof
Kadim et al. Method to detect and track moving object in non-static PTZ camera
JP7274068B2 (en) Image processing device and image processing method
Konovalov et al. Automatic hand detection in RGB-depth data sequences

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 10808413

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 10808413

Country of ref document: EP

Kind code of ref document: A1