US8467576B2 - Method and apparatus for tracking multiple objects and storage medium - Google Patents

Method and apparatus for tracking multiple objects and storage medium

Info

Publication number
US8467576B2
Authority
US
United States
Prior art keywords
image frame
objects
subset
multiple objects
tracking
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Fee Related, expires
Application number
US12/953,354
Other versions
US20110081048A1 (en)
Inventor
Woon Tack Woo
Young Min Park
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Intellectual Discovery Co Ltd
Original Assignee
Gwangju Institute of Science and Technology
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Gwangju Institute of Science and Technology filed Critical Gwangju Institute of Science and Technology
Assigned to GWANGJU INSTITUTE OF SCIENCE AND TECHNOLOGY. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: PARK, YOUNG MIN; WOO, WOON TACK
Publication of US20110081048A1
Application granted
Publication of US8467576B2
Assigned to INTELLECTUAL DISCOVERY CO., LTD. ASSIGNMENT OF ASSIGNORS INTEREST (SEE DOCUMENT FOR DETAILS). Assignors: GWANGJU INSTITUTE OF SCIENCE AND TECHNOLOGY

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/40 Analysis of texture
    • G06T 7/10 Segmentation; Edge detection
    • G06T 7/12 Edge-based segmentation
    • G06T 7/20 Analysis of motion
    • G06T 7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G06T 7/251 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments, involving models
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10016 Video; Image sequence



Abstract

The present invention relates to a method and an apparatus for tracking multiple objects and a storage medium. More particularly, the present invention relates to a method and an apparatus for tracking multiple objects, and a storage medium, in which object detection is performed for only one subset of objects per camera image regardless of the number N of objects to be tracked, and all objects are tracked between images while the detection is performed, so that multiple objects are tracked in real time. The method for tracking multiple objects according to the exemplary embodiment of the present invention includes: (a) performing object detection with respect to only the objects of one subset among the multiple objects in an input image at a predetermined time; and (b) tracking all objects between images, from an image at a time prior to the predetermined time, with respect to all objects in the input image while step (a) is performed.

Description

TECHNICAL FIELD
The present invention relates to a method and an apparatus for tracking multiple objects and a storage medium. More particularly, the present invention relates to a method and an apparatus for tracking multiple objects, and a storage medium, in which object detection is performed for only one subset of objects per camera image regardless of the number N of objects to be tracked, and all objects are tracked between image frames while the detection is performed, so that multiple objects are tracked in real time.
BACKGROUND ART
Augmented reality, as one field of virtual reality, is a technique that combines a virtual object with a real environment so that the virtual object appears to the user as if it exists in the original environment.
Unlike existing virtual reality, which targets only virtual spaces and objects, augmented reality can provide additional information that is difficult to acquire from the real world alone by combining virtual objects with the real world. That is, since the virtual reality technique generally immerses the user in a virtual environment, the user cannot view the real environment, whereas in the augmented reality technique the user can view the real environment, and the real environment and the virtual objects are mixed. In other words, virtual reality replaces the real world presented to the user, while augmented reality supplements the real world by overlapping virtual objects onto it, allowing the user to view the supplemented real world and providing a stronger sense of reality than virtual reality. Due to these characteristics, unlike existing virtual reality, which can be applied only to limited fields such as games, augmented reality can be applied to various real environments; in particular, it has come into wide use as a next-generation display technology suitable for a ubiquitous environment.
In the related art, a quadrangular marker is generally used as the real-environment reference for augmenting the virtual objects. A marker carrying a predetermined pattern inside a black border can be detected and tracked, and a plurality of markers can be detected. However, when a part of the marker is occluded, the marker is difficult to track, and the high black-and-white contrast of the marker draws the user's eyes, obstructing the user's view and degrading the user's immersion.
A method of using a real object has appeared in order to overcome the disadvantages of the quadrangular marker. Since this method uses a picture or pattern of the real object, that is, the object's own texture, for detection and tracking instead of the quadrangular marker, it exploits the natural features of the real object; it therefore tracks the object well even when the object is partially occluded, and it preserves the user's immersion.
FIG. 1 is a flowchart of a method for tracking multiple objects using the detection of a single object in the related art.
Referring to FIG. 1, a camera image is subjected to N object detections in order to detect and track N objects. That is, for the input image at time t, object 1 is detected, then object 2, and so on until object N is detected, after which a list of the objects detected at time t (objects 1 to N) and their poses is obtained. Next, the same process of detecting objects 1 to N is repeated for the input image at time t+1 to obtain the list and poses of the objects detected at time t+1. This process is repeated up to the desired time. Consequently, with this method the overall performance deteriorates to 1/N of the case in which one object is detected per input image.
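For illustration only, a minimal Python sketch of this prior-art loop is given below; the detector callables are hypothetical stand-ins, not part of the patent. Its cost per frame grows linearly with N, because every one of the N single-object detectors runs on every input image.

```python
def track_prior_art(frames, detectors):
    """Prior-art scheme of FIG. 1: every one of the N single-object detectors
    runs on every frame, so per-frame cost is N times that of a single detection."""
    results = []
    for frame in frames:
        detected = {}
        for obj_id, detect in enumerate(detectors):  # N detections per input image
            pose = detect(frame)                     # hypothetical detector: pose or None
            if pose is not None:
                detected[obj_id] = pose
        results.append(detected)                     # object list and poses at this time
    return results
```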
The real-object tracking algorithm is optimized for the case in which only a single object exists in the camera image. The algorithm determines whether the single object to be detected exists in the input image and, when it does, estimates a 3D pose consisting of a 3-axis position and a 3-axis orientation of the object. Because of this procedure, the algorithm is well suited to a single object. When the algorithm is used to track multiple objects, however, it must determine for every input image whether each of the objects to be tracked is present. The processing speed therefore decreases in proportion to the number of objects, and it is difficult to run the algorithm in real time.
DISCLOSURE Technical Problem
The present invention has been made in an effort to provide a method and an apparatus for tracking multiple objects that can prevent overall tracking performance from deteriorating even while multiple objects, rather than a single object, are tracked concurrently, and that enable augmented reality applications in which each of the multiple objects can be moved independently to provide dynamic variety, as well as a storage medium.
Technical Solution
In a method for tracking multiple objects according to an exemplary embodiment of the present invention, object detection is performed with respect to only objects of one subset per input image.
Advantageous Effects
According to the exemplary embodiments of the present invention, overall tracking performance can be prevented from deteriorating even while multiple objects, rather than a single object, are tracked concurrently; augmented reality that provides dynamic variety while each of the multiple objects is moved can be adopted; and, by applying frame-to-frame object tracking, the estimated object pose is more stable than a pose estimated from the current camera image alone.
DESCRIPTION OF DRAWINGS
FIG. 1 is a flowchart of a method for tracking multiple objects using the detection of a single object in the related art;
FIG. 2 is a flowchart of a method for tracking objects according to an exemplary embodiment of the present invention;
FIG. 3 is a detailed flowchart of each process according to an exemplary embodiment of the present invention;
FIGS. 4 and 5 are conceptual diagrams illustrating an example in which objects A, B, and C are independent objects and movements of the objects are concurrently tracked;
FIG. 6 is a diagram illustrating examples of key frames used to track multiple 3D objects; and
FIG. 7 is a photograph illustrating the tracking of multiple 3D objects.
BEST MODE
An exemplary embodiment of the present invention provides a method for tracking multiple objects that includes: (a) performing object detection with respect to only the objects of one subset among multiple objects in an input image at a predetermined time; and (b) tracking all objects between image frames, from an image at a time prior to the predetermined time, with respect to all objects in the input image while step (a) is performed.
Herein, step (a) may include: forming key frames, that is, configuring a key frame set that stores an appearance of the object, with respect to only an object of one subset among the multiple objects in the input image; extracting a keypoint from the key frame and measuring a 3-dimensional position of the extracted keypoint; and measuring a pose of the object of the subset by matching the extracted keypoint with a feature point extracted from the input image.
Further, at step (b), all the objects may be tracked between image frames by extracting a feature point of the object from the input image and matching it with a feature point extracted from the image at the previous time.
Another exemplary embodiment of the present invention provides an apparatus for tracking multiple objects that includes: a detector that performs object detection with respect to only an object of one subset among multiple objects in an input image at a predetermined time; and a tracker that tracks all objects between image frames, from an image at a time prior to the predetermined time, with respect to all objects in the input image while the detector performs the object detection.
Further, the detector and the tracker may be operated in independent threads, respectively, so that detection and tracking are performed in parallel on a multi-core CPU.
Further, the tracker may track all the objects between image frames by extracting a feature point in the object of the input image and matching the extracted feature point with a feature point extracted from the image at the previous time.
Further, the detector may include: a key frame forming unit that configures a key frame set storing an appearance of the object with respect to only an object of one subset among the multiple objects in the input image; a keypoint extracting unit that extracts a keypoint from the key frame and measures a 3-dimensional (3D) position of the extracted keypoint; and a pose measuring unit that measures a pose of the object of the subset by matching the extracted keypoint with the feature point extracted from the input image.
Mode for Invention
Hereinafter, exemplary embodiments of the present invention will be described in detail with reference to the accompanying drawings. First, it is to be noted that, in assigning reference numerals to the elements of each drawing, like reference numerals refer to like elements even when the elements are shown in different drawings. Further, in describing the present invention, well-known functions or constructions will not be described in detail when they may unnecessarily obscure the understanding of the present invention. Hereinafter, preferred embodiments of the present invention will be described, but it will be understood by those skilled in the art that the spirit and scope of the present invention are not limited thereto and that various modifications and changes can be made.
In object detection, the processing speed decreases in proportion to the number of objects, whereas tracking the 3D movement of multiple objects between image frames suffers comparatively little performance degradation. The object detection technique makes a tracking application more robust, but it limits the ability to run the application in real time. According to the exemplary embodiment of the present invention, the time needed to detect multiple objects is divided over successive frames so that object detection is performed in real time. Objects that exist in the current frame but are not detected in it are detected in one of the subsequent frames. A slight delay may occur during this process, but in practice it is almost impossible for a user to notice, so the user perceives the object detection as being made in real time.
When a new object appears, it is promptly detected by the system, which then initializes (starts) frame-by-frame object tracking for it. For this, "temporal keypoints" are used. The temporal keypoints are feature points detected on the surface of the object, and they are matched across successive frames. With this method, once tracking of an object has been initialized (started), the pose of the object can be calculated accurately even when the object is not detected in the current frame, and this takes less time than object detection.
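One plausible way to realize such temporal keypoints, sketched below with OpenCV purely as an assumption (the patent does not prescribe a particular matching method), is to propagate the keypoints with sparse optical flow and re-estimate the object pose with PnP.

```python
import cv2
import numpy as np

def track_temporal_keypoints(prev_gray, gray, prev_pts, model_pts, K, dist):
    """Propagate an object's temporal keypoints from the previous frame into the
    current frame and re-estimate its pose from the surviving correspondences.
    prev_pts:  Nx1x2 float32 image points from the previous frame.
    model_pts: Nx3 float32 3D positions of those keypoints on the object model.
    Returns (rvec, tvec, tracked_pts, tracked_model_pts) or None on failure."""
    next_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, gray, prev_pts, None)
    good = status.ravel() == 1
    if good.sum() < 6:                               # too few points for a reliable pose
        return None
    ok, rvec, tvec, _ = cv2.solvePnPRansac(model_pts[good], next_pts[good], K, dist)
    if not ok:
        return None
    return rvec, tvec, next_pts[good], model_pts[good]
```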
As such, object detection is performed while frame-by-frame object tracking is performed, and object detection may be performed even for an object whose frame-by-frame tracking is already running. This prevents a track from being lost due to fast movement or occlusion. The detection and the tracking are separated from each other and may be performed on different cores of a parallel processor.
FIG. 2 is a flowchart of a method for tracking objects according to an exemplary embodiment of the present invention.
In the object tracking method according to the exemplary embodiment of the present invention, referring to FIG. 2, only the objects of one subset are detected per camera image regardless of the number N of objects to be tracked, so that object detection for one subset is performed per input image. That is, one subset of objects is detected in each of N successive input images, so that the detection of the N objects is divided in time. While each subset is being detected, all objects are tracked between image frames; in other words, the movement of every already-detected object is tracked.
More specifically, for the input image at time t, object detection is performed only for the objects that belong to set 1 among the multiple objects, and while the objects of set 1 are being detected, all objects, including those of set 1, are tracked between image frames using the images of previous times t−1, t−2, . . . to calculate the poses of all objects at time t.
Likewise, for the input image at time t+1, object detection is performed only for the objects that belong to set 2 among the multiple objects, and while the objects of set 2 are being detected, all objects, including those of set 2, are tracked between image frames using the images of previous times t, t−1, . . . to calculate the poses of all objects at time t+1.
This process is repeated up to the input image at time t+N−1, at which the objects that belong to set N are detected, and all objects continue to be tracked between image frames.
That is, the objects of set 1 are detected in the first image and the objects of set N are detected in the N-th image; in the (N+1)-th image, detection of the objects of set 1 is performed again. Consequently, since each object is detected only once every N frames, its pose is tracked consistently by the frame-by-frame object tracking while it is not being detected.
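A minimal sketch of this round-robin schedule is given below; the detect_subset and track_all callables are assumptions for illustration and are not defined in the patent.

```python
def run_temporal_division(frames, subsets, detect_subset, track_all):
    """Per input image, run detection for only one subset of objects (round-robin
    over `subsets`) while frame-to-frame tracking updates every known object.
    detect_subset(frame, subset) -> {obj_id: pose}
    track_all(prev_frame, frame, poses) -> {obj_id: pose}"""
    poses, prev = {}, None
    for i, frame in enumerate(frames):
        subset = subsets[i % len(subsets)]            # set 1 at t, set 2 at t+1, ..., then set 1 again
        if prev is not None:
            poses = track_all(prev, frame, poses)     # track all objects between frames
        poses.update(detect_subset(frame, subset))    # (re)initialize or refine this subset only
        prev = frame
    return poses
```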
FIG. 3 is a detailed flowchart of each process according to an exemplary embodiment of the present invention.
Object detection and frame-by-frame object tracking are run in independent threads so that they are performed in parallel on a multi-core CPU. That is, in FIG. 3, the object detection is performed in Thread 1 and the frame-by-frame object tracking is performed in Thread 2.
In Thread 1, the keypoints extracted from the input image in Thread 2 are matched against one subset of the key frames to perform temporally divided object detection, and the object pose is estimated from the matches.
In Thread 2, keypoints are extracted from the input image and are matched with those of the previous frame, so that the feature points of the objects are matched between image frames and the object poses are estimated.
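The two-thread split of FIG. 3 could be organized as in the following structural sketch; the queue hand-off, the shared pose table, and the callables are assumptions rather than the patent's implementation, and a production system would run the threads on separate CPU cores.

```python
import queue
import threading

hand_off = queue.Queue(maxsize=1)   # Thread 2 hands (frame, keypoints) to Thread 1
shared_poses = {}                   # {object_id: pose}, shared by both threads
poses_lock = threading.Lock()
stop = threading.Event()

def thread1_detection(subsets, detect_subset):
    """Thread 1: match the handed-over keypoints against one key-frame subset per iteration."""
    i = 0
    while not stop.is_set():
        try:
            frame, keypoints = hand_off.get(timeout=0.1)
        except queue.Empty:
            continue
        found = detect_subset(frame, keypoints, subsets[i % len(subsets)])
        with poses_lock:
            shared_poses.update(found)
        i += 1

def thread2_tracking(frames, extract_keypoints, track_all):
    """Thread 2: extract keypoints, track every object frame to frame, feed the detector."""
    prev = None
    for frame in frames:
        keypoints = extract_keypoints(frame)
        if prev is not None:
            with poses_lock:
                shared_poses.update(track_all(prev, frame, dict(shared_poses)))
        if hand_off.empty():                 # never block real-time tracking on detection
            hand_off.put((frame, keypoints))
        prev = frame
    stop.set()
```

The threads would be started, for example, with threading.Thread(target=thread1_detection, args=(subsets, detect_subset)).start() and threading.Thread(target=thread2_tracking, args=(frames, extract_keypoints, track_all)).start().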
Although not shown, when implemented by the apparatus for tracking multiple objects that performs the method according to the exemplary embodiment of the present invention, Thread 1 corresponds to a detector that performs object detection for only the objects of one subset among the multiple objects in the input image at a predetermined time. Herein, the detector includes a key frame forming unit that configures a key frame set storing the appearance of only an object of one subset among the multiple objects in the input image, a keypoint extracting unit that extracts keypoints from the key frames and measures the 3D positions of the extracted keypoints, and a pose measuring unit that matches the keypoints of the key frames with the feature points extracted from the input image and measures the pose of the object.
Further, Thread 2 corresponds to a tracker that, while the detector performs object detection, tracks all objects in the input image between image frames starting from the image at the previous time; it extracts feature points from the objects in the input image and matches them with the feature points extracted from the image at the previous time to track all the objects.
More specifically, for the objects detected by the detector at the given time, the tracker generates temporal keypoints from additional feature points that lie within the region of the detected object in the image but were not used for the object detection. For an object that is tracked but not detected in the current frame, the pose of the object in the input image is estimated using the temporal keypoints of that object generated at the previous time, after which new temporal keypoints are generated.
When temporal keypoints were extracted from the image at the previous time, pose estimation is performed by merging them with the feature points extracted from the current input image; when no temporal keypoints were extracted, pose estimation is performed independently, without merging with the feature points of the input image, so that all objects are tracked between image frames.
In addition, the tracker determines whether tracking of an object detected by the detector has already been initialized; if not, track initialization is performed, and if so, pose estimation is performed by merging with the temporal keypoints generated from the image at the previous time. Further, for an object that is not detected by the detector but whose tracking was already initialized at the previous time, the tracker estimates its pose in the input image using only the temporal keypoints generated in the image at the previous time. In this way, the tracker tracks the multiple objects present in the image while the detector of Thread 1 detects the objects of one subset in the input image.
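Condensed to one object and one frame, the decision flow just described might look like the sketch below; every method and field name on the track object is a placeholder introduced here for illustration, not taken from the patent.

```python
def update_track(track, detected_pose, frame_features):
    """One object, one frame: merge detection (if any) with temporal keypoints.
    track.temporal_kps holds the temporal keypoints generated at the previous time."""
    if detected_pose is not None:                       # object detected this frame
        if not track.initialized:
            track.initialize(detected_pose, frame_features)                        # track initialization
        elif track.temporal_kps is not None:
            track.pose = track.estimate_pose(frame_features, track.temporal_kps)   # merge both cues
        else:
            track.pose = detected_pose
    elif track.initialized and track.temporal_kps is not None:                     # tracked but not detected
        track.pose = track.estimate_pose_from(track.temporal_kps)
    # regenerate temporal keypoints from features inside the object region
    # that were not consumed by detection
    track.temporal_kps = track.collect_unused_features(frame_features)
    return track
```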
As such, in the method for tracking multiple objects according to the exemplary embodiment of the present invention, detection of a target object is performed whenever an object to be tracked exists in the current frame, and the jittering effect that may occur when the pose is estimated from detection alone is removed.
An object model includes the geometrical information and the appearance of the target object. The geometrical information is a 3D model stored in the form of a list of triangles, and various software tools exist that can acquire such a 3D model from images.
The appearance is constituted by key frame sets. The key frame sets store the object's appearance from various viewpoints so as to cover most of the object; in general, three to four key frames are enough to cover the object from all directions. A feature point called a keypoint is extracted from each key frame, and the extracted keypoint is back-projected onto the 3D model so that its 3D position can easily be measured. The keypoints and their 3D positions are stored for use during the detection process. Hereinafter, the object detection and the frame-by-frame object tracking will be described in more detail.
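As an illustrative sketch only, one key-frame record could be built as follows; ORB is used here as a stand-in feature detector (the patent does not name one), and back_project is an assumed ray/mesh intersection helper.

```python
import cv2
import numpy as np

orb = cv2.ORB_create(nfeatures=500)  # stand-in keypoint detector/descriptor

def build_keyframe_record(keyframe_bgr, back_project):
    """Extract keypoints from one key frame and store each keypoint together with
    its 3D position obtained by back-projecting it onto the object's triangle mesh.
    back_project(x, y) -> (X, Y, Z) or None is an assumed ray/mesh intersection helper."""
    gray = cv2.cvtColor(keyframe_bgr, cv2.COLOR_BGR2GRAY)
    keypoints, descriptors = orb.detectAndCompute(gray, None)
    if descriptors is None:
        return None
    pts2d, pts3d, descs = [], [], []
    for kp, desc in zip(keypoints, descriptors):
        p3 = back_project(*kp.pt)            # None when the ray misses the model
        if p3 is not None:
            pts2d.append(kp.pt)
            pts3d.append(p3)
            descs.append(desc)
    return {"pts2d": np.float32(pts2d), "pts3d": np.float32(pts3d),
            "desc": np.asarray(descs, dtype=np.uint8)}
```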
All key frames are divided into subsets, and the subsets are matched against the camera images. Each subset is defined as shown in Equation 1 below.
S_1 = {k_1, k_2, ..., k_f}
S_2 = {k_{f+1}, k_{f+2}, ..., k_{2f}}
...
S_{N/f} = {k_{f·⌊N/f⌋+1}, ..., k_N}    [Equation 1]
Here, k_j represents a key frame, f represents the number of key frames that can be handled within one frame period, and N represents the total number of key frames.
Each input frame is matched against one of the subsets S_i. The subsets are taken up one by one in turn, and after the (N/f)-th frame the cycle starts again from S_1.
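A small helper realizing Equation 1 and the round-robin schedule might look like this sketch (the function names are assumptions):

```python
def keyframe_subsets(keyframes, f):
    """Split the N key frames into subsets of at most f key frames each (Equation 1),
    where f is the number of key frames the detector can match within one frame period."""
    return [keyframes[i:i + f] for i in range(0, len(keyframes), f)]

def subset_for_frame(subsets, frame_index):
    """Round-robin: after the last subset the schedule starts again from S_1."""
    return subsets[frame_index % len(subsets)]
```

With the nine key frames of FIG. 6 and f = 1, keyframe_subsets yields nine single-key-frame subsets, one of which is matched against each camera frame.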
FIGS. 4 and 5 are conceptual diagrams showing an example in which objects A, B, and C are independent objects and movements (6 DOF; Degrees of Freedom) of the objects are concurrently tracked.
FIG. 6 is a diagram showing an example of a key frame used to track multiple 3D objects.
In the exemplary embodiment, the total key frame set includes nine key frames. Each subset includes one of the key frames, so nine subsets are configured, and one subset is matched against each camera frame. Depending on detection performance, one subset may include a plurality of key frames.
FIG. 7 is a photograph showing the tracking of multiple 3D objects.
In FIG. 7, all three objects exist in the image and are tracked concurrently. The pose of each object is calculated accurately even when an object is moved by hand or partial occlusion occurs among the objects. The objects continue to be detected while being tracked. For every frame, the 3D model is projected onto the image using the calculated 3D pose and displayed as lines.
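The overlay shown in FIG. 7 can be reproduced, for example, by projecting the model edges with the estimated pose; the sketch below assumes OpenCV and a precomputed list of 3D edge endpoints for the model.

```python
import cv2
import numpy as np

def draw_model_edges(frame, edges_3d, rvec, tvec, K, dist):
    """Project each 3D model edge with the calculated pose (rvec, tvec) and draw it
    as a line on the frame, as in FIG. 7. edges_3d: iterable of (p0, p1) 3D point pairs."""
    for p0, p1 in edges_3d:
        pts, _ = cv2.projectPoints(np.float32([p0, p1]), rvec, tvec, K, dist)
        (x0, y0), (x1, y1) = pts.reshape(-1, 2).astype(int)
        cv2.line(frame, (int(x0), int(y0)), (int(x1), int(y1)), (0, 255, 0), 2)
    return frame
```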
Meanwhile, the present invention can be implemented as computer-readable code on a computer-readable recording medium. The computer-readable recording media include all types of recording apparatuses in which data that can be read by a computer system is stored.
Examples of the computer-readable recording media include a ROM, a RAM, a CD-ROM, a magnetic tape, a floppy disk, an optical data storage, and the like, and also include media implemented in the form of carrier waves (for example, transmission through the Internet). Further, the computer-readable recording media can be distributed over computer systems connected through a network, and the computer-readable code can be stored and executed in a distributed fashion. In addition, functional programs, code, and code segments for implementing the present invention will be easily inferred by programmers skilled in the art.
The spirit of the present invention has been described above by way of example. It will be appreciated by those skilled in the art that various modifications, changes, and substitutions can be made without departing from the essential characteristics of the present invention. Accordingly, the embodiments disclosed herein and the accompanying drawings are intended not to limit but to describe the spirit of the present invention, and the scope of the present invention is not limited by them. The protection scope of the present invention must be construed according to the appended claims, and all technical spirits within a scope equivalent thereto should be construed as being included in the appended claims.
INDUSTRIAL APPLICABILITY
The present invention can be widely applied to information devices that support augmented reality, including mobile devices, and to the multimedia signal processing and signal and image processing fields.

Claims (11)

The invention claimed is:
1. A method for tracking multiple objects, comprising:
(a) obtaining video data including a first image frame and a second image frame, wherein the first image frame is a next image frame of the second image frame;
(b) detecting at least one object included in a first subset of the multiple objects from the first image frame; and
(c) tracking the multiple objects between the first image frame and the second image frame while step (b) is performed,
wherein step (b) includes:
selecting at least one key frame corresponding to the at least one object of the first subset from a key frame set for storing an appearance of the multiple objects,
extracting a keypoint from the selected at least one key frame, and
measuring a pose of the at least one object of the first subset by matching the extracted keypoint with a feature point extracted from the first image frame.
2. The method of claim 1, wherein step (c) includes:
extracting the feature point of the multiple objects from the first image frame; and
matching the extracted feature point with a feature point extracted from the second image frame.
3. The method of claim 1, wherein step (b) further includes:
measuring a 3 dimensional position of the extracted keypoint, and
wherein the pose of the at least one object of the first subset is measured based on the 3 dimensional position of the extracted keypoint.
4. The method of claim 1, further comprising:
(d) detecting at least one object included in a second subset of the multiple objects from a third image frame; and
(e) tracking the multiple objects between the first image frame and the third image frame while step (d) is performed, wherein the third image frame is included in the video data and is a next image frame of the first image frame.
5. The method of claim 4, wherein the at least one object of the second subset is a different object from the at least one object included in the first subset.
6. An apparatus for tracking multiple objects, comprising:
a memory storing a key frame set, wherein the key frame set includes key frames for storing an appearance of the multiple objects;
a video input unit obtaining video data including a first image frame and a second image frame, wherein the first image frame is a next image frame of the second image frame;
a detector detecting at least one object included in a first subset of the multiple objects from the first image frame; and
a tracker tracking the multiple objects between the second image frame and the first image frame while the detector detects the at least one object of the first subset,
wherein the detector detects the at least one object of the first subset by:
selecting at least one key frame corresponding to the at least one object of the first subset from a key frame set for storing an appearance of the multiple objects,
extracting a keypoint from the selected at least one key frame, and
measuring a pose of the at least one object of the first subset by matching the extracted keypoint with a feature point extracted from the first image frame.
7. The apparatus of claim 6, wherein the detector and the tracker are operated in independent threads, respectively to perform detection and tracking on a multi-core CPU in parallel.
8. The apparatus of claim 6, wherein the tracker tracks the multiple objects by extracting the feature point of the multiple objects from the first image frame and matching the extracted feature point with a feature point extracted from the second image frame.
9. The apparatus of claim 6, wherein the detector measures a 3 dimensional position of the extracted keypoint and measures the pose of the at least one object of the first subset based on the 3 dimensional position of the extracted keypoint.
10. The apparatus of claim 6, wherein the video data includes a third image frame which is a next image frame of the first image frame, and wherein the detector detects at least one object included in a second subset of the multiple objects from the third image frame, and the tracker tracks the multiple objects between the first image frame and the third image frame while the detector detects the at least one object of the second subset.
11. The apparatus of claim 10, wherein the at least one object of the second subset is a different object from the at least one object included in the first subset.
US12/953,354 2008-07-09 2010-11-23 Method and apparatus for tracking multiple objects and storage medium Expired - Fee Related US8467576B2 (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
KR1020080066519A KR100958379B1 (en) 2008-07-09 2008-07-09 Methods and Devices for tracking multiple 3D object, Storage medium storing the same
KR10-2008-0066519 2008-07-09
PCT/KR2009/003771 WO2010005251A2 (en) 2008-07-09 2009-07-09 Multiple object tracking method, device and storage medium

Related Parent Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2009/003771 Continuation WO2010005251A2 (en) 2008-07-09 2009-07-09 Multiple object tracking method, device and storage medium

Publications (2)

Publication Number Publication Date
US20110081048A1 US20110081048A1 (en) 2011-04-07
US8467576B2 true US8467576B2 (en) 2013-06-18

Family

ID=41507587

Family Applications (1)

Application Number Title Priority Date Filing Date
US12/953,354 Expired - Fee Related US8467576B2 (en) 2008-07-09 2010-11-23 Method and apparatus for tracking multiple objects and storage medium

Country Status (4)

Country Link
US (1) US8467576B2 (en)
EP (1) EP2299406B1 (en)
KR (1) KR100958379B1 (en)
WO (1) WO2010005251A2 (en)


Families Citing this family (27)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20100119109A1 (en) * 2008-11-11 2010-05-13 Electronics And Telecommunications Research Institute Of Daejeon Multi-core multi-thread based kanade-lucas-tomasi feature tracking method and apparatus
KR101129328B1 (en) * 2010-03-03 2012-03-26 광주과학기술원 Apparatus and method for tracking target
KR101273634B1 (en) * 2011-02-08 2013-06-11 광주과학기술원 Tracking Method of Multiple Objects using Mobile Device in Augumented Reality Environment and System Using the same
JP5178860B2 (en) * 2011-02-24 2013-04-10 任天堂株式会社 Image recognition program, image recognition apparatus, image recognition system, and image recognition method
JP5026604B2 (en) 2011-02-24 2012-09-12 任天堂株式会社 Image recognition program, image recognition apparatus, image recognition system, and image recognition method
JP4989768B2 (en) 2011-02-24 2012-08-01 任天堂株式会社 Image processing program, image processing apparatus, image processing system, and image processing method
JP2011134343A (en) 2011-02-24 2011-07-07 Nintendo Co Ltd Image processing program, image processing apparatus, image processing system, and image processing method
JP5016723B2 (en) 2011-02-24 2012-09-05 任天堂株式会社 Image recognition program, image recognition apparatus, image recognition system, and image recognition method
JP4967065B2 (en) 2011-02-24 2012-07-04 任天堂株式会社 Image processing program, image processing apparatus, image processing system, and image processing method
US9939888B2 (en) * 2011-09-15 2018-04-10 Microsoft Technology Licensing Llc Correlating movement information received from different sources
US9606992B2 (en) * 2011-09-30 2017-03-28 Microsoft Technology Licensing, Llc Personal audio/visual apparatus providing resource management
JP5821526B2 (en) 2011-10-27 2015-11-24 ソニー株式会社 Image processing apparatus, image processing method, and program
WO2013077562A1 (en) * 2011-11-24 2013-05-30 에스케이플래닛 주식회사 Apparatus and method for setting feature points, and apparatus and method for object tracking using same
KR101371275B1 (en) * 2012-11-05 2014-03-26 재단법인대구경북과학기술원 Method for multiple object tracking based on stereo video and recording medium thereof
US9158988B2 (en) 2013-06-12 2015-10-13 Symbol Technclogies, LLC Method for detecting a plurality of instances of an object
US9355123B2 (en) 2013-07-19 2016-05-31 Nant Holdings Ip, Llc Fast recognition algorithm processing, systems and methods
TWI570666B (en) * 2013-11-15 2017-02-11 財團法人資訊工業策進會 Electronic device and video object tracking method thereof
US9501498B2 (en) 2014-02-14 2016-11-22 Nant Holdings Ip, Llc Object ingestion through canonical shapes, systems and methods
US9378556B2 (en) * 2014-04-25 2016-06-28 Xerox Corporation Method for reducing false object detection in stop-and-go scenarios
KR102223308B1 (en) * 2014-05-29 2021-03-08 삼성전자 주식회사 Method for image processing and electronic device implementing the same
US20160092727A1 (en) * 2014-09-30 2016-03-31 Alcatel-Lucent Usa Inc. Tracking humans in video images
US11080864B2 (en) * 2018-01-08 2021-08-03 Intel Corporation Feature detection, sorting, and tracking in images using a circular buffer
US10728430B2 (en) * 2018-03-07 2020-07-28 Disney Enterprises, Inc. Systems and methods for displaying object features via an AR device
CN109214379B (en) * 2018-10-23 2022-02-15 昆明微想智森科技股份有限公司 Multifunctional point reading pointing piece based on image recognition tracking technology and point reading method
US11475590B2 (en) * 2019-09-12 2022-10-18 Nec Corporation Keypoint based pose-tracking using entailment
US12051239B2 (en) 2020-08-11 2024-07-30 Disney Enterprises, Inc. Item location tracking via image analysis and projection
US11741712B2 (en) * 2020-09-28 2023-08-29 Nec Corporation Multi-hop transformer for spatio-temporal reasoning and localization


Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN100371952C (en) * 2003-04-21 2008-02-27 NEC Corporation Video object recognition device and recognition method, video annotation giving device and giving method, and program
KR20080073933A (en) * 2007-02-07 2008-08-12 Samsung Electronics Co., Ltd. Object tracking method and apparatus, and object pose information calculating method and apparatus

Patent Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6542621B1 (en) * 1998-08-31 2003-04-01 Texas Instruments Incorporated Method of dealing with occlusion when tracking multiple objects and people in video sequences
KR20000054784A (en) 2000-06-23 2000-09-05 이성환 system for tracking the movement of the multiple object using an Appearance Model Based on Temporal Color and controlling method therefore
US20030107649A1 (en) * 2001-12-07 2003-06-12 Flickner Myron D. Method of detecting and tracking groups of people
US20080109397A1 (en) * 2002-07-29 2008-05-08 Rajeev Sharma Automatic detection and aggregation of demographics and behavior of people
US20060092280A1 (en) 2002-12-20 2006-05-04 The Foundation For The Promotion Of Industrial Science Method and device for tracking moving objects in image
US7400745B2 (en) * 2003-12-01 2008-07-15 Brickstream Corporation Systems and methods for determining if objects are in a queue
US20060195858A1 (en) 2004-04-15 2006-08-31 Yusuke Takahashi Video object recognition device and recognition method, video annotation giving device and giving method, and program
US8116534B2 (en) * 2006-05-29 2012-02-14 Kabushiki Kaisha Toshiba Face recognition apparatus and face recognition method
KR20080017521A (en) 2006-08-21 2008-02-27 문철홍 Method for multiple movement body tracing movement using of difference image
US20090002489A1 (en) * 2007-06-29 2009-01-01 Fuji Xerox Co., Ltd. Efficient tracking multiple objects through occlusion

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
International Search Report for PCT/KR2009/003771 filed Jul. 9, 2009.
Written Opinion of the International Searching Authority for PCT/KR2009/003771 filed Jul. 9, 2009.

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140198950A1 (en) * 2013-01-11 2014-07-17 Omron Corporation Image processing device, image processing method, and computer readable medium
US9235770B2 (en) * 2013-01-11 2016-01-12 Omron Corporation Image processing device, image processing method, and computer readable medium
US9984315B2 (en) 2015-05-05 2018-05-29 Conduent Business Services, LLC Online domain adaptation for multi-object tracking
US10380763B2 (en) * 2016-11-16 2019-08-13 Seiko Epson Corporation Hybrid corner and edge-based tracking

Also Published As

Publication number Publication date
WO2010005251A2 (en) 2010-01-14
US20110081048A1 (en) 2011-04-07
EP2299406A2 (en) 2011-03-23
KR100958379B1 (en) 2010-05-17
EP2299406A4 (en) 2012-10-31
WO2010005251A9 (en) 2010-03-04
KR20100006324A (en) 2010-01-19
WO2010005251A3 (en) 2010-04-22
EP2299406B1 (en) 2018-03-28

Similar Documents

Publication Publication Date Title
US8467576B2 (en) Method and apparatus for tracking multiple objects and storage medium
EP3698323B1 (en) Depth from motion for augmented reality for handheld user devices
CN111311684B (en) Method and equipment for initializing SLAM
CN110555901B (en) Method, device, equipment and storage medium for positioning and mapping dynamic and static scenes
US9135514B2 (en) Real time tracking/detection of multiple targets
US9390515B2 (en) Keyframe selection for robust video-based structure from motion
Klein et al. Full-3D Edge Tracking with a Particle Filter.
Fourie et al. Harmony filter: a robust visual tracking system using the improved harmony search algorithm
US9811733B2 (en) Method, apparatus and system for selecting a frame
KR101135186B1 (en) System and method for interactive and real-time augmented reality, and the recording media storing the program performing the said method
EP2614487B1 (en) Online reference generation and tracking for multi-user augmented reality
CN109255749B (en) Map building optimization in autonomous and non-autonomous platforms
KR101410273B1 (en) Method and apparatus for environment modeling for ar
CN111951325B (en) Pose tracking method, pose tracking device and electronic equipment
JP2006244074A (en) Moving object close-up frame detection method and program, storage medium storing program, moving object close-up shot detection method, moving object close-up frame or shot detection method and program, and storage medium storing program
CN110009683B (en) Real-time on-plane object detection method based on MaskRCNN
US8872832B2 (en) System and method for mesh stabilization of facial motion capture data
CN114095780A (en) Panoramic video editing method, device, storage medium and equipment
CN111260544B (en) Data processing method and device, electronic equipment and computer storage medium
Spurlock et al. Dynamic subset selection for multi-camera tracking
CN117152204A (en) Target tracking method, device, equipment and storage medium
KR20240025215A (en) Cloud fusion method, Device, Electronics and computer storage media
Esau et al. VisiTrack-video based incremental tracking in real time
Alaniz II et al. Real-Time Camera Pose Estimation for Virtual Reality Navigation
Drews Shared augmented reality: a framework for networked augmented reality applications

Legal Events

Date Code Title Description
AS Assignment

Owner name: GWANGJU INSTITUTE OF SCIENCE AND TECHNOLOGY, KOREA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:WOO, WOON TACK;PARK, YOUNG MIN;REEL/FRAME:025510/0934

Effective date: 20101125

FEPP Fee payment procedure

Free format text: PAYOR NUMBER ASSIGNED (ORIGINAL EVENT CODE: ASPN); ENTITY STATUS OF PATENT OWNER: SMALL ENTITY

AS Assignment

Owner name: INTELLECTUAL DISCOVERY CO., LTD., KOREA, REPUBLIC OF

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:GWANGJU INSTITUTE OF SCIENCE AND TECHNOLOGY;REEL/FRAME:035218/0758

Effective date: 20150303

REMI Maintenance fee reminder mailed
LAPS Lapse for failure to pay maintenance fees
STCH Information on status: patent discontinuation

Free format text: PATENT EXPIRED DUE TO NONPAYMENT OF MAINTENANCE FEES UNDER 37 CFR 1.362

FP Lapsed due to failure to pay maintenance fee

Effective date: 20170618