WO1997006631A2 - Object tracking device and corresponding method - Google Patents

Object tracking device and corresponding method

Info

Publication number
WO1997006631A2
Authority
WO
WIPO (PCT)
Prior art keywords
border
image
edge
frames
event
Prior art date
Application number
PCT/IL1996/000070
Other languages
English (en)
Other versions
WO1997006631A3 (fr)
Inventor
Ehud Spiegel
Yosef Pastor
Original Assignee
Ehud Spiegel
Yosef Pastor
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Ehud Spiegel and Yosef Pastor
Priority to AU65303/96A priority Critical patent/AU6530396A/en
Priority to EP96925063A priority patent/EP0880852A2/fr
Priority to JP9508279A priority patent/JPH11510351A/ja
Publication of WO1997006631A2 publication Critical patent/WO1997006631A2/fr
Publication of WO1997006631A3 publication Critical patent/WO1997006631A3/fr


Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01S RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
    • G01S3/00 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received
    • G01S3/78 Direction-finders for determining the direction from which infrasonic, sonic, ultrasonic, or electromagnetic waves, or particle emission, not having a directional significance, are being received using electromagnetic waves other than radio waves
    • G01S3/782 Systems for determining direction or deviation from predetermined direction
    • G01S3/785 Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system
    • G01S3/786 Systems for determining direction or deviation from predetermined direction using adjustment of orientation of directivity characteristics of a detector or detector system to give a desired condition of signal derived from that detector or detector system the desired condition being maintained automatically
    • G01S3/7864 T.V. type tracking systems
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis
    • G06T7/20 Analysis of motion
    • G06T7/246 Analysis of motion using feature-based methods, e.g. the tracking of corners or segments
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V10/00 Arrangements for image or video recognition or understanding
    • G06V10/20 Image preprocessing
    • G06V10/24 Aligning, centring, orientation detection or correction of the image

Definitions

  • the present invention relates to image processing systems in general, and to object identification and tracking systems in particular.
  • U.S. Patent 5,333,213 to Koyama et al. describes a method and apparatus for image region extraction, extracting an image of a moving object in a dynamic image.
  • U.S. Patent 5,274,453 to Maeda describes an image processing system using mask information to combine a plurality of images.
  • U.S. Patent 5,345,313 to Blank describes an image editing system which takes a background and inserts part of an image in the background.
  • the present invention seeks to provide an improved object identification and tracking system.
  • a tracking method including receiving a representation of an event including at least one dynamic object having a border and having at least one edge portion which is absent during at least a portion of the event, and providing an ongoing indication of the location of the border of the object during the event.
  • the representation includes a video representation.
  • the edge portion includes a portion of the border.
  • the method also includes reconstructing at least one absent edge portion.
  • a tracking method including receiving a representation of an event including at least one dynamic object having a border, and providing an ongoing indication of the location of the border of the object during the event.
  • an edge-tracking method for tracking at least one dynamic object appearing in a sequence of frames, the method including for at least one key frame within the sequence of frames, marking at least one edge of at least one dynamic object based at least partly on external input, and for all frames within the sequence of frames other than the at least one key frame, automatically marking at least one edge of at least one dynamic object based on output from the first marking step.
  • the method also includes remarking said at least one automatically marked edge at least once, based on external input.
  • the external input includes human operator input.
  • At least one edge is marked without detecting the edge.
  • the at least one key frame includes a subsequence of frames preceding all other frames within the sequence.
  • the at least one key frame includes a subsequence of frames following all other frames within the sequence.
  • the at least one key frame includes a subsequence of frames preceding at least one other frame within the sequence and following at least one other frame within the sequence.
  • an edge-structuring method for structuring a plurality of connected edges into a graph including providing a plurality of connected edges, traversing the plurality of connected edges in a chosen direction, and structuring the plurality of connected edges into a graph including a branch list and a node list, wherein the node list is independent of the chosen direction.
  • the node list includes an edge junction list. Still further in accordance with a preferred embodiment of the present invention the node list includes an edge terminal point list.
  • the node list includes an edge corner list.
  • the node list includes a curvature list.
  • the plurality of connected edges includes a plurality of pixels and wherein the traversing step includes specifying a current pixel, identifying at least one visible pixel associated with the current pixel, and classifying the current pixel based, at least in part, on the number of visible pixels identified.
  • the identifying step includes defining a blind strip, and ruling out as visible pixels at least one pixel associated with the blind strip.
  • the ruling out step includes ruling out as visible pixels all pixels associated with the blind strip whenever there is at least one visible pixel not associated with the blind strip.
  • a method for tracking a border of a moving object including selecting a plurality of border locations to be tracked in a first image, tracking at least some of the plurality of border locations from the first image to a second image, and computing the border in the second image based on an output of the tracking step and based on information characterizing the border in the first image.
  • At least one of the plurality of border locations includes a location at which at least one border characteristic changes.
  • the border characteristic includes at least one color adjacent to the border.
  • the tracking includes disregarding a border location which, when tracked from the first image to the second image, is found to have moved differently from other adjacent border locations.
  • the computing step includes transforming the border in the first image such that each of the plurality of border locations in the first image is transformed onto a corresponding one of the plurality of border locations in the second image.
  • the method also includes identifying an actual border in the second image by searching adjacent to the border as computed in the second image.
  • an actual border is identified depending on whether the adjacent colors of the actual border resemble the adjacent colors of the border in the first image.
  • an output border is defined as the actual border, if identified, and as the border as computed in the second image, if no actual border is identified.
  • a first output border is defined which coincides in part with the actual border, where the actual border has been identified, and in part with the border as computed in the second image, where the actual border has not been identified.
  • the method also includes identifying a new actual border in the second image by searching adjacent to the first output border, and defining a new output border which coincides in part with the new actual border, where the new actual border has been identified, and in part with the first output border, where the new actual border has not been identified.
  • the transforming step includes transforming a spline representation of the border in the first image such that each of the plurality of border locations in the first image is transformed onto a corresponding one of the plurality of border locations in the second image.
  • the method also includes providing a first image seen from a first field of view and providing a second image seen from a different field of view.
  • the method also includes providing first and second images each including at least one of a moving dynamic object and a dynamic background.
  • the automatic marking step includes automatically marking all edges of at least one dynamic object based on output from the first marking step.
  • tracking apparatus including event input apparatus operative to receive a representation of an event including at least one dynamic object having a border and having at least one edge portion which is absent during at least a portion of the event, and a border locator operative to provide an ongoing indication of the location of the border of the object during the event.
  • edge-tracking apparatus for tracking at least one dynamic object appearing in a sequence of frames, the apparatus including an edge marker operative, for at least one key frame within the sequence of frames, to mark at least one edge of at least one dynamic object based at least partly on external input, and an automatic edge marker operative, for all frames within the sequence of frames other than the at least one key frame, to automatically mark at least one edge of at least one dynamic object based on output from the first marking step.
  • edge-structuring apparatus for structuring a plurality of connected edges into a graph
  • the apparatus including an edge traverser operative to traverse the plurality of connected edges in a chosen direction, and a graph structurer operative to structure the plurality of connected edges into a graph including a branch list and a node list, wherein the node list is independent of the chosen direction.
  • apparatus for tracking a border of a moving object including a border selector operative to select a plurality of border locations to be tracked in a first image, a border tracker operative to track at least some of the plurality of border locations from the first image to a second image, and border computation apparatus operative to compute the border in the second image based on an output of the border tracker and based on information characterizing the border in the first image.
  • tracking apparatus including event input apparatus operative to receive a representation of an event including at least one dynamic object having a border, and a border locator operative to provide an ongoing indication of the location of the border of the object during the event.
  • the method also includes generating an effect which is applied differentially on different sides of the border.
  • the method also includes generating an effect which is applied differentially on different sides of the at least one edge.
  • the effect includes an effect which is carried out at a location determined by a portion of the dynamic object.
  • an image modification method including receiving a representation of an event, the representation including a plurality of frames, the event including at least one dynamic object having a border, computing the location of the border of the dynamic object during the event, generating an effect which is applied differentially on different sides of the border, and displaying a result of applying the effect without previously displaying a separate representation of the border.
  • the step of generating an effect is performed on a subsequence of frames, including a plurality of frames, within the sequence of frames after an automatic marking step has been performed for the subsequence of frames.
  • the step of generating an effect is performed on an individual frame from among the sequence of frames after an automatic marking step has been performed for the individual frame.
  • the effect is generated and displayed for an individual frame before the effect is generated for a subsequent frame.
  • the effect is displayed for all of the plurality of individual frames without expecting user input between frames.
  • an image marking method including receiving a representation of an event, the representation including a plurality of frames, the event including at least one dynamic object having a border, computing the location of the border of the dynamic object during the event, and providing a user-sensible indication of locations of the dynamic object during the event, without previously displaying a separate representation of the border.
  • the effect includes one of the following group of effects: compositing, retouching, smoothing, compression, painting, blurring, sharpening, a filter operation, and an effect which changes over time at a different rate on different sides of the edge.
  • the event includes a plurality of dynamic hotspot objects and wherein the providing step includes providing an ongoing indication of locations of borders of each of the plurality of dynamic hotspot objects during the event.
  • the method also includes the steps of using the ongoing indication of locations of the borders of each of the hotspot objects to interpret a user's selection of an individual one of the plurality of dynamic hotspot objects, and displaying information regarding the individual dynamic hotspot object selected by the user.
  • the dynamic object is a portion of a larger object.
  • Fig. 1 is a simplified top-level block diagram of a dynamic object processing system constructed and operative in accordance with a preferred embodiment of the present invention;
  • Fig. 2A is a simplified flowchart of an interactive process for identifying boundaries of an object of interest in at least one key frame from among a sequence of frames and for marking the boundaries in the remaining frames from among the sequence of frames;
  • Figs. 2B - 2F are simplified pictorial illustrations of an example of rough marking in accordance with the method of steps 115, 130, 140, 150 of Fig. 2A;
  • Fig. 2G is a simplified flowchart of the process of Fig. 1 wherein the at least one key frame comprises only the first frame in the sequence of frames;
  • Fig. 3A is a simplified block diagram of apparatus, such as the dynamic object border tracker 70 of Fig. 1, for performing the method of Fig. 2A;
  • Figs. 3B - 3D are simplified pictorial illustrations showing internal junctions, external junctions, and occlusion;
  • Figs. 3E and 3F are simplified pictorial illustrations depicting a portion of the operation of step 370 of Fig. 3A;
  • Fig. 4 is a simplified block diagram of apparatus, such as the dynamic object border tracker 70 of Fig. 1, for performing the process of Fig. 2G wherein at least one key frame comprises the first frame in the sequence of frames;
  • Fig. 5 is a simplified block diagram of a modification of the apparatus of Fig. 3A in which borders are accurately identified;
  • Fig. 6 is a simplified block diagram of a modification of the apparatus of Fig. 4 in which borders are accurately identified;
  • Fig. 7 is a simplified block diagram of a modification of the apparatus of Fig. 5 which is operative to predict border locations in non-key frames;
  • Fig. 8 is a simplified block diagram of a modification of the apparatus of Fig. 6 which is operative to predict border locations in non-key frames;
  • Fig. 9 is a simplified block diagram of a first alternative subsystem for performing the preprocessing operations of Figs. 3A and 4 - 8;
  • Fig. 10 is a simplified block diagram of the component mapping unit of Figs. 3A and 4 - 8;
  • Fig. 11A is a simplified block diagram of a preferred method of operation of units 1550 and 1560 of Fig. 10;
  • Figs. 11B and 11C are simplified pictorial illustrations of visible areas, useful in understanding the method of Fig. 11A;
  • Figs. 11D - 11H are simplified pictorial illustrations of a plurality of pixels, useful in understanding the method of Fig. 11A;
  • Fig. 11I is a simplified pictorial illustration of an edge picture, from which a tree is to be built according to the method of Fig. 11A;
  • Fig. 12 is a simplified block diagram of the special points correspondence finding block of Figs. 3A and 4 - 8;
  • Fig. 13 is a simplified flowchart of a preferred method of operation for the special points weights computation unit 1700 of Fig. 12;
  • Fig. 14 is a simplified flowchart of a preferred method of operation for the border estimation block of Figs. 3A and 4;
  • Fig. 16 is a simplified flowchart of a preferred method of operation for the borders and mask generation unit of Figs. 3A and 4 - 8;
  • Fig. 17 is a simplified flowchart of a preferred method of operation for the exact object border description blocks of Figs. 3A, 4, 5, 6, 7 and 8;
  • Fig. 18 is a simplified flowchart of a preferred method of operation of steps 570, 572, and 574 of Fig. 5 and of steps 670, 672, and 674 of Fig. 6;
  • Fig. 19 is a simplified flowchart of an alternative method of operation of step 2340 of Fig. 18;
  • Fig. 20 is a simplified flowchart of a prediction method useful in the methods of Figs. 7 and 8;
  • Fig. 21 is a simplified flowchart of a preferred method for carrying out the steps of Fig. 20 in the case of first-order prediction;
  • Fig. 22 is a simplified flowchart of a preferred method for carrying out the steps of Fig. 20 in the case of second-order prediction;
  • Fig. 23 is a simplified flowchart of a preferred method for carrying out the steps of Fig. 20 in the case of third-order and higher prediction.
  • Fig. 24 is a simplified block diagram of a modification of the apparatus of Fig. 4.
  • Fig. 25 is a simplified block diagram of a modification of the apparatus of Fig. 8;
  • Fig. 26 is a simplified block diagram of a modification of the apparatus of Fig. 3A.
  • Fig. 27 is a simplified block diagram of a modification of the apparatus of Fig. 7.
  • Fig. 1 is a simplified top-level block diagram of a dynamic object processing system constructed and operative in accordance with a preferred embodiment of the present invention.
  • The term "dynamic object" is here intended to include objects which are stationary at times and in motion at other times, as well as objects which are always in motion.
  • the system of Fig. 1 receives a sequence of time-varying images such as animation, photographed or other video images from any suitable source, via a suitable video interface 10 which may include an A/D unit if the input thereto is analog.
  • Suitable video sources include, for example, a video camera 20, a network, a video storage unit 30 (video memory, video disk, tape, CD-ROM or hard disk) or a film scanner 40.
  • the system includes processing unit 50, associated with a video memory 54.
  • the processing unit 50 may, for example, be any appropriate computer equipped with video capability and programmed with appropriate software.
  • an IBM compatible Pentium PC equipped with video I/O cards, as are well known in the art, may be used.
  • the processing unit 50 may be implemented partly or completely in custom hardware or otherwise.
  • the processing unit 50 receives from a suitable user input device such as a graphics drawing device 60 (e.g. tablet and stylus or mouse), an indication of at least one initial border of at least one dynamic object, in an initial frame.
  • the indication may be of borders of the dynamic object as it appears other than in the initial frame.
  • The term "frame" refers to either a frame as generally understood in the art or, in the case of interlaced video wherein a frame as generally understood in the art comprises more than one field, any of the fields comprising a frame as generally understood in the art.
  • The frames for which such an indication is provided are termed herein "key frames".
  • key frames are selected to be those frames in which a characteristic of the dynamic object's appearance changes, e.g. due to a change in the object's motion or due to occlusion by another object or due to light condition changes.
  • the frames may comprise a plurality of frames seen from more than one field of view, such as, for example, two different fields of view, or a dynamic field of view.
  • the frames may comprise frames depicting a dynamic object, a dynamic background, or both.
  • the processing unit 50 includes a dynamic object border tracker 70 which is operative to track the borders of the dynamic object through non-key frames, based on the locations of the borders in the key frames. It is appreciated that the dynamic object border tracker 70 may preferably be operative to track borders in any direction through the non-key frames, that is, forwards, backwards, converging from both ends, and so forth.
  • the dynamic object border tracker 70 is operative to complete a border by adding border segments which the tracker 70 did not succeed in finding. These border segments are termed herein "missing border segments".
  • the user may interactively correct the tracking of the border through either key frames or non-key frames by means of the drawing device 60.
  • the output of the dynamic object border tracker 70 typically comprises an indication of the location of the border for each of the frames of the image sequence.
  • the border location indication typically comprises a mask, having "1" values at the border and "0" values other than at the border.
  • the border location indication is fed to and utilized by any of a plurality of application devices, thereby enabling an operator to issue a single command for processing the dynamic object in the entire image sequence, rather than having to process the dynamic object "frame by frame", i.e. separately for each frame.
  • processing of the background in the entire image sequence may also be carried out without having to process separately for each frame.
  • suitable application devices include: a. A video compositing device 80, operative to generate a video image comprising a plurality of "layers". b. An image retouching device 90, operative to perform one-step enhancement, segmentation or special effects, on at least one dynamic object in the image sequence, rather than frame-by-frame retouching of the dynamic object.
  • Retouching operations include: color alteration; filtering, as, for example, noise reduction, sharpening, or other types of filtering; and effects, as, for example, tiling.
  • the border location may be fed elsewhere, as, for example, to the network or to the video storage unit 30.
  • a video display device 95 provides a display which facilitates interactive sessions.
  • border location indication may also be employed for a variety of other applications, including, for example, the following: a. video rate conversion or video standard conversion. b. image compression in which at least one dynamic object in the image is compressed differently, typically more accurately, than the remaining portions of the image. c. scene analysis, such as automatic navigation applications in which the borders of encountered objects are tracked so as to determine an optimal route therepast.
  • Fig. 2A is a simplified flowchart for interactive operation of the dynamic object border tracker of Fig. 1.
  • boundaries of an object of interest are identified in at least one key frame from among a sequence of frames and are utilized for marking the boundaries in the remaining frames from among the sequence of frames.
  • the user may select or localize borders by any suitable method such as:
  • a. As shown in Fig. 2A (step 115), rough manual marking of the border location of an object of interest, e.g. by means of a brush operated by a tablet's stylus, such as a stylus associated with the graphics drawing device 60.
  • the ⁇ y ⁇ tem attempts to find a plurality of candidate edges within the rough marking. These candidate edges are displayed to the user who selects an appropriate edge from among them.
  • Alternatively, the following border selection or localization methods may, for example, be employed:
  • b. The user may mark the exact border location manually.
  • a spline tool or other curve drawing tool may be employed by the user to mark the border location.
  • the user may select a border contour from among a library of border contours such as rectangles, previously used border contours or other predetermined border contours.
  • the user may use another means of indicating the border as, for example, choosing a color selecting method well known in the art such as chroma-key or color-key, to identify either the object or the background.
  • the system then identifies the transition between the selected and unselected portions, using methods well-known in the art, and takes the transition between the selected and unselected portions to be the border.
  • the system may, preferably at user option, add a rough marking surrounding all or a portion of the marking selected by the user.
  • Missing edges may preferably be filled in by the user.
  • the borders, once marked as above, may preferably be modified manually by the user.
  • the system finds the marked border locations in key frames (step 130), and gives the user the option to modify the marked border locations based on the system response (steps 140 and 150).
  • Figs. 2B - 2F are simplified pictorial illustrations of an example of rough marking in accordance with the method of steps 115, 130, 140, and 150 of Fig. 2A.
  • The example relates to option (a) above, rough manual marking.
  • Figs. 2B - 2F depict a display preferably provided to the user during the operation of steps 115, 130, 140, and 150, typically on video display 95 of Fig. 1.
  • Fig. 2B depicts an actual frame.
  • the actual frame of Fig. 2B is displayed as background to assist the user in making marks and modifications.
  • said background is not shown in Figs. 2D - 2F.
  • Fig. 2B comprises a plurality of edges 116.
  • the edges 116 comprise the limits of areas 121, 122, 123, 124, and 125.
  • the areas 121, 122, 123, 124, and 125 are taken to be of different color, but it is appreciated that, in general, different areas need not be of different color.
  • Areas 125 are taken not to be of interest to the user, while areas 121, 122, and 123 are areas of interest, because areas of interest 121, 122, and 123 together comprise a desired object 117.
  • Area 124 is also taken to be not of interest to the user.
  • areas are defined by being surrounded by closed edges, or by the ends of the video display 95.
  • the user marks the rough marking area 126 with, for example, the graphic drawing device 60.
  • the user may mark the marking area limits 127 and indicate that the area 126 in between the marking area limits 127 is to be the rough marking area.
  • In Fig. 2D, all edges 116 not within the rough marking area 126 have been removed by step 130 of Fig. 2A.
  • the rough marking area 126 includes areas of interest 121, 122, and 123 and the edges 116 surrounding them, along with area 124, which is not of interest.
  • In Fig. 2E, only the remaining edges, within the rough marking area 126, are shown.
  • Edge 128 is internal to the desired object 117, while edges 129 and 130 define area 124, which is not of interest.
  • the user decision of step 140 is based on the display of Fig. 2E.
  • In Fig. 2F, representing the results of step 150, the user has erased a portion of edge 129, typically using the graphic drawing device 60, so that area 124 is now outside the desired object 117.
  • the user may make a wide variety of modifications, including erasing edges or portions thereof, and adding edges or portions thereof.
  • In step 155 the system learns the border qualities of the key frames.
  • Border qualities may comprise, for example, border length, average color and color changes.
  • the border qualities may also comprise aspects of motion of the borders such as, for example, border velocity and border acceleration. The method of step 155 is explained in more detail below with reference to the apparatus of Fig. 3A.
  • Non-key frames are input in step 190.
  • the system then proceeds to identify, in non-key frames, the borders which were marked in the key frames (step 160).
  • the system may optionally make use of information obtained from the processing of other frames which were already processed.
  • the system seeks the borders only in a specific region of interest (ROI), which is taken to be the region in which the borders are expected to be found, typically based on information from other frames already processed, by identifying a region around the object borders in the previously processed frame or frames as the ROI.
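By way of illustration only (not part of the patent text), below is a minimal sketch of one way such an ROI might be derived, assuming the previously processed frame's border is available as a binary mask and using a fixed margin such as the five pixels mentioned elsewhere in this description; the function name and the square dilation window are hypothetical.

```python
import numpy as np

def region_of_interest(prev_border_mask: np.ndarray, margin: int = 5) -> np.ndarray:
    """Return a binary ROI mask covering all pixels within `margin`
    pixels of the previously found border (hypothetical helper)."""
    h, w = prev_border_mask.shape
    roi = np.zeros_like(prev_border_mask, dtype=bool)
    ys, xs = np.nonzero(prev_border_mask)
    for y, x in zip(ys, xs):
        y0, y1 = max(0, y - margin), min(h, y + margin + 1)
        x0, x1 = max(0, x - margin), min(w, x + margin + 1)
        roi[y0:y1, x0:x1] = True      # dilate the border by a square window
    return roi
```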
  • the output of step 160 may be fed back to step 155, where the borders may be treated as key frame borders in further iterations of the method of Fig. 2A.
  • the user may modify these borders directly (step 180) or can decide to define more of the frames, or different frames, as key frames and re-run the system, based on the reference borders of the new set of key frames.
  • the user is presented with displays and modification options similar to those described above with reference to Figs. 2E and 2F.
  • the user may use any other method as described with reference to step 115 of Fig. 2A.
  • The user may be confronted with the "good enough?" decision of step 170 either only once, for the image sequence as a whole, or at one or more intermediate points, determined by the user and/or by the system, within the process of border marking of the image sequence.
  • the user may make a "good enough?" decision regarding each of the frames of the image sequence or regarding only some of the frames of the image sequence.
  • the user may provide additional input, to modify or correct the operation of the system, at any of the steps involving user input, comprising steps 110, 150, and 180.
  • Fig. 2G is a special case of the flowchart of Fig. 2A in which the first frame in a frame sequence is initially selected as the sole key frame, and processing continues on all the frames.
  • the steps of Fig. 2G are self-explanatory in light of the above explanation of Fig. 2A, except as follows.
  • In step 210 the system may input only one frame or a sequence of sequential frames.
  • steps 220, 230, 240, and 250 process the sequence of frames, typically one frame at a time.
  • In step 255 the system learns the border qualities of the current frame.
  • border qualities of a sequence of frames may also be learned, as, for example, border length, average color and color changes.
  • Such a sequence may have been input in step 210 or may be built in step 255 as a plurality of frames is processed.
  • the border qualities may also comprise aspects of motion of the borders such as, for example, border velocity and border acceleration.
  • the method of step 255 is explained in more detail below with reference to the apparatus of Fig. 4.
  • In step 260 the next frame is input.
  • the system finds borders of the next frame in step 260 with reference to the border qualities learned in step 255.
  • the operation of step 260 is preferably limited to an ROI, defined based on the object behavior in previous frames.
  • the operation of step 260 is further described below with reference to the apparatus of Fig. 4.
  • After step 275, if the last frame has not been reached, processing continues with step 255.
  • Fig. 3A is a simplified block diagram of apparatus, such as the dynamic object border tracker 70 of Fig. 1, for performing the method of Fig. 2A.
  • The steps of Fig. 2A are performed by the following units in Fig. 3A:
  • Step 130: Units 330 and 340
  • Step 160: Units 330, 340, 360 and 370
  • In a pre-processing unit 330, edges in each frame, including key frames and non-key frames, are detected and are preferably modified to facilitate the remaining steps, as described in more detail below with reference to Fig. 9.
  • In a component mapping unit 340, the edges found by the pre-processing unit 330 are traced, as further described below with reference to Fig. 10, and a data structure is generated to represent the edges.
  • This structure typically comprises a forest of edge trees in which each branch comprises an edge and each node comprises a "special point" such as, for example, a junction.
  • Special points may also, for example, include terminal points in an edge, whether or not the edge is connected to a junction, and edge corners.
  • the term "tree" includes trees which contain loops, that is, paths which return to a junction that was already reached. It is appreciated, as is well known in the art, that this type of tree may also be depicted as a graph. Thus all operations specified herein to be performed on a tree may be performed on a corresponding graph.
  • The terms "graph" and "tree" as used throughout the specification and claims are each meant to include both graph and tree representations.
  • a single graph may represent a collection of trees, and thus a forest may comprise a single graph or more than one graph.
  • special points are taken to be internal junctions, that is, junctions internal to the object or lying on the internal side of the border thereof.
  • tracking external junctions may be preferred; alternatively, a combination of internal and external junctions or other special points may be tracked, depending on the precise position of a plurality of partially occluding objects.
  • Figs. 3B - 3D are simplified pictorial illustrations showing internal junctions, external junctions, and occlusion.
  • Fig. 3B comprises a tracked object 341 and a second object 342.
  • the tracked object 341 has internal junctions 343 and 344 and an external junction 345.
  • the second object 342 has an internal junction 346. It is appreciated that there may be other junctions in addition to those which are shown in Fig. 3B. As stated above, special points are preferably taken to be internal junctions.
  • In Fig. 3C, the tracked object 341 and the second object 342 have moved closer to each other, such that the second object 342 partially occludes the tracked object 341.
  • internal junction 344, visible in Fig. 3B, is not visible in Fig. 3C due to the partial occlusion of the tracked object 341.
  • new external junctions 347 and 348 are created due to the partial occlusion.
  • junction 346 is now an external junction of the tracked object 341 due to the partial occlusion. It will therefore be appreciated that, in the case of partial occlusion, it may be preferred to take external junctions also as special points.
  • In Fig. 3D, the tracked object 341 and the second object 342 have moved still closer to one another, such that the extent of occlusion is greater. It is appreciated that, in Fig. 3D, new external junctions 347 and 348 and external junction 346 are still present, so that designating junctions 346, 347, and 348 as special points would be preferred in tracking the tracked object 341.
  • Unit 340 is described in more detail below with reference to Figs. 10 and 11.
  • Unit 350 receives the output of unit 340, and creates an exact object border description, preferably represented in terms of a "chain code", for each keyframe.
  • a chain code is a representation of the border in terms of edges and special points and typically comprises pointers to the edges and special points which form the border, in their proper sequence.
  • curves, typically splines, connecting the points are computed. These curves are also termed herein "initial border estimation segments". Computation of splines is described in C. de Boor and in P. J. Schneider, referred to above. The splines are typically employed in further steps for the purpose of border estimation.
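As an illustrative aside, the following sketch connects consecutive special points with cubic curve segments to form "initial border estimation segments". A Catmull-Rom formulation is used here purely as a stand-in for the B-spline constructions cited above (de Boor, Schneider); the function names are hypothetical.

```python
import numpy as np

def catmull_rom_segment(p0, p1, p2, p3, n=20):
    """Sample one cubic curve segment between control points p1 and p2."""
    t = np.linspace(0.0, 1.0, n)[:, None]
    p0, p1, p2, p3 = map(np.asarray, (p0, p1, p2, p3))
    return 0.5 * ((2 * p1) +
                  (-p0 + p2) * t +
                  (2 * p0 - 5 * p1 + 4 * p2 - p3) * t ** 2 +
                  (-p0 + 3 * p1 - 3 * p2 + p3) * t ** 3)

def border_estimation_segments(points):
    """Connect consecutive special points with curve segments
    (illustrative stand-in for the spline-based border estimation)."""
    pts = [points[0]] + list(points) + [points[-1]]   # pad the endpoints
    return [catmull_rom_segment(pts[i], pts[i + 1], pts[i + 2], pts[i + 3])
            for i in range(len(points) - 1)]
```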
  • the chain code and a representation of the splines are stored in the data base 380.
  • unit 350 is described more fully below, with reference to Fig. 17.
  • units 330 and 340 operate on the non-keyframes.
  • Unit 360 finds correspondences between the special points in the keyframe or frames and special points in the non-keyframes, that is, pairs of special points which, one in a key frame and one not in a key frame, represent the same location in the object.
  • corresponding special points may be treated as estimated special points.
  • the correspondence is found with reference to stored point data found in the data base 380.
  • the stored point data typically comprises chain codes representing special points and edge segments of borders and spline representations of edge segments of borders, both produced by units 350 and 370.
  • these correspondences may be employed to find correspondences between special points in the keyframe and special points in other non- keyframes.
  • Unit 360 is described in more detail below with reference to Figs. 12 and 13.
  • When a correspondence is found with reference to, for example, two special points, the special points are termed herein "corresponding points". Similarly, when a correspondence is found between two border segments, the two border segments are termed herein "corresponding segments". Corresponding points and corresponding segments are assumed to represent the same points and border segments, respectively, in the dynamic object. The process of finding corresponding segments is described below with reference to unit 370.
  • the operation of unit 360 is restricted to an ROI, which is typically taken to be a region of a predetermined size around the borders and special points of the object as, for example, five pixels around.
  • the ⁇ pecial point ⁇ identified by unit 360 are received by unit 370, and the chain code ⁇ and spline representation ⁇ ⁇ tored in the data ba ⁇ e 380 are retrieved.
  • gaps are filled in via use of the spline repre ⁇ entation ⁇ , which repre ⁇ ent border e ⁇ timation.
  • the point ⁇ are typically connected together by projecting the ⁇ pline curve, from the data ba ⁇ e 380, of the initial border e ⁇ timation ⁇ egment ⁇ between the point ⁇ .
  • the projecting preferably include ⁇ u ⁇ e of an affine transformation.
  • the affine transformation may include rotation, scaling, and shifting.
  • Affine transformations are well known in the art, and are described in Ballard and Brown, referred to above, at page 477.
  • the affine transformation is applied only to the control points of the spline curve, and the spline curve is then recomputed.
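A hedged sketch of this step: a 2x2 matrix A (rotation and scaling) and a shift t are applied to the spline control points only, after which the curve would be re-sampled from the transformed control points. The specific matrix values and helper names are illustrative assumptions, not the patent's implementation.

```python
import numpy as np

def affine_transform(points: np.ndarray, A: np.ndarray, t: np.ndarray) -> np.ndarray:
    """Apply x' = A @ x + t to an (N, 2) array of spline control points.
    A may encode rotation and scaling; t is a shift (translation)."""
    return points @ A.T + t

# Example: rotate the control points by 10 degrees, scale by 1.05, shift by (3, -2).
theta = np.deg2rad(10.0)
A = 1.05 * np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
t = np.array([3.0, -2.0])

control_points = np.array([[10.0, 10.0], [20.0, 14.0], [30.0, 12.0], [40.0, 18.0]])
new_control_points = affine_transform(control_points, A, t)
# The spline itself is then recomputed (re-sampled) from new_control_points,
# e.g. with the segment-sampling helper sketched above.
```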
  • Figs. 3E and 3F are simplified pictorial illustrations depicting a portion of the operation of step 370 of Fig. 3A.
  • Fig. 3E depicts a first frame, comprising special points 371, 372, 373 and 374.
  • Fig. 3F depicts another frame, comprising special points 375, 376, and 377. Correspondences have already been found, as described above with reference to unit 360, between the following pairs of special points: 371 and 375; 372 and 376; and 373 and 377. No correspondence was found for point 374.
  • Estimated border segments 378, projected as previously described, have been added between each adjacent pair of points, including points 375 and 377. It is appreciated that an estimated border segment 378 is projected between points 375 and 377 even though no corresponding point was found for point 374, based on the previous segments between points 373, 374, and 371.
  • Updated chain codes are computed from the estimated border segments and the corresponding special points. Descriptions of the special points and estimated border segments, as well as the updated chain codes, are stored in the data base 380. Estimated border segments may be used, in a later iteration, as initial border estimation segments. Computation of chain codes is described in more detail below with reference to Fig. 17.
  • An object border description, comprising an externally-usable representation of the object border as, for example, a list of coordinates defining the location of the border, and an object mask, suitable for further processing, are generated by unit 390.
  • an object mask may be generated, and an object border description may be generated from the object mask in a later step when the object border description is to be used.
  • Unit 390 is more fully described below with reference to Fig. 16.
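For illustration, a minimal sketch of the two outputs attributed to unit 390: rasterizing a border coordinate list into a mask with "1" values at the border, and recovering a coordinate-list border description from such a mask. The helper names are hypothetical and the sketch is not the patent's implementation.

```python
import numpy as np

def border_mask(border_coords, shape):
    """Rasterize a list of (y, x) border coordinates into a binary mask
    with "1" values at the border and "0" values elsewhere."""
    mask = np.zeros(shape, dtype=np.uint8)
    for y, x in border_coords:
        mask[y, x] = 1
    return mask

def border_description(mask):
    """Recover an externally usable border description (a coordinate list)
    from an object mask, as mentioned for the later-step option."""
    ys, xs = np.nonzero(mask)
    return list(zip(ys.tolist(), xs.tolist()))
```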
  • Unit 379 allows the user to examine the results of the method of Fig. 3A and to modify the results accordingly, including choosing new key frames, as explained above with reference to Figs. 2A - 2F.
  • the results of the method of Fig. 3A are presented to the user by drawing the object chain code over the present frame. It is appreciated that, although unit 379 is depicted as receiving input from unit 370, unit 379 may also utilize any other object border information available as, for example, information stored in the data base 380.
  • Unit 335 operates similarly to unit 379, except that unit 335 relates to preprocessing and thus it directly draws the edges found by unit 330 rather than using a chain code drawing.
  • Fig. 4 is a simplified block diagram of apparatus for performing the process of Fig. 2G wherein at least one key frame comprises the first frame in the sequence of frames.
  • The steps of Fig. 2G are performed by the following units in Fig. 4:
  • Step 230: Units 430 and 440
  • Step 260: Units 430, 440, 460 and 470
  • the units of Fig. 4 are similar to the units of Fig. 3A and are self-explanatory with reference to the above discussion of Fig. 3A, except as described below.
  • the correspondence between the units of Fig. 3A and the units of Fig. 4 is as follows:
  • In Fig. 4, the following units operate on consecutive first frames, treating the first frames as key frames, rather than on key frames in general as in the corresponding units of Fig. 3A: 410, 435, 450.
  • Unit 455 provides next frames consecutively, preferably one frame at a time, to unit 430.
  • the operation of unit 460 is restricted to an ROI, as described above with reference to Fig. 2G.
  • the ROI is typically taken to be a region of a predetermined size around the borders and special points of the object as, for example, five pixels around.
  • unit 460 operates on consecutive frames, one frame at a time.
  • Unit 460 finds correspondences between the special points in consecutive frames, that is, pairs of special points which, one in a first frame and one in the succeeding frame, represent the same location in the object. In the context of processing further frames, corresponding special points may be treated as estimated special points.
  • the correspondence is found with reference to stored point data found in the data base 480.
  • the stored point data typically comprises chain codes representing special points and edge segments of borders and spline representations of edge segments of borders, both produced by units 450 and 465, described below.
  • these correspondences may be employed to find correspondences between special points in the two frames and special points in other frames.
  • Unit 460 is described in more detail below with reference to Figs. 12 and 13.
  • Unit 470 operates similarly to unit 370, except that an exact object border description is created by unit 465.
  • a current frame chain code, representing an exact border description of the current frame, is computed based on the corresponding special points found by unit 460 and the borders estimated by unit 470. It is appreciated that, in Fig. 3A, the functionality of unit 465 is included in unit 370. Unit 465 is described more fully below with reference to Fig. 17.
  • Unit 478 operates similarly to unit 379 of Fig. 3A, except that unit 478 operates on the output of unit 465.
  • Reference is now made to Fig. 5, which is a simplified block diagram of a modification of the apparatus of Fig. 3A in which borders are accurately identified.
  • the units of Fig. 5 are self-explanatory with reference to the above discussion of Fig. 3A, except as follows.
  • An estimated border, represented by a chain code termed herein an "intermediate chain code", is created by unit 570.
  • a more precise border is identified by unit 572, based on the estimated border produced by unit 570 and on chain code and spline data describing a stored frame border, obtained from the data base 580.
  • Unit 572 preferably operates by identifying edges in the vicinity of the estimated border and selecting the best candidates for border segments.
  • Estimated border segments provided by unit 572 may be filled in by unit 574 where a more precise border was not successfully identified by unit 572, and an exact object border description is created.
  • the operation of unit 572 is restricted to an ROI, as described above with reference to Fig. 2A.
  • the ROI is typically taken to be a region of a predetermined size around the borders and special points of the object as, for example, five pixels around.
  • Units 570, 572 and 574 are described more fully below with reference to Figs. 18 and 24.
  • a chain code is computed based on the new more precise border by unit 576.
  • the chain code is typically computed by unit 576 rather than by unit 570.
  • the chain code is stored in the database 580 and is also passed along to unit 578, which allows the user to examine the new border.
  • the operation of unit 576 is described in more detail below with reference to Fig. 17.
  • Fig. 6 is a simplified block diagram of a modification of the apparatus of Fig. 4 in which borders are accurately identified.
  • the units of Fig. 6 are self-explanatory with reference to the above discussion of Fig. 4, except as follows.
  • An estimated border, represented by an intermediate chain code, is identified by unit 670.
  • a more precise border is identified by unit 672, based on the estimated border from unit 670 and on data on previous frames, preferably comprising chain codes and splines, obtained from the data base 680.
  • Unit 672 preferably operates by identifying edges in the vicinity of the estimated border and selecting the best candidates for border segments. The operation of unit 672 is described in detail below with reference to Figs. 18 and 24.
  • the operation of unit 672 is restricted to an ROI, as described above with reference to Fig. 2A.
  • the ROI is typically taken to be a region of a predetermined size around the borders and special points of the object as, for example, five pixels around.
  • estimated border segments may be filled in where a more precise border was not successfully identified by unit 672, and an exact object border description is created by unit 665. The operation of unit 674 is described in detail below with reference to Figs. 18 and 24.
  • Fig. 7 is a simplified block diagram of a modification of the apparatus of Fig. 5 which is operative to predict border locations in non-key frames.
  • the units of Fig. 7 are self-explanatory with reference to the above discussion of Fig. 5, except as follows.
  • Unit 774, exact object border description, performs both the operation of unit 574 of Fig. 5 and the operation of unit 576 of Fig. 5.
  • Unit 777 applies equations of motion, relating position to changes in position and to rate of change in position, to the positions of special points and borders stored in the data base 780 in order to predict the location, in upcoming frames, of the special points and borders. It is appreciated that, in applying the equations of motion, it is necessary to take into account the distance and direction in time, in frames, between key frames being processed, since time between frames is an important variable in applying equations of motion. Equations of motion are discussed in more detail below with reference to Figs. 21 - 23.
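As an illustrative aside, the sketch below predicts special point positions with simple first- and second-order equations of motion, taking the distance in frames to the predicted frame into account as noted above. The prediction orders actually used are described with reference to Figs. 20 - 23; this is only a hedged approximation with hypothetical names.

```python
import numpy as np

def predict_points(prev_positions, dt_frames=1):
    """Predict special point positions from their history using simple
    equations of motion.  prev_positions is a list of (N, 2) arrays ordered
    oldest to newest; dt_frames is the distance, in frames, to the frame
    being predicted (an important variable, as noted in the text)."""
    p = [np.asarray(a, dtype=float) for a in prev_positions]
    if len(p) == 1:                        # zero-order: position only
        return p[-1]
    v = p[-1] - p[-2]                      # first-order: velocity per frame
    if len(p) == 2:
        return p[-1] + v * dt_frames
    a = (p[-1] - p[-2]) - (p[-2] - p[-3])  # second-order: acceleration per frame
    return p[-1] + v * dt_frames + 0.5 * a * dt_frames ** 2
```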
  • Unit 752 operates similarly to unit 777, but uses equations of motion to predict special points and borders according to the key frames rather than using other frames as is the case with unit 777.
  • Unit 771, in contrast to unit 570 of Fig. 5, may apply equations of motion also to the stored spline data received from the data base 780, so that the stored spline data is updated to more accurately predict border position.
  • Fig. 8 is a simplified block diagram of a modification of the apparatus of Fig. 6 which is operative to predict border locations in non-key frames.
  • the units of Fig. 8 are self-explanatory with reference to the above discussion of Figs. 6 and 7, except as follows.
  • Equations of motion are applied to the positions of special points and borders for a previous frame stored in the data base 880 in order to predict the location, in subsequent frames, of the special points and borders. Equations of motion useful in the method of Fig. 8 are discussed below with reference to Figs. 20 - 23.
  • Unit 852, similarly to unit 866, uses equations of motion to predict special points and borders according to the key frames.
  • unit 871 may apply equations of motion to the stored spline data received from the data base 880, so that the stored spline data is updated to more accurately predict border position.
  • Unit 874 is similar to unit 774 of Fig. 7.
  • Fig. 9 is a simplified block diagram of an alternative subsystem for performing the preprocessing operations of blocks 330, 430, 530, 630, 730, and 830 of Figs. 3A and 4 - 8.
  • a commonly used RGB color space may not be optimal for edge detection because the three components R, G, and B tend to all change similarly and in concert with intensity change, so that edges identified from the components of such a color space will tend to be similar. It is therefore desirable to choose a color space where the above behavior typically does not occur, that is, where the components tend to behave differently, so that edges identified from the components of such a color space will tend to be different.
  • a color space having the following components, computed from the R, G, and B components of an RGB color space, is used:
  • the above color space is discussed in Yu-Ichi Ohta, Takeo Kanade, and T. Sakai, referred to above.
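The component formulas themselves do not appear in this excerpt. For orientation only, the sketch below uses the color features commonly attributed to Ohta, Kanade and Sakai (I1, I2, I3); these exact formulas are an assumption and may differ from the components actually recited in the patent.

```python
import numpy as np

def rgb_to_ohta(rgb: np.ndarray) -> np.ndarray:
    """Convert an (H, W, 3) RGB image to the I1/I2/I3 features commonly
    attributed to Ohta, Kanade and Sakai.  These formulas are an assumption;
    the excerpt above does not reproduce the patent's exact components."""
    r = rgb[..., 0].astype(float)
    g = rgb[..., 1].astype(float)
    b = rgb[..., 2].astype(float)
    i1 = (r + g + b) / 3.0          # intensity-like component
    i2 = (r - b) / 2.0              # roughly a red-blue opponent component
    i3 = (2.0 * g - r - b) / 4.0    # roughly a green-magenta opponent component
    return np.stack([i1, i2, i3], axis=-1)
```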
  • the input color components are converted to the chosen color space (unit 1100).
  • the formulas provided may be used to compute the conversion.
  • the RGB space or any other appropriate color space may be used.
  • Edge detection for each color component is then performed (unit 1110) .
  • a minimum threshold value is applied to the color intensity of each color component and all edges whose color component intensity is less than the threshold value are ignored (unit 1120).
  • Edges detected in the separate color components are merged together (unit 1130) .
  • An edge picture, comprising typically "1" values wherever an edge was detected and "0" values otherwise, is produced.
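A minimal sketch of units 1110 - 1130 as just described: per-component edge detection, a minimum intensity threshold, and a merge into a single 1/0 edge picture. The gradient operator and threshold value are illustrative assumptions, not the patent's chosen edge detector.

```python
import numpy as np

def edge_picture(components: np.ndarray, threshold: float = 20.0) -> np.ndarray:
    """components: (H, W, C) image in the chosen color space.
    Detect edges per component with a simple gradient magnitude, ignore
    edges whose component intensity change is below `threshold`, and merge
    the per-component results into a single 1/0 edge picture."""
    merged = np.zeros(components.shape[:2], dtype=np.uint8)
    for c in range(components.shape[2]):
        chan = components[..., c].astype(float)
        gy, gx = np.gradient(chan)                 # illustrative edge operator
        magnitude = np.hypot(gx, gy)
        merged |= (magnitude >= threshold).astype(np.uint8)  # keep "1" where any component has an edge
    return merged
```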
  • Fig. 10 is a simplified block diagram of the component mapping unit of Figs. 3A and 4 - 8.
  • Unit 1500 makes a working copy of the edge picture produced by the pre-processing units of Figs. 3A and 4 - 8.
  • the working copy is scanned for an edge pixel (unit 1510), until the end of the picture is reached (unit 1520). If the current pixel is not an edge pixel (unit 1540), scanning continues.
  • If the current pixel is an edge pixel, pixels along the edge are traversed until a junction pixel, a terminal pixel, or another special point is identified, as described below with reference to Fig. 11A, and the junction pixel, terminal pixel, or other special point is identified as a root pixel. All pixels connected with the root pixel are traced, forming an edge tree (unit 1550). If no junction pixel, terminal pixel, or other special point is found, the initial edge pixel is taken as the root pixel.
  • Unit 1550 identifies candidate special points, as, for example, points at edge junctions.
  • candidate special points may also, for example, include terminal points in an edge not connected to a junction and edge corners.
  • the edge tree is added to an edge forest consisting of all edge trees found (step 1570), and the pixels of the edge tree are erased from the working copy of the edge picture (step 1560).
  • the edge forest provides a component map comprising special point candidates and edge candidates and the relationships between them, as described below with reference to Fig. 11A.
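For illustration, a sketch of the scanning loop of units 1500 - 1570: copy the edge picture, scan for an edge pixel, trace the connected tree, erase its pixels, and add the tree to the forest. The trace_edge_tree callback stands in for the tree-building method of Fig. 11A and is hypothetical.

```python
import numpy as np

def build_edge_forest(edge_picture: np.ndarray, trace_edge_tree):
    """Scan a working copy of the edge picture; every time an edge pixel is
    found, trace the tree rooted there (trace_edge_tree is a stand-in for
    the method of Fig. 11A), erase its pixels, and add it to the forest."""
    working = edge_picture.copy()                  # unit 1500: working copy
    forest = []
    h, w = working.shape
    for y in range(h):                             # units 1510/1520: scan until end of picture
        for x in range(w):
            if working[y, x] == 0:                 # unit 1540: not an edge pixel
                continue
            tree, tree_pixels = trace_edge_tree(working, (y, x))   # unit 1550
            for py, px in tree_pixels:             # unit 1560: erase traced pixels
                working[py, px] = 0
            forest.append(tree)                    # unit 1570: add the tree to the forest
    return forest
```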
  • Methods for forming an edge tree are well known in the art, and include the method described in Yija Lin, Jiqing Dou and Eryi Zhang, "Edge expression based on tree structure", Pattern Recognition, Vol. 25, No. 5, pp. 507-517, 1992, referred to above.
  • the methods for forming an edge tree known in the art have the drawback that the list of edges and the list of nodes produced are not necessarily independent of the direction of traversal of the edges and of the choice of root node.
  • a preferred method for forming an edge tree, which overcomes the drawbacks of methods known in the prior art, is now described with reference to Fig. 11A as follows. The method of Fig. 11A is specific to the case where all special points are junctions.
  • Evaluation rules for forming the edge tree are as follows:
  • Visible area rule: The region around the current edge pixel, as seen from the direction of entry to the pixel and towards the other directions, is termed herein a "visible area".
  • the visible area of the current edge pixel is classified as diagonal or straight according to the direction in which the current edge proceeds from the current edge pixel.
  • "Straight" means entering horizontally or vertically, that is, not diagonally.
  • Figs. 11B and 11C are simplified pictorial illustrations of visible areas, useful in understanding the method of Fig. 11A. In Fig. 11B, arrows depict the directions that are straight, while in Fig. 11C arrows depict the directions which are diagonal.
  • Figs. 11D and 11E are simplified pictorial illustrations of a plurality of pixels, useful in understanding the method of Fig. 11A.
  • arrows depict the direction of entry into the visible area.
  • Fig. 11D comprises a straight visible area.
  • Fig. 11E comprises a diagonal visible area.
  • Blind strip rule: If, in the visible area of the current edge pixel, there are one or more pixels in a straight direction, further connected edge pixels are preferably sought in the straight direction, and the diagonal directions are blocked, in the sense that they are not seen as part of the visible area.
  • Figs. 11F - 11H are simplified pictorial illustrations of a plurality of pixels, useful in understanding the method of Fig. 11A.
  • Figs. 11F - 11H comprise a plurality of edge pixels and depict application of the blind strip rule thereto.
  • arrows depict the directions in which additional pixels are sought according to the blind strip rule.
  • Each of Figs. HF HH depict entry at a different pixel. It is appreciated that, in each case, regardless of point of entry, the same junctions are found.
  • Visible pixel rule: "Visible pixels" are edge pixels adjacent to the current pixel in the visible area, not including any pixels ignored under the blind strip rule. Note that, generally, because of the method of identifying edge pixels, no more than 3 visible pixels will be seen, except in the case of a root pixel which is at a junction of four edges, in which case 4 visible pixels will be seen.
  • The current edge pixel is classified based on the following pixel classification rules:
  • If the current pixel has two or more visible pixels, the current pixel is identified as a junction pixel. However, if exactly 2 visible pixels are seen and the current pixel is a root pixel, the current pixel is not identified as a junction pixel; rather, subsequent pixels are processed.
  • If the current pixel has no visible pixels, the current pixel is identified as a terminal pixel.
  • If the current pixel has exactly one visible pixel, the current pixel is identified as a "usual branch pixel". However, if the current pixel is the root pixel, the current pixel is classified as a terminal pixel.
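  • By way of illustration only, the following Python sketch applies these classification rules to an 8-connected binary edge image. The function names, the NumPy boolean-array representation, and the treatment of a root pixel having exactly two visible pixels as a usual branch pixel are assumptions made for the sketch; the blind strip rule described above is not applied.
```python
import numpy as np

def visible_pixels(edge, pixel, entry=None):
    """Edge pixels 8-connected to `pixel`, excluding the pixel of entry.
    Simplification: the blind strip rule is not applied."""
    y, x = pixel
    h, w = edge.shape
    return [(y + dy, x + dx)
            for dy in (-1, 0, 1) for dx in (-1, 0, 1)
            if (dy, dx) != (0, 0)
            and 0 <= y + dy < h and 0 <= x + dx < w
            and edge[y + dy, x + dx] and (y + dy, x + dx) != entry]

def classify(edge, pixel, entry=None, is_root=False):
    """Classify an edge pixel as 'junction', 'terminal' or 'usual branch'."""
    n = len(visible_pixels(edge, pixel, entry))
    if n >= 2:
        # A root pixel with exactly two visible pixels lies mid-edge; traversal continues.
        return "usual branch" if (is_root and n == 2) else "junction"
    if n == 0:
        return "terminal"
    # Exactly one visible pixel: a root here is an edge end, i.e. a terminal.
    return "terminal" if is_root else "usual branch"

edge = np.array([[0, 1, 0],
                 [1, 1, 1],
                 [0, 1, 0]], dtype=bool)
print(classify(edge, (1, 1)))  # 'junction' - four edges meet at the centre pixel
```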
  • A branch is defined herein as a sequential, connected set of usual branch pixels, and is typically represented as a dynamic array of pixel coordinates and characteristics.
  • The characteristics typically include the color, or some other characteristic of the pixel, such as an indication that the pixel is a pixel which was added to fill in a gap.
  • Each element of the tree is preferably defined by a list of attributes, preferably including the following: flag, defining its type as branch or junction; parent pointer, pointing to previous element or parent; and neighboring pointers, pointing to neighboring elements in the direction of traversal, or children.
  • The tree is then built according to the following tree building method:
  • The first pixel is classified according to the above rules (step 1630).
  • Delete parent pixel (step 1650); i.e., delete pixels from the image that were already processed. This step is omitted in the case of the first pixel, which has no parent.
  • Processing then moves forward to the next edge pixel (steps 1610 and 1620).
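  • The following sketch illustrates, under the same simplifications, how a depth-first traversal of this kind can erase processed pixels from a working copy and collect branches and junction pixels into a rudimentary edge tree; it omits the blind strip and visible area refinements and the special handling of root pixels, so it is only an approximation of the method of Fig. 11A.
```python
import numpy as np

def build_edge_tree(edge, root):
    """Depth-first sketch: walk 8-connected edge pixels from `root`, erasing
    visited pixels from a working copy and recording junction pixels and the
    branches (pixel runs) between junctions and terminals."""
    work = edge.copy()                 # working copy; processed pixels are deleted
    h, w = work.shape

    def neighbours(p):
        y, x = p
        return [(y + dy, x + dx)
                for dy in (-1, 0, 1) for dx in (-1, 0, 1)
                if (dy, dx) != (0, 0)
                and 0 <= y + dy < h and 0 <= x + dx < w
                and work[y + dy, x + dx]]

    tree = {"junctions": [], "branches": []}
    stack = [(root, [root])]           # (current pixel, pixels of the current branch)
    work[root] = False                 # delete the root from the working copy
    while stack:
        pixel, branch = stack.pop()
        nbrs = neighbours(pixel)
        if len(nbrs) >= 2:                       # junction pixel
            tree["junctions"].append(pixel)
            tree["branches"].append(branch)
            for n in nbrs:                       # start a new branch at the junction
                work[n] = False
                stack.append((n, [pixel, n]))
        elif len(nbrs) == 1:                     # usual branch pixel
            n = nbrs[0]
            work[n] = False
            stack.append((n, branch + [n]))
        else:                                    # terminal pixel: close the branch
            tree["branches"].append(branch)
    return tree
```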
  • Fig. 11I is a simplified pictorial illustration of an edge picture, from which a tree is to be built according to the method of Fig. 11A.
  • Fig. 11I comprises an edge 1690.
  • Fig. 11I also comprises an edge junction 1691, at the end of the edge 1690.
  • Fig. 11I also comprises an edge 1692 lying between the edge junction 1691 and an edge junction 1693.
  • Fig. 11I also comprises an edge 1694, at one end of which lies edge junction 1693.
  • Fig. 11I also comprises an edge 1695, lying between edge junction 1691 and edge junction 1693.
  • Processing of Fig. 11I in accordance with the method of Fig. 11A, in order to build a tree, may proceed as follows:
  • The root pixel is classified as a terminal pixel (step 1630).
  • Processing continues along the edge 1690 until a junction is found at edge junction 1691, and processing continues with steps 1660 and 1670.
  • The effect of steps 1660 and 1670, which comprise depth-first search processing and junction deletion, is to process the remainder of Fig. 11I before deleting the edge junction 1691.
  • Fig. 12 is a simplified block diagram of the special points correspondence finding block of Figs. 3A and 4 - 8.
  • A weight computation unit 1700 receives as input a list of special point candidates, typically from the ROI, and estimated or predicted special points, and computes a correlation weight between each special point candidate and each estimated or predicted special point. The correlation weight is based on a correlation error.
  • The estimated points may comprise known points from a previous frame. The operation of the weight computation unit 1700 is described in more detail below with reference to Fig. 13.
  • A threshold filter 1710 applies a minimum threshold to the weights received from the weight computation unit 1700 and outputs a thresholded weight.
  • The threshold filter 1710 receives the correlation error from the weight computation unit 1700, and preferably computes an appropriate threshold based thereupon.
  • A typical threshold is based directly on the correlation error, as for example 0.125, when the correlation error is normalized in the range 0:1.
  • The special point candidates may not necessarily come only from the ROI, but may also come from a region chosen based on distance from an estimated point, or from a region chosen based on other criteria.
  • The weight computation may take the distance into account.
  • An initial probability is computed (unit 1720) for each candidate, showing its probability to be each of one or more special points.
  • The initial probability for each point is computed as follows:
  • Pr(j) = W_max * W_j / SUM(W_j), where W_j is the weight of candidate j, and
  • SUM is taken over all candidates, not including the fictional candidate.
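  • Read this way, with W_max taken as the largest candidate weight for the special point in question (an interpretation of the formula above rather than an explicit statement of the text), the initial probabilities might be computed as in the following sketch:
```python
def initial_probabilities(weights):
    """Pr(j) = W_max * W_j / SUM(W_j), the sum taken over all real candidates."""
    total = sum(weights)
    if total == 0.0:
        return [0.0 for _ in weights]
    w_max = max(weights)
    return [w_max * w / total for w in weights]

# Example: three candidates for one special point
print(initial_probabilities([0.8, 0.4, 0.2]))  # approx. [0.457, 0.229, 0.114]
```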
  • The candidates are then filtered in a candidates filter 1730, which picks the best possible candidate for each special point based on a filter criterion.
  • The filter method may, for example, choose the candidate with the highest probability.
  • The method may use a more complex filtering criterion taking into account possible movements and irregularities in movement of the special points, and the relationships between them.
  • Fig. 13 is a simplified flowchart of a preferred method of operation for the special points weights computation unit 1700 of Fig. 12.
  • An estimated or predicted special point and a special point candidate are input (steps 1810 and 1820).
  • A correlation weight is computed (step 1830), based on the estimated or predicted special point and the special point candidate colors.
  • correlation weight = 1 / (1 + C * ER), where C is a coefficient, preferably having a value of 10;
  • ER is the normalized correlation error between a special point candidate and an estimated/predicted special point.
  • A preferred formula for computing ER is as follows:
  • I is the intensity of the pixel, normalized in the range 0:1,
  • K and OK are indexes representing a mask of pixels around the special point candidate and the estimated/predicted special point, respectively, and n represents the index of the color, as defined above.
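  • The exact expression for ER is given in the original specification but is not reproduced in this excerpt; the sketch below therefore substitutes an assumed definition, the mean squared difference of normalized intensities over corresponding mask pixels and color components, which also lies in the range 0:1.
```python
import numpy as np

def correlation_weight(candidate_mask, estimated_mask, C=10.0):
    """weight = 1 / (1 + C * ER).

    `candidate_mask` and `estimated_mask` are arrays of shape (mask_pixels, n_colors)
    holding intensities normalized to the range 0:1.  ER is taken here as the mean
    squared intensity difference over corresponding mask pixels and color components;
    this is an assumed stand-in for the exact formula of the specification.
    """
    er = float(np.mean((candidate_mask - estimated_mask) ** 2))
    return 1.0 / (1.0 + C * er)

# Example: a 3x3 mask, three color components
rng = np.random.default_rng(0)
a = rng.random((9, 3))
print(correlation_weight(a, a))        # 1.0  (identical neighbourhoods)
print(correlation_weight(a, 1.0 - a))  # well below 1.0
```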
  • The correlation computation is repeated for each combination of an estimated/predicted special point with all of its candidate points (steps 1850 and 1860).
  • Fig. 14 is a simplified flowchart of a preferred method of operation for the border estimation block of Figs. 3A and 4.
  • Two consecutive corresponding special points are input (step 1910).
  • An initial estimated border segment is input (step 1920).
  • The initial estimated border segment connects the last two consecutive corresponding special points.
  • The estimation segment is projected between the consecutive corresponding special points, and an estimated border segment is created (step 1940).
  • The remaining special points are then processed until the last special point is reached (steps 1960 and 1980).
  • The estimated border segments are then used to create an exact border description (step 1950).
  • Fig. 15 is a simplified flowchart of an alternative preferred method of operation for the border estimation block of Figs. 3A and 4.
  • Widths of ROIs (regions of interest) for borders are also computed. It is appreciated that the method of Fig. 15 is preferably performed at the conclusion of the method of Fig. 14.
  • A corresponding border segment is input (step 2010).
  • The size of the ROI is selected as the size of the larger diameter of the two consecutive corresponding special points (step 2020).
  • Fig. 16 is a simplified flowchart of a preferred method of operation for the borders and mask generation unit of Figs. 3A and 4 - 8.
  • Object special points and border segments are drawn according to the chain code description (step 2100).
  • The term "drawn", as used in step 2100, does not necessarily indicate drawing in a visible form, but rather refers to creating an internal representation analogous to a drawn representation.
  • A seed-grow is created (step 2110) beginning from the frame of each picture.
  • The seed-grow is limited by meeting an object special point or border segment, and does not go past a border segment.
  • The seed-grow continues until no further growth is possible.
  • The seed-grow begins on a portion of the picture frame which is not part of an object.
  • An extra row of blank pixels is added all around the picture frame, and the seed-grow begins in one of the extra pixels.
  • Pixels of the picture are then assigned values (step 2130) as follows: area covered by the seed-grow, 0; other areas, 1; optionally, transition pixels, an intermediate value between 0 and 1. Assigning an intermediate value to transition pixels may be preferred, for example, in the case where the mask being created includes anti-aliasing. Optionally, a border description may be created from the object mask by outputting only the transition pixels.
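  • A minimal sketch of the seed-grow and mask assignment of steps 2110-2130, assuming the drawn special points and border segments are supplied as a single boolean `border` image: an extra blank ring is added around the picture, a 4-connected flood fill grows from that ring and is blocked by border pixels, and the mask is then 0 over the grown area, 1 elsewhere, with border (transition) pixels optionally given an intermediate value.
```python
import numpy as np
from collections import deque

def grow_mask(border, transition_value=0.5):
    """Seed-grow from outside the picture; border pixels block the grow.
    Returns 0 for the grown (outside) area, 1 for the object area, and
    `transition_value` for the border pixels themselves."""
    h, w = border.shape
    padded = np.zeros((h + 2, w + 2), dtype=bool)
    padded[1:-1, 1:-1] = border                   # extra ring of blank pixels
    grown = np.zeros_like(padded)
    grown[0, 0] = True
    queue = deque([(0, 0)])                       # seed inside the blank ring
    while queue:
        y, x = queue.popleft()
        for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
            ny, nx = y + dy, x + dx
            if 0 <= ny < h + 2 and 0 <= nx < w + 2 \
                    and not grown[ny, nx] and not padded[ny, nx]:
                grown[ny, nx] = True
                queue.append((ny, nx))
    mask = np.where(grown[1:-1, 1:-1], 0.0, 1.0)  # grown area -> 0, object -> 1
    mask[border] = transition_value               # optional anti-aliasing value
    return mask

border = np.zeros((7, 7), dtype=bool)
border[2, 2:5] = border[4, 2:5] = True            # a closed 3x3 square border
border[2:5, 2] = border[2:5, 4] = True
print(grow_mask(border))                          # 1.0 inside, 0.0 outside, 0.5 on the border
```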
  • Fig. 17 is a simplified flowchart of a preferred method of operation for the exact object border description blocks of Figs. 3A, 4, 5, 6, 7 and 8.
  • Fig. 17 comprises a method for operation of blocks 350, 450, 465, 550, 576, 650, 665, 750, and 850.
  • The method of Fig. 17 also comprises a preferred method of operation for the border description portion of blocks 370, 470, 570, 670, 771, 774, 871, and 874.
  • The method of Fig. 17 preferably comprises the following steps.
  • Edge thinning is performed on border segments, preserving special points (step 2202).
  • The width is reduced to one pixel.
  • The reduction to one pixel is accomplished by keeping only one pixel, either the central pixel in the case where the edge is an odd number of pixels in width, or one of the two central pixels if the edge is an even number of pixels in width.
  • Every pixel constituting a special point is kept, and if the special point is not a central pixel then the special point is kept and other pixels are not kept.
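  • The following deliberately crude sketch illustrates only the "keep the central pixel, always keep special points" policy by thinning each horizontal run of border pixels; genuine edge thinning operates across the local edge direction, so this is an illustration of the selection rule rather than of the thinning step itself.
```python
import numpy as np

def thin_rows(border, special):
    """Crude illustration: thin each horizontal run of border pixels to a single
    pixel, keeping any special points in the run, otherwise the central pixel
    (one of the two central pixels for runs of even width)."""
    thinned = np.zeros_like(border)
    h, w = border.shape
    for y in range(h):
        x = 0
        while x < w:
            if border[y, x]:
                start = x
                while x < w and border[y, x]:
                    x += 1                        # x is now one past the run
                run = range(start, x)
                keep = [c for c in run if special[y, c]]
                if not keep:                      # no special point: keep the centre
                    keep = [start + (x - 1 - start) // 2]
                for c in keep:
                    thinned[y, c] = True
            else:
                x += 1
    return thinned
```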
  • The thinned border segments from step 2202 are merged with the edge picture of the border ROI (step 2204).
  • The merged output of step 2204 is thinned again in step 2206, similarly to the thinning of step 2202 described above.
  • A preferred method for performing step 2210 is similar to that described above with reference to Figs. 10 and 11.
  • A seed-grow is performed (step 2220), similar to the seed-grow described above with reference to step 2110 of Fig. 16.
  • The grow limit for the seed-grow of step 2220 is any border segment.
  • A chain code is computed from the new component map as follows. Elements bounded by the seed-grow area are marked as external elements. "Bounded" is understood in step 2230 to mean surrounded, except possibly at junction points. Elements not touched at all by, that is, not bordering at all on, the seed-grow area are marked as internal elements. Other elements, touched but not bounded by the seed-grow area, are marked as border elements.
  • A chain code is then computed, linking, in order, all of the border elements.
  • The chain code is taken to comprise an exact border description, and the junctions described therein are taken as estimated special points.
  • Fig. 18 is a simplified flowchart of a preferred method of operation of the following elements: 570, 572, and 574 of Fig. 5, combined; 670, 672, and 674 of Fig. 6, combined; 771, 772, and 774 of Fig. 7, combined; and 871, 872, and 874 of Fig. 8, combined. It is appreciated that, in the case of elements 774 and 874, Fig. 18 describes only a portion of the operation thereof and does not include other portions which are described above with reference to Fig. 17.
  • Border correspondences are found, including compensating for border segments not found by using the estimated border segments.
  • In step 2370, an exact border description is created from all corresponding segments.
  • Fig. 19 is a simplified flowchart of an alternative method of operation of step 2340 of Fig. 18.
  • A distance map is a map which indicates the distance of each pixel from an individual estimated border segment.
  • The distance map is built for each estimated border segment within the ROI. In the case that two end corresponding points have a different ROI size, the larger ROI size is preferably used.
  • The distance map is created as follows: a. each border segment pixel is assigned a distance of 0; b. each unassigned pixel adjacent to a pixel that was already assigned a distance of n is assigned a distance of n+1, except for pixels diagonally adjacent to the last pixel at the end of the region of assigned pixels, which are not assigned a distance; c. step b is repeated until each pixel within the ROI has been assigned a distance.
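  • A sketch of this propagation using a breadth-first queue is shown below; for brevity it propagates over all eight neighbours and omits the specification's exception for pixels diagonally adjacent to the segment end.
```python
import numpy as np
from collections import deque

def distance_map(segment, roi_shape):
    """Breadth-first distance map: border segment pixels get 0, and every other ROI
    pixel gets the number of propagation steps needed to reach it."""
    dist = np.full(roi_shape, -1, dtype=int)     # -1 means 'not yet assigned'
    queue = deque()
    for y, x in segment:                         # a. segment pixels get distance 0
        dist[y, x] = 0
        queue.append((y, x))
    h, w = roi_shape
    while queue:                                 # b./c. propagate until all assigned
        y, x = queue.popleft()
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                ny, nx = y + dy, x + dx
                if (dy, dx) != (0, 0) and 0 <= ny < h and 0 <= nx < w \
                        and dist[ny, nx] < 0:
                    dist[ny, nx] = dist[y, x] + 1
                    queue.append((ny, nx))
    return dist

# Example: a short horizontal segment inside a 5x5 ROI
print(distance_map([(2, 1), (2, 2), (2, 3)], (5, 5)))
```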
  • Border candidates are typically edges found in the ROI.
  • The color parameters preferably comprise average and tolerance.
  • Each of the average and the tolerance is computed separately for a strip, typically of width 1 pixel, adjacent to the edge at the interior and exterior thereof. The interior and exterior are distinguished based on the direction of traversal of the border.
  • The average is computed as separate averages for each color component, each taken to be the average value of one of the three color components I1, I2, and I3.
  • The tolerance, also computed separately for each color component, describes the tolerance of the average color, and is typically based on the variance.
  • Step 2425 is similar to step 2420, except that the input to step 2425 is a found border segment.
  • A weight is computed for each border segment, representing the similarity between the candidate border segments and the found border segment.
  • Separate weights are computed based on average color, average distance, tolerance of average color, and tolerance of distance.
  • The average distance is computed based on the average of the distances assigned to the pixels in the candidate border segment by the distance map computed previously, as described in step 2410.
  • A threshold filter applies a minimum threshold to the weights received from the weight computation 2430 and outputs a combined, thresholded weight.
  • An initial probability is computed (unit 2450) for each candidate, representing its probability to be a part of a border segment corresponding to the found border segment.
  • The candidates are then filtered in a candidates filter 2460, which picks the best possible group of one or more candidates for the border segment corresponding to the found border segment, based on a filter criterion.
  • The filter method takes into account the probability that each candidate is part of the border segment, as well as the relationship, that is, distance and angle, between the candidates with respect to the border segment.
  • The filter method may employ maximum probability methods or any appropriate statistical iteration method.
  • Parts of the border segment that were not found previously in step 2460 are filled in step 2470 using the estimated border segment or parts thereof.
  • Fig. 20 is a simplified flowchart of a prediction method useful in the methods of Figs. 7 and 8.
  • the method of Fig. 20 preferably comprises the following steps.
  • Fig. 20 refers to the case in which frame-by-frame processing is occurring.
  • A check is made for whether chain code is available for four or more consecutive frames (step 2810). If not, processing continues with step 2820, described below.
  • If so, a third-order prediction of borders and/or special points is performed (step 2815).
  • Step 2815 is described in more detail in Fig. 23, below, for the case of special points.
  • Borders are predicted similarly by using equations of motion on the control points of the splines of the estimated border segments.
  • In steps 2820 and 2830, a decision is made, based on how many chain codes are available, as to whether a second-order prediction (step 2825) or a first-order prediction (step 2835) is to be made.
  • Step 2825 is described more fully in Fig. 22 below, and step 2835 in Fig. 21 below.
  • If step 2840 is reached, sufficient information is not available for a prediction.
  • The user is asked to identify the desired object in the second frame (step 2845) and, if necessary, in the first frame (step 2850).
  • Fig. 21 is a simplified flowchart of a preferred method for carrying out the steps of Fig. 20 in the case of first-order prediction.
  • The method of Fig. 21 preferably comprises the following steps.
  • First frame and second frame corresponding special points are input. Points found only in the second frame are added to the first frame (step 2910), so that the same number of points will be found in each frame, allowing a complete prediction for the next frame.
  • Point location is preferably determined by reverse geometrical interpolation along the chain-code edge, according to the points' location relative to the location of the two chain-code points, found in both frames, which bound the point to be interpolated.
  • In step 2920, the special point velocities, the next-frame first-order predicted locations, and the next-frame special point ROI sizes are computed.
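  • A sketch of the first-order computation of step 2920, with the special points given as matched (y, x) locations in the two frames; the ROI-size heuristic used here (proportional to the displacement, with a small floor) is an assumption, since the specification's sizing rule is not reproduced in this excerpt.
```python
import numpy as np

def first_order_prediction(points_prev, points_curr, roi_scale=1.0):
    """Per-point velocity, first-order predicted location in the next frame,
    and a rough per-point ROI size (assumed heuristic)."""
    prev = np.asarray(points_prev, dtype=float)   # shape (n_points, 2)
    curr = np.asarray(points_curr, dtype=float)
    velocity = curr - prev                        # displacement per frame
    predicted = curr + velocity                   # first-order extrapolation
    roi_size = np.maximum(roi_scale * np.linalg.norm(velocity, axis=1), 3.0)
    return velocity, predicted, roi_size

# Example: two points moving right and down-right
v, p, r = first_order_prediction([(10, 10), (40, 12)], [(12, 10), (43, 15)])
print(p)   # [[14. 10.] [46. 18.]]
```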
  • Fig. 22 is a simplified flowchart of a preferred method for carrying out the steps of Fig. 20 in the case of second-order prediction.
  • The computation of the next-frame special point ROI size is optional.
  • Alternatively, the current frame ROI size may be used.
  • Fig. 23 is a simplified flowchart of a preferred method for carrying out the steps of Fig. 20 in the case of third-order and higher prediction.
  • The method of Fig. 23 is self-explanatory with reference to the above description of Figs. 21 and 22, step 3110 being analogous to steps 2910 and 3010, and step 3120 being analogous to steps 2920 and 3020.
  • In step 3105, a decision is made, based on how many frames have been previously processed, as to how many frames are used in subsequent steps.
  • In step 3120, it is appreciated that the computation of the next-frame special point ROI size is optional. Alternatively, the current frame ROI size may be used.
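  • As a sketch of the higher-order predictions of Figs. 22 and 23, a polynomial of the chosen order can be fitted to each coordinate of a special point over the last order + 1 frames and evaluated one frame ahead; fitting with numpy.polyfit is an implementation convenience, not part of the specification.
```python
import numpy as np

def predict_next(positions, order):
    """Fit a polynomial of the given order to each coordinate of a special point
    over the last `order + 1` frames and extrapolate one frame ahead.  order = 1,
    2 and 3 correspond to the first-, second- and third-order predictions."""
    pts = np.asarray(positions, dtype=float)[-(order + 1):]   # last order+1 frames
    t = np.arange(len(pts))
    next_t = len(pts)
    return np.array([np.polyval(np.polyfit(t, pts[:, d], order), next_t)
                     for d in range(pts.shape[1])])

# Example: constant acceleration along x -> second-order prediction recovers it
history = [(0.0, 5.0), (1.0, 5.0), (4.0, 5.0)]   # x = t**2, y constant
print(predict_next(history, order=2))            # [9. 5.]
```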
  • An effect device 92 may be provided, such as, for example, a device for performing one or more of the following effects: compression; painting; blurring; sharpening; a filter operation; and an effect which changes over time at a different rate on different sides of the border.
  • The application devices 80, 90 or 92 operate on an individual frame, and cause the result of this operation to be displayed to the user, before operation proceeds to a subsequent frame.
  • The video display apparatus 95 or a separate video display apparatus is employed to display a result of performing the operation without previously displaying a separate representation of said border.
  • A result of performing an operation, without previously displaying a separate representation of the border, may be displayed for a plurality of frames rather than for a single frame.
  • An advantage of this option is that interactive correction may be effected not on the basis of viewing a representation of the border but rather on the basis of viewing an effect or application generated by the application devices assuming a particular border. Viewing an effect or application is often a more useful method for evaluating the quality of the border tracking, relative to viewing a representation of the border itself as tracked.
  • The result of performing an operation such as an effect or application is displayed together with a representation of the border as tracked. If the result of performing the operation is deemed unsatisfactory by the user, the user uses the display to correct the border. Preferably, the display changes automatically to reflect the change in the result of performing the operation due to the new border.
  • Blocks 135 and 165 indicate that, optionally, an effect or application is carried out, e.g. by an external application device.
  • The application or effect is carried out before the user is prompted to decide whether or not the border has been appropriately tracked (steps 140, 170).
  • The user can employ the results of the application or effect to evaluate the border as tracked.
  • Blocks 235 and 265 indicate that, optionally, an effect or application is carried out, e.g. by an external application device.
  • The application or effect is carried out before the user is prompted to decide whether or not the border has been appropriately tracked (steps 240, 270).
  • The user can employ the results of the application or effect to evaluate the border as tracked.
  • Fig. 24 is similar to Fig. 4 except for the following differences, which can exist either separately or in combination: a. Subsequent frames are only brought (step 455) once the user has deemed the border, as tracked, satisfactory in the current frame. b. The user determines whether or not the border is satisfactory by reviewing results of operations (effects and/or applications) generated with the assumption that the border's location in the current frame is as tracked.
  • Step 490 in Fig. 4 is replaced with step 492 in Fig. 24.
  • Fig. 25 is similar to Fig. 8 except for the following differences, which can exist either separately or in combination: a. Special points are only predicted in a subsequent frame (step 866) once the user has deemed the border, as tracked, satisfactory in the current frame. b. The user determines whether or not the border is satisfactory by reviewing results of operations (effects and/or applications) generated with the assumption that the border's location in the current frame is as tracked.
  • Step 890 in Fig. 8 is replaced with step 892 in Fig. 25.
  • Fig. 26 is similar to Fig. 3A except for the following differences, which can exist either separately or in combination: a. Other frames are only brought (step 355) once the user has deemed the border, as tracked, satisfactory in the current frame. b. The user determines whether or not the border is satisfactory by reviewing results of operations (effects and/or applications) generated with the assumption that the border's location in the current frame is as tracked.
  • Step 390 in Fig. 3A is replaced with step 392 in Fig. 26.
  • Fig. 27 is similar to Fig. 7 except for the following differences, which can exist either separately or in combination: a. Special points are only predicted in another frame or frames (step 777) once the user has deemed the border, as tracked, satisfactory in the current frame. b. The user determines whether or not the border is satisfactory by reviewing results of operations (effects and/or applications) generated with the assumption that the border's location in the current frame is as tracked.
  • Step 790 in Fig. 7 is replaced with step 792 in Fig. 27.
  • The mask generated in step 2130 of Fig. 16 may be used to determine whether the user is pointing at a dynamic object, which may, for example, be a hotspot, or whether the user is pointing at a background location which is not a hotspot.
  • The present invention is not limited to the processing of color images but may also process, for example, monochrome and grayscale images.
  • The software components of the present invention may, if desired, be implemented in ROM (read-only memory) form.
  • The software components may, generally, be implemented in hardware, if desired, using conventional techniques.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Multimedia (AREA)
  • Theoretical Computer Science (AREA)
  • Electromagnetism (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Processing Or Creating Images (AREA)
  • Image Analysis (AREA)
  • Manufacture, Treatment Of Glass Fibers (AREA)
  • Burglar Alarm Systems (AREA)
  • Crystals, And After-Treatments Of Crystals (AREA)
  • Measurement Of Velocity Or Position Using Acoustic Or Ultrasonic Waves (AREA)
  • Radar Systems Or Details Thereof (AREA)

Abstract

The invention relates to a tracking method which comprises receiving a representation of an event including at least one dynamic object having a border, with at least a portion of the border being absent during at least part of the event, and providing an ongoing indication of the location of the border of the object during the event.
PCT/IL1996/000070 1995-08-04 1996-08-02 Dispositif de poursuite d'objet et procede correspondant WO1997006631A2 (fr)

Priority Applications (3)

Application Number Priority Date Filing Date Title
AU65303/96A AU6530396A (en) 1995-08-04 1996-08-02 Apparatus and method for object tracking
EP96925063A EP0880852A2 (fr) 1995-08-04 1996-08-02 Dispositif de poursuite d'objet et procede correspondant
JP9508279A JPH11510351A (ja) 1995-08-04 1996-08-02 オブジェクト追跡の装置および方法

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
IL114839 1995-08-04
IL11483995A IL114839A0 (en) 1995-08-04 1995-08-04 Apparatus and method for object tracking

Publications (2)

Publication Number Publication Date
WO1997006631A2 true WO1997006631A2 (fr) 1997-02-20
WO1997006631A3 WO1997006631A3 (fr) 1997-07-24

Family

ID=11067836

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/IL1996/000070 WO1997006631A2 (fr) 1995-08-04 1996-08-02 Dispositif de poursuite d'objet et procede correspondant

Country Status (6)

Country Link
EP (1) EP0880852A2 (fr)
JP (1) JPH11510351A (fr)
AU (1) AU6530396A (fr)
CA (1) CA2228619A1 (fr)
IL (1) IL114839A0 (fr)
WO (1) WO1997006631A2 (fr)

Cited By (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO1998045811A1 (fr) * 1997-04-04 1998-10-15 Avid Technology, Inc. Procede et dispositif servant a modifier une couleur d'une image
WO2000034918A1 (fr) * 1998-12-11 2000-06-15 Synapix, Inc. Procede interactif de detection et de marquage de bords
WO2000043955A1 (fr) * 1999-01-23 2000-07-27 Lfk-Lenkflugkörpersysteme Gmbh Procede et systeme pour retrouver des objets caches dans des images
WO2000077734A2 (fr) * 1999-06-16 2000-12-21 Microsoft Corporation Traitement a visualisation multiple du mouvement et de la stereo
GB2357650A (en) * 1999-12-23 2001-06-27 Mitsubishi Electric Inf Tech Method for tracking an area of interest in a video image, and for transmitting said area
US6269195B1 (en) 1997-04-04 2001-07-31 Avid Technology, Inc. Apparatus and methods for selectively feathering a composite image
EP1526481A2 (fr) * 2003-10-24 2005-04-27 Adobe Systems Incorporated Extraction d'objet basée sur la couleur et la texture visuelle
US7680300B2 (en) * 2004-06-01 2010-03-16 Energid Technologies Visual object recognition and tracking

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5067014A (en) * 1990-01-23 1991-11-19 David Sarnoff Research Center, Inc. Three-frame technique for analyzing two motions in successive image frames dynamically
US5093869A (en) * 1990-12-26 1992-03-03 Hughes Aircraft Company Pattern recognition apparatus utilizing area linking and region growth techniques
US5134472A (en) * 1989-02-08 1992-07-28 Kabushiki Kaisha Toshiba Moving object detection apparatus and method
US5274453A (en) * 1990-09-03 1993-12-28 Canon Kabushiki Kaisha Image processing system
US5291563A (en) * 1990-12-17 1994-03-01 Nippon Telegraph And Telephone Corporation Method and apparatus for detection of target object with improved robustness
US5333213A (en) * 1991-06-28 1994-07-26 Nippon Hoso Kyokai Method and apparatus for high speed dynamic image region extraction
US5345313A (en) * 1992-02-25 1994-09-06 Imageware Software, Inc Image editing system for taking a background and inserting part of an image therein

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5134472A (en) * 1989-02-08 1992-07-28 Kabushiki Kaisha Toshiba Moving object detection apparatus and method
US5067014A (en) * 1990-01-23 1991-11-19 David Sarnoff Research Center, Inc. Three-frame technique for analyzing two motions in successive image frames dynamically
US5274453A (en) * 1990-09-03 1993-12-28 Canon Kabushiki Kaisha Image processing system
US5291563A (en) * 1990-12-17 1994-03-01 Nippon Telegraph And Telephone Corporation Method and apparatus for detection of target object with improved robustness
US5093869A (en) * 1990-12-26 1992-03-03 Hughes Aircraft Company Pattern recognition apparatus utilizing area linking and region growth techniques
US5333213A (en) * 1991-06-28 1994-07-26 Nippon Hoso Kyokai Method and apparatus for high speed dynamic image region extraction
US5345313A (en) * 1992-02-25 1994-09-06 Imageware Software, Inc Image editing system for taking a background and inserting part of an image therein

Cited By (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6269195B1 (en) 1997-04-04 2001-07-31 Avid Technology, Inc. Apparatus and methods for selectively feathering a composite image
US6128001A (en) * 1997-04-04 2000-10-03 Avid Technology, Inc. Methods and apparatus for changing a color of an image
AU738375B2 (en) * 1997-04-04 2001-09-13 Avid Technology, Inc. Methods and apparatus for changing a color of an image
WO1998045811A1 (fr) * 1997-04-04 1998-10-15 Avid Technology, Inc. Procede et dispositif servant a modifier une couleur d'une image
WO2000034918A1 (fr) * 1998-12-11 2000-06-15 Synapix, Inc. Procede interactif de detection et de marquage de bords
WO2000043955A1 (fr) * 1999-01-23 2000-07-27 Lfk-Lenkflugkörpersysteme Gmbh Procede et systeme pour retrouver des objets caches dans des images
US6950563B1 (en) 1999-01-23 2005-09-27 Lfk-Lenkflugkoerpersysteme Gmbh Method and system for relocating hidden objects in images
WO2000077734A3 (fr) * 1999-06-16 2001-06-28 Microsoft Corp Traitement a visualisation multiple du mouvement et de la stereo
WO2000077734A2 (fr) * 1999-06-16 2000-12-21 Microsoft Corporation Traitement a visualisation multiple du mouvement et de la stereo
GB2357650A (en) * 1999-12-23 2001-06-27 Mitsubishi Electric Inf Tech Method for tracking an area of interest in a video image, and for transmitting said area
EP1526481A2 (fr) * 2003-10-24 2005-04-27 Adobe Systems Incorporated Extraction d'objet basée sur la couleur et la texture visuelle
EP1526481A3 (fr) * 2003-10-24 2008-06-18 Adobe Systems Incorporated Extraction d'objet basée sur la couleur et la texture visuelle
US7869648B2 (en) 2003-10-24 2011-01-11 Adobe Systems Incorporated Object extraction based on color and visual texture
US7680300B2 (en) * 2004-06-01 2010-03-16 Energid Technologies Visual object recognition and tracking

Also Published As

Publication number Publication date
EP0880852A2 (fr) 1998-12-02
AU6530396A (en) 1997-03-05
IL114839A0 (en) 1997-02-18
JPH11510351A (ja) 1999-09-07
WO1997006631A3 (fr) 1997-07-24
CA2228619A1 (fr) 1997-02-20

Similar Documents

Publication Publication Date Title
US6724915B1 (en) Method for tracking a video object in a time-ordered sequence of image frames
US5940538A (en) Apparatus and methods for object border tracking
US8078006B1 (en) Minimal artifact image sequence depth enhancement system and method
US8160390B1 (en) Minimal artifact image sequence depth enhancement system and method
US7684592B2 (en) Realtime object tracking system
US7181081B2 (en) Image sequence enhancement system and method
KR100459893B1 (ko) 동영상에서 칼라 기반의 객체를 추적하는 방법 및 그 장치
Stander et al. Detection of moving cast shadows for object segmentation
JP2856229B2 (ja) 画像切り出し箇所検出方法
US8401336B2 (en) System and method for rapid image sequence depth enhancement with augmented computer-generated elements
US8897596B1 (en) System and method for rapid image sequence depth enhancement with translucent elements
EP0853293B1 (fr) Dispositif et méthode d'extraction d'un objet dans une image
US6072903A (en) Image processing apparatus and image processing method
US6278460B1 (en) Creating a three-dimensional model from two-dimensional images
US5436672A (en) Video processing system for modifying a zone in successive images
WO2001026050A2 (fr) Traitement de segmentation d'image ameliore faisant appel a des techniques de traitement d'image assiste par l'utilisateur
JP2004505393A (ja) イメージ変換および符号化技術
WO1997006631A2 (fr) Dispositif de poursuite d'objet et procede correspondant
JP2981382B2 (ja) パターンマッチング方法
JPH11283036A (ja) 対象物検出装置及び対象物検出方法
JPH06243258A (ja) 奥行き検出装置
WO1999052063A1 (fr) Detection et traitement en fonction de caracteristiques

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A2

Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A2

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
121 Ep: the epo has been informed by wipo that ep was designated in this application
AK Designated states

Kind code of ref document: A3

Designated state(s): AL AM AT AU AZ BB BG BR BY CA CH CN CU CZ DE DK EE ES FI GB GE HU IL IS JP KE KG KP KR KZ LK LR LS LT LU LV MD MG MK MN MW MX NO NZ PL PT RO RU SD SE SG SI SK TJ TM TR TT UA UG US UZ VN AM AZ BY KG KZ MD RU TJ TM

AL Designated countries for regional patents

Kind code of ref document: A3

Designated state(s): KE LS MW SD SZ UG AT BE CH DE DK ES FI FR GB GR IE IT LU MC NL PT SE BF BJ CF CG CI CM

ENP Entry into the national phase

Ref document number: 2228619

Country of ref document: CA

Ref country code: CA

Ref document number: 2228619

Kind code of ref document: A

Format of ref document f/p: F

ENP Entry into the national phase

Ref country code: JP

Ref document number: 1997 508279

Kind code of ref document: A

Format of ref document f/p: F

WWE Wipo information: entry into national phase

Ref document number: 1996925063

Country of ref document: EP

REG Reference to national code

Ref country code: DE

Ref legal event code: 8642

WWP Wipo information: published in national office

Ref document number: 1996925063

Country of ref document: EP

WWW Wipo information: withdrawn in national office

Ref document number: 1996925063

Country of ref document: EP