EP1225518B1 - Apparatus and method for generating object-labelled images in a video sequence - Google Patents

Apparatus and method for generating object-labelled images in a video sequence

Info

Publication number
EP1225518B1
EP1225518B1 (application EP01307388A / EP20010307388)
Authority
EP
European Patent Office
Prior art keywords
query
frames
shots
frame
objects
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Expired - Lifetime
Application number
EP20010307388
Other languages
English (en)
French (fr)
Other versions
EP1225518A3 (de)
EP1225518A2 (de)
Inventor
Seong-Deok Lee
Chang-Yeong Kim
Ji-Yeon Kim
Sang-kyun Kim
Young-Su Moon
Doo-Sik Park
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Samsung Electronics Co Ltd
Original Assignee
Samsung Electronics Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Samsung Electronics Co Ltd
Publication of EP1225518A2
Publication of EP1225518A3
Application granted
Publication of EP1225518B1

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/73Querying
    • G06F16/732Query formulation
    • G06F16/7335Graphical querying, e.g. query-by-region, query-by-sketch, query-by-trajectory, GUIs for designating a person/face/object as a query predicate
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F18/00Pattern recognition
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06FELECTRIC DIGITAL DATA PROCESSING
    • G06F16/00Information retrieval; Database structures therefor; File system structures therefor
    • G06F16/70Information retrieval; Database structures therefor; File system structures therefor of video data
    • G06F16/78Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually
    • G06F16/783Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content
    • G06F16/7837Retrieval characterised by using metadata, e.g. metadata not derived from the content or metadata generated manually using metadata automatically derived from the content using objects detected or recognised in the video content
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06VIMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00Scenes; Scene-specific elements
    • G06V20/40Scenes; Scene-specific elements in video content
    • G06V20/46Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99941Database schema or data structure
    • Y10S707/99944Object-oriented database structure
    • Y10S707/99945Object-oriented database structure processing
    • YGENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10TECHNICAL SUBJECTS COVERED BY FORMER USPC
    • Y10STECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y10S707/00Data processing: database and file management or data structures
    • Y10S707/99941Database schema or data structure
    • Y10S707/99948Application of database or data structure, e.g. distributed, multimedia, or image

Definitions

  • The present invention relates to an apparatus and method for extracting query objects from a video sequence and generating object-labelled images for the query objects.
  • Conventionally, query objects are manually extracted from each frame of a moving picture sequence in order to generate object-labelled images in the moving picture.
  • Methods for automatically extracting objects without the need for additional operations have recently been announced.
  • Motion based extraction methods include frame difference based extraction methods, background subtraction based extraction methods, and motion analysis based extraction methods.
  • Frame difference based extraction methods, as disclosed in U.S. Patent Nos. 5,500,904 and 5,109,435, extract motion by calculating the difference in brightness between consecutive frames of an image.
  • In background subtraction methods, as disclosed in U.S. Patent No. 5,748,775, a background image is recovered from the temporal change of an image feature parameter, and an object region is extracted from the difference between an original image and the background image.
  • In motion analysis methods, as disclosed in U.S. Patent No. 5,862,508, a motion region is extracted by calculating the direction of movement and the speed of a moving object.
  • Approaches using a feature value of an object region include a template matching method as disclosed in U.S. Patent No. 5,943,442, a multi-value threshold method as disclosed in U.S. Patent No. 5,138,671, and a feature value matching method.
  • In order to use these methods to extract a query object from moving picture data, they must be applied to all frames for each query object, so a considerable amount of time is required.
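  • As a toy illustration of the frame difference idea behind the motion-based methods above (a hedged sketch only, not any cited patent's exact algorithm; the threshold value is an arbitrary assumption):

```python
import numpy as np

def frame_difference_mask(prev_gray, curr_gray, threshold=25):
    """Mark pixels whose brightness changes between consecutive frames.

    prev_gray, curr_gray: HxW uint8 grayscale frames.
    Returns a binary mask: 1 = moving pixel, 0 = static pixel.
    """
    diff = np.abs(curr_gray.astype(np.int16) - prev_gray.astype(np.int16))
    return (diff > threshold).astype(np.uint8)
```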
  • WO 98/50869 discloses a method and a system for conducting object-oriented content-based video search. A method is also disclosed for extracting previously undefined "video objects" from video clips.
  • A method of labelling query objects in a video sequence based on images of the query objects, the labelled video sequence being for use in an object queryable interactive service, the method comprising the steps of: a. receiving the video sequence and receiving the images of the query objects; b. dividing the video sequence into one or more shots, each of which is a set of frames having a similar scene, and selecting one or more key frames from each of the shots; c. determining whether there exists an object similar to each of the query objects in each of the key frames and, if there is a similar object in a key frame, extracting the similar object as a corresponding query object based initial object region; d. tracking object regions in all frames of each of the shots based on the corresponding query object based initial object regions; and e. labelling the object regions tracked in each of the frames based on information on the corresponding query objects.
  • the invention also relates to an apparatus for labelling query objects in a video sequence based on images of the query objects, the labelled video sequence being for use in an object queryable interactive system, the apparatus comprising: a video sequence receiving unit for receiving the video sequence, and a query image receiving unit for receiving the images of the query objects; a shot and key frame setting unit arranged to divide the video sequence into one or more shots, each of which is a set of frames having a similar scene, and select one or more key frames from each of the shots; an initial object region extractor arranged to determine whether there exists an object similar to each of the query objects in each of the key frames and, if there is a similar object in a key frame, extract the similar object as a corresponding query object based initial object region; an object region tracker arranged to track object regions in all frames of each of the shots based on the corresponding query object based initial object regions; and an object-labelled image generator arranged to label the object regions tracked in each of the frames based on information on the corresponding query objects.
  • the present invention provides an apparatus and method for generating object-labelled images in a moving picture, in which query object regions can be automatically extracted in each frame based on key frames without need for additional manual operation and regardless of the degree of motion of an object, and object images labelled based on information of the corresponding query objects are generated in each frame.
  • FIG. 1 is a schematic block diagram of an object based interactive service system, to which the present invention is applied.
  • the object based interactive service system includes user terminals 100, a server 120, a video database (DB) 130 for video sequences, and an object DB 140 for objects of interest.
  • one or more object regions within moving picture data, which correspond to one or more query objects, are generated as object-labelled images.
  • each of the user terminals 100 includes an object based interactive image player or an MPEG 4 player and is connected to the server 120 through a network 110 in a remote manner.
  • a user can watch a moving picture (video sequence) provided by the server 120 on the screen of the user terminal by executing the object based interactive image player.
  • the user can select an arbitrary object (an object of interest) in an arbitrary frame of the video sequence, while watching the same through the object based interactive image player.
  • The server 120 provides the video sequences stored in the video DB 130 to each of the user terminals 100 and also provides detailed information on the object selected by the user with reference to the object DB 140. At this time, the user can look at information on the selected object through a separate frame (an alpha frame in the case of MPEG-4) provided along with the RGB (or YUV) frames.
  • The server 120 manages the video DB 130, in which various video sequence data are stored, and the object DB 140, in which information on objects of interest, such as products or persons, included in a particular image of a video sequence is stored.
  • the DBs 130 and 140 can be implemented in the server 120.
  • The interactive service system of FIG. 1 can be realized in a web-based environment.
  • the server 120 serves as a web server, and each of the user terminals 100 includes a web browser and is connected to the web server 120 through the Internet 110.
  • FIG. 2 is a block diagram of the object-labelled image generating apparatus according to the present invention.
  • The object-labelled image generating apparatus includes a video sequence receiving unit 200, a query image receiving unit 210, a shot and key frame setting unit 220, an initial object region extractor 230, an object region tracker 240, and an object-labelled image generator 250.
  • the video sequence receiving unit 200 receives a video sequence, i.e., a series of frame data of three primary colours, such as a series of RGB (or YUV) images, and outputs the received image sequence to the shot and key frame setting unit 220.
  • the video sequence is a set of frames.
  • Each of the frames may be an image including a query object or an image without any query object.
  • the shot and key frame setting unit 220 divides the input video sequence into one or more shots, each of which is a set of frames having a similar scene, and outputs information on the divided shots, i.e., information on frames which constitute each of the shots, to the object region tracker 240. Also, the shot and key frame setting unit 220 selects a key frame (a representative (R) frame) from each of the shots, which represents the shot.
  • the number of key frames for a single shot may be one or more.
  • the initial object region extractor 230 sequentially receives query images each including a query object from the query image receiving unit 210 and receives the key frame of each of the shots from the shot and key frame setting unit 220.
  • The initial object region extractor 230 determines whether the key frame of each of the shots includes an object corresponding to the query object of the query image input from the query image receiving unit 210, extracts an initial object region corresponding to the query object from the key frame of each of the shots, and masks the area of the initial object region as a binary image, a grey-scale image, etc., to generate a shot mask image.
  • the shot mask images are output to the object region tracker 240.
  • the object region tracker 240 receives the shots divided from the original video sequence, the query images each including one query object, and the shot mask images.
  • The object region tracker 240 tracks object regions in all frames of each of the shots based on the initial object regions. Specifically, object regions for all the frames of each of the shots are tracked based on the corresponding initial object regions extracted based on the query objects. If an object region exists in a frame, the location and area of the object region in the frame are identified, and the area of the object region is masked as a binary image, a grey-scale image, etc., to generate a frame mask image. This object region tracking is performed on all the frames of the shots and is repeated until the frame mask images for all query objects are made.
  • The object-labelled image generator 250 merges the frame mask images tracked based on the query objects in each frame and labels one or more query objects existing in each of the frames. Specifically, the query object based frame mask images for each of the frames are merged into a single object-labelled image frame in which all objects are labelled. Assuming that a frame includes, for example, three query objects, the object regions corresponding to the three query objects may each be marked with a unique pixel value between 1 and 255, and the remaining pixel region without any object is marked with "0" (OFF).
  • Information on the object-labelled image frames generated by the object-labelled image generator 250 and information on real objects corresponding to the labelled object images are stored in the object DB 140 shown in FIG. 1.
  • FIGS. 3A and 3B are flowcharts illustrating an object-labelled image generating method according to the present invention. The operation of the object-labelled image generating apparatus of FIG. 2 will be described in detail with reference to FIGS. 3A and 3B.
  • a video sequence from which a query object is to be extracted is divided into one or more shots each of which is a set of frames having a similar scene, and one or more key frames are selected from each of the shots (steps 300 through 304).
  • One video sequence can be divided into a plurality of shots according to changes in camera angle, persons or subjects, place, and illumination. Variations between shots, for example in colour values, are greater than variations between the frames constituting a single shot, and can be detected from a difference in colour between two frames, i.e., the key frames, of the shots of interest.
  • One of the frames constituting each of the shots is selected as a key frame.
  • Preferably, the first or middle frame of each of the shots is selected as the key frame.
  • Only the key frame of each of the shots is used to determine whether a query object exists in the shot. For example, if there are p shots, the number of key frames is equal to p.
  • Referring to FIG. 3A, a video sequence and query images (1 through n) are input (step 300).
  • The video sequence is divided into one or more shots (1 through p), and a key frame is selected in each of the shots (step 302).
  • p key frames are buffered (step 304).
  • FIG. 4 shows an example of a video sequence divided into p shots and their key frames.
  • The first frame of each of the shots is selected as its key frame; the key frames are denoted KF1, KF2, KF3, ..., and KFp.
  • FIG. 5 shows an example of dividing a video sequence extracted from a soap opera into 8 shots and selecting their key frames.
  • The video sequence, consisting of 619 frames in total, is divided into 9 shots, and the key frame of each of the shots is designated by frame number.
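  • For illustration, steps 300 through 304 can be sketched as follows, assuming OpenCV is used for decoding and histogram computation; the Bhattacharyya threshold of 0.5 and the 8x8x8 binning are illustrative assumptions, not values from the patent:

```python
import cv2

def split_into_shots(video_path, threshold=0.5):
    """Divide a video into shots by colour-histogram difference and
    select the first frame of each shot as its key frame (KF1 ... KFp)."""
    cap = cv2.VideoCapture(video_path)
    shots, start, prev_hist, idx = [], 0, None, 0
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        hist = cv2.calcHist([frame], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        hist = cv2.normalize(hist, hist).flatten()
        if prev_hist is not None:
            # A large colour difference between consecutive frames
            # is taken as a shot boundary.
            d = cv2.compareHist(prev_hist, hist, cv2.HISTCMP_BHATTACHARYYA)
            if d > threshold:
                shots.append((start, idx - 1))
                start = idx
        prev_hist = hist
        idx += 1
    cap.release()
    shots.append((start, idx - 1))
    key_frames = [s for s, _ in shots]   # first frame of each shot
    return shots, key_frames
```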
  • An object region is extracted from each of the key frames based on the query objects (steps 306 through 312). Preferably, it is determined whether an object similar to a query object exists in each of the key frames, based on colour histograms or on features such as the texture or structure of the multi-colour regions constituting objects.
  • n query objects are input one by one.
  • A first query object is loaded (step 306). It is checked whether an object similar to the first query object exists in each of the p key frames, and if such an object exists, it is extracted as an initial object region for the corresponding key frame (step 308). Pixels which belong to the initial object region of the key frame are turned on ("1") and the remaining pixels are turned off ("0"), thereby generating a shot mask image for the key frame (step 310). It is then determined whether the query object number is greater than n (step 312); if not, the next query object is loaded (step 314). These operations are repeated for all n query objects, so that n-by-p shot mask images are created for the p key frames and n query objects (the pixels of a shot mask image without an object region are all turned off ("0")).
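  • A hedged sketch of steps 308 and 310, using colour-histogram back-projection as one plausible realisation of the colour-based similarity check (the bin counts, the binarisation threshold, and the presence test are illustrative assumptions, not the patent's exact method):

```python
import cv2
import numpy as np

def query_histogram(query_bgr):
    """Hue-saturation histogram of a query object image."""
    hsv = cv2.cvtColor(query_bgr, cv2.COLOR_BGR2HSV)
    h = cv2.calcHist([hsv], [0, 1], None, [30, 32], [0, 180, 0, 256])
    cv2.normalize(h, h, 0, 255, cv2.NORM_MINMAX)
    return h

def shot_mask_for_query(key_frame_bgr, query_bgr, min_match=0.3):
    """Return a binary shot mask image: object pixels ON ("1"), rest OFF ("0")."""
    q_hist = query_histogram(query_bgr)
    hsv_key = cv2.cvtColor(key_frame_bgr, cv2.COLOR_BGR2HSV)
    # Probability that each key-frame pixel belongs to the query object.
    backproj = cv2.calcBackProject([hsv_key], [0, 1], q_hist,
                                   [0, 180, 0, 256], scale=1)
    _, mask = cv2.threshold(backproj, 50, 1, cv2.THRESH_BINARY)
    # If too few pixels match, assume the object is absent from this key frame.
    if mask.sum() < min_match * query_bgr.shape[0] * query_bgr.shape[1]:
        return np.zeros(mask.shape, np.uint8)
    return mask
```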
  • Object regions are tracked with respect to all the frames of each of the shots based on the initial object regions (steps 316 through 330).
  • The initial object regions, which are extracted from the key frames of the shots based on the query images in the previous processes, are extended over the remaining frames of each of the shots.
  • The location and area (range) of an object region corresponding to the query object are tracked in all the frames of each of the shots based on colour information of the query image corresponding to the query object.
  • A more accurate object region can be extracted by checking the similarity between tracked object regions and by using both a motion model and colour information, taking into account changes in the location and area of the object image.
  • First, a shot mask image for the first query object is loaded (step 318).
  • If the pixels of the loaded shot mask image are all turned off ("0"), i.e., when it is determined that the loaded shot mask image does not include an object region corresponding to the first query object (step 320), it is checked whether the shot number is greater than p (step 326); if the shot number is not greater than p, the next shot mask image is loaded (step 328).
  • Otherwise, the object region is tracked in all the frames of the corresponding shot (step 322), thereby generating frame mask images for the corresponding shot based on the first query object (step 324).
  • The above-described operations are repeated for all the shots and for all the query objects (steps 330 and 332).
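  • A minimal sketch of the tracking loop (steps 318 through 332), using OpenCV's CamShift as a stand-in for the patent's combined motion-and-colour tracker (an assumption, not the patent's exact method): the initial object region from the shot mask image seeds a search window, the window is tracked through the frames of the shot, and each tracked region is written out as a binary frame mask image. q_hist is the hue-saturation histogram of the query object from the previous sketch.

```python
import cv2
import numpy as np

def track_object_in_shot(frames_bgr, shot_mask, q_hist):
    """frames_bgr: frames of one shot; shot_mask: initial binary mask;
    q_hist: hue-saturation histogram of the query object."""
    rows, cols = np.nonzero(shot_mask)
    if rows.size == 0:   # object absent from this shot: all masks stay OFF
        return [np.zeros(shot_mask.shape, np.uint8) for _ in frames_bgr]
    # Initial search window from the initial object region.
    x, y = int(cols.min()), int(rows.min())
    window = (x, y, int(cols.max()) - x + 1, int(rows.max()) - y + 1)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1)
    frame_masks = []
    for frame in frames_bgr:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        backproj = cv2.calcBackProject([hsv], [0, 1], q_hist,
                                       [0, 180, 0, 256], scale=1)
        rect, window = cv2.CamShift(backproj, window, term)
        mask = np.zeros(frame.shape[:2], np.uint8)
        # Tracked region ON ("1"); everything else stays OFF ("0").
        pts = cv2.boxPoints(rect).astype(np.int32)
        cv2.fillPoly(mask, [pts], 1)
        frame_masks.append(mask)
    return frame_masks
```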
  • The frame mask images generated based on the query objects are merged in each frame, and the query object regions existing in each of the frames are labelled (step 334).
  • Up to n-by-m frame mask images can be generated through the previous processes, where m is the total number of frames, and merged frame by frame. In practice, however, not all frames include all n query objects, so the number of generated frame mask images is less than n-by-m.
  • Each of the query objects has a unique label value between 1 and 255, and the pixels of the query object regions existing in the merged frames have the unique value assigned to the corresponding query object.
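  • A small sketch of the merging of step 334, assuming each query object has been assigned a unique label value between 1 and 255:

```python
import numpy as np

def merge_frame_masks(frame_masks, labels):
    """Merge per-query-object binary masks of one frame into a single
    object-labelled image; background pixels stay 0 (OFF)."""
    labelled = np.zeros(frame_masks[0].shape, np.uint8)
    for mask, label in zip(frame_masks, labels):
        labelled[mask > 0] = label   # later objects overwrite overlaps
    return labelled
```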
  • FIG. 6 shows an example of a frame image and query objects existing in the frame image.
  • An arbitrary frame image, shown on the left, contains a plurality of query objects: a desk diary 552, a necklace 553, a cup 554, clothes 555, and a background 551.
  • FIG. 7 shows an example of labelling objects with label numbers. As shown in FIG. 7, each of the query objects has a unique label number. Thus, when the frame mask images generated based on the query objects are merged in each frame, each of the frame mask images is labelled with the corresponding unique label number, as shown on the right of FIG. 7.
  • FIG. 8 shows an example where an object is labelled with the centroid and the minimum area rectangle.
  • The centroid of the object region, which is marked with "X", and the minimum area rectangle enclosing or enclosed within the object region in a frame can be used instead of the unique label number; P1 and P2 denote diagonally opposite corners of the rectangle.
  • FIG. 9 shows an example of object labelling using the centroid and the coordinate values of the minimum area rectangle of FIG. 8.
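  • A hedged sketch of this alternative labelling, using an axis-aligned minimum enclosing rectangle for simplicity: the object region is described by its centroid (the "X" of FIG. 8) and the two diagonally opposite corners P1 and P2:

```python
import numpy as np

def centroid_and_rect(mask):
    """Centroid and minimum axis-aligned rectangle of a binary object mask."""
    rows, cols = np.nonzero(mask)
    centroid = (float(cols.mean()), float(rows.mean()))   # marked with "X"
    p1 = (int(cols.min()), int(rows.min()))               # top-left corner
    p2 = (int(cols.max()), int(rows.max()))               # bottom-right corner
    return centroid, p1, p2
```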
  • As described above, a video sequence is divided into a plurality of shots, each of which consists of a set of frames having a similar scene, and an initial object region is extracted for each of the shots by determining whether an object image exists in the key frames of the shots. Based on the initial object region extracted from each of the key frames, object regions are tracked in all frames of the shots, and the tracked object regions are then labelled to generate object-labelled images. Therefore, compared with conventional methods of extracting objects and generating object-labelled images, the present invention can be applied regardless of the degree of motion of an object, and the time required to extract query objects can be reduced. Also, the present invention can easily be applied to provide object based interactive services without the need for additional manual operations.
  • FIG. 10 shows an embodiment of an object based interactive service using the present invention.
  • Object images existing in each frame are labelled into object-labelled images and stored in the object DB 140 described with reference to FIG. 1.
  • The user's browser is provided with information on the object corresponding to the clicked object image, which is stored in the object DB 140.
  • the right side of FIG. 10 shows an example of information on the object.
  • The invention may be embodied in a general purpose digital computer by running a program from a computer usable medium, including but not limited to storage media such as magnetic storage media (e.g., ROMs, floppy disks, hard disks, etc.), optically readable media (e.g., CD-ROMs, DVDs, etc.), and carrier waves (e.g., transmissions over the Internet).
  • the present invention may be embodied as a computer usable medium having a computer readable program code unit for distributed computer systems connected through a network.
  • The frame mask images generated based on the query objects are merged in each frame, and thus the time required to extract a plurality of query objects from a frame can be reduced compared to conventional object extraction methods. Therefore, the present invention can easily be applied in creating, editing, and encoding moving picture data based on objects.
  • The present invention can widely be used in interactive Internet broadcasting, and can be adopted for preparing Internet based advertisement materials and contents, and as an authoring tool.

Claims (9)

  1. A method of labelling query objects in a video sequence based on images of the query objects, the labelled video sequence being for use in an object queryable interactive service, the method comprising the steps of:
    a. receiving the video sequence and receiving the images of the query objects (300);
    b. dividing the video sequence into one or more shots, each of which is a set of frames having a similar scene, and selecting one or more key frames from each of the shots (302);
    c. determining whether there exists an object similar to each of the query objects in each of the key frames and, if there is a similar object in a key frame, extracting the similar object as a corresponding query object based initial object region (308);
    d. tracking object regions in all frames of each of the shots based on the corresponding query object based initial object regions (322); and
    e. labelling the object regions tracked in each of the frames based on information on the corresponding query objects (334).
  2. The method of claim 1, wherein step c. further comprises: generating query object based shot mask images in all the key frames of the shots by setting pixels of the query object based initial object regions extracted from each of the key frames to a first value and setting the remaining pixels of each of the key frames to a second value (310).
  3. The method of claim 2, wherein step d. comprises:
    d1. tracking the object regions in all frames of each of the shots based on the corresponding query object based shot mask images and video feature values of the corresponding query objects (322); and
    d2. generating query object based frame mask images in all frames of each of the shots by setting pixels of the object regions tracked in each of the frames to a first value and setting the remaining pixels of each of the frames to a second value (324).
  4. The method of claim 3, wherein in step e. each of the object regions in each frame is labelled with a unique number assigned to the corresponding query image, or with coordinate information of the corresponding query image in each frame.
  5. A computer program product loadable into a digital computer, comprising code for performing the steps of a method according to any one of the preceding claims when the product is run on the computer.
  6. An apparatus for labelling query objects in a video sequence based on images of the query objects, the labelled video sequence being for use in an object queryable interactive system, the apparatus comprising:
    a video sequence receiving unit (200) for receiving the video sequence, and a query image receiving unit (210) for receiving the images of the query objects;
    a shot and key frame setting unit (220) arranged to divide the video sequence into one or more shots, each of which is a set of frames having a similar scene, and to select one or more key frames from each of the shots;
    an initial object region extractor (230) arranged to determine whether there exists an object similar to each of the query objects in each of the key frames and, if there is a similar object in a key frame, to extract the similar object as a corresponding query object based initial object region;
    an object region tracker (240) arranged to track object regions in all frames of each of the shots based on the corresponding query object based initial object regions; and
    an object-labelled image generator (250) arranged to label the object regions tracked in each of the frames based on information on the corresponding query objects.
  7. The apparatus of claim 6, wherein the initial object region extractor (230) is further arranged to generate query object based shot mask images in all the key frames of each of the shots by setting pixels of the query object based initial object regions extracted from each of the key frames to a first value and setting the remaining pixels of each of the key frames to a second value.
  8. The apparatus of claim 7, wherein the object region tracker (240) tracks the object regions in all frames of each of the shots based on the corresponding query object based shot mask images and video feature values of the corresponding query objects, and generates query object based frame mask images in all frames of each of the shots by setting pixels of the object regions tracked in each of the frames to a first value and setting the remaining pixels of each of the frames to a second value.
  9. The apparatus of any one of claims 6 to 8, wherein the object-labelled image generator (250) labels each of the object regions in each frame with a unique number assigned to the corresponding query image, or with coordinate information of the corresponding query image in each frame.
EP20010307388 2001-01-20 2001-08-30 Apparatus and method for generating object-labelled images in a video sequence Expired - Lifetime EP1225518B1 (de)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
KR2001003423 2001-01-20
KR1020010003423A KR100355382B1 (ko) 2001-01-20 2001-01-20 영상 시퀀스에서의 객체 레이블 영상 생성장치 및 그 방법

Publications (3)

Publication Number Publication Date
EP1225518A2 (de) 2002-07-24
EP1225518A3 (de) 2003-01-02
EP1225518B1 (de) 2006-01-18

Family

ID=19704920

Family Applications (1)

Application Number Title Priority Date Filing Date
EP20010307388 Expired - Lifetime EP1225518B1 (de) Apparatus and method for generating object-labelled images in a video sequence

Country Status (6)

Country Link
US (1) US7024020B2 (de)
EP (1) EP1225518B1 (de)
JP (1) JP4370387B2 (de)
KR (1) KR100355382B1 (de)
CN (1) CN1222897C (de)
DE (1) DE60116717T2 (de)

Families Citing this family (65)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6774908B2 (en) 2000-10-03 2004-08-10 Creative Frontier Inc. System and method for tracking an object in a video and linking information thereto
US20030098869A1 (en) * 2001-11-09 2003-05-29 Arnold Glenn Christopher Real time interactive video system
KR100486709B1 (ko) * 2002-04-17 2005-05-03 삼성전자주식회사 객체기반 대화형 동영상 서비스 시스템 및 그 방법
JP4300767B2 (ja) * 2002-08-05 2009-07-22 ソニー株式会社 ガイドシステム、コンテンツサーバ、携帯装置、情報処理方法、情報処理プログラム、及び記憶媒体
US7647301B2 (en) * 2003-08-08 2010-01-12 Open-Circuit, Ltd. Information provision apparatus, format separation apparatus, information provision method and program
US7299126B2 (en) * 2003-11-03 2007-11-20 International Business Machines Corporation System and method for evaluating moving queries over moving objects
US7664292B2 (en) * 2003-12-03 2010-02-16 Safehouse International, Inc. Monitoring an output from a camera
US7697026B2 (en) * 2004-03-16 2010-04-13 3Vr Security, Inc. Pipeline architecture for analyzing multiple video streams
US20050229227A1 (en) * 2004-04-13 2005-10-13 Evenhere, Inc. Aggregation of retailers for televised media programming product placement
GB2414615A (en) * 2004-05-28 2005-11-30 Sony Uk Ltd Object detection, scanning and labelling
US7519200B2 (en) 2005-05-09 2009-04-14 Like.Com System and method for enabling the use of captured images through recognition
US8732025B2 (en) * 2005-05-09 2014-05-20 Google Inc. System and method for enabling image recognition and searching of remote content on display
US7809192B2 (en) * 2005-05-09 2010-10-05 Like.Com System and method for recognizing objects from images and identifying relevancy amongst images and information
US20080177640A1 (en) 2005-05-09 2008-07-24 Salih Burak Gokturk System and method for using image analysis and search in e-commerce
US7783135B2 (en) 2005-05-09 2010-08-24 Like.Com System and method for providing objectified image renderings using recognition information from images
US7542610B2 (en) * 2005-05-09 2009-06-02 Like.Com System and method for use of images with recognition analysis
US7945099B2 (en) * 2005-05-09 2011-05-17 Like.Com System and method for use of images with recognition analysis
US7660468B2 (en) * 2005-05-09 2010-02-09 Like.Com System and method for enabling image searching using manual enrichment, classification, and/or segmentation
WO2006122164A2 (en) * 2005-05-09 2006-11-16 Riya, Inc. System and method for enabling the use of captured images through recognition
US7657126B2 (en) * 2005-05-09 2010-02-02 Like.Com System and method for search portions of objects in images and features thereof
US7657100B2 (en) 2005-05-09 2010-02-02 Like.Com System and method for enabling image recognition and searching of images
US7760917B2 (en) 2005-05-09 2010-07-20 Like.Com Computer-implemented method for performing similarity searches
US7809722B2 (en) * 2005-05-09 2010-10-05 Like.Com System and method for enabling search and retrieval from image files based on recognized information
US8494951B2 (en) * 2005-08-05 2013-07-23 Bgc Partners, Inc. Matching of trading orders based on priority
US20070208629A1 (en) * 2006-03-02 2007-09-06 Jung Edward K Y Shopping using exemplars
US8600832B2 (en) 2006-03-03 2013-12-03 The Invention Science Fund I, Llc Considering selling exemplar-based goods, items, or services
US8571272B2 (en) * 2006-03-12 2013-10-29 Google Inc. Techniques for enabling or establishing the use of face recognition algorithms
US9690979B2 (en) 2006-03-12 2017-06-27 Google Inc. Techniques for enabling or establishing the use of face recognition algorithms
US8233702B2 (en) * 2006-08-18 2012-07-31 Google Inc. Computer implemented technique for analyzing images
US8341152B1 (en) 2006-09-12 2012-12-25 Creatier Interactive Llc System and method for enabling objects within video to be searched on the internet or intranet
CN100413327C (zh) * 2006-09-14 2008-08-20 浙江大学 一种基于轮廓时空特征的视频对象标注方法
KR100853267B1 (ko) * 2007-02-02 2008-08-20 전남대학교산학협력단 스테레오 시각 정보를 이용한 복수 인물 추적 방법 및 그시스템
CN100568958C (zh) * 2007-02-14 2009-12-09 成都索贝数码科技股份有限公司 一种基于网络的节目远程编辑方法
AU2008260048B2 (en) * 2007-05-30 2012-09-13 Creatier Interactive, Llc Method and system for enabling advertising and transaction within user generated video content
US7929764B2 (en) * 2007-06-15 2011-04-19 Microsoft Corporation Identifying character information in media content
US8416981B2 (en) 2007-07-29 2013-04-09 Google Inc. System and method for displaying contextual supplemental content based on image content
CN101420595B (zh) * 2007-10-23 2012-11-21 华为技术有限公司 一种描述和捕获视频对象的方法及设备
US9189794B2 (en) * 2008-02-11 2015-11-17 Goldspot Media, Inc. Method and apparatus for maximizing brand exposure in a minimal mobile display
US20110110649A1 (en) * 2008-06-19 2011-05-12 Thomson Licensing Adaptive video key frame selection
US20100070529A1 (en) * 2008-07-14 2010-03-18 Salih Burak Gokturk System and method for using supplemental content items for search criteria for identifying other content items of interest
US8239359B2 (en) * 2008-09-23 2012-08-07 Disney Enterprises, Inc. System and method for visual search in a video media player
US9715701B2 (en) * 2008-11-24 2017-07-25 Ebay Inc. Image-based listing using image of multiple items
CN102075689A (zh) * 2009-11-24 2011-05-25 新奥特(北京)视频技术有限公司 一种快速制作动画的字幕机
JP4784709B1 (ja) * 2011-03-10 2011-10-05 オムロン株式会社 対象物追跡装置、対象物追跡方法、および制御プログラム
MX2013014731A (es) * 2011-06-17 2014-02-11 Thomson Licensing Navegacion de video a traves de ubicacion de objetos.
US8798362B2 (en) * 2011-08-15 2014-08-05 Hewlett-Packard Development Company, L.P. Clothing search in images
CN102930887A (zh) * 2012-10-31 2013-02-13 深圳市宜搜科技发展有限公司 一种音频文件处理方法及系统
US9626567B2 (en) 2013-03-13 2017-04-18 Visible Measures Corp. Automated video campaign building
US9378556B2 (en) * 2014-04-25 2016-06-28 Xerox Corporation Method for reducing false object detection in stop-and-go scenarios
CN103970906B (zh) * 2014-05-27 2017-07-04 百度在线网络技术(北京)有限公司 视频标签的建立方法和装置、视频内容的显示方法和装置
US11438510B2 (en) 2016-03-22 2022-09-06 Jung Yoon Chun System and method for editing video contents automatically technical field
KR101717014B1 (ko) * 2016-04-21 2017-03-15 (주)노바빈 비디오 컨텐츠 자동 편집 시스템 및 자동 편집 방법
CN107798272B (zh) * 2016-08-30 2021-11-02 佳能株式会社 快速多目标检测与跟踪系统
KR101751863B1 (ko) * 2017-03-08 2017-06-28 (주)잼투고 비디오 컨텐츠 자동 편집 시스템 및 자동 편집 방법
CN108629224B (zh) * 2017-03-15 2019-11-05 北京京东尚科信息技术有限公司 信息呈现方法和装置
KR101827985B1 (ko) * 2017-05-19 2018-03-22 (주)잼투고 비디오 컨텐츠 자동 편집 시스템 및 자동 편집 방법
JP6856914B2 (ja) * 2017-07-18 2021-04-14 ハンジョウ タロ ポジショニング テクノロジー カンパニー リミテッドHangzhou Taro Positioning Technology Co.,Ltd. インテリジェントな物体追跡
CN110119650A (zh) * 2018-02-06 2019-08-13 优酷网络技术(北京)有限公司 信息展示方法及装置
TWI666595B (zh) 2018-02-26 2019-07-21 財團法人工業技術研究院 物件標示系統及方法
CN109284404A (zh) * 2018-09-07 2019-01-29 成都川江信息技术有限公司 一种将实时视频中的场景坐标与地理信息相匹配的方法
JP7121277B2 (ja) * 2018-09-28 2022-08-18 日本電信電話株式会社 情報同期装置、情報同期方法及び情報同期プログラム
KR102604937B1 (ko) 2018-12-05 2023-11-23 삼성전자주식회사 캐릭터를 포함하는 동영상을 생성하기 위한 전자 장치 및 그에 관한 방법
KR101997799B1 (ko) * 2018-12-17 2019-07-08 엘아이지넥스원 주식회사 관심영역 연관 영상 제공시스템
KR102028319B1 (ko) * 2018-12-17 2019-11-04 엘아이지넥스원 주식회사 연관 영상 제공장치 및 방법
US11823476B2 (en) 2021-05-25 2023-11-21 Bank Of America Corporation Contextual analysis for digital image processing

Family Cites Families (14)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5109435A (en) * 1988-08-08 1992-04-28 Hughes Aircraft Company Segmentation method for use against moving objects
JPH07104921B2 (ja) * 1989-11-17 1995-11-13 松下電器産業株式会社 画像閾値決定方法
JPH0766448B2 (ja) * 1991-06-25 1995-07-19 富士ゼロックス株式会社 画像信号分析装置
US5500904A (en) * 1992-04-22 1996-03-19 Texas Instruments Incorporated System and method for indicating a change between images
JP3329408B2 (ja) * 1993-12-27 2002-09-30 日本電信電話株式会社 動画像処理方法および装置
JP3123587B2 (ja) * 1994-03-09 2001-01-15 日本電信電話株式会社 背景差分による動物体領域抽出方法
JP3569992B2 (ja) * 1995-02-17 2004-09-29 株式会社日立製作所 移動体検出・抽出装置、移動体検出・抽出方法及び移動体監視システム
JPH09282456A (ja) * 1996-04-18 1997-10-31 Matsushita Electric Ind Co Ltd 画像ラベリング装置および画像検索装置
US5943442A (en) * 1996-06-12 1999-08-24 Nippon Telegraph And Telephone Corporation Method of image processing using parametric template matching
EP1008064A4 (de) 1997-05-05 2002-04-17 Univ Columbia Algorithmen und system für objektorientierte inhaltsbasierte videosuche
JP3787019B2 (ja) * 1997-07-18 2006-06-21 日本放送協会 画像の領域分割処理用ラベルマーカ生成装置および画像の領域分割処理装置
KR100304662B1 (ko) * 1998-01-21 2001-09-29 윤종용 2차원 영상 시퀀스를 이용한 스테레오 영상 생성장치 및 방법
KR100361939B1 (ko) * 1999-07-27 2002-11-22 학교법인 한국정보통신학원 객체 움직임을 이용한 mpeg 비디오 시퀀스의 데이터 베이스 구축 및 검색 방법과 그 기록 매체
KR100331050B1 (ko) * 2000-06-01 2002-04-19 송종수 동영상 데이터상의 객체 추적 방법

Also Published As

Publication number Publication date
KR100355382B1 (ko) 2002-10-12
CN1367616A (zh) 2002-09-04
KR20020062429A (ko) 2002-07-26
US7024020B2 (en) 2006-04-04
EP1225518A3 (de) 2003-01-02
DE60116717T2 (de) 2006-11-02
CN1222897C (zh) 2005-10-12
US20020097893A1 (en) 2002-07-25
DE60116717D1 (de) 2006-04-06
EP1225518A2 (de) 2002-07-24
JP2002232839A (ja) 2002-08-16
JP4370387B2 (ja) 2009-11-25

Similar Documents

Publication Publication Date Title
EP1225518B1 (de) Apparatus and method for generating object-labelled images in a video sequence
US6342904B1 (en) Creating a slide presentation from full motion video
US5923365A (en) Sports event video manipulating system for highlighting movement
Hampapur et al. Production model based digital video segmentation
Colombari et al. Segmentation and tracking of multiple video objects
Agbinya et al. Multi-object tracking in video
US20030091237A1 (en) Identification and evaluation of audience exposure to logos in a broadcast event
US20200374491A1 (en) Forensic video exploitation and analysis tools
GB2452512A (en) Object Tracking Including Occlusion Logging
US11853357B2 (en) Method and system for dynamically analyzing, modifying, and distributing digital images and video
JP3315888B2 (ja) 動画像表示装置および表示方法
US20040207656A1 (en) Apparatus and method for abstracting summarization video using shape information of object, and video summarization and indexing system and method using the same
CN102638686B (zh) 处理动态图像的方法及设备
Li et al. Motion-focusing key frame extraction and video summarization for lane surveillance system
Wang et al. Taxonomy of directing semantics for film shot classification
Moon et al. Lee
WO1997012480A2 (en) Method and apparatus for implanting images into a video sequence
JP3197633B2 (ja) 移動体の自動追尾装置
Messer et al. Automatic sports classification
Latecki et al. Extraction of key frames from videos by optimal color composition matching and polygon simplification
JPWO2004068414A1 (ja) 注目物体の出現位置表示装置
Chapdelaine et al. Designing caption production rules based on face, text, and motion detection
AU3910299A (en) Linking metadata with a time-sequential digital signal
Fablet et al. Spatio-temporal segmentation and general motion characterization for video indexing and retrieval
CN110557675A (zh) 一种对视频节目内容进行分析、标注及时基校正的方法

Legal Events

Date Code Title Description
PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

17P Request for examination filed

Effective date: 20010926

AK Designated contracting states

Kind code of ref document: A2

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

PUAL Search report despatched

Free format text: ORIGINAL CODE: 0009013

AK Designated contracting states

Kind code of ref document: A3

Designated state(s): AT BE CH CY DE DK ES FI FR GB GR IE IT LI LU MC NL PT SE TR

AX Request for extension of the european patent

Free format text: AL;LT;LV;MK;RO;SI

17Q First examination report despatched

Effective date: 20030731

AKX Designation fees paid

Designated state(s): DE FR GB

GRAP Despatch of communication of intention to grant a patent

Free format text: ORIGINAL CODE: EPIDOSNIGR1

GRAS Grant fee paid

Free format text: ORIGINAL CODE: EPIDOSNIGR3

GRAA (expected) grant

Free format text: ORIGINAL CODE: 0009210

AK Designated contracting states

Kind code of ref document: B1

Designated state(s): DE FR GB

REG Reference to a national code

Ref country code: GB

Ref legal event code: FG4D

REF Corresponds to:

Ref document number: 60116717

Country of ref document: DE

Date of ref document: 20060406

Kind code of ref document: P

ET Fr: translation filed
PLBE No opposition filed within time limit

Free format text: ORIGINAL CODE: 0009261

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT

26N No opposition filed

Effective date: 20061019

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 16

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 17

REG Reference to a national code

Ref country code: FR

Ref legal event code: PLFP

Year of fee payment: 18

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: FR

Payment date: 20180727

Year of fee payment: 18

Ref country code: DE

Payment date: 20180725

Year of fee payment: 18

REG Reference to a national code

Ref country code: DE

Ref legal event code: R079

Ref document number: 60116717

Country of ref document: DE

Free format text: PREVIOUS MAIN CLASS: G06F0017300000

Ipc: G06F0016000000

PGFP Annual fee paid to national office [announced via postgrant information from national office to epo]

Ref country code: GB

Payment date: 20180725

Year of fee payment: 18

REG Reference to a national code

Ref country code: DE

Ref legal event code: R119

Ref document number: 60116717

Country of ref document: DE

GBPC Gb: european patent ceased through non-payment of renewal fee

Effective date: 20190830

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: DE

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20200303

Ref country code: FR

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190831

PG25 Lapsed in a contracting state [announced via postgrant information from national office to epo]

Ref country code: GB

Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES

Effective date: 20190830