WO2004034708A1 - Apparatus and method for separately providing additional information about each object in a digital broadcast image - Google Patents

Apparatus and method for separately providing additional information about each object in a digital broadcast image

Info

Publication number
WO2004034708A1
Authority
WO
WIPO (PCT)
Prior art keywords
unit
information
tracking
unit image
scene transition
Prior art date
Application number
PCT/KR2002/001895
Other languages
English (en)
Inventor
Seong-Whan Lee
Sang-Cheol Park
Seong-Hoon Lim
Original Assignee
Virtualmedia Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Virtualmedia Co., Ltd. filed Critical Virtualmedia Co., Ltd.
Priority to PCT/KR2002/001895 priority Critical patent/WO2004034708A1/fr
Priority to AU2002348647A priority patent/AU2002348647A1/en
Publication of WO2004034708A1 publication Critical patent/WO2004034708A1/fr


Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/20 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using video object coding
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N19/00 - Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/46 - Embedding additional information in the video signal during the compression process

Definitions

  • the present invention relates to an apparatus and method for providing additional information regarding a particular object in a digital broadcast image, and more particularly, to an apparatus and method for recognizing and tracking a particular object corresponding to a user's setting in a digital broadcast image and providing additional information regarding the particular object.
  • Additional information services are services that provide additional information together with a digital broadcast image.
  • In such services, additional information regarding at least one object included in the motion image is also displayed on a screen together with the motion image.
  • Conventionally, additional information regarding all objects in the motion image is provided in units of frames over time.
  • the present invention provides an apparatus and method for extracting a particular object designated by a user from a digital broadcast image or a normal motion image, recognizing the extracted object, and providing additional information regarding the object while the object is displayed on a screen. According to an aspect of the present invention, there is provided an apparatus for providing additional information regarding a particular object in a digital broadcast image.
  • the apparatus includes a motion image input unit which receives a motion image signal that is a stream of sequential unit images; a user command input unit which receives a user command; a scene transition detection unit which analyzes a motion image signal received through the motion image input unit and detects scene transition information that is information on a unit image having scene transition; a target object setting unit which receives scene transition information from the scene transition detection unit, a motion image signal from the motion image input unit, and object designation information on an object designated by a user from the user command input unit, detects a unit image corresponding to the object designation information among a unit image corresponding to the scene transition information and unit images succeeding the unit image corresponding to the scene transition information, sets a target area of the object in the detected unit image, and detects an initial position of the object; an object processing unit which receives object target area setting information resulting from the setting of the target area from the target object setting unit, scene transition information from the scene transition detection unit, and a motion image signal from the motion image input unit, sequentially extracts an object from each of a unit image corresponding to the object target area setting information and unit images succeeding it, tracks a moving position of the extracted object over a stream of those unit images, and outputs object tracking information; and an additional information insertion unit which receives the object tracking information from the object processing unit and inserts predetermined additional information regarding the object into a range of unit images over which the object has been tracked.
  • a method for providing additional information regarding a particular object in a digital broadcast image includes (a) receiving a motion image signal that is a stream of sequential unit images, analyzing the motion image signal, and detecting scene transition information regarding a unit image having scene transition; (b) receiving object designation information regarding an object designated by a user, detecting a unit image corresponding to the object designation information among a unit image corresponding to the scene transition information detected in step (a) and unit images succeeding the unit image corresponding to the scene transition information and preceding a unit image corresponding to next scene transition information, and setting an object target area in the detected unit image; (c) sequentially extracting the object from a unit image corresponding to the object target area set in step (b) and unit images succeeding the unit image corresponding to the object target area and preceding the unit image corresponding to the next scene transition information; (d) verifying whether the object extracted from each of the unit images in step (c) exists in each unit image; (e) tracking a moving position of the object over the stream of the unit images verified in step (d) and outputting object tracking information; and (f) inserting additional information regarding the object into a range of unit images over which the object has been tracked, based on the object tracking information.
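Read as a whole, steps (a) through (f) form a simple pipeline. The following Python sketch is a hypothetical outline of that flow only: every helper it calls (detect_scene_transitions, set_target_area, extract_object, verify_object, insert_info) is an assumed name introduced for illustration, not an API defined by the patent; detect_scene_transitions and some of the record types are fleshed out in later sketches.

```python
def provide_additional_info(frames, designation, info_text):
    """Hypothetical outline of method steps (a)-(f); all helpers used
    here are assumed names, not interfaces defined by the patent."""
    metadata = {}
    transitions = detect_scene_transitions(frames)        # (a) scene transitions
    boundaries = transitions + [len(frames)]
    for start, end in zip(transitions, boundaries[1:]):
        # (b) detect the unit image the user designated within this scene
        # and set the object target area in it
        target = set_target_area(frames, start, end, designation)
        if target is None:
            continue                    # object was not designated in this scene
        tracked = []
        for index in range(target.frame_number, end):
            obj = extract_object(frames[index], target)   # (c) extract the object
            if obj is None or not verify_object(obj):     # (d) verify existence
                break
            tracked.append((index, obj.position))         # (e) track its position
        insert_info(metadata, tracked, info_text)         # (f) insert information
    return metadata
```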
  • FIG. 1 is a schematic block diagram of an apparatus for providing additional information regarding an object in a digital broadcast image, according to an embodiment of the present invention.
  • FIG. 2 is a schematic block diagram of an object processing unit according to an embodiment of the present invention.
  • FIG. 3 is a flowchart of a method for providing additional information regarding an object in a digital broadcast image, according to an embodiment of the present invention.
  • FIG. 4 is a flowchart of an operation of detecting scene transition according to an embodiment of the present invention.
  • FIG. 5 is a flowchart of an operation of tracking an object according to an embodiment of the present invention.
  • FIG. 1 is a schematic block diagram of an apparatus for providing additional information regarding an object in a digital broadcast image, according to an embodiment of the present invention.
  • the apparatus includes a motion image input unit 100, a scene transition detection unit 110, a target object setting unit 120, an object processing unit 130, an additional information insertion unit 140, an output unit 150, a first buffer 160, a second buffer 170, and a user command input unit 180.
  • the motion image input unit 100 receives a motion image signal, i.e., a stream of sequential unit images, for example, frame_1 through frame_n.
  • the user command input unit 180 receives a user's command signals (for example, an object designation signal and an object tracking stop request signal).
  • the scene transition detection unit 110 sequentially receives the unit images, for example, frame_1 through frame_n, constituting the motion image signal from the motion image input unit 100 and stores them in the first buffer 160.
  • the scene transition detection unit 110 also compares a unit image (e.g., frame_t) currently stored in the first buffer 160 with a unit image (e.g., frame_(t-3)), which corresponds to scene transition information and has already been stored in the second buffer 170, detects a unit image having scene transition according to a comparison result, and stores scene transition information regarding the detected unit image in the second buffer 170.
  • When the comparison result indicates scene transition, the scene transition detection unit 110 determines the unit image (e.g., frame_t) currently stored in the first buffer 160 as a unit image having scene transition. Thereafter, the scene transition detection unit 110 stores the scene transition information regarding the unit image (e.g., frame_t) in the second buffer 170 and simultaneously stores a next unit image (e.g., frame_(t+1)) in the first buffer 160. Next, the scene transition detection unit 110 compares the unit image (e.g., frame_(t+1)) currently stored in the first buffer 160 with the unit image (e.g., frame_t) corresponding to scene transition information stored in the second buffer 170.
  • In other words, when scene transition is detected, the scene transition detection unit 110 detects the unit image (e.g., frame_t) stored in the first buffer 160 as a unit image having scene transition and stores scene transition information regarding the detected unit image (e.g., frame_t) in the second buffer 170. In addition, the scene transition detection unit 110 determines whether the unit image (e.g., frame_t) stored in the first buffer 160 is the last frame (e.g., frame_n) of the motion image signal.
  • When it is determined that the unit image stored in the first buffer 160 is the last frame of the motion image signal, the scene transition detection unit 110 terminates scene transition detection with respect to the motion image signal. However, when it is determined that the unit image stored in the first buffer 160 is not the last frame of the motion image signal, the scene transition detection unit 110 stores a next unit image (e.g., frame_(t+1)) in the first buffer 160 and compares the unit image (e.g., frame_(t+1)) with the unit image (e.g., frame_t), which is stored in the second buffer 170 and corresponds to scene transition information.
  • Meanwhile, when the comparison result does not indicate scene transition, the scene transition detection unit 110 determines that there is no scene transition. In this case, the scene transition detection unit 110 determines whether the unit image (e.g., frame_t) stored in the first buffer 160 is the last frame (e.g., frame_n) of the motion image signal. When it is determined that the unit image stored in the first buffer 160 is the last frame of the motion image signal, the scene transition detection unit 110 terminates scene transition detection with respect to the motion image signal.
  • However, when it is determined that the unit image is not the last frame, the scene transition detection unit 110 stores a next unit image (e.g., frame_(t+1)) in the first buffer 160 and compares the unit image (e.g., frame_(t+1)) with the unit image (e.g., frame_(t-3)), which is stored in the second buffer 170 and corresponds to scene transition information.
  • A representative example of the scene transition information detected by the scene transition detection unit 110 is the frame number of the unit image having scene transition.
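As one plausible realization of the buffer comparison described above, the detection unit could measure a histogram difference between the two buffered unit images and declare a transition when it exceeds a threshold. This is a minimal sketch under the assumption that grayscale histogram differencing is the metric; the patent does not prescribe a concrete comparison.

```python
import numpy as np

def has_scene_transition(current_frame, reference_frame, threshold=0.35):
    """Compare the unit image in the first buffer with the unit image
    corresponding to the stored scene transition information (second
    buffer). Histogram differencing is an assumed metric."""
    h1, _ = np.histogram(current_frame, bins=64, range=(0, 256))
    h2, _ = np.histogram(reference_frame, bins=64, range=(0, 256))
    h1 = h1 / max(h1.sum(), 1)  # normalize to probability distributions
    h2 = h2 / max(h2.sum(), 1)
    # Half the L1 distance lies in [0, 1]; scene transition is declared
    # when it exceeds the predetermined threshold value.
    return 0.5 * np.abs(h1 - h2).sum() > threshold
```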
  • the target object setting unit 120 receives scene transition information from the scene transition detection unit 110, a motion image signal from the motion image input unit 100, and object designation information regarding an object designated by a user from the user command input unit 180. Next, the target object setting unit 120 detects a unit image (e.g., frame_(t+1)) corresponding to the object designation information among a unit image (e.g., frame_t) corresponding to the scene transition information and unit images (e.g., frame_(t+1) through frame_(t+20)) which are input following the unit image (e.g., frame_t) corresponding to the current scene transition information and preceding a unit image (e.g., frame_(t+21)) corresponding to next scene transition information.
  • the target object setting unit 120 sets a target area of an object (hereinafter, referred to as an object target area) in the detected unit image (e.g., frame_(t+1)) and detects an initial position of the object. Thereafter, the target object setting unit 120 transmits object target area setting information, i.e., information on the unit image (e.g., frame_(t+1)) in which the object target area has been set, to the object processing unit 130.
  • the object target area setting information may include the object target area and a frame number of the unit image where the object target area has been set.
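As a concrete illustration of what the object target area setting information could carry, here is a minimal sketch; the field names, the (x, y, width, height) box convention, and the center-based initial position are assumptions, since the patent only states that the information may include the target area and a frame number.

```python
from dataclasses import dataclass
from typing import Tuple

@dataclass
class TargetAreaSetting:
    frame_number: int                       # unit image in which the area was set
    target_area: Tuple[int, int, int, int]  # assumed (x, y, width, height) box
    initial_position: Tuple[int, int]       # assumed (x, y) object center

def make_target_area_setting(frame_number, target_area):
    # Taking the initial position as the center of the designated area is
    # an illustrative choice, not mandated by the patent.
    x, y, w, h = target_area
    return TargetAreaSetting(frame_number, target_area, (x + w // 2, y + h // 2))
```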
  • the object processing unit 130 receives scene transition information from the scene transition detection unit 110, object target area setting information from the target object setting unit 120, and a motion image signal from the motion image input unit 100. Next, the object processing unit 130 sequentially extracts an object from each of a unit image (e.g., frame_(t+1)) corresponding to the object target area setting information and unit images (e.g., frame_(t+2) through frame_(t+20)) which are input following the unit image (e.g., frame_(t+1)) corresponding to the object target area setting information and preceding a unit image (e.g., frame_(t+21)) corresponding to next scene transition information.
  • the object processing unit 130 tracks a motion of the extracted object over a stream of the sequential unit images (e.g., frame_(t+1) through frame_(t+20)) and transmits tracking information of the object (hereinafter, referred to as object tracking information) to the additional information insertion unit 140.
  • the object tracking information may include frame numbers of unit images over which the object is extracted and tracked, and basic information regarding the object (such as a name of the object).
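The object tracking information could be carried in a small record like the sketch below; the field names and the per-frame position list are illustrative assumptions beyond what the patent states.

```python
from dataclasses import dataclass, field
from typing import List, Tuple

@dataclass
class ObjectTrackingInfo:
    object_name: str  # basic information regarding the object
    frame_numbers: List[int] = field(default_factory=list)          # tracked frames
    positions: List[Tuple[int, int]] = field(default_factory=list)  # assumed (x, y)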
  • the object processing unit 130 performs extraction and tracking of an object with respect to unit images (e.g., frame_(t+1) through frame_(t+20)) from a unit image (e.g., frame_(t+1)) corresponding to the object target area setting information received from the target object setting unit 120 to a unit image (e.g., frame_(t+20)) preceding a unit image (e.g., frame_(t+21)) corresponding to next scene transition information received from the scene transition detection unit 110.
  • the additional information insertion unit 140 receives object tracking information from the object processing unit 130, detects a range of unit images (e.g., frame_(t+1) through frame_(t+20)), over which an object has been tracked, based on the object tracking information, and inserts predetermined additional information regarding the object into the range of the unit images.
  • Various apparatuses and methods already known can be used to insert the additional information regarding the object into the range corresponding to the unit images with respect to which the object has been extracted and tracked.
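One plausible realization of the insertion step, sketched under the assumption that the additional information is attached as per-frame metadata keyed by frame number; since the patent explicitly defers to known insertion apparatuses and methods, any of those could be substituted here.

```python
def insert_additional_info(metadata_by_frame, tracking, info_text):
    """Attach additional information to every unit image in the range over
    which the object was tracked. A per-frame metadata dictionary is an
    assumed carrier, not the patent's prescribed mechanism."""
    for frame_number, position in zip(tracking.frame_numbers, tracking.positions):
        metadata_by_frame.setdefault(frame_number, []).append({
            "object": tracking.object_name,  # which object the info concerns
            "position": position,            # where the object is in the image
            "info": info_text,               # the additional information itself
        })
    return metadata_by_frame
```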
  • the output unit 150 receives object additional information as an insertion result from the additional information insertion unit 140, converts the object additional information to be suitable for a system (such as a digital TV, a mobile apparatus, or video on demand (VOD)) to which the object additional information will be provided, and outputs the converted object additional information to the system.
  • the object processing unit 130 includes an object extractor 131, an object recognizer 132, an object tracker 133, and an object management database (DB) 134.
  • the object extractor 131 receives scene transition information from the scene transition detection unit 110 shown in FIG. 1, object target area setting information from the target object setting unit 120 shown in FIG. 1, and a motion image signal from the motion image input unit 100 shown in FIG. 1.
  • the object extractor 131 sequentially extracts an object from each of a unit image (e.g., frame_(t+1)) corresponding to the object target area setting information and unit images (e.g., frame_(t+2) through frame_(t+20)) which are input following the unit image (e.g., frame_(t+1)) corresponding to the object target area setting information and preceding a unit image (e.g., frame_(t+21)) corresponding to next scene transition information.
  • the object extractor 131 transmits information on a unit image from which an object is extracted, i.e., object extraction information, to the object recognizer 132.
  • the object extraction information includes basic information regarding the extracted object and a frame number of a unit image from which the object is extracted.
  • While sequentially extracting an object from each of the unit image (e.g., frame_(t+1)) corresponding to the object target area setting information and the unit images (e.g., frame_(t+2) through frame_(t+20)) which are input following the unit image (e.g., frame_(t+1)) corresponding to the object target area setting information and preceding the unit image (e.g., frame_(t+21)) corresponding to the next scene transition information, when it is determined that a current unit image (e.g., frame_(t+19)) is the last frame of the motion image signal, the object extractor 131 terminates object extraction immediately after extracting an object from the current unit image (e.g., frame_(t+19)).
  • Various apparatuses and methods already known can be used to extract an object from a unit image.
  • the object recognizer 132 receives object extraction information from the object extractor 131 and a motion image signal from the motion image input unit 100 and verifies whether an object exists in a unit image (e.g., frame_(t+1)) corresponding to the object extraction information based on the object management DB 134.
  • the object management DB 134 stores basic information (e.g., a name of an object) regarding all objects existing in a motion image.
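The verification against the object management DB might look like the following sketch, assuming the DB maps object names to stored image templates and that normalized cross-correlation serves as the match score; neither detail is fixed by the patent.

```python
import numpy as np

def verify_object_exists(patch, object_name, object_db, min_score=0.7):
    """Return True when the extracted patch matches the template stored
    for object_name in the object management DB (an assumed layout)."""
    template = object_db.get(object_name)
    if template is None or patch.shape != template.shape:
        return False
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    # Normalized cross-correlation in [-1, 1]; an assumed verification metric.
    return denom > 0 and (a * b).sum() / denom >= min_score
```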
  • When an object is verified as existing in the unit image (e.g., frame_(t+1)) corresponding to the object extraction information, the object recognizer 132 transmits object recognition information to the object tracker 133. However, when an object is not verified as existing in the unit image (e.g., frame_(t+1)) corresponding to the object extraction information, the object recognizer 132 requests the target object setting unit 120 to reset an object. Then, the target object setting unit 120 requests a user to newly designate an object, and the above-described operations are repeated with respect to the newly designated object.
  • the object recognizer 132 sequentially receives object extraction information from the object extractor 131 and verifies whether an object exists in each of the unit images (e.g., frame_(t+2) through frame_(t+20)) corresponding to the object extraction information.
  • the object tracker 133 sequentially receives object recognition information regarding each of the unit images (e.g., frame_(t+1) through frame_(t+20)) from the object recognizer 132 and a motion image signal from the motion image input unit 100, tracks a moving position of an object over a stream of the unit images (e.g., frame_(t+1) through frame_(t+20)) corresponding to the sequentially received object recognition information, and outputs object tracking information according to the motion of the object.
  • While tracking the object, the object tracker 133 compares a size of the object in a previous unit image (e.g., frame_(t+10)) with a size of the object in a current unit image (e.g., frame_(t+11)). When a difference between the object size in the previous unit image and the object size in the current unit image is greater than a predetermined reference value, the object tracker 133 determines that the size of the object has changed and performs object size compensation with respect to the current unit image (e.g., frame_(t+11)) before performing object tracking over a next unit image (e.g., frame_(t+12)).
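The size comparison and compensation could be sketched as follows; the reference value and the averaging-based rescaling are illustrative assumptions, since the patent specifies neither.

```python
def compensate_object_size(prev_box, curr_box, reference_ratio=0.2):
    """Boxes are assumed (x, y, width, height). When the relative area
    change between the previous and current unit images exceeds the
    reference value, adjust the current box before tracking continues."""
    prev_area = prev_box[2] * prev_box[3]
    curr_area = curr_box[2] * curr_box[3]
    if prev_area == 0 or abs(curr_area - prev_area) / prev_area <= reference_ratio:
        return curr_box  # size considered unchanged; no compensation
    # Assumed compensation: keep the current center, move the size halfway
    # back toward the previous one to damp abrupt changes.
    x, y, w, h = curr_box
    cx, cy = x + w / 2.0, y + h / 2.0
    w2 = (w + prev_box[2]) // 2
    h2 = (h + prev_box[3]) // 2
    return (int(cx - w2 / 2), int(cy - h2 / 2), int(w2), int(h2))
```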
  • When object tracking stop request information generated by a user requesting stop of object tracking is received from the user command input unit 180 during object tracking, the object tracker 133 outputs object tracking information based on a result of tracking a moving position of the object over unit images (e.g., frame_(t+1) through frame_(t+18)) from the unit image (e.g., frame_(t+1)) corresponding to the object recognition information to a unit image (e.g., frame_(t+18)) corresponding to the object tracking stop request information.
  • Similarly, when the object is not detected in a current unit image (e.g., frame_(t+18)) during object tracking, the object tracker 133 determines that the object has disappeared and outputs object tracking information based on a result of tracking a moving position of the object over unit images (e.g., frame_(t+1) through frame_(t+17)) from the unit image (e.g., frame_(t+1)) corresponding to the object recognition information to a unit image (e.g., frame_(t+17)) preceding the current unit image (e.g., frame_(t+18)).
  • When it is determined that a current unit image (e.g., frame_(t+19)) is the last frame of the motion image signal during object tracking, the object tracker 133 outputs object tracking information based on a result of tracking a moving position of the object over unit images (e.g., frame_(t+1) through frame_(t+19)) from the unit image (e.g., frame_(t+1)) corresponding to the object recognition information to the current unit image (e.g., frame_(t+19)).
  • FIG. 3 is a flowchart of a method for providing additional information regarding an object in a digital broadcast image, according to an embodiment of the present invention.
  • a motion image signal that is a stream of sequential unit images is received in step S100.
  • Scene transition information, i.e., information on a unit image (e.g., frame_t) having scene transition, is detected by analyzing the motion image signal in step S110.
  • When object designation information is input by a user in step S120, a unit image (e.g., frame_(t+1)) corresponding to the object designation information is detected among a unit image (e.g., frame_t) corresponding to the scene transition information detected in step S110 and unit images (e.g., frame_(t+1) through frame_(t+20)) which are input following the unit image (e.g., frame_t) corresponding to the current scene transition information and preceding a unit image (e.g., frame_(t+21)) corresponding to next scene transition information; an object target area is then set in the detected unit image (e.g., frame_(t+1)), and an initial position of the object is detected in step S130.
  • Next, based on the object target area setting information, i.e., information on the unit image in which the object target area has been set in step S130, the object is sequentially extracted from each of a unit image (e.g., frame_(t+1)) corresponding to the object target area setting information and unit images (e.g., frame_(t+2) through frame_(t+20)) which are input following the unit image (e.g., frame_(t+1)) corresponding to the object target area setting information and preceding the unit image (e.g., frame_(t+21)) corresponding to the next scene transition information in step S140.
  • It is verified whether the object extracted from the unit images (e.g., frame_(t+1) through frame_(t+20)) exists in each of the unit images (e.g., frame_(t+1) through frame_(t+20)) in step S150.
  • When the object is recognized in all of the unit images (e.g., frame_(t+1) through frame_(t+20)) as a result of the verification in step S150, a moving position of the object is tracked over the stream of the unit images in step S160.
  • Otherwise, the method goes back to step S120.
  • Thereafter, a range of the unit images (e.g., frame_(t+1) through frame_(t+20)), over which the object has been tracked, is detected, and additional information regarding the object is inserted into the detected range in step S170.
  • A result of inserting the additional information regarding the object, i.e., object additional information, is then converted to be suitable for a target system and output.
  • FIG. 4 is a flowchart of an operation of detecting scene transition in step S110, according to an embodiment of the present invention.
  • a unit image is received in step S111 and then stored in the first buffer 160 shown in FIG. 1 in step S112.
  • In step S113, it is determined whether scene transition information, that is, information on a unit image (e.g., frame_(t-3)) having scene transition, is stored in the second buffer 170 shown in FIG. 1.
  • When no scene transition information is stored in the second buffer 170, the unit image (e.g., frame_t) stored in the first buffer 160 is determined as having scene transition, and scene transition information regarding the unit image (e.g., frame_t) stored in the first buffer 160 is stored in the second buffer 170 in step S114.
  • Thereafter, the operation returns to step S111, in which a unit image (e.g., frame_(t+1)) succeeding the unit image (e.g., frame_t) stored in the first buffer 160 is received, and steps S112 and S113 are repeated.
  • When scene transition information is stored in the second buffer 170, the unit image (e.g., frame_t) stored in the first buffer 160 is compared with the unit image (e.g., frame_(t-3)) corresponding to the scene transition information stored in the second buffer 170 in step S115.
  • When a difference between the compared unit images exceeds a predetermined threshold value in step S116, the unit image (e.g., frame_t) stored in the first buffer 160 is determined as having scene transition, and scene transition information regarding the unit image (e.g., frame_t) stored in the first buffer 160 is stored in the second buffer 170 so that the scene transition information stored in the second buffer 170 is updated in step S117.
  • Next, it is determined whether the unit image (e.g., frame_t) stored in the first buffer 160 is the last frame of the motion image signal in step S118.
  • When the unit image is the last frame, the operation ends.
  • Otherwise, the operation returns to step S111, in which a unit image (e.g., frame_(t+1)) succeeding the unit image (e.g., frame_t) stored in the first buffer 160 is received, and steps S112 through S118 are then performed.
  • When the difference between the unit image (e.g., frame_t) stored in the first buffer 160 and the unit image (e.g., frame_(t-3)) corresponding to the scene transition information stored in the second buffer 170 does not exceed the predetermined threshold value in step S116, the unit image (e.g., frame_t) is determined as not having scene transition, and it is determined whether the unit image (e.g., frame_t) stored in the first buffer 160 is the last frame of the motion image signal in step S118.
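Putting steps S111 through S118 together, the detection loop might be organized as below; iterating over a frame list stands in for the two buffers, and has_scene_transition is the assumed comparison sketched earlier.

```python
def detect_scene_transitions(frames, threshold=0.35):
    """Sketch of the S111-S118 loop. The 'first buffer' is the current
    frame of the iteration; 'reference' plays the role of the second
    buffer holding the last unit image having scene transition. Returns
    the frame numbers of unit images having scene transition."""
    transitions = []
    reference = None
    for index, frame in enumerate(frames):       # S111/S112: receive and buffer
        if reference is None:                    # S113/S114: empty second buffer
            reference = frame
            transitions.append(index)
        elif has_scene_transition(frame, reference, threshold):  # S115/S116
            reference = frame                    # S117: update the second buffer
            transitions.append(index)
        # S118: the loop ends by itself after the last frame of the signal
    return transitions
```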
  • Detecting scene transition from a motion image signal, including the steps shown in FIG. 4, is a technique already known in the field of the present invention, and thus various known techniques can be selectively used.
  • FIG. 5 is a flowchart of an operation of tracking the object in step S160, according to an embodiment of the present invention.
  • a moving position of the object is tracked over the stream of the unit images (e.g., frame_(t+1) through frame_(t+20)) in step S161.
  • When object tracking end information is received in step S162, the moving position of the object is tracked up to a unit image specified by the object tracking end information, and a result of tracking the object is output as object tracking information.
  • When the object tracking end information is object tracking stop request information input by a user, the moving position of the object is tracked up to a current unit image (e.g., frame_(t+18)) corresponding to the object tracking stop request information in step S163, and a result of tracking the object is output as object tracking information in step S164.
  • When the object disappears from a current unit image (e.g., frame_(t+18)), the moving position of the object is tracked up to a unit image (e.g., frame_(t+17)) preceding the current unit image (e.g., frame_(t+18)) from which the object disappears in step S165, and object tracking information is output in step S166.
  • When the object tracking end information is information on the last frame of the motion image signal, i.e., when a current unit image (e.g., frame_(t+19)) is the last frame, the moving position of the object is tracked up to the current unit image (e.g., frame_(t+19)) corresponding to the last frame in step S168, and object tracking information is output in step S169.
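The tracking loop of steps S161 through S169, with its three ways of ending (a user stop request, disappearance of the object, and the last frame of the signal), might be organized as in the sketch below; the tracker object, its locate method, and the stop_requested callback are illustrative assumptions, and ObjectTrackingInfo is the record sketched earlier.

```python
def track_object_positions(frames, start_index, tracker, stop_requested):
    """Sketch of steps S161-S169: track a moving position until a user
    stop request (S162/S163), until the object disappears (S165), or
    until the last frame of the motion image signal (S168), then output
    the accumulated object tracking information (S164/S166/S169)."""
    info = ObjectTrackingInfo(object_name=tracker.object_name)
    for index in range(start_index, len(frames)):   # S161: track per unit image
        if stop_requested():                        # S162/S163: user stop request
            break
        position = tracker.locate(frames[index])    # assumed tracker API
        if position is None:                        # S165: object disappeared
            break
        info.frame_numbers.append(index)
        info.positions.append(position)
        # reaching the end of the range is the last-frame case (S168)
    return info                                     # S164/S166/S169: output
```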
  • the operation goes to step S140 shown in FIG. 3.
  • a particular object in a digital broadcast motion image is extracted, recognized, and tracked so that additional information regarding the particular object can be provided.
  • the present invention can be applied to normal motion images as well as digital broadcast images.
  • the present invention enables providing additional information regarding only a particular object among the objects appearing in a motion image, rather than all of the objects.
  • the present invention can be widely used in service systems providing detailed information regarding goods online, T-commerce systems, etc.
  • the present invention has an incidental effect of indirectly advertising an object regarding which additional information is provided.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Analysis (AREA)

Abstract

The invention relates to an apparatus and method for providing additional information regarding a particular object in a digital broadcast image. When a user designates an object, the object is extracted from unit images constituting a motion image signal. A moving position of the object is tracked over a stream of unit images. Additional information regarding the object is inserted into a range of unit images over which the object has been tracked, so that the additional information regarding the object can be provided. Since a particular object in a motion image is extracted, recognized, and tracked, additional information concerning only that object can be provided. In addition, the apparatus and method can be used for systems providing detailed information regarding goods online, T-commerce systems, etc.
PCT/KR2002/001895 2002-10-10 2002-10-10 Apparatus and method for separately providing additional information about each object in a digital broadcast image WO2004034708A1 (fr)

Priority Applications (2)

Application Number Priority Date Filing Date Title
PCT/KR2002/001895 WO2004034708A1 (fr) 2002-10-10 2002-10-10 Apparatus and method for separately providing additional information about each object in a digital broadcast image
AU2002348647A AU2002348647A1 (en) 2002-10-10 2002-10-10 Method and apparatus for separately providing additional information on each object in digital broadcasting image

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/KR2002/001895 WO2004034708A1 (fr) 2002-10-10 2002-10-10 Apparatus and method for separately providing additional information about each object in a digital broadcast image

Publications (1)

Publication Number Publication Date
WO2004034708A1 (fr)

Family

ID=32089644

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/KR2002/001895 WO2004034708A1 (fr) 2002-10-10 2002-10-10 Apparatus and method for separately providing additional information about each object in a digital broadcast image

Country Status (2)

Country Link
AU (1) AU2002348647A1 (fr)
WO (1) WO2004034708A1 (fr)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130019267A1 (en) * 2010-06-28 2013-01-17 At&T Intellectual Property I, L.P. Systems and Methods for Producing Processed Media Content

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5636036A (en) * 1987-02-27 1997-06-03 Ashbey; James A. Interactive video system having frame recall dependent upon user input and current displayed image
KR20000057859A (ko) * 1999-02-01 2000-09-25 김영환 Method and apparatus for describing motion activity characteristics of a moving picture
US6169573B1 (en) * 1997-07-03 2001-01-02 Hotv, Inc. Hypervideo system and method with object tracking in a compressed digital video environment

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5636036A (en) * 1987-02-27 1997-06-03 Ashbey; James A. Interactive video system having frame recall dependent upon user input and current displayed image
US6169573B1 (en) * 1997-07-03 2001-01-02 Hotv, Inc. Hypervideo system and method with object tracking in a compressed digital video environment
KR20000057859A (ko) * 1999-02-01 2000-09-25 김영환 Method and apparatus for describing motion activity characteristics of a moving picture

Cited By (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20130019267A1 (en) * 2010-06-28 2013-01-17 At&T Intellectual Property I, L.P. Systems and Methods for Producing Processed Media Content
US9906830B2 (en) * 2010-06-28 2018-02-27 At&T Intellectual Property I, L.P. Systems and methods for producing processed media content
US10827215B2 (en) 2010-06-28 2020-11-03 At&T Intellectual Property I, L.P. Systems and methods for producing processed media content

Also Published As

Publication number Publication date
AU2002348647A1 (en) 2004-05-04

Similar Documents

Publication Publication Date Title
  • CN112990191B (zh) Shot boundary detection and key frame extraction method based on subtitle video
US10304458B1 (en) Systems and methods for transcribing videos using speaker identification
US20120059914A1 (en) Systems and methods for determining attributes of media items accessed via a personal media broadcaster
US9213896B2 (en) Method for detecting and tracking objects in image sequences of scenes acquired by a stationary camera
US8264616B2 (en) Scene classification apparatus of video
EP3010235A1 (fr) Système et procédé permettant de détecter des annonces publicitaires sur la base d'empreintes
US20130083965A1 (en) Apparatus and method for detecting object in image
  • JP2005513663A (ja) Family-histogram-based techniques for detection of commercials and other video content
US20060245625A1 (en) Data block detect by fingerprint
US20100246944A1 (en) Using a video processing and text extraction method to identify video segments of interest
  • CN113052169A (zh) Video subtitle recognition method, apparatus, medium, and electronic device
EP3251053B1 (fr) Détection d'objets graphiques pour identifier des démarcations vidéo
US20200311898A1 (en) Method, apparatus and computer program product for storing images of a scene
US20110033115A1 (en) Method of detecting feature images
US20090180670A1 (en) Blocker image identification apparatus and method
US7734096B2 (en) Method and device for discriminating obscene video using time-based feature value
US8055062B2 (en) Information processing apparatus, information processing method, and program
  • WO2004034708A1 (fr) Apparatus and method for separately providing additional information about each object in a digital broadcast image
  • KR101667011B1 (ko) Apparatus and method for detecting scene transition of a stereoscopic image
  • KR101672123B1 (ko) Apparatus and method for generating a subtitle file for an edited video
  • CN102667770B (zh) Method and device for computer-assisted annotation of multimedia data
  • JP4964044B2 (ja) Face detection apparatus and face detection method
  • JP2003143546A (ja) Football video processing method
  • JP4349004B2 (ja) Television receiver detection apparatus and method
US11228803B1 (en) Method and apparatus for providing of section divided heterogeneous image recognition service in a single image recognition service operating environment

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): AE AG AL AM AT AU AZ BA BB BG BR BY BZ CA CH CN CO CR CU CZ DE DK DM DZ EC EE ES FI GB GD GE GH GM HR HU ID IL IN IS JP KE KG KP KR KZ LC LK LR LS LT LU LV MA MD MG MK MN MW MX MZ NO NZ OM PH PL PT RO RU SD SE SG SI SK SL TJ TM TN TR TT TZ UA UG US UZ VC VN YU ZA ZM ZW

AL Designated countries for regional patents

Kind code of ref document: A1

Designated state(s): GH GM KE LS MW MZ SD SL SZ TZ UG ZM ZW AM AZ BY KG KZ MD RU TJ TM AT BE BG CH CY CZ DE DK EE ES FI FR GB GR IE IT LU MC NL PT SE SK TR BF BJ CF CG CI CM GA GN GQ GW ML MR NE SN TD TG

121 Ep: the epo has been informed by wipo that ep was designated in this application
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established
32PN Ep: public notification in the ep bulletin as address of the addressee cannot be established

Free format text: NOTING OF LOSS OF RIGHTS PURSUANT TO RULE 69(1) EPC. EPO FORM 1205A DATED 20-07-05

122 Ep: pct application non-entry in european phase
NENP Non-entry into the national phase

Ref country code: JP

WWW Wipo information: withdrawn in national office

Country of ref document: JP