US20020051081A1 - Special reproduction control information describing method, special reproduction control information creating apparatus and method therefor, and video reproduction apparatus and method therefor


Info

Publication number
US20020051081A1
Authority
US
United States
Prior art keywords
information
frame
video
extracted
reproduction
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US09/894,321
Other languages
English (en)
Inventor
Osamu Hori
Toshimitsu Kaneko
Takeshi Mita
Koji Yamamoto
Koichi Masukura
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Toshiba Corp
Original Assignee
Individual
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Individual
Assigned to KABUSHIKI KAISHA TOSHIBA (assignors: HORI, OSAMU; KANEKO, TOSHIMITSU; MASUKURA, KOICHI; MITA, TAKESHI; YAMAMOTO, KOJI)
Publication of US20020051081A1
Priority to US10/226,352 (published as US20030002853A1)
Priority to US10/319,478 (published as US20030086692A1)
Legal status: Abandoned

Classifications

    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/02 Editing, e.g. varying the order of information signals recorded on, or reproduced from, record carriers
    • G11B27/031 Electronic editing of digitised analogue information signals, e.g. audio or video signals
    • G11B27/034 Electronic editing of digitised analogue information signals, e.g. audio or video signals on discs
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B20/00 Signal processing not specific to the method of recording or reproducing; Circuits therefor
    • G11B20/10 Digital recording or reproducing
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/005 Reproducing at a different information rate from the information rate of recording
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/102 Programmed access in sequence to addressed parts of tracks of operating record carriers
    • G11B27/105 Programmed access in sequence to addressed parts of tracks of operating record carriers of operating discs
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B27/00 Editing; Indexing; Addressing; Timing or synchronising; Monitoring; Measuring tape travel
    • G11B27/10 Indexing; Addressing; Timing or synchronising; Measuring tape travel
    • G11B27/19 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier
    • G11B27/28 Indexing; Addressing; Timing or synchronising; Measuring tape travel by using information detectable on the record carrier by using information signals recorded by the same method as the main recording
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00 Details of television systems
    • H04N5/76 Television signal recording
    • H04N5/78 Television signal recording using magnetic recording
    • H04N5/782 Television signal recording using magnetic recording on tape
    • H04N5/783 Adaptations for reproducing at a rate different from the recording rate
    • G PHYSICS
    • G11 INFORMATION STORAGE
    • G11B INFORMATION STORAGE BASED ON RELATIVE MOVEMENT BETWEEN RECORD CARRIER AND TRANSDUCER
    • G11B2220/00 Record carriers by type
    • G11B2220/20 Disc-shaped record carriers
    • G11B2220/25 Disc-shaped record carriers characterised in that the disc is based on a specific recording technology
    • G11B2220/2537 Optical discs
    • G11B2220/2562 DVDs [digital versatile discs]; Digital video discs; MMCDs; HDCDs
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00 Details of colour television systems
    • H04N9/79 Processing of colour television signals in connection with recording
    • H04N9/80 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback
    • H04N9/804 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components
    • H04N9/8042 Transformation of the television signal for recording, e.g. modulation, frequency changing; Inverse transformation for playback involving pulse code modulation of the colour picture signal components involving data reduction

Definitions

  • the present invention relates to a special reproduction control information describing method for describing special reproduction control information used to perform special reproduction of target video contents, a special reproduction control information creating apparatus and method for creating the special reproduction control information, and a video reproduction apparatus and method for performing special reproduction by using the special reproduction control information.
  • a motion picture is compressed as digital video and stored on disk media, represented by a DVD and an HDD, so that the video can be reproduced at random.
  • a video can thus be reproduced from a desired midway point with virtually no waiting time.
  • disk media can also be fast-reproduced at two to four times speed, or reversely reproduced.
  • the present invention is directed to a method and apparatus that substantially obviate one or more of the problems due to limitations and disadvantages of the related art.
  • a method of describing frame information comprises:
  • an article of manufacture comprising a computer usable medium storing frame information, the frame information comprises:
  • first information described for a frame extracted from a plurality of frames in a source video data, specifying a location of the extracted frame in the source video data;
  • an apparatus for creating frame information comprises:
  • a unit configured to extract a frame from a plurality of frames in a source video data
  • a unit configured to create the frame information including first information specifying a location of the extracted frame and second information relating to a display time of the extracted frame;
  • a unit configured to link the extracted frame to the frame information.
  • a method of creating frame information comprises:
  • creating the frame information including first information specifying a location of the extracted frame in the source video data and second information relating to a display time of the extracted frame.
  • an apparatus for performing a special reproduction comprises:
  • a unit configured to refer to frame information described for a frame extracted from a plurality of frames in a source video data and including first information specifying a location of the extracted frame in the source video data and second information relating to a display time of the extracted frame;
  • a unit configured to obtain the video data corresponding to the extracted frame based on the first information
  • a unit configured to determine the display time of the extracted frame based on the second information
  • a unit configured to display the obtained video data for the determined display time.
  • an article of manufacture comprising a computer usable medium having computer readable program code means embodied therein for performing a special reproduction, the computer readable program code means comprises:
  • computer readable program code means for causing a computer to refer to frame information described for a frame extracted from a plurality of frames in a source video data and including first information specifying a location of the extracted frame and second information relating to a display time of the extracted frame;
  • computer readable program code means for causing a computer to obtain the video data corresponding to the extracted frame based on the first information;
  • computer readable program code means for causing a computer to determine the display time of the extracted frame based on the second information;
  • computer readable program code means for causing a computer to display the obtained video data for the determined display time.
  • a method of describing sound information comprises:
  • an article of manufacture comprising a computer usable medium storing frame information, the frame information comprises:
  • first information described for a frame extracted from a plurality of sound frames in a source sound data, specifying a location of the extracted frame in the source sound data;
  • second information described for the extracted frame, relating to a reproduction start time and reproduction time of the sound data of the extracted frame.
  • a method of describing text information comprises:
  • an article of manufacture comprising a computer usable medium storing frame information, the frame information comprises:
  • first information described for a frame extracted from a plurality of text frames in a source text data, specifying a location of the extracted frame in the source text data;
  • second information described for the extracted frame, relating to a display start time and display time of the text data of the extracted frame.
  • FIG. 1 is a view showing an example of a data structure of special reproduction control information according to one embodiment of the present invention
  • FIG. 2 is a view showing an example of a structure of a special reproduction control information creating apparatus
  • FIG. 3 is a view showing another example of a structure of the special reproduction control information creating apparatus
  • FIG. 4 is a flowchart showing one example for the apparatus shown in FIG. 2;
  • FIG. 5 is a flowchart showing one example for the apparatus shown in FIG. 3;
  • FIG. 6 is a view showing an example of a structure of a video reproduction apparatus
  • FIG. 7 is a flowchart showing one example for the apparatus shown in FIG. 6;
  • FIG. 8 is a view showing an example of a data structure of special reproduction control information
  • FIG. 9 is a view explaining video location information for referring to an original video frame
  • FIG. 10 is a view explaining video location information for referring to an image data file
  • FIG. 11 is a view explaining a method for extracting video data in accordance with a motion of a screen
  • FIG. 12 is a view explaining video location information for referring to the original video frame
  • FIG. 13 is a view for explaining video location information for referring to the image data file
  • FIG. 14 is a view showing an example of a data structure of special reproduction control information in which plural original video frames are referred to;
  • FIG. 15 is a view explaining a relation between the video location information and the original plural video frames
  • FIG. 16 is a view explaining a relation between the image data file and the original plural video frames
  • FIG. 17 is a view explaining video location information for referring to the original video frame
  • FIG. 18 is a view for explaining video location information for referring to the image data file
  • FIG. 19 is a flowchart for explaining a special reproduction
  • FIG. 20 is a view for explaining a method for extracting video data in accordance with a motion of a screen
  • FIG. 21 is a view for explaining a method for extracting video data in accordance with a motion of a screen
  • FIG. 22 is a flowchart showing one example for calculating display time at which a scene change quantity becomes as constant as possible;
  • FIG. 23 is a flowchart showing one example for calculating a scene change quantity of the whole frame from an MPEG video
  • FIG. 24 is a view for explaining a method for calculating a scene change quantity of a video from an MPEG stream
  • FIG. 25 is a view for explaining a processing procedure for calculating display time at which a scene change quantity becomes as constant as possible;
  • FIG. 26 is a flowchart showing one example of the processing procedure for conducting special reproduction on the basis of special reproduction control information
  • FIG. 27 is a flowchart showing one example for conducting special reproduction on the basis of a display cycle
  • FIG. 28 is a view for explaining a relationship between a calculated display time and the display cycle
  • FIG. 29 is a view for explaining a relationship between a calculated display time and the display cycle
  • FIG. 30 is a view showing another example of a data structure of special reproduction control information
  • FIG. 31 is a view explaining a method for extracting video data in accordance with a motion of a screen
  • FIG. 32 is a view explaining video location information for referring to the original video frame
  • FIG. 33 is a view showing another example of a data structure of special reproduction control information
  • FIG. 34 is a view showing another example of a data structure of special reproduction control information
  • FIG. 35 is a view showing another example of a data structure of special reproduction control information
  • FIG. 36 is a flowchart showing one example for calculating display time from the importance
  • FIG. 37 is a view for explaining a method for calculating display time from the importance
  • FIG. 38 is a flowchart showing one example for calculating importance data on the basis of the idea that a scene having a large sound level is important;
  • FIG. 39 is a flowchart showing one example for calculating importance data on the basis of the idea that a scene in which many important words appear, detected with sound recognition, is important, or a processing procedure for calculating importance data on the basis of the idea that a scene in which many words are spoken per unit time is important;
  • FIG. 40 is a flowchart showing one example for calculating importance data on the basis of the idea that a scene in which many important words appear, detected with telop recognition, is important, or a processing procedure for calculating importance data on the basis of the idea that a scene in which the telops appearing per unit time include many words is important;
  • FIG. 41 is a flowchart showing one example for calculating importance data on the basis of the idea that a scene in which large characters appear in a telop is important;
  • FIG. 42 is a flowchart showing one example for calculating importance data on the basis of the idea that a scene in which many human faces appear is important, or a processing for calculating importance data on the basis of the idea that a scene where human faces are displayed in an enlarged manner is important;
  • FIG. 43 is a flowchart showing one example for calculating importance data on the basis of the idea that the scene in which videos similar to the registered important scene appear is important;
  • FIG. 44 is a view showing another example of a data structure of special reproduction control information
  • FIG. 45 is a view showing another example of a data structure of special reproduction control information
  • FIG. 46 is a view showing another example of a data structure of special reproduction control information
  • FIG. 47 is a view for explaining a relationship between information as to whether the scene is to be reproduced or not and the reproduced video;
  • FIG. 48 is a flowchart showing one example of a processing procedure of special reproduction including reproduction and non-reproduction judgment
  • FIG. 49 is a view showing one example of a data structure when sound information or text information is added.
  • FIG. 50 is a view showing one example of a data structure for describing only sound information separately from frame information
  • FIG. 51 is a view showing one example of a data structure for describing only text information separately from frame information
  • FIG. 52 is a view for explaining a synchronization of a reproduction of each of media
  • FIG. 53 is a flowchart showing one example of a determination procedure of a sound reproduction start time and a sound reproduction time in a video frame section;
  • FIG. 54 is a flowchart showing one example for preparing reproduction sound data and correcting video frame display time
  • FIG. 55 is a flowchart showing one example of a processing procedure of obtaining text information with telop recognition
  • FIG. 56 is a flowchart showing one example of a processing procedure of obtaining text information with sound recognition
  • FIG. 57 is a flowchart showing one example of a processing procedure of preparing text information
  • FIGS. 58A and 58B are views for explaining a method of displaying text information
  • FIG. 59 is a view showing one example of a data structure of special reproduction control information for sound information
  • FIG. 60 is a view showing another example of a data structure of special reproduction control information for sound information
  • FIG. 61 is a view explaining a summary reproduction of the sound/music data.
  • FIG. 62 is a view explaining another summary reproduction of the sound/music data.
  • the embodiments relate to a reproduction of video contents having video data using special reproduction control information.
  • the video data comprises a set of video frames (video frame group) constituting a motion picture.
  • the special reproduction control information is created from the video data by a special reproduction control information creating apparatus and attached to the video data.
  • the special reproduction is reproduction by a method other than a normal reproduction.
  • the special reproduction includes a double speed reproduction (or a high speed reproduction), jump reproduction (or jump continuous reproduction), and a trick reproduction.
  • the trick reproduction includes a substituted reproduction, an overlapped reproduction, a slow reproduction and the like.
  • the special reproduction control information is referred to when the special reproduction is executed in the video reproduction apparatus.
  • FIG. 1 shows one example of a basic data structure of the special reproduction control information.
  • Each frame information 100 includes a set of video location information 101 and display time control information 102 .
  • the video location information 101 indicates a location of video data to be displayed at the time of special reproduction.
  • the video data to be displayed may be one frame, a group of a plurality of continuous frames, or a group formed of a part of a plurality of continuous frames.
  • the display time control information 102 forms the basis of calculating the display time of the video data.
  • the frame information “i” is basically arranged in the order of appearance of the frames in the video data.
  • however, the frame information “i” may be arranged and described in any order.
  • the reproduction rate information 103 attached to a plurality of items of frame information “i” shows the reproduction speed rate and is used for designating reproduction at a speed several times higher than that corresponding to the display time described by the display time control information 102 .
  • the reproduction rate information 103 is not essential information.
  • the information 103 may always be attached, never be attached, or be selectively attached. Even when the reproduction rate information 103 is attached, the information need not be used at the time of special reproduction.
  • that is, the reproduction rate information may always be used, never be used, or be selectively used.
  • likewise, all of the information included in the special reproduction control information may be used on the video reproduction apparatus side, or only a part of the information may be used.
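  • as a concrete illustration (not part of the described format; the class and field names below are assumptions made for the example), the structure of FIG. 1 could be modeled as follows.

```python
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class FrameInformation:
    # video location information 101: a frame number, a time stamp,
    # or a URL of a separately stored image data file
    video_location: str
    # display time control information 102: the basis for calculating
    # the display time of the video data (absolute or relative)
    display_time_control: float

@dataclass
class SpecialReproductionControlInfo:
    # frame information items "i", basically in order of appearance
    frames: List[FrameInformation]
    # reproduction rate information 103 (optional): reproduction speed
    # rate attached to the arrangement of frame information
    reproduction_rate: Optional[float] = None
```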
  • FIG. 2 shows an example of a structure of an apparatus for creating special reproduction control information.
  • This special reproduction control information creating device comprises a video data storage unit 2 , a video data processing unit 1 including a video location information processing unit 11 and a display time control information processing unit 12 , and a special reproduction control information storage unit 3 .
  • if an image data file is used (the image data file may be constantly used, or selectively used), an image data file creating unit 13 (in the video data processing unit 1 ) and an image data file storage unit 4 are further provided as shown in FIG. 3. If other control information determined on the basis of the video data is added to the special reproduction control information, the corresponding function is appropriately added inside the video data processing unit 1 .
  • though omitted in FIGS. 2 and 3, a GUI is used for displaying, for example, video data in frame units and for providing a function of receiving instructions input by the user.
  • in FIGS. 2 and 3, a CPU, a memory, an external storage device, and a network communication device are provided when needed; software such as driver software and an OS, used when needed, is not shown.
  • the video data storage unit 2 stores video data which becomes a target of processing for creating special reproduction control information (or special reproduction control information and image data files).
  • the special reproduction control information storage unit 3 stores special reproduction control information that has been created.
  • the image data file storage unit 4 stores image data files that have been created.
  • the storage units 2 , 3 , and 4 comprise, for example, a hard disk, an optical disk and a semiconductor memory.
  • the storage units 2 , 3 , and 4 may comprise separate storage devices. All or part of the storage units may comprise the same storage device.
  • the video data processing unit 1 creates the special reproduction control information (or the special reproduction control information and image data file) on the basis of the video data which becomes a target of processing.
  • the video location information processing unit 11 determines (extracts) a video frame (group) which should be displayed or which can be displayed at the time of special reproduction to conduct processing of preparing the video location information 101 which should be described in each frame information “i”.
  • the display time control information processing unit 12 conducts a processing for preparing the display time control information 102 associated with the display time of the video frame (group) associated with each frame information “i”.
  • the image data file creating unit 13 conducts a processing for preparing an image data file from the video data.
  • the special reproduction control information creating apparatus can be realized, for example, in the form of software running on a computer.
  • the apparatus may be realized as a dedicated apparatus for creating the special reproduction control information.
  • FIG. 4 shows an example of a processing procedure in a case of a structure of FIG. 2.
  • the video data is read (step S 11 ), video location information 101 is created (step S 12 ), display time control information 102 is created (step S 13 ), and special reproduction control information is stored (step S 14 ).
  • the procedure of FIG. 4 may be conducted consecutively for each frame information, or each processing may be conducted in batches. Other procedures can also be used.
  • FIG. 5 shows an example of a processing procedure in a case of the structure of FIG. 3.
  • a procedure for preparing and storing image data files is added to a procedure of FIG. 4 (step S 22 ).
  • in FIG. 5, the image data file is created and/or stored together with the preparation of the video location information 101 . The video location information 101 may also be created at a timing different from that in FIG. 4.
  • the procedure of FIG. 5 may be conducted consecutively for each frame information, or may be conducted in batches. Other procedures can also be used.
  • FIG. 6 shows an example of a video reproduction apparatus.
  • This video reproduction apparatus comprises a controller 21 , a normal reproduction processing unit 22 , a special reproduction processing unit 23 , a display device 24 , and a contents storage unit 25 . If contents in which audio such as sound is added to the video data are handled, it is preferable to provide a sound output section. If contents in which text data is added to the video data are handled, the text may be displayed on the display device 24 or output from the sound output section. If contents with an attached program are handled, an attached program execution section may be provided.
  • the contents storage unit 25 stores at least video data and special reproduction control information.
  • if image data files are used, they are further stored.
  • the sound data, the text data, and the attached program are further stored in some cases.
  • the contents storage unit 25 may be arranged at one location in a concentrated manner, or may be arranged in a distributed manner. The point is that the contents can be accessed with the normal reproduction processing unit 22 and special reproduction processing unit 23 .
  • the video data, special reproduction control information, image data files, sound data, text data, and attached program may be stored in separate media or may be stored in the same medium. As the medium, for example, DVD is used. These may be data which are transmitted via a network.
  • the controller 21 basically receives an instruction such as a normal reproduction and a special reproduction with respect to the contents from the user via a user interface such as a GUI or the like.
  • the controller 21 controls the apparatus by giving the corresponding processing unit an instruction to reproduce the designated contents by the designated method.
  • the normal reproduction processing unit 22 is used for the normal reproduction of the designated contents.
  • the special reproduction processing unit 23 is used for the special reproduction (for example, a high speed reproduction, jump reproduction, trick reproduction, or the like) of the designated contents by referring to the special reproduction control information.
  • the display device 24 is used for displaying a video.
  • the video reproduction apparatus can be realized by computer software. It may partially be realized by hardware (for example, decode board (MPEG-2 decoder) or the like).
  • the video reproduction apparatus may be realized as a dedicated device for video reproduction.
  • FIG. 7 shows one example of a reproduction processing procedure of the video reproduction apparatus of FIG. 6.
  • at step S 31 , it is determined whether the user requests a normal reproduction or a special reproduction.
  • for a normal reproduction, the designated video data is read at step S 32 and the normal reproduction is conducted at step S 33 .
  • for a special reproduction, the special reproduction control information corresponding to the designated video data is read at step S 34 , and the location of the video data to be displayed is specified and the display time is determined at step S 35 .
  • the corresponding frame (group) is read from the video data (or the image data file) at step S 36 to conduct the special reproduction of the designated contents at step S 37 .
  • the location of the video data can be specified and the display time can be determined at a timing different from that in FIG. 7.
  • the procedure of the special reproduction of FIG. 7 may be conducted consecutively for each frame information, or each processing may be conducted in batches. Other procedures can also be used. For example, in the case of the reproduction method in which the display time of each frame is set equally to a constant value, it is not necessary to determine the display time.
  • the user may make various designations (for example, the start point and the end point of the reproduction in the contents, the reproduction speed of the high speed reproduction, the reproduction time of the high speed reproduction, and the method of special reproduction or the like).
  • the frame information to be used at the time of the special reproduction is determined from the video data, the video location information is created, and the display time control information is created.
  • the frame is determined by such methods as: (1) a method for selecting video frames on the basis of some characteristic quantity of the video data (for example, a method for extracting the video frames such that the total of a characteristic quantity (for example, the scene change quantity) between the extracted frames becomes constant, and a method for extracting the video frames such that the total of importance between the extracted frames becomes constant), and (2) a method for selecting video frames on a fixed basis (for example, a method for extracting frames at random, and a method for extracting frames at equal intervals).
  • the scene change quantity is also called a frame activity value.
  • as methods for describing the display time control information 121 , there are available: (i) a method for describing an absolute or relative value of the display time or the number of display frames, (ii) a method for describing reference information which forms the basis of the display time or the number of display frames (for example, information designated by the user, or the importance obtained on the basis of characters in the video, sound synchronized with the video, persons in the video, or a specific pattern in the video), and (iii) a method for describing both (i) and (ii).
  • the special reproduction is conducted by referring to the display time control information 121 of (i), (ii) or (iii) included in the frame information.
  • at that time, the described value may be followed, or the described value may be corrected before use.
  • independently created other information, or information input from the user, may also be used in combination.
  • alternatively, only the independently created other information and the information input from the user may be used. A plurality of these methods may be supported and appropriately selected.
  • a double speed reproduction (or a high speed reproduction) carries out reproduction in a time shorter than the time required for the normal reproduction of the original contents by reproducing a part of the frames out of the whole frames constituting the video data contents.
  • the frames indicated by the frame information are displayed for each display time indicated by the display time control information 121 , in the order of time sequence.
  • there are a speed designation request, which designates at how many times the speed of the normal reproduction the original contents are reproduced (that is, in what fraction of the time required for the normal reproduction), and a time designation request, which designates how much time is taken for reproducing the contents.
  • the high speed reproduction is called a summarized reproduction.
  • a jump reproduction (or a jump continuous reproduction) is a high speed reproduction in which a part of the frames shown in the frame information is subjected to non-reproduction, for example, on the basis of the reproduction/non-reproduction information described later.
  • that is, the high speed reproduction is conducted with respect to the frames shown in the frame information, excluding the frames which are subjected to non-reproduction.
  • a trick reproduction is any reproduction other than the normal reproduction, the high speed reproduction, and the jump reproduction.
  • various forms are possible: a substituted reproduction, in which a certain portion is reproduced with its time sequence replaced; an overlapped reproduction, in which a certain portion is reproduced repeatedly a plurality of times at the time of reproducing the frames shown in the frame information; a variable speed reproduction, in which a certain portion is reproduced at a speed lower than other portions (including the case in which the portion is reproduced at the speed of the normal reproduction or lower) or at a speed higher than other portions, or in which the reproduction of a certain portion is temporarily suspended; a random reproduction, in which each constant set of frames shown in the frame information is reproduced in a random time sequence; or appropriate combinations of such forms.
  • FIG. 8 shows one example of a data structure of the special reproduction control information created under the target video data.
  • in this data structure, display time information 121 , which shows an absolute or relative display time, is described as the display time control information 102 of FIG. 1 (or instead of the display time control information 102 ). A structure describing the importance in addition to the display time control information 102 will be described later.
  • the video location information 101 is information which enables the specification of the location of the video in the original video frames; either a frame number (for example, a sequence number from the first frame) or a number which specifies one frame in a stream, like a time stamp, may be used. If the video data corresponding to the frame extracted from the original video stream is stored as a separate file, a URL or the like may be used as information for specifying the file location.
  • the display time information 121 is information which specifies the time for displaying the video or the number of display frames. It is possible to describe an actual time or number of frames as the unit, or a relative value (for example, a normalized numeric value) which clarifies the relative time length with respect to the display time information described in other frame information. In the latter case, the actual reproduction time of each video is calculated from the total reproduction time as a whole. With respect to each video, instead of describing the continuation time of the display, a description combining a start time measured from a specific timing (for example, the start time of the first video is set to 0) with the end time, or a description combining the start time with the continuation time, may also be used.
  • the video present at the location specified by the video location information 101 is consecutively reproduced from the start time specified by the display time information 121 up to the end time, or during the continuation time, for each of the items of frame information “i” included in the arrangement.
  • the described display time can also be processed and reproduced by using parameters such as the reproduction rate information and additional information.
  • FIG. 9 explains a method for describing the video location information referring to the original video frame.
  • a time axis 200 corresponds to the original video stream based on which the frame information for the special reproduction is created and a video 201 corresponds to one frame which becomes a description target in the video stream.
  • a time axis 202 corresponds to reproduction time of a video at the time of the special reproduction by using the video 201 extracted from the original video stream.
  • a display time 203 is a section on the time axis 202 during which one video 201 is displayed.
  • the video location information 101 showing the location of the video 201 and the video display time 121 showing the length of the display time 203 are described as frame information.
  • the description of the location of the video 201 may be given in any form, such as a frame number or a time stamp, as long as one frame in the original video stream can be specified. Frame information is described in the same manner with respect to the other videos 201 .
  • FIG. 10 explains a method for describing the video location information referring to the image data file.
  • the method for describing the video location information shown in FIG. 9 directly refers to the frame in the original video stream which is to be subjected to the special reproduction.
  • the method for describing the video location information shown in FIG. 10 is a method in which an image data file 300 corresponding to a single frame 302 extracted from the original video stream is created in a separate file, and the location thereof is described.
  • the description of the file location can be handled in the same manner by using, for example, a URL or the like, both in the case where the file is present on a local storage device and in the case where the file is present on a network.
  • a set of the video location information 101 showing the location of this image data file and the video display time 121 showing the length of the corresponding display time 301 is described as frame information.
  • the information (similar to the video location information in the case of, for example, FIG. 9) showing a single frame 302 of the original video corresponding to the described frame information may be included in the frame information.
  • the frame information may comprise the video location information, the display time information and the original video information.
  • the configuration of the video data described with the method of FIG. 10 is not particularly restricted.
  • the frame of the original video may be used as it is, or may be reduced in size. Using a reduced video is effective for conducting the reproduction processing at a high speed, because the original video need not be decoded.
  • for example, when the original video is compressed by using the DCT (discrete cosine transform), a reduced video can be created at a high speed by only partially decoding the stream.
  • in the above description, the image data files are stored as separate files. However, these files may be packaged in a video data group storage file having a video format (for example, motion JPEG) which can be accessed at random.
  • in that case, the location of the video data is specified by a combination of the URL showing the location of the image data file and a frame number or a time stamp showing the location within the image data file.
  • the URL information showing the location of the image data file may be described in each frame information or may be described as additional information outside of the arrangement of the frame information.
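  • as a purely hypothetical illustration of such a combined reference (the URL and field names below are invented for the example), the location of one video might be described as follows.

```python
# Hypothetical combined reference into a packaged image data file:
# the URL locates the video data group storage file, and the frame
# number (or a time stamp) locates the video data within that file.
video_location = {
    "url": "http://example.com/summary.mjpeg",  # image data file location
    "frame_number": 42,                         # location within the file
}
```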
  • the video data may be extracted at equal intervals from the original video. Alternatively, the extraction interval may follow the motion of the screen: where the motion of the screen appears frequently, the video data is selected at a narrow interval, and where the motion of the screen appears rarely, the video frame is selected at a wide interval.
  • in FIG. 11, the horizontal axis represents the frame number.
  • a curve 800 represents a change in the scene change quantity (between adjacent frames).
  • a method for calculating the scene change quantity is the same as a method at the time of calculating the display time described later.
  • there is a method for calculating the extraction locations such that the scene change quantity accumulated between the video frames from which the video data is extracted becomes constant.
  • the total of the scene change quantity between the video frames from which the video data is extracted is set to S i .
  • the frames are extracted so that each area S i of the scene change quantity curve 800 divided by the broken lines becomes constant. For example, letting S be the total scene change quantity and n the number of frames to be extracted, the scene change quantity is accumulated from the previously extracted frame, and the video frame at which the accumulated value exceeds S/n is set as the next frame F i from which the video data is extracted.
  • if the video data is created from I picture frames of MPEG, the video frame calculated in this manner is not necessarily an I picture; in that case, the video data is created from an I picture frame in the vicinity thereof.
  • if the scene change quantity of 0 continues for more than a constant time, the scene is important in many cases, so the frame at that time may also be extracted.
  • at that time, the accumulated value of the scene change quantity may or may not be cleared to 0. It is also possible to selectively clear the accumulated value based on a request from the user.
  • in this case, the display time information 121 is described so that the display time becomes the same with respect to all of the frames; when reproduced in this manner, the scene change quantity between displayed videos becomes constant.
  • the display time information 121 may be determined and described in a separate method.
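  • a minimal sketch of the extraction rule described above for FIG. 11, assuming the scene change quantity of every frame has already been computed (function and variable names are illustrative):

```python
def extract_frames(scene_change, n):
    """Pick frame indices so that the scene change quantity
    accumulated between consecutive extracted frames is about S/n."""
    total = sum(scene_change)      # S: total over the whole video
    threshold = total / n          # S/n per extracted interval
    extracted = [0]                # start with the first frame
    acc = 0.0
    for i in range(1, len(scene_change)):
        acc += scene_change[i]     # accumulate from the last extracted frame
        if acc >= threshold:
            extracted.append(i)    # frame F_i where the sum exceeds S/n
            acc = 0.0              # clear the accumulated value
    return extracted
```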
  • next, a method for describing the video location information will be explained by using FIGS. 12 through 14.
  • FIG. 12 explains a method for describing the video location information for referring to the continuous frames of the original video.
  • the method for describing the video location information shown in FIG. 9 refers to one frame 201 in the original video for conducting the special reproduction.
  • the method for describing the video location information shown in FIG. 12 describes a set 500 of a plurality of continuous frames in the original video.
  • the set 500 of frames may include some frames extracted from the plural continuous frames within the original video.
  • the set 500 of frames may include only one frame.
  • if the set 500 of frames includes a plurality of continuous frames or one frame in the original video, the location of the start frame and the location of the end frame are described, or the location of the start frame and the continuation time of the set 500 are described, in the description of the frame location (if one frame is included, for example, the start frame is set equal to the end frame).
  • for this description, the frame number, the time stamp, and the like, which can specify a frame in the stream, are used.
  • if the set 500 of frames is a part of a plurality of continuous frames in the original video, information which enables the specification of those frames is described. If the method for extracting the frames is predetermined and the frames can be specified from the description of the locations of the start frame and the end frame, the start frame or the end frame may be described.
  • the display time information 501 shows the total display time corresponding to the whole frame group included in the corresponding frame set 500 .
  • the display time of each frame included in the set 500 of frames can be appropriately determined on the side of device for the special reproduction.
  • As a simple method, the above total display time is divided equally by the total number of frames in the set 500 to obtain the display time of one frame.
  • Various other methods are available.
  • FIG. 13 explains a method for describing video location information for referring to a set of the image data files.
  • the method for describing the video location information shown in FIG. 12 directly refers to continuous frames in the original video to be reproduced.
  • the method for describing the video location information shown in FIG. 13 creates, as a separate file, a set 600 of image data files corresponding to the original video frame set 602 extracted from the original video stream, and describes the location thereof.
  • the file can be handled in the same manner by using, for example, a URL or the like, whether the file is present on a local storage device or on a network.
  • a set of the video location information 101 showing the location of this image data file and the video display time 121 showing a length of the corresponding display time 601 can be described as the frame information.
  • information showing the frame set 602 of the original video corresponding to the described frame information may be included in the frame information.
  • the frame information may comprise the video location information, the display time information and the original video information.
  • the original video information need not be described when it is not required.
  • the video data may be extracted at equal intervals from the original video. Where the motion of the screen appears frequently, a frame is extracted at a narrow interval; where the motion of the screen appears rarely, a frame is extracted at a wide interval.
  • in FIG. 10, the image data file 300 corresponds to the original video frame 302 on a frame-to-frame basis. It is also possible to make the location information of the frame described as the original video information have a time width.
  • FIG. 14 shows an example in which the original video information is allowed to have a time width with respect to FIG. 8.
  • original video information 3701 is added to the frame information structure shown in FIG. 8.
  • the original video information 3701 comprises start point information 3702 and section length information 3703 , which are the start point and the section length of the section of the original video which is a target of the special reproduction.
  • the original video information 3701 may comprise any information which can specify the section of the original video having the time width; for example, it may comprise the start point information and end point information instead of the start point information and the length information.
  • FIG. 15 shows an example in which the original video information is allowed to have a time width with respect to FIG. 9.
  • the video location information, the display time information, and the original video information are included in the same frame information.
  • the location of the original video frame 3801 , the display time 3802 , and the original video frame section 3803 , which comprises the start point (frame location) and the section length, are described so as to correspond to each other. That is, the original video frame location 3801 described in the video location information is displayed as a video representative of the original video frame section 3803 .
  • FIG. 16 shows an example in which the original video information is allowed to have a time width with respect to FIG. 10.
  • the location of the image data file 3901 for the display, the display time 3902 , and the original video frame section 3903 , which comprises the start point (frame location) and the section length, are described so as to correspond to each other.
  • a section different from the original video frame section used for displaying the video may also be made to correspond to it as the original video information.
  • FIG. 17 shows an example in which the original video information is allowed to have a time width with respect to FIG. 12.
  • the video location information, the display time information, and the original video information are included in the same frame information.
  • a set 4001 of frames in the original video, the display time 4002 , and the original video frame section 4003 which comprises the start point (frame location) and the section length are described to show that these correspond to each other.
  • the section 4001 of a set of frames which is described as the video location information and the original video frame section 4003 which is described as the original video information are not necessarily required to coincide with each other, and a different section may be used for display.
  • FIG. 18 shows an example in which the original video information is allowed to have a time width with respect to FIG. 13.
  • the video location information, the display time information, and the original video information are included in the same frame information.
  • a set 4101 of frames in the video file, the display time 4102 , and the original video frame section 4103 , which comprises the start point (frame location) and the section length, are described so as to correspond to each other.
  • the section of the set 4101 of frames described as the video location information and the original video frame section 4103 described as the original video information are not necessarily required to coincide with each other. That is, the section of the set 4101 of frames for the display may be shorter or longer than the original video frame section 4103 . Furthermore, a video having completely different contents may be included therein. In addition, only a particularly important section may be extracted from the section described in the original video information and collected as the image data file, so that the collected video data is used.
  • FIG. 19 shows a flow for starting the reproduction from the frame of the original video corresponding to the video frame displayed in special reproduction.
  • the reproduction start frame is specified in the special reproduction.
  • the original video frame corresponding to the specified frame is calculated with a method described later.
  • the original video is reproduced from the calculated frames.
  • This flow can be used for referring to the corresponding location of the original video in addition to special reproduction.
  • at step S 3602 , as one example of a method for calculating the corresponding original video frame, there is a method using proportional distribution with respect to the display time of the specified frame.
  • for example, suppose the display time information included in the i-th frame information is D i sec, the section start location of the original video information is t i sec, and the section length is l i sec. A location d sec (0 ≤ d < D i ) after the start of the display of the specified frame then corresponds, by proportional distribution, to the location t i + (d / D i ) × l i sec in the original video.
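  • the proportional distribution above can be sketched as follows; each frame information item is assumed to carry its display time D i , the section start point t i , and the section length of the original video (the attribute names are illustrative):

```python
def to_original_time(frame_infos, position):
    """Map a time position (sec) in the special reproduction back to
    a location in the original video by proportional distribution."""
    elapsed = 0.0
    for info in frame_infos:
        d = info.display_time                  # D_i
        if position < elapsed + d:
            ratio = (position - elapsed) / d   # d / D_i
            # t_i + (d / D_i) x section length
            return info.start + ratio * info.section_length
        elapsed += d
    raise ValueError("position lies beyond the special reproduction")
```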
  • referring to FIGS. 20 and 21, as examples of a method for selecting frames, there will be explained a method which, in accordance with the motion of the screen, extracts frames at a narrow interval where the motion of the screen appears frequently and at a wide interval where the motion of the screen appears rarely.
  • the horizontal axis, the curve 800 , and S i and F i are the same as those in FIG. 11.
  • FIGS. 20 and 21 show examples in which a set of a plurality of frames are extracted based on the frame F i as reference.
  • in FIG. 20, the same number of continuous frames may be extracted from each F i .
  • in this case, the frame length 811 and the frame length 812 are equal to each other.
  • in FIG. 21, the corresponding number of continuous frames may be extracted so that the total of the scene change quantity from F i becomes constant.
  • in this case, the area 813 and the area 814 are equal to each other.
  • Various other methods can be considered.
  • the display time information 121 may be described so that the same display time is provided with respect to any of the frame sets in the cases of FIGS. 20 and 21. Alternatively, the display time information may be determined and described by a different method.
  • FIG. 22 shows one example of a procedure of the basic processing for calculating the display time so that the scene change quantity becomes as constant as possible when the videos described in the video location information are continuously reproduced in accordance with the times described in the display time information.
  • This processing can be applied to frames extracted by any method. However, if the frames are extracted by the method shown in FIG. 11, the processing can be omitted, since that method selects the frames such that the scene change quantity becomes constant when the frames are displayed for a fixed time period.
  • first, the scene change quantity between adjacent frames is calculated with respect to all frames of the original video. If each frame of the video is represented as a bit map, the differential value of the pixels between adjacent frames can be used as the scene change quantity. If the video is compressed with MPEG, the scene change quantity can be calculated by using motion vectors.
  • FIG. 23 shows one example of a basic processing procedure for calculating a scene change quantity of all frames from the video streams compressed with MPEG.
  • a motion vector is extracted from the P picture frame.
  • the video frames compressed with MPEG are described as an arrangement of I pictures (intra-frame encoded frames), P pictures (inter-frame encoded frames with forward prediction), and B pictures (inter-frame encoded frames with bidirectional prediction), as shown in FIG. 24.
  • the P picture includes a motion vector corresponding to a motion from the preceding I picture or P picture.
  • at step S 82 , the magnitude (intensity) of each motion vector included in one P picture frame is calculated, and the average thereof is set as the scene change quantity from the preceding I picture or P picture.
  • next, the scene change quantity per frame is calculated for the frames other than the P picture. For example, if the average value of the motion vectors of a P picture frame is p, and the interval from the preceding I picture or P picture which it refers to is d, the scene change quantity per frame is set to p/d.
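  • a sketch of this expansion into a per-frame scene change sequence, assuming the motion vectors have already been parsed from the MPEG stream (the input format here is an assumption for the example):

```python
import math

def per_frame_scene_change(p_pictures):
    """Each entry is (motion_vectors, d): the motion vectors of one
    P picture and its interval d from the referenced I/P picture.
    Returns the scene change quantity p/d for every covered frame."""
    per_frame = []
    for vectors, d in p_pictures:
        # p: average magnitude of the motion vectors in the P picture
        p = sum(math.hypot(mx, my) for mx, my in vectors) / len(vectors)
        per_frame.extend([p / d] * d)  # spread p evenly over the d frames
    return per_frame
```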
  • at step S 72 in the procedure of FIG. 22, the total of the scene change quantity of the frames between one description target frame described in the video location information and the following description target frame is calculated.
  • FIG. 25 shows a change in the scene change quantity per frame.
  • the horizontal axis corresponds to the frame number, while a curve 1000 denotes a change in the scene change quantity. To calculate the display time of the video having the location information of the frame information F i , the scene change quantity in the section 1001 up to F i+1 , which corresponds to the frame location of the next description target frame, is added up. This becomes the area S i of the hatching portion 1002 , which represents the magnitude of the motion at the frame location F i .
  • step S 73 in the procedure of FIG. 22 the display time of each frame is calculated.
  • a larger display time may be allocated to the frames where the motion of the screen is large, so the ratio of the display time allocated to the video of each frame location F i to the total reproduction time may be set to S i /ΣS i .
  • the value of the total reproduction time T is defined as the total reproduction time of the original video.
  • the processing for calculating this display time can be conducted at the preparation of the frame information by the special reproduction control information creating apparatus, or it can be conducted at the time of the special reproduction on the video reproduction apparatus side.
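  • a sketch of this allocation, given the extracted frame locations and the per-frame scene change quantities (names are illustrative; D i = S i / ΣS i × T):

```python
def allocate_display_times(scene_change, extracted, total_time):
    """Allocate each extracted frame F_i a display time proportional
    to the scene change quantity S_i of its section: D_i = S_i/sum*T."""
    bounds = extracted + [len(scene_change)]
    # S_i: scene change accumulated from F_i up to the next target frame
    sections = [sum(scene_change[a:b]) for a, b in zip(bounds, bounds[1:])]
    s_total = sum(sections)
    return [total_time * s / s_total for s in sections]
```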
  • FIG. 26 shows one example for the N times high-speed reproduction on the basis of the special reproduction control information that has been described.
  • First, the display time D′i at the time of reproduction is calculated on the basis of the reproduction rate information.
  • At step S113, it is determined whether the display time D′i of the i-th frame information is larger than a preset display time threshold value.
  • If it is, the video indicated by the video location information in the i-th frame information Fi is displayed for D′i seconds at step S114.
  • Otherwise, at step S115, the next item of frame information whose display time is not smaller than the threshold value is searched for in the forward direction.
  • The display times of all the items of frame information smaller than the threshold value are added to the display time of the frame information found.
  • The display times of the skipped items of frame information are then set to 0. The reason for this processing is that, when the display time at reproduction becomes very short, the time needed to prepare the video for display exceeds the display time, and the display cannot be completed in time. Therefore, when the display time becomes very short, the process proceeds to the next step without displaying the video; the display time of the video that is not displayed is added to the display time of the video that is displayed, so that the total display time remains unchanged.
  • At step S116, it is determined whether “i” is smaller than the total number of frame information items, that is, whether any frame information remains undisplayed. If “i” is smaller than the total, the process proceeds to step S117, where “i” is incremented by one to prepare for the display of the next frame information. When “i” reaches the total number of frame information items, the reproduction processing is completed.
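  • The threshold procedure of FIG. 26 might be sketched as follows; `display` is an assumed callback that shows the video at a given location for a given time, and is not defined by this description:

        def high_speed_playback(frame_infos, n, threshold, display):
            """frame_infos: [(location, display_time), ...]; n: speed factor."""
            carried = 0.0
            for location, d in frame_infos:
                t = d / n                 # D'_i derived from the reproduction rate
                if t < threshold:
                    carried += t          # too short to prepare: skip the frame,
                    continue              # but carry its display time forward
                display(location, t + carried)  # total time remains unchanged
                carried = 0.0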
  • FIG. 27 shows one example of conducting the N-times high-speed reproduction on the basis of the described special reproduction control information by taking the display cycle as a reference.
  • In practice, the calculated display time must be aligned with the display cycle, so the video cannot always be displayed for exactly the calculated time.
  • FIG. 28 shows a relationship between the calculated display time and the display cycle.
  • The time axis 1300 shows the calculated display times, while the time axis 1301 shows the display cycle based on the display rate. If the display rate is f frames/sec, the interval of the display cycle is 1/f sec.
  • At step S122, the frame information Fi whose calculated display time contains the start point of the current display cycle is searched for, and at step S123 the video included in that frame information is displayed for one display cycle (1/f sec).
  • For example, the display cycle 1302 (FIG. 28) displays the video of the frame information corresponding to the calculated display time 1304, because the display start point 1303 falls within that display time.
  • Alternatively, the display cycle may be associated with the frame information whose location is nearest to the start point of the display cycle, as shown in FIG. 29. If a display time becomes smaller than the display cycle, like the display time 1305 of FIG. 28, the display of that video may be omitted; if the video is forcibly displayed, the display times before and after it are shortened so that the total display time remains unchanged.
  • At step S124, it is determined whether the current display is the final display. If it is, the processing is completed; if not, the process proceeds to step S125 to process the next display cycle.
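  • A sketch of this display-cycle alignment, under the same assumptions about the `display` callback; the frame whose calculated interval contains each 1/f tick is shown for one cycle:

        import bisect

        def cycle_aligned_playback(frame_infos, f, display):
            """frame_infos: [(location, display_time), ...]; f: frames/sec."""
            starts, total = [], 0.0
            for _, d in frame_infos:
                starts.append(total)      # start point of each calculated interval
                total += d
            tick, period = 0.0, 1.0 / f
            while tick < total:
                i = bisect.bisect_right(starts, tick) - 1  # interval holding tick
                display(frame_infos[i][0], period)
                tick += period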
  • FIG. 30 shows another example of a data structure for describing the frame information.
  • the frame information included in the data structure of FIG. 8 or FIG. 14 summarizes a single original video.
  • a plurality of original videos can be summarized by expanding the frame information.
  • FIG. 30 shows such an example.
  • Original video location information 4202, indicating the location of the original video file, is added to the original video information 4201 included in each item of frame information.
  • The entire file described in the original video location information 4202 need not be handled.
  • The file can be used in a form in which only a portion of it, a section, is extracted. In this case, not only file information such as a file name but also section information showing which section of the file is the object is additionally described. Plural sections may be selected from the original video.
  • the original video identification information may be described in place of the original video location information.
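  • Purely as an illustrative (non-normative) rendering, the expanded frame information of FIG. 30 could be modeled with Python dataclasses; all field names are assumptions:

        from dataclasses import dataclass, field
        from typing import List

        @dataclass
        class Section:
            start_frame: int
            length: int                  # frames in the extracted section

        @dataclass
        class OriginalVideoInfo:
            location: str                # original video location information 4202
            sections: List[Section] = field(default_factory=list)  # optional

        @dataclass
        class FrameInfo:
            original: OriginalVideoInfo  # which original video the frame comes from
            video_location: int          # frame location within the original video
            display_time: float          # allocated display time in seconds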
  • FIG. 31 explains an example in which a plurality of original videos are summarized and displayed by using the frame information added with the original video location information.
  • three videos are summarized to display one summarized video.
  • For the respective videos, sections such as 4301 and 4302 are taken out and handled.
  • In the frame information, together with this original video information, the frame location of each representative video (4303 with respect to 4301) is described as the video location information, and the display time (4304 with respect to 4301) is described as the display time information.
  • FIG. 32 explains another example in which a plurality of original videos are summarized and displayed by using the frame information added with the original video location information.
  • three videos are summarized to display one summarized video.
  • For the video 2, a portion of the section is taken out in place of the whole section.
  • a plurality of sections may be taken out as described in FIG. 31.
  • In the frame information, together with these items of original video information (for example, the section information 4401 in the case of the video 2), the storage location of each representative video file 4402 is described as the video location information, and the display time 4403 is described as the display time information.
  • FIG. 33 shows another data structure for describing the frame information.
  • In this structure, motion information 4501 and interest region information 4502 are added.
  • the motion information 4501 describes a magnitude of a motion (a scene change quantity) in a section (the section described in the original video information) of the original video corresponding to the frame information.
  • The interest region information 4502 describes a region of particular interest within the video indicated by the video location information.
  • The motion information can be used for calculating the display time of the video described in the video location information, as in the calculation of the display time from the motion of the video shown in FIG. 22.
  • special reproduction such as high-speed reproduction can be conducted in the same manner as in the case in which the display time is described. In this case, the display time is calculated at the time of reproduction.
  • Alternatively, a display time calculated irrespective of the motion may be described in the display time information.
  • A method that calculates display times by cutting out important scenes from the original video corresponds to this case.
  • The motion information is used so that a portion with large motion is reproduced slowly while a portion with small motion is reproduced quickly, with the result that high-speed reproduction in which important content is not overlooked is enabled.
  • The interest region information is used when a region of particular interest is present in the video described in the video location information of the frame information. For example, the faces of persons who appear to be important correspond to this.
  • The display may be conducted with a square frame overlaid so that the interest region can be easily noticed.
  • The frame display is not indispensable; the video may simply be displayed as it is.
  • The interest region information can also be used for processing and displaying the special reproduction control information such as the frame information. For example, if only a part of the frame information is reproduced and displayed, frame information including interest region information is displayed with priority. Further, frame information whose interest region (square area) has a large area can be assumed to have higher importance, thereby making it possible to display the video selectively.
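  • One way to overlay such a square frame, sketched here with the Pillow imaging library; the (x, y, w, h) region format is an assumption:

        from PIL import Image, ImageDraw

        def draw_interest_region(frame: Image.Image, region) -> Image.Image:
            """Overlay a rectangle so the interest region stands out."""
            x, y, w, h = region
            ImageDraw.Draw(frame).rectangle(
                [x, y, x + w, y + h], outline=(255, 0, 0), width=3)
            return frame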
  • FIG. 34 is a view showing examples of a data structure of the frame information attached to the video.
  • Importance information 122 is described in addition to, or in place of, the display time control information 102 in the data structure of the frame information of FIG. 1.
  • the display time is calculated based on the importance information 122 .
  • the importance information 122 represents the importance of the corresponding frame (or a set of frames).
  • The importance is represented, for example, as an integer in a fixed range (for example, 0 to 100) or as a real number in a fixed range (for example, 0 to 1). Otherwise, the importance information 122 may be represented as an integer or real value without an upper limit.
  • The importance information 122 may be attached to all the frames of the video, or only to the frames at which the importance changes.
  • As for the data structure, it is possible to take any of the forms of FIGS. 9, 10, 12, and 13.
  • the frame extraction method of FIGS. 11, 20, and 21 can be used.
  • the scene change quantity of FIGS. 11, 20, and 21 may be replaced by the importance.
  • In the examples so far, the display time is set from the scene change quantity.
  • The display time may instead be set from the importance information.
  • Next, this method for setting the display time will be explained.
  • the display time is set long where the change quantity is large and the display time is set short where the change quantity is small.
  • In the case of the importance, the display time is set long where the importance is high and short where it is low. Since the method for setting the display time according to the importance is basically similar to the method based on the scene change quantity, it will be explained only briefly.
  • FIG. 36 shows one example of the basic processing procedure in this case.
  • At step S191, the importance of all frames of the original video is calculated. Concrete methods are exemplified later.
  • At step S192, the total of the importance from one description target frame described in the video location information to the next description target frame is calculated.
  • FIG. 37 describes the change in the importance for each one frame.
  • Reference numeral 2200 denotes the importance. To calculate the display time of the video whose location is given by the frame information Fi, the importance over the section up to Fi+1, the next description target frame location, is accumulated. The accumulation result is the area S′i of the hatched portion 2202.
  • the display time of each frame is calculated.
  • The ratio of the display time allocated to the video at each frame location Fi to the reproduction time is set to S′i/ΣS′j.
  • The total T of the reproduction time is a standard reproduction time defined as the total reproduction time of the original video.
  • The video location information 101, the display time information 121, and the importance information 122 may all be described in each item of frame information “i”.
  • Four cases of use are possible: the display time information 121 is used but the importance information 122 is not; the importance information 122 is used but the display time information 121 is not; both are used; and neither is used.
  • The processing of calculating the display time can be conducted when the frame information is prepared with the special reproduction control information creating apparatus, or it may be conducted on the side of the video reproduction apparatus at the time of special reproduction.
  • Next, methods for calculating, at step S191 of FIG. 36, the importance of each frame or scene (video frame section) will be explained.
  • The most reliable method for calculating the importance is to have a person determine it.
  • An importance evaluator evaluates the importance for each scene of the video, or for each constant interval, and the result is input as importance data.
  • The importance data referred to here is a correspondence table between frame numbers (or times) and importance values.
  • A plurality of importance evaluators may evaluate the same video, and the average value (or a median or the like) for each scene or video frame section may be calculated to determine the final importance.
  • FIG. 38 shows an example of a processing procedure for automatically calculating importance data on the basis of the idea that a scene with a high sound level is important.
  • FIG. 38 is established as a function block diagram.
  • In the sound level calculation processing at step S210, the sound level at each time is calculated from the sound data attached to the video. Since the sound level changes greatly from instant to instant, smoothing processing or the like may also be conducted in this step.
  • Next, processing is conducted for converting the sound level output from the sound level calculation processing into importance.
  • For example, the input sound level is linearly converted into a value from 0 to 100, a preset lowest sound level being mapped to 0 and a preset highest sound level to 100.
  • Sound levels not more than the lowest level are set to 0, and sound levels not less than the highest level are set to 100.
  • In this way the importance at each time is calculated and output as importance data.
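  • The linear conversion above amounts to the following clipping map (lo and hi stand for the preset lowest and highest sound levels):

        def level_to_importance(level, lo, hi):
            """Map a (smoothed) sound level linearly onto importance 0..100."""
            if level <= lo:
                return 0.0
            if level >= hi:
                return 100.0
            return 100.0 * (level - lo) / (hi - lo)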
  • FIG. 39 shows an example of the processing procedure of another method for automatically calculating the importance.
  • FIG. 39 is established as a function block diagram.
  • At step S220, when the sound data attached to the video is input, the spoken language (words) is converted into text data by the sound recognition processing.
  • In the important word dictionary 221, words that are likely to appear in important scenes are registered. If the registered words differ in degree of importance, a weight is attached to each registered word.
  • In the word collation processing at step S222, the text data output from the sound recognition processing is collated with the words registered in the important word dictionary 221 to determine whether or not important words are spoken.
  • Next, the importance of each scene of the video, or at each time, is calculated from the result of the word collation processing.
  • Using the number of appearances of important words and their weights, the importance around the time at which an important word appears (or of the scene in which it appears) is increased, for example by a constant value or by a value proportional to the weight of the word.
  • the importance at each time is calculated to be output as importance data.
  • In a simplified variant, the important word dictionary 221 becomes unnecessary; it is simply assumed that a scene in which many words are spoken is important.
  • In that case, processing that counts the number of words output from the sound recognition processing is conducted. Not only the number of words but also the number of characters may be counted.
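  • A sketch of the word collation and importance boost of FIG. 39; the recognizer output format, the window width, and the dictionary contents are all assumptions:

        def word_importance_boosts(recognized, dictionary, window=2.0):
            """recognized: [(time_sec, word), ...] from sound recognition;
            dictionary: {word: weight}. Returns (start, end, boost) intervals
            over which the importance should be raised."""
            boosts = []
            for t, word in recognized:
                if word in dictionary:
                    boosts.append((t - window / 2, t + window / 2, dictionary[word]))
            return boosts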
  • FIG. 40 shows an example of the processing procedure of yet another method for automatically calculating the importance.
  • FIG. 40 is also established as a function block diagram.
  • The procedure of FIG. 40 determines that a scene is important when many important words, registered in advance, appear in the telop (on-screen caption) shown in the video.
  • In the telop recognition processing, the character locations in the video are specified, and characters are recognized by binarizing the video regions at those locations.
  • the recognized result is output as text data.
  • the important word dictionary 231 is the same as the important word dictionary 221 of FIG. 39.
  • In the word collation processing at step S232, in the same manner as at step S222 of FIG. 39, the text data output from the telop recognition processing is collated with the words registered in the important word dictionary 231 to determine whether or not important words have appeared.
  • The importance of each scene, or at each time, is calculated from the number of appearances of important words and their weights, in the same manner as at step S223 in the procedure of FIG. 39.
  • the importance at each time is determined to be output as importance data.
  • Here too, the important word dictionary 231 can be made unnecessary by simply assuming that a scene in which many telop words appear is important.
  • In that case, processing is conducted that simply counts the number of words output from the telop recognition processing. Not only the number of words but also the number of characters may be counted.
  • FIG. 41 shows an example of the processing procedure of still another method for automatically calculating the importance.
  • FIG. 41 is established as a function block diagram.
  • The procedure of FIG. 41 determines that the larger the character size of a telop appearing in the video, the more important the scene.
  • First, processing is conducted for specifying the location of the character string in the video.
  • In the character size calculation processing at step S241, individual characters are extracted, and the average or maximum character size (area) is calculated.
  • In the importance calculation processing at step S242, an importance proportional to the character size output from the character size calculation is computed. If the calculated importance is too large or too small, it is restricted to a preset range by threshold processing. As a result, the importance at each time is calculated and output as importance data.
  • FIG. 42 shows an example of the processing procedure of still another method for automatically calculating the importance.
  • FIG. 42 is established as a function block diagram.
  • In the face detection processing, areas that look like human faces are detected in the video.
  • The number of areas determined to be human faces (the number of faces) is output.
  • the information on the size (area) of the face may be output at the same time.
  • In the importance calculation processing at step S251, the number of faces output by the face detection processing is multiplied by a constant to calculate the importance. If the output of the face detection processing includes face size information, the calculation is made so that the importance increases with the size of the faces; for example, the face area is multiplied by a constant. As a result, the importance at each time is calculated and output as importance data.
  • FIG. 43 shows an example of the processing procedure of still another method for automatically calculating the importance.
  • FIG. 43 is also established as a function block diagram.
  • the video which should be determined to be important is registered in the important scene dictionary 260 .
  • The video is recorded as raw data or in a compressed form. Instead of the video itself, a feature quantity of the video (a color histogram, frequency characteristics, or the like) may be recorded.
  • In the similarity/dissimilarity calculation processing at step S261, the similarity or dissimilarity between the video registered in the important scene dictionary 260 and the input video data is calculated.
  • As the dissimilarity, the sum of squared errors or the sum of absolute differences is used. If raw video data is recorded in the important scene dictionary 260, the sum of squared errors, or of absolute differences, over corresponding pixels is calculated as the dissimilarity. If a color histogram of the video is recorded in the important scene dictionary 260, the same color histogram is calculated for the input video data, and the sum of squared errors, or of absolute differences, between the histograms is used as the dissimilarity.
  • In the importance calculation processing, the importance is calculated from the similarity or dissimilarity output by the similarity/dissimilarity calculation processing.
  • The importance is calculated in such a manner that larger similarity yields greater importance when similarity is input, while larger dissimilarity yields smaller importance when dissimilarity is input.
  • As a result, the importance at each time is calculated and output as the importance data.
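  • A sketch of the histogram variant: a frame whose color histogram is close to any registered important scene (small sum of absolute differences, i.e. small dissimilarity) receives large importance. The bin layout and the inverse mapping are assumptions:

        def color_histogram(pixels, bins=8):
            """pixels: iterable of (r, g, b); returns a normalized histogram."""
            h = [0.0] * (bins ** 3)
            n = 0
            for r, g, b in pixels:
                idx = ((r * bins // 256) * bins + g * bins // 256) * bins + b * bins // 256
                h[idx] += 1
                n += 1
            return [v / n for v in h] if n else h

        def dictionary_importance(frame_hist, dictionary_hists):
            """Smaller dissimilarity to the nearest entry -> larger importance."""
            dissim = min(sum(abs(a - b) for a, b in zip(frame_hist, d))
                         for d in dictionary_hists)  # assumes a non-empty dictionary
            return 1.0 / (1.0 + dissim)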
  • A scene having a high instantaneous audience rating may also be treated as important.
  • Instantaneous audience rating data is obtained from the tabulation of audience rating surveys, and the importance is calculated by multiplying the rating by a constant.
  • Each of these importance calculation processes may be conducted alone, or a plurality of data items may be used at the same time to calculate the importance. In the latter case, for example, the importance of one video is calculated by several different methods, and the final importance is taken as their average or maximum value.
  • Reproduction/non-reproduction information may be added to the frame information to control reproduction or non-reproduction. As a consequence, only a part of the video is reproduced, or only a part is not reproduced, on the basis of the reproduction/non-reproduction information.
  • FIGS. 44, 45, and 46 show examples of a data structure in which the reproduction/non-reproduction information is added.
  • FIG. 44 shows a data structure in which the reproduction/non-reproduction information 123 is added to the data structure of FIG. 8.
  • FIG. 45 shows a data structure in which the reproduction/non-reproduction information 123 is added to the data structure of FIG. 34.
  • FIG. 46 shows a data structure in which the reproduction/non-reproduction information 123 is added to the data structure of FIG. 35. Though not shown, it is possible to add the reproduction/non-reproduction information 123 to the data structure of FIG. 1.
  • The reproduction/non-reproduction information 123 may be binary information specifying whether the video is reproduced, or a continuous value such as a reproduction level.
  • If the reproduction level exceeds a certain threshold value at the time of reproduction, the video is reproduced.
  • If the reproduction level is less than the threshold value, the video is not reproduced.
  • the user can directly or indirectly specify the threshold value.
  • The reproduction/non-reproduction information 123 may be stored as independent information. If reproduction or non-reproduction is specified selectively, non-reproduction can be specified by setting the display time shown in the display time information 121 to a specific value (for example, 0 or −1). Alternatively, non-reproduction can be specified by setting the importance indicated by the importance information 122 to a specific value (for example, 0 or −1). In these cases the reproduction/non-reproduction information 123 need not be added.
  • the display time information 121 and/or the importance information 122 can be used as a substitute.
  • If the reproduction/non-reproduction information 123 is maintained as independent information, the quantity of data increases accordingly. A digest of the video can be viewed by not reproducing the non-reproduction portions on the reproduction side, and the whole video can be viewed by reproducing them. If the reproduction/non-reproduction information 123 is not maintained as independent information, the display times specified as, for example, 0 must be changed appropriately in order to view the whole video by reproducing the non-reproduction portions.
  • The reproduction/non-reproduction information 123 may be input manually or determined by some condition. For example, the video may be reproduced when its motion information is at or above a constant value and not reproduced otherwise, so that only portions with brisk motion are reproduced. By judging from color information whether the proportion of skin color is larger or smaller than a constant value, only scenes in which people appear can be reproduced. Methods that derive the information from the sound level, or from program information input in advance, are also conceivable. Alternatively, the importance may be calculated by some technique and the reproduction/non-reproduction information 123 created from the importance information; when the reproduction/non-reproduction information takes a continuous value, the calculated importance may be converted directly into it.
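  • A sketch of one such condition, deriving the reproduction/non-reproduction flag from motion information with a threshold (the dictionary keys are illustrative):

        def motion_to_playback_flags(frame_infos, threshold):
            """frame_infos: list of dicts with a 'motion' entry; adds 'reproduce'
            so that only portions with brisk motion are marked for reproduction."""
            for info in frame_infos:
                info['reproduce'] = info['motion'] >= threshold
            return frame_infos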
  • FIG. 47 shows an example in which reproduction/non-reproduction control is carried out so that video is reproduced on the basis of the reproduction/non-reproduction information 123 .
  • In FIG. 47, it is supposed that the original video 2151 is reproduced on the basis of the video frame (or frame group) location information 2153, represented by F1 through F6, and the display time information, represented by D1 through D6.
  • the reproduction/non-reproduction information is added to the display time information 2154 .
  • If the sections D1, D2, D4, and D6 are specified for reproduction and the other sections are not, the sections D1, D2, D4, and D6 are reproduced continuously as the reproduction video 2152, while the other sections are skipped.
  • Each display time D+i is set to the time required to reproduce the corresponding portion of the original video at normal speed.
  • The reproduction speed may instead be set to a predetermined high speed, and information may be described as to how many times normal speed is to be used.
  • For N-times high-speed reproduction, the display time D+i of each reproduction portion is multiplied by 1/N.
  • Alternatively, to make the whole display time equal to a specified time D′, the display time D+i of each reproduction portion may be scaled by D′/Σi D+i.
  • the determined display time may be adjusted.
  • If display times calculated without taking the non-reproduction sections into consideration are used as they are, the whole display time is shortened by the portion originally allocated to the non-reproduction sections (whose display times exceeded 0).
  • In that case, the adjustment multiplies the display time of each frame (or frame group) to be reproduced by a constant, so that the whole display time becomes equal to the time that would result if the non-reproduction sections were also reproduced.
  • the user may make a selection as to whether the adjustment is to be made.
  • The N-times high-speed reproduction processing may be conducted without adjusting the calculated display times.
  • Or it may be conducted on the basis of the display times adjusted in the above manner (the former yields a shorter whole display time).
  • the user may specify the whole display time.
  • In that case, the display time of each frame (or frame group) to be reproduced is multiplied by a constant so that the whole display time becomes equal to the specified value.
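  • The adjustment amounts to a renormalization; a sketch assuming per-frame display times and reproduction flags:

        def renormalize_display_times(display_times, reproduce_flags, target_total):
            """Scale the display times of the reproduced frames by a constant so
            that their total equals the specified whole display time."""
            playable = sum(d for d, f in zip(display_times, reproduce_flags) if f)
            scale = target_total / playable   # assumes at least one reproduced frame
            return [d * scale if f else 0.0
                    for d, f in zip(display_times, reproduce_flags)]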
  • FIG. 48 shows one example of the processing procedure for reproducing only a portion of the video on the basis of the reproduction/non-reproduction information 123 .
  • The frame information (video location information and display time information) is read, and at step S163 it is determined from the reproduction/non-reproduction information in the display time information whether the frame is to be reproduced.
  • If it is, the frame is displayed for its display time at step S164.
  • If not, the frame is not displayed, and the processing moves on to the next frame.
  • It is determined at step S161 whether the whole video to be reproduced has been processed; when it has, the reproduction processing ends.
  • When determining at step S163 whether the frame is to be reproduced, it is sometimes desirable to let the determination depend on the taste of the user. In that case, whether the non-reproduction portions are to be reproduced is determined in advance from the user profile before the reproduction of the video; when they are, the frame is always reproduced at step S164.
  • Alternatively, a threshold value differentiating reproduction from non-reproduction is determined from the user profile, and reproduction or non-reproduction is decided by whether the reproduction/non-reproduction information exceeds that threshold.
  • The threshold value may be calculated from the importance set for each frame, or information as to whether each portion is to be reproduced may be received from the user in real time.
  • In one example, the video location information 101 and the display time information 102 are included in each item of frame information 100.
  • In another, the video location information 101 and the importance information 122 are included in each item of frame information 100.
  • In another, the video location information 101, the display time information 121, and the importance information 122 are included in each item of frame information 100.
  • FIGS. 44, 45, and 46 further show examples in which the reproduction/non-reproduction information 123 is included in each item of frame information 100.
  • To each item of frame information, zero or more sets of sound location information 2703 and sound reproduction time information 2704, and zero or more sets of text information 2705 and text display time information 2706, may be added.
  • FIG. 49 shows an example in which one set of sound location information 2703 and sound reproduction time information 2704 and N sets of text information 2705 and text display time information 2706 are added to an example of the data structure of FIG. 8.
  • the sound is reproduced for the time indicated by the sound reproduction time information 2704 from the location indicated by the sound location information 2703 .
  • The object of reproduction may be sound information attached to the video from the beginning, or newly created background music may be added.
  • As for text, the content indicated by the text information 2705 is displayed for the time indicated by the text display time information 2706.
  • a plurality of items of text information may be added to one video frame.
  • the time when the sound reproduction and the text display are started is the same as the time when the associated video frame is displayed.
  • The sound reproduction time and the text display time are set within the range of the display time of the associated video frame. If continuous sound is to be reproduced over a plurality of video frames, the sound location information and the reproduction times may be set so as to be continuous.
  • FIG. 50 shows one example of a method for describing the sound information separately from the frame information. This is an example of a data structure for reproducing sound associated with the video frame which is displayed at the time when the special reproduction is conducted.
  • A set consisting of location information 2801 showing the location of the sound to be reproduced, a reproduction start time 2802 at which the sound reproduction starts, and a reproduction time 2803 for which the reproduction continues constitutes one item of sound information 2800, and the data is described as an array of such sound information.
  • FIG. 51 shows a data structure for describing the text information.
  • The data structure is the same as that of the sound information of FIG. 50: a set consisting of character code location information 2901 of the text to be displayed, a display start time 2902, and a display time 2903 constitutes one item of text information 2900, and the data is described as an array of such text information.
  • The location information may indicate a location where the character code is stored, or a location where the characters are stored as an image.
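  • An illustrative rendering of the sound information of FIG. 50 and the text information of FIG. 51 as Python dataclasses; a description is then simply an array (list) of such items. Field names are assumptions:

        from dataclasses import dataclass

        @dataclass
        class SoundInfo:
            location: str      # location of the sound to be reproduced (2801)
            start_time: float  # reproduction start time in seconds (2802)
            duration: float    # reproduction time in seconds (2803)

        @dataclass
        class TextInfo:
            location: str      # character code (or image) location (2901)
            start_time: float  # display start time in seconds (2902)
            duration: float    # display time in seconds (2903)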
  • The above sound information or text information is displayed in synchronization with the video frames, as information associated with a displayed video frame or with the video frame section containing it.
  • the reproduction or the display of the sound information or the text information is started with the lapse of time shown by the time axis 3001 .
  • the video 3002 is displayed and reproduced for the described display time in an order in which the respective video frames are described.
  • Reference numerals 3005 , 3006 and 3007 denote respective video frames and a predetermined display time is allocated thereto.
  • The sound 3003 is reproduced when the reproduction start time described in each item of sound information comes, and is stopped when the described reproduction time has elapsed.
  • As shown in the figure, a plurality of sounds 3008 and 3009 may be reproduced at the same time.
  • The text 3004 is likewise displayed when the display start time described in each item of text information comes, and the display is stopped when the described display time has elapsed. A plurality of texts 3010 and 3011 may be displayed at the same time.
  • Here the sound reproduction start time and the text display start time coincide with the time at which the associated video frame is displayed, but the sound reproduction time and the text display time are not required to coincide with the display time of the video frame. These times can be set freely; conversely, the display time of the video frame may be changed in accordance with the sound reproduction time and the text display time.
  • FIG. 53 shows one example of a processing procedure in which a continuous video frame section called a shot, extending from one change-over of the screen to the next, is determined, and the total display time of the video frames included in the shot is defined as the sound reproduction time.
  • FIG. 53 is also established as a function block diagram.
  • the shot is detected from the video.
  • Known methods can be used, such as the method of detecting cuts of a motion picture from MPEG bit streams using a tolerance ratio detection method (The Transactions of the Institute of Electronics, Information and Communication Engineers, Vol. J82-D-II, No. 3, pp. 361-370, 1999).
  • Next, the video frame location information is referred to in order to determine which shot each video frame belongs to, and the display time of each shot is calculated by totaling the display times of its video frames.
  • the sound location information is set as the sound location corresponding to the start of the shot.
  • The sound reproduction start time may be made to coincide with the display start time of the first video frame belonging to each shot, and the sound reproduction time set equal to the display time of the shot. Alternatively, the display times of the video frames included in each shot may be corrected in accordance with the reproduction time of the sound.
  • Although a shot is detected here, if a data structure in which importance information is described in the frame information is adopted, a section whose importance exceeds a threshold value may be determined from the per-frame importance, and the sound included in that section reproduced.
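  • A sketch of the shot aggregation step of FIG. 53: given the first frame number of each detected shot and the extracted frames with their display times, each shot's total display time (its sound reproduction time) is accumulated. The input layout is an assumption:

        import bisect

        def shot_display_times(shot_starts, frame_locations, display_times):
            """shot_starts: sorted first-frame numbers of the shots (first is 0);
            frame_locations/display_times: per extracted frame."""
            totals = [0.0] * len(shot_starts)
            for loc, d in zip(frame_locations, display_times):
                shot = bisect.bisect_right(shot_starts, loc) - 1  # containing shot
                totals[shot] += d
            return totals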
  • FIG. 54 shows one example of a processing procedure in which important words are extracted, by sound recognition, from the sound data corresponding to a shot or to a video frame section of high importance, and the words themselves, sounds including the words, or sounds combining a plurality of the words are reproduced.
  • FIG. 54 is also established as a function block diagram.
  • At step S3201, the shot is detected; a video frame section of high importance may be calculated instead.
  • At step S3202, sound recognition is carried out on the sound data section corresponding to the obtained video frame section.
  • At step S3203, sounds including important words, or the important word portions themselves, are determined from the recognition result.
  • an important word dictionary 3204 is referred to.
  • Next, the sound for reproduction is created: continuous sound including the important words may be used as it is, only the important words may be extracted, or sound combining a plurality of important words may be created.
  • At step S3206, the display time of the video frame is corrected in accordance with the reproduction time of the created sound.
  • Alternatively, the number of selected words may be reduced to shorten the sound reproduction time, so that it fits within the display time of the video frame.
  • FIG. 55 shows one example of a procedure in which text information is obtained from the telop.
  • FIG. 55 is also established as a function block diagram.
  • The text information is obtained from the telop displayed in the video, or from the sound.
  • the telop displayed in the video is read.
  • At step S3302, important words are extracted from the telop character string that has been read.
  • an important word dictionary 3303 is used.
  • The telop character string as read may be used as the text information as it is; alternatively, the extracted words may be arranged, and a sentence representing the video frame section may be constructed from only the important words to provide the text information.
  • FIG. 56 shows one example for obtaining the text information from the sound.
  • FIG. 56 is also established as a function block diagram.
  • At step S3402, important words are extracted from the recognized sound data.
  • an important word dictionary 3403 is used.
  • The recognized sound data may be used as the text information as it is.
  • Alternatively, the extracted words may be arranged, and a sentence representing the video frame section may be constructed from only the important words to provide the text information.
  • FIG. 57 shows an example of a processing procedure for extracting and preparing text information by telop recognition from a shot or from a video frame section of high importance.
  • FIG. 57 is also established as a function block diagram.
  • At step S3501, the shot is detected from the video; a section of high importance may be determined instead.
  • At step S3502, the telop appearing in the video frame section is recognized.
  • At step S3503, important words are extracted by using an important word dictionary 3504.
  • At step S3505, text for display is created.
  • A telop character string including important words may be used, or only the important words, or a character string composed of them, may be used as the text information.
  • Text information can also be prepared from sound by replacing the telop recognition processing at step S3502 with sound recognition processing that takes sound data as input.
  • The text information is displayed together with the video frame in which the text appears as a telop, or with the video frame at the time at which the data is reproduced as sound. Alternatively, all the text information in a video frame section may be displayed at once.
  • FIGS. 58A and 58B are views showing a display example of the text information.
  • The display may be divided into a text information display area 3601 and a video display area 3602.
  • Alternatively, the text information may be overlaid on the video display area 3603.
  • The respective display times (reproduction times) of the video frames, the sound information, and the text information may be adjusted so that all the media are synchronized. For example, for double-speed reproduction of the video, important sounds are extracted by the above method to obtain sound information lasting half the normal reproduction time. Next, display time is allocated to the video frames associated with the respective sounds. If the display times of the video frames are determined so that the scene change quantity becomes constant, the sound reproduction time or the text display time is set within the display time of the respectively associated video frames. Alternatively, a section including a plurality of video frames, such as a shot, is determined, and the sound or text included in the section is reproduced or displayed in accordance with the display time of the section.
  • The data structure of the present invention can also be applied to data having no frame information, such as sound data; sound information and text information can be used in a form without frame information.
  • a summary is created which comprises only sound information or text information with respect to the original video data.
  • a summary can be created which comprises only sound information and text information with respect to the sound data and music data.
  • Although FIGS. 50 and 51 describe sound information and text information in synchronization with video data, it is also possible to summarize sound data and text data alone.
  • the data structure shown in FIG. 50 can be used irrespective of the video information.
  • the data structure shown in FIG. 51 can be used irrespective of the video information.
  • Original data information may be added to the sound information and the text information to describe the correspondence with the original sound and music data.
  • FIG. 59 shows an example of a data structure in which the original data information 4901 is included in the sound information shown in FIG. 50. If the original data is the video, the original data information 4901 indicates the section of video frames (start point information 4902 and section length information 4903 ).
  • If the original data is sound or music, the original data information 4901 indicates the section of the sound or music.
  • FIG. 60 shows an example of a data structure in which the original data information 4901 is included in the text information shown in FIG. 51.
  • FIG. 61 explains an example in which sound/music is summarized by using the sound information.
  • the original sound/music is divided into several sections.
  • A portion of each section is extracted as the summarized sound/music, so that a summary of the original data is created.
  • For example, a portion 5001 of the section 2 is extracted as summarized sound/music and reproduced as the section 5002 of the summary.
  • For example, music may be divided into chapters, and conversation may be divided according to its content.
  • Since descriptions of the original data file and the section are included in the sound information and the text information, a plurality of sound/music data items can be summarized together.
  • the original data identification information may be described in place of the original data file and the section.
  • FIG. 62 explains another example in which sound/music is summarized by using the sound information. Portions of plural sound/music data items are extracted as the summarized sound/music, so that a summary of the original data is created. For example, a portion 5001 of the sound/music data item 2 is extracted as summarized sound/music and reproduced as the section 5102 of the summary. As one usage, a portion of each piece of music included in one music album can be extracted to create summarized data for trial listening.
  • The title of the music may be included in the sound information when it is desirable that the title be known; this information is not indispensable.
  • Video data and special reproduction control information are recorded on one (or a plurality of) recording medium (or media) and provided at the same time;
  • Video data is recorded on one (or a plurality of) recording medium (or media) and provided, and the special reproduction control information is separately recorded on one (or a plurality of) recording medium (media) and provided;
  • Video data and the special reproduction control information are provided via the communication medium at the same occasion;
  • Video data and the special reproduction control information are provided via the communication media at different occasions.
  • a special reproduction control information describing method for describing special reproduction control information provided for special reproduction with respect to the video contents describes, as the frame information, for each of frames or groups of continuous or adjacent frames selectively extracted from the whole frame series of video data constituting the video contents, first information showing a location at which video data of the one frame or one group is present and second information associated with display time allocated to the one frame or the frame group, and/or third information showing importance allocated to the one frame or the frame group corresponding to the frame information.
  • a computer readable recording medium storing a special reproduction control information stores at least frame information described for each of frames or groups of continuous or adjacent frames selectively extracted from the whole frame series of video data constituting the video contents, the frame information comprising first information showing a location at which video data of the one frame or one group is present and second information associated with display time allocated to the one frame or the frame group, and/or third information showing importance allocated to the one frame or the frame group corresponding to the frame information.
  • a special reproduction control information describing apparatus/method for describing special reproduction control information provided for special reproduction with respect to the video contents describes, as the frame information, for each of frames or groups of continuous or adjacent frames selectively extracted from the whole frame series of video data constituting the video contents, video location information showing a location at which video data of the one frame or one group is present and display time control information including display time information and basic information based on which the display time is calculated, to be allocated to the one frame or the frame group.
  • a special reproduction apparatus/method which enables special reproduction with respect to video contents, wherein special reproduction control information is referred to which includes at least frame information including video location information showing a location at which one frame data or one frame group data is present, the frame information being described for each frame, or each group of a plurality of continuous or adjacent frames, selectively extracted out of the whole frame series of the video data constituting the video contents; the one frame data or frame group data corresponding to each item of frame information is obtained on the basis of the video location information included in that frame information, the display time to be allocated to each item of frame information is determined on the basis of at least the display time control information included in it, and the obtained data on the one frame or plurality of frames is reproduced for the determined display time in a predetermined order, thereby carrying out the special reproduction.
  • In the present invention, location information on the effective video frames used for display, or image data extracted in frame units from the original video, is created in advance, and the video frame location information, or information on the display time of the image data, is created separately from the original video.
  • Either the video frames or the image data extracted from the original video is displayed continuously on the basis of the display information, so that special reproduction such as double-speed reproduction, trick reproduction, jump continuous reproduction, or the like is enabled.
  • The display time is determined in advance in such a manner that it is extended at locations where the motion of the scene is large and shortened at locations where the motion is small, so that the change in the display screen becomes as constant as possible.
  • The same effect can be obtained when the location information is determined so that the interval between extracted locations is made small where the motion of the video frames or video data used for display is large, and made large where the motion is small.
  • A reproduction speed control value may be created so that the overall speed factor or reproduction time designated by the user is achieved.
  • In this way a long video can be viewed at high-speed reproduction, so that it can be viewed easily and its contents grasped in a short time.
  • Effective special reproduction is enabled on the reproduction side on the basis of the control information, by arranging and describing, as control information provided for special reproduction of the video contents, a plurality of items of frame information each including a method for obtaining a frame or a group of frames selectively extracted from the original video, information on the display time (an absolute or relative value) allocated to the frame or group of frames, and information forming the basis from which the display time is obtained.
  • each of the above functions can be realized as software.
  • the above embodiments can be realized as a computer readable recording medium on which a program is recorded for allowing the computer to conduct predetermined means or for allowing the computer to function as predetermined means, or for allowing the computer to realize a predetermined function.
  • Each of the embodiments is one example and is not intended to exclude other structures. It is also possible to provide a structure obtained by replacing a part of the exemplified structure with another, omitting a part of it, adding a different function to it, or combining such measures.
  • a different structure logically equivalent to the exemplified structure, a different structure including a part logically equivalent to the exemplified structure, and a different structure logically equivalent to the essential portion of the exemplified structure can be provided.
  • Another structure identical to or similar to the exemplified structure, or a different structure having the same effect as the exemplified structure or a similar effect can be provided.
  • Each of the embodiments includes, or inherently contains, inventions associated with various viewpoints, stages, concepts, or categories, such as an invention as a method for describing information, an invention as the information that is described, an invention as a corresponding apparatus or method, or an invention as the inside of such an apparatus or a corresponding method.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Television Signal Processing For Recording (AREA)
  • Indexing, Searching, Synchronizing, And The Amount Of Synchronization Travel Of Record Carriers (AREA)
US09/894,321 2000-06-30 2001-06-29 Special reproduction control information describing method, special reproduction control information creating apparatus and method therefor, and video reproduction apparatus and method therefor Abandoned US20020051081A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
US10/226,352 US20030002853A1 (en) 2000-06-30 2002-08-23 Special reproduction control information describing method, special reproduction control information creating apparatus and method therefor, and video reproduction apparatus and method therefor
US10/319,478 US20030086692A1 (en) 2000-06-30 2002-12-16 Special reproduction control information describing method, special reproduction control information creating apparatus and method therefor, and video reproduction apparatus and method therefor

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2000-200220 2000-06-30
JP2000200220 2000-06-30

Related Child Applications (2)

Application Number Title Priority Date Filing Date
US10/226,352 Division US20030002853A1 (en) 2000-06-30 2002-08-23 Special reproduction control information describing method, special reproduction control information creating apparatus and method therefor, and video reproduction apparatus and method therefor
US10/319,478 Division US20030086692A1 (en) 2000-06-30 2002-12-16 Special reproduction control information describing method, special reproduction control information creating apparatus and method therefor, and video reproduction apparatus and method therefor

Publications (1)

Publication Number Publication Date
US20020051081A1 true US20020051081A1 (en) 2002-05-02

Family

ID=18698116

Family Applications (3)

Application Number Title Priority Date Filing Date
US09/894,321 Abandoned US20020051081A1 (en) 2000-06-30 2001-06-29 Special reproduction control information describing method, special reproduction control information creating apparatus and method therefor, and video reproduction apparatus and method therefor
US10/226,352 Abandoned US20030002853A1 (en) 2000-06-30 2002-08-23 Special reproduction control information describing method, special reproduction control information creating apparatus and method therefor, and video reproduction apparatus and method therefor
US10/319,478 Abandoned US20030086692A1 (en) 2000-06-30 2002-12-16 Special reproduction control information describing method, special reproduction control information creating apparatus and method therefor, and video reproduction apparatus and method therefor

Family Applications After (2)

Application Number Title Priority Date Filing Date
US10/226,352 Abandoned US20030002853A1 (en) 2000-06-30 2002-08-23 Special reproduction control information describing method, special reproduction control information creating apparatus and method therefor, and video reproduction apparatus and method therefor
US10/319,478 Abandoned US20030086692A1 (en) 2000-06-30 2002-12-16 Special reproduction control information describing method, special reproduction control information creating apparatus and method therefor, and video reproduction apparatus and method therefor

Country Status (4)

Country Link
US (3) US20020051081A1 (de)
EP (1) EP1168840A3 (de)
KR (1) KR100564893B1 (de)
CN (1) CN1367612A (de)

Cited By (20)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20040021974A1 (en) * 2002-06-10 2004-02-05 Hirotaka Shiiyama Reproduction control of reproduction apparatus based on remaining power of battery
US20050185929A1 (en) * 2004-02-21 2005-08-25 Samsung Electronics Co., Ltd Information storage medium having recorded thereon text subtitle data synchronized with AV data, and reproducing method and apparatus therefor
US20050198570A1 (en) * 2004-01-14 2005-09-08 Isao Otsuka Apparatus and method for browsing videos
US20060104609A1 (en) * 2004-11-08 2006-05-18 Kabushiki Kaisha Toshiba Reproducing device and method
US20060204224A1 (en) * 2002-04-17 2006-09-14 Samsung Electronics Co., Ltd. Recording medium containing thumbnail recorded thereon, recording apparatus and method therefor, and reproducing apparatus and method therefor
US20080050094A1 (en) * 2001-11-27 2008-02-28 Lg Electronics Inc. Method for ensuring synchronous presentation of additional data with audio data
US20080303943A1 (en) * 2007-06-05 2008-12-11 Tatsuhiko Numoto Digest generation for television broadcast program
US20090148133A1 (en) * 2004-01-30 2009-06-11 Kazuhiro Nomura Content playback apparatus
US20100161092A1 (en) * 2001-11-27 2010-06-24 Hyung Sun Kim Method of managing lyric data of audio data recorded on a rewritable recording medium
US20110080478A1 (en) * 2009-10-05 2011-04-07 Michinari Kohno Information processing apparatus, information processing method, and information processing system
US20110116759A1 (en) * 2009-10-13 2011-05-19 Nikon Corporation Imaging device and image processing apparatus
US20130314596A1 (en) * 2012-05-25 2013-11-28 Kabushiki Kaisha Toshiba Electronic apparatus, control method of electronic apparatus, and control program of electronic apparatus
US8606069B2 (en) 2004-09-08 2013-12-10 Panasonic Corporation Playback device, playback method, and computer-readable recording medium for ensuring stable application execution in synchronism with video data playback
US20140143235A1 (en) * 2012-11-20 2014-05-22 Lenovo (Beijing) Co., Ltd. Information outputting method and electronic device
US20150127626A1 (en) * 2013-11-07 2015-05-07 Samsung Techwin Co., Ltd. Video search system and method
US10212386B2 (en) * 2015-08-28 2019-02-19 Xiaomi Inc. Method, device, terminal device, and storage medium for video effect processing
US10734024B1 (en) * 2018-09-04 2020-08-04 Adobe, Inc. Systems and methods of appending metadata to a media file for playing time-lapsed audio
US11128990B2 (en) * 2018-06-20 2021-09-21 Canon Kabushiki Kaisha Communication apparatus, control method, and storage medium
US11457267B2 (en) 2018-06-20 2022-09-27 Canon Kabushiki Kaisha Communication apparatus, communication method, and storage medium
US11900966B2 (en) 2020-12-17 2024-02-13 Samsung Electronics Co., Ltd. Method and electronic device for producing video summary

Families Citing this family (30)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002044572A (ja) 2000-07-21 2002-02-08 Sony Corp Information signal processing apparatus, information signal processing method, and information signal recording apparatus
US6870956B2 (en) 2001-06-14 2005-03-22 Microsoft Corporation Method and apparatus for shot detection
AU2003247037A1 (en) * 2002-07-30 2004-02-23 Koninklijke Philips Electronics N.V. Trick play behavior controlled by a user
US7116716B2 (en) 2002-11-01 2006-10-03 Microsoft Corporation Systems and methods for generating a motion attention model
US7127120B2 (en) 2002-11-01 2006-10-24 Microsoft Corporation Systems and methods for automatically editing a video
US7274741B2 (en) 2002-11-01 2007-09-25 Microsoft Corporation Systems and methods for generating a comprehensive user attention model
JP4222869B2 (ja) * 2002-12-10 2009-02-12 Sony Computer Entertainment Inc. Image reproduction apparatus
US7164798B2 (en) 2003-02-18 2007-01-16 Microsoft Corporation Learning-based automatic commercial content detection
US7260261B2 (en) 2003-02-20 2007-08-21 Microsoft Corporation Systems and methods for enhanced image adaptation
JP4117615B2 (ja) * 2003-06-30 2008-07-16 Sony Corp Temporary storage management apparatus, temporary storage management method, and temporary storage management program
US7400761B2 (en) 2003-09-30 2008-07-15 Microsoft Corporation Contrast-based image attention analysis framework
JP4276042B2 (ja) 2003-10-07 2009-06-10 Pioneer Corp Index data generating apparatus, index data generating method, index data generating program and information recording medium recording the same, and content data reproducing apparatus, content data reproducing method, content data reproducing program and information recording medium recording the same
US7471827B2 (en) 2003-10-16 2008-12-30 Microsoft Corporation Automatic browsing path generation to present image areas with high attention value as a function of space and time
CN101494076B (zh) * 2004-05-11 2012-09-05 Matsushita Electric Industrial Co., Ltd. Reproduction apparatus
US9053754B2 (en) 2004-07-28 2015-06-09 Microsoft Technology Licensing, Llc Thumbnail generation and presentation for recorded TV programs
US7986372B2 (en) 2004-08-02 2011-07-26 Microsoft Corporation Systems and methods for smart media content thumbnail extraction
JP4297010B2 (ja) * 2004-08-13 2009-07-15 Sony Corp Information processing apparatus, information processing method, and program
WO2006064749A1 (ja) 2004-12-16 2006-06-22 Sharp Kabushiki Kaisha Moving image reproduction method and moving image reproduction apparatus
US7548936B2 (en) 2005-01-12 2009-06-16 Microsoft Corporation Systems and methods to present web image search results for effective image browsing
KR100716291B1 (ko) * 2005-07-27 2007-05-09 Samsung Electronics Co., Ltd. Video reproducing apparatus, control method thereof, and PVR
US8180826B2 (en) 2005-10-31 2012-05-15 Microsoft Corporation Media sharing and authoring on the web
US7773813B2 (en) 2005-10-31 2010-08-10 Microsoft Corporation Capture-intention detection for video content analysis
US7796860B2 (en) * 2006-02-23 2010-09-14 Mitsubishi Electric Research Laboratories, Inc. Method and system for playing back videos at speeds adapted to content
US20070260634A1 (en) * 2006-05-04 2007-11-08 Nokia Corporation Apparatus, system, method, and computer program product for synchronizing the presentation of media content
EP2079231B1 (de) 2006-10-24 2014-04-30 Imaging device and reproduction control device
JP2009089065A (ja) 2007-09-28 2009-04-23 Toshiba Corp Electronic apparatus and face image display apparatus
JP5444611B2 (ja) * 2007-12-18 2014-03-19 Sony Corp Signal processing apparatus, signal processing method, and program
US8625837B2 (en) * 2009-05-29 2014-01-07 Microsoft Corporation Protocol and format for communicating an image from a camera to a computing environment
US9824721B2 (en) * 2013-01-16 2017-11-21 Mitsuo Hayashi Video generation device, video generation program, and video generation method
CN112866805A (zh) * 2021-04-23 2021-05-28 Beijing Jinher Network Co., Ltd. Video acceleration processing method and apparatus, and electronic device

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5500680A (en) * 1992-10-12 1996-03-19 Goldstar Co., Ltd. Caption display controlling device and the method thereof for selectively scrolling and displaying a caption for several scenes
US5933807A (en) * 1994-12-19 1999-08-03 Nitsuko Corporation Screen control apparatus and screen control method
US5974219A (en) * 1995-10-11 1999-10-26 Hitachi, Ltd. Control method for detecting change points in motion picture images and for stopping reproduction thereof and control system for monitoring picture images utilizing the same
US6026232A (en) * 1995-07-13 2000-02-15 Kabushiki Kaisha Toshiba Method and system to replace sections of an encoded video bitstream
US6252975B1 (en) * 1998-12-17 2001-06-26 Xerox Corporation Method and system for real time feature based motion analysis for key frame selection from a video

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0638164A (ja) * 1992-07-17 1994-02-10 Takaoka Electric Mfg Co Ltd Video input/output device with retrieval function
JPH06290573A (ja) * 1993-04-05 1994-10-18 Olympus Optical Co Ltd Image recording and reproducing apparatus
JP3484832B2 (ja) * 1995-08-02 2004-01-06 Sony Corp Recording apparatus, recording method, reproducing apparatus, and reproducing method
JP3253530B2 (ja) * 1996-07-24 2002-02-04 Sanyo Electric Co., Ltd. Moving image recording apparatus
KR19980049216A (ko) * 1996-12-19 1998-09-15 Ja-Hong Koo Frame-by-frame/slow playback method for a digital VCR
JP3988205B2 (ja) * 1997-05-27 2007-10-10 Sony Corp Video signal recording/reproducing apparatus, video signal recording/reproducing method, video signal reproducing apparatus, and video signal reproducing method
KR100607931B1 (ko) * 1999-07-09 2006-08-03 Samsung Electronics Co., Ltd. Recording medium storing scene change point information for searching A/V content, method of automatically generating the information, and apparatus for recording and reproducing the information

Patent Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US5500680A (en) * 1992-10-12 1996-03-19 Goldstar Co., Ltd. Caption display controlling device and the method thereof for selectively scrolling and displaying a caption for several scenes
US5933807A (en) * 1994-12-19 1999-08-03 Nitsuko Corporation Screen control apparatus and screen control method
US6026232A (en) * 1995-07-13 2000-02-15 Kabushiki Kaisha Toshiba Method and system to replace sections of an encoded video bitstream
US5974219A (en) * 1995-10-11 1999-10-26 Hitachi, Ltd. Control method for detecting change points in motion picture images and for stopping reproduction thereof and control system for monitoring picture images utilizing the same
US6252975B1 (en) * 1998-12-17 2001-06-26 Xerox Corporation Method and system for real time feature based motion analysis for key frame selection from a video

Cited By (46)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8041978B2 (en) * 2001-11-27 2011-10-18 Lg Electronics Inc. Method for ensuring synchronous presentation of additional data with audio data
US8185769B2 (en) 2001-11-27 2012-05-22 Lg Electronics Inc. Method for ensuring synchronous presentation of additional data with audio data
US20080050094A1 (en) * 2001-11-27 2008-02-28 Lg Electronics Inc. Method for ensuring synchronous presentation of additional data with audio data
US8683252B2 (en) 2001-11-27 2014-03-25 Lg Electronics Inc. Method for ensuring synchronous presentation of additional data with audio data
US20080059870A1 (en) * 2001-11-27 2008-03-06 Lg Electronics Inc. Method for ensuring synchronous presentation of additional data with audio data
US8108706B2 (en) 2001-11-27 2012-01-31 Lg Electronics Inc. Method for ensuring synchronous presentation of additional data with audio data
US20100005334A1 (en) * 2001-11-27 2010-01-07 Lg Electronics Inc. Method for ensuring synchronous presentation of additional data with audio data
US20100003012A1 (en) * 2001-11-27 2010-01-07 Lg Electronics Inc. Method for ensuring synchronous presentation of additional data with audio data
US8108707B2 (en) 2001-11-27 2012-01-31 Lg Electronics Inc. Method for ensuring synchronous presentation of additional data with audio data
US20100161092A1 (en) * 2001-11-27 2010-06-24 Hyung Sun Kim Method of managing lyric data of audio data recorded on a rewritable recording medium
US8671301B2 (en) 2001-11-27 2014-03-11 Lg Electronics Inc. Method for ensuring synchronous presentation of additional data with audio data
US8074095B2 (en) 2001-11-27 2011-12-06 Lg Electronics Inc. Method for ensuring synchronous presentation of additional data with audio data
US20100169694A1 (en) * 2001-11-27 2010-07-01 Lg Electronics Inc. Method for ensuring synchronous presentation of additional data with audio data
US20100003013A1 (en) * 2001-11-27 2010-01-07 Lg Electronics Inc. Method for ensuring synchronous presentation of additional data with audio data
US20100003011A1 (en) * 2001-11-27 2010-01-07 Lg Electronics Inc. Method for ensuring synchronous presentation of additional data with audio data
US20060210253A1 (en) * 2002-04-17 2006-09-21 Samsung Electronics Co., Ltd. Recording medium containing thumbnail recorded thereon, recording apparatus and method therefor, and reproducing apparatus and method therefor
US7620295B2 (en) * 2002-04-17 2009-11-17 Samsung Electronics Co., Ltd. Recording medium containing thumbnail recorded thereon, recording apparatus and method therefor, and reproducing apparatus and method therefor
US8073306B2 (en) 2002-04-17 2011-12-06 Samsung Electronics Co., Ltd. Recording medium containing thumbnail recorded thereon, recording apparatus and method therefor, and reproducing apparatus and method therefor
US20060204224A1 (en) * 2002-04-17 2006-09-14 Samsung Electronics Co., Ltd. Recording medium containing thumbnail recorded thereon, recording apparatus and method therefor, and reproducing apparatus and method therefor
US7246249B2 (en) * 2002-06-10 2007-07-17 Canon Kabushiki Kaisha Reproduction control of reproduction apparatus based on remaining power of battery
US20040021974A1 (en) * 2002-06-10 2004-02-05 Hirotaka Shiiyama Reproduction control of reproduction apparatus based on remaining power of battery
US8238710B2 (en) 2002-06-10 2012-08-07 Canon Kabushiki Kaisha Reproduction control of reproduction apparatus based on remaining power of battery
US20070003212A1 (en) * 2002-06-10 2007-01-04 Canon Kabushiki Kaisha Reproduction control of reproduction apparatus based on remaining power of battery
US20050198570A1 (en) * 2004-01-14 2005-09-08 Isao Otsuka Apparatus and method for browsing videos
US20090148133A1 (en) * 2004-01-30 2009-06-11 Kazuhiro Nomura Content playback apparatus
US8081863B2 (en) 2004-01-30 2011-12-20 Panasonic Corporation Content playback apparatus
US20080267587A1 (en) * 2004-02-21 2008-10-30 Samsung Electronics Co., Ltd Information storage medium having recorded thereon text subtitle data synchronized with AV data, and reproducing method and apparatus therefor
US20050185929A1 (en) * 2004-02-21 2005-08-25 Samsung Electronics Co., Ltd Information storage medium having recorded thereon text subtitle data synchronized with AV data, and reproducing method and apparatus therefor
US8606069B2 (en) 2004-09-08 2013-12-10 Panasonic Corporation Playback device, playback method, and computer-readable recording medium for ensuring stable application execution in synchronism with video data playback
US20060104609A1 (en) * 2004-11-08 2006-05-18 Kabushiki Kaisha Toshiba Reproducing device and method
US8290345B2 (en) 2007-06-05 2012-10-16 Panasonic Corporation Digest generation for television broadcast program
US20080303943A1 (en) * 2007-06-05 2008-12-11 Tatsuhiko Numoto Digest generation for television broadcast program
US10026438B2 (en) * 2009-10-05 2018-07-17 Sony Corporation Information processing apparatus for reproducing data based on position of content data
US20110080478A1 (en) * 2009-10-05 2011-04-07 Michinari Kohno Information processing apparatus, information processing method, and information processing system
US20110116759A1 (en) * 2009-10-13 2011-05-19 Nikon Corporation Imaging device and image processing apparatus
US8515237B2 (en) 2009-10-13 2013-08-20 Nikon Corporation Imaging device and image processing apparatus
US8976293B2 (en) * 2012-05-25 2015-03-10 Kabushiki Kaisha Toshiba Electronic apparatus, control method of electronic apparatus, and control program of electronic apparatus
US20130314596A1 (en) * 2012-05-25 2013-11-28 Kabushiki Kaisha Toshiba Electronic apparatus, control method of electronic apparatus, and control program of electronic apparatus
US20140143235A1 (en) * 2012-11-20 2014-05-22 Lenovo (Beijing) Co., Ltd. Information outputting method and electronic device
US20150127626A1 (en) * 2013-11-07 2015-05-07 Samsung Techwin Co., Ltd. Video search system and method
US9792362B2 (en) * 2013-11-07 2017-10-17 Hanwha Techwin Co., Ltd. Video search system and method
US10212386B2 (en) * 2015-08-28 2019-02-19 Xiaomi Inc. Method, device, terminal device, and storage medium for video effect processing
US11128990B2 (en) * 2018-06-20 2021-09-21 Canon Kabushiki Kaisha Communication apparatus, control method, and storage medium
US11457267B2 (en) 2018-06-20 2022-09-27 Canon Kabushiki Kaisha Communication apparatus, communication method, and storage medium
US10734024B1 (en) * 2018-09-04 2020-08-04 Adobe, Inc. Systems and methods of appending metadata to a media file for playing time-lapsed audio
US11900966B2 (en) 2020-12-17 2024-02-13 Samsung Electronics Co., Ltd. Method and electronic device for producing video summary

Also Published As

Publication number Publication date
KR20020007158A (ko) 2002-01-26
KR100564893B1 (ko) 2006-03-30
EP1168840A2 (de) 2002-01-02
US20030002853A1 (en) 2003-01-02
CN1367612A (zh) 2002-09-04
EP1168840A3 (de) 2003-12-17
US20030086692A1 (en) 2003-05-08

Similar Documents

Publication Publication Date Title
US20020051081A1 (en) Special reproduction control information describing method, special reproduction control information creating apparatus and method therefor, and video reproduction apparatus and method therefor
US6964021B2 (en) Method and apparatus for skimming video data
JP5322550B2 (ja) Program recommendation apparatus
JP4253139B2 (ja) Frame information describing method, frame information generating apparatus and method, video reproducing apparatus and method, and recording medium
US9594959B2 (en) Videolens media engine
US7203380B2 (en) Video production and compaction with collage picture frame user interface
US8938393B2 (en) Extended videolens media engine for audio recognition
US7487524B2 (en) Method and apparatus for presenting content of images
US8233769B2 (en) Content data processing device, content data processing method, program, and recording/ playing device
WO2007126096A1 (ja) Image processing apparatus and image processing method
US20020133486A1 (en) Video retrieval and browsing apparatus, video retrieval, browsing and editing apparatus, and recording medium
JP2003087785A (ja) Method and apparatus for format conversion of moving image encoded data
US9749550B2 (en) Apparatus and method for tuning an audiovisual system to viewer attention level
JP5096259B2 (ja) Summary content generating apparatus and summary content generating program
JP2003109022A (ja) Book production system and book production method
JP2010109852A (ja) Video indexing method, video recording/reproducing apparatus, and video reproducing apparatus
JP2008166895A (ja) Video display apparatus, control method therefor, program, and recording medium
JPH10276388A (ja) Image processing apparatus and image processing method, image reproducing apparatus and image reproducing method, and recording medium
JP2003224791A (ja) Video retrieval method and apparatus
KR20050031013A (ko) Apparatus and method for extracting key still images from MPEG video
KR20020023063A (ko) Video skimming method and apparatus using structural information of video content
JP2008134825A (ja) Information processing apparatus, information processing method, and program
KR100678895B1 (ko) Apparatus and method for generating model-based segment metadata
JP2002281438A (ja) Summary video specifying apparatus and summary video specifying method
JP2003333523A (ja) Character thumbnail sequence generating system and method

Legal Events

Date Code Title Description
AS Assignment

Owner name: KABUSHIKI KAISHA TOSHIBA, JAPAN

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNORS:HORI, OSAMU;KANEKO, TOSHIMITSU;MITA, TAKESHI;AND OTHERS;REEL/FRAME:011956/0545

Effective date: 20010622

STCB Information on status: application discontinuation

Free format text: ABANDONED -- FAILURE TO RESPOND TO AN OFFICE ACTION