WO2007072870A1 - Video content recording device, method, and program; video content playback device, method, and program; video content editing device; and information recording medium - Google Patents

Video content recording device, method, and program; video content playback device, method, and program; video content editing device; and information recording medium Download PDF

Info

Publication number
WO2007072870A1
WO2007072870A1 (PCT/JP2006/325397; JP2006325397W)
Authority
WO
WIPO (PCT)
Prior art keywords
information
video
parameter information
video content
unit
Prior art date
Application number
PCT/JP2006/325397
Other languages
English (en)
Japanese (ja)
Inventor
Takaaki Namba
Yusuke Mizuno
Original Assignee
Matsushita Electric Industrial Co., Ltd.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Matsushita Electric Industrial Co., Ltd.
Publication of WO2007072870A1

Classifications

    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N 13/189 Recording image signals; Reproducing recorded image signals
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/207 Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N 13/229 Image signal generators using stereoscopic image cameras using a single 2D image sensor using lenticular lenses, e.g. arrangements of cylindrical lenses
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/30 Image reproducers
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N 19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N 19/597 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding

Definitions

  • Video content recording device, video content recording method, video content recording program, video content playback device, video content playback method, video content playback program, video content editing device, and information recording medium
  • the present invention relates to a video content recording apparatus, a video content recording method, and a video content recording program for recording video content, particularly three-dimensional video content, on an information recording medium.
  • the present invention also relates to an information recording medium on which video content is recorded.
  • the present invention also relates to a video content playback apparatus, a video content playback method, and a video content playback program for playing back video content recorded on an information recording medium.
  • the present invention also relates to a video content editing apparatus for editing video content recorded on an information recording medium.
  • FIG. 17 is a block diagram showing a configuration of a main part of a conventional stereo image display device 900.
  • This conventional stereo image display apparatus 900 can display a plurality of stereo images in sequence without requiring the user to input, each time, the file names of a pair of left and right stereo images.
  • The overall control unit 901 operates in close cooperation with an OS (Operating System), starting and ending the stereo image display program, controlling the cooperative operation of the other modules, controlling the program as a whole, and storing and reading configured setting values.
  • the modules directly controlled by the overall control unit 901 are a data processing unit 902 and a display control unit 903.
  • The data processing unit 902 responds to requests from the overall control unit 901 and the display control unit 903, and executes various processes such as file reading and stereo image data processing.
  • The data processing unit 902 has three submodules: an image file processing unit 904, a display correction setting file processing unit 905, and a stereo image data processing unit 906, and performs its functions by controlling these submodules.
  • The image file processing unit 904 reads various types of image files, interprets their contents, decodes compressed data as necessary, and converts them into image data in a predetermined standard format.
  • The display correction setting file processing unit 905 receives requests from the display control unit 903 via the data processing unit 902, and saves setting values related to stereo image display correction to a file or, conversely, reads them out.
  • The stereo image data processing unit 906 combines the pair of left and right standard-format image data received from the image file processing unit 904, based on the display correction setting data received from the display correction setting file processing unit 905, and creates stereo image data in a format suitable for display on a direct-view display (not shown).
  • The display control unit 903 receives the stereo image data created by the stereo image data processing unit 906 via the data processing unit 902, and displays it on the direct-view display. In addition, the display control unit 903 receives user instructions regarding display control, input from a keyboard (not shown) or a mouse (not shown) via the OS or the overall control unit 901, and performs display control corresponding to these instructions.
  • With this conventional device, a plurality of stereo images can be displayed in sequence without the user having to input the file names of each pair of left and right stereo images; however, the stereo images must have been prepared in advance so that they can be displayed stereoscopically.
  • Although display correction can be applied, this correction merely adapts the data for display according to the specifications of various direct-view displays and other 3D displays; it does not generate the stereo image itself, so the stereo image must still have been prepared in advance for stereoscopic display. Further, since the stereo images are limited to those prepared in advance for stereoscopic display, the displayed stereo images are likewise limited to the prepared ones, and it is impossible to display variations such as modified versions of them.
  • Moreover, the prepared stereo images have a data structure and file structure unique to the device, and could not be used generically on other devices capable of displaying various stereo images.
  • Patent Document 1 Japanese Patent Laid-Open No. 2001-215947
  • The present invention has been made to solve the above-described problems, and an object of the present invention is to provide a video content recording apparatus, a video content recording method, a video content recording program, a video content playback device, a video content playback method, a video content playback program, a video content editing device, and an information recording medium that can be used universally by devices capable of displaying various 3D videos.
  • A video content recording apparatus according to the present invention is a video content recording apparatus that records video content on an information recording medium, and includes: a video data acquisition unit that acquires at least one piece of video data obtained by photographing a subject from a plurality of different viewpoints; a static parameter information acquisition unit that acquires static parameter information, which is information unique to the video content recording apparatus; a dynamic parameter information acquisition unit that acquires dynamic parameter information, which is information that changes according to the situation when the video data is acquired; and a recording unit that associates the at least one piece of video data acquired by the video data acquisition unit, the static parameter information acquired by the static parameter information acquisition unit, and the dynamic parameter information acquired by the dynamic parameter information acquisition unit with one another, and records them on the information recording medium as the video content.
  • the subject is photographed from a plurality of different viewpoints, and at least one video data is acquired.
  • static parameter information that is information unique to the video content recording apparatus is acquired
  • dynamic parameter information that is information that changes according to the situation when the video data is acquired is acquired.
  • The at least one piece of acquired video data, the acquired static parameter information, and the acquired dynamic parameter information are associated with one another and recorded on the information recording medium as video content.
  • Since the static parameter information and the dynamic parameter information, which is information that changes, are recorded on the information recording medium as part of the video content, the video data can be displayed in 3D using the static parameter information and the dynamic parameter information, and the video content can be used universally by devices capable of displaying various 3D video content.
  • Since the video content for displaying 3D video recorded by the video content recording device can be used with various 3D display devices, it becomes possible to standardize the format of 3D video information and thereby contribute to promoting the distribution of 3D video information.
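The association described above, of video data with static and dynamic parameter information, can be sketched as a simple data model. This is an illustrative sketch only: all class and field names below (and the sample values) are assumptions for explanation, not terms defined by this publication.

```python
# Hypothetical data model for the patent's "video content" record: the
# captured streams are bundled with device-specific (static) and per-shot
# (dynamic) parameter information. All names/values are illustrative.
from dataclasses import dataclass, field

@dataclass
class StaticParameterInfo:
    inter_lens_distance_mm: float   # distance d between the two lenses
    focal_length_mm: float          # focal length of each lens
    distortion_value: float         # lens distortion value

@dataclass
class DynamicParameterInfo:
    timestamp: float                # when this sample was captured
    lens_angles_deg: tuple          # direction each lens was facing
    lens_positions: tuple           # position of each lens (local coords)

@dataclass
class VideoContent:
    video_streams: list             # e.g. [left_stream, right_stream]
    static_info: StaticParameterInfo
    dynamic_info: list = field(default_factory=list)  # one entry per sample

content = VideoContent(
    video_streams=["stream_L", "stream_R"],
    static_info=StaticParameterInfo(65.0, 4.2, 0.01),
)
content.dynamic_info.append(
    DynamicParameterInfo(0.0, (0.0, 0.0), ((0, 0), (65, 0)))
)
```

A playback device receiving such a record can read the parameter information alongside the streams, which is the portability the text claims.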
  • FIG. 1 is a block diagram showing a configuration of a main part of a video content recording apparatus according to a first embodiment.
  • FIG. 2 is a diagram showing a video shooting state and a schematic configuration of an information recording medium in the video content recording apparatus according to the first embodiment.
  • FIG. 3 is a diagram showing in detail the directory and file structure of the information recording medium.
  • FIG. 4 is a flowchart for explaining video content recording processing by the video content recording apparatus shown in FIG.
  • FIG. 5 is a block diagram showing a configuration of a main part of the video content reproduction / editing device according to the second embodiment.
  • FIG. 6 is a diagram schematically showing an example when video content is reproduced while changing the distance between lenses.
  • FIG. 7 is a flowchart for explaining video content playback processing by the video content playback / editing apparatus shown in FIG. 5.
  • FIG. 8 is a flowchart for explaining video content editing and playback processing by the video content playback/editing device shown in FIG. 5.
  • FIG. 9 is a block diagram showing a schematic configuration of a video content recording apparatus according to the third embodiment.
  • FIG. 10 is a schematic diagram for explaining a video content recording apparatus according to Embodiment 3.
  • FIG. 12 is a block diagram showing a configuration of a main part of a video content recording apparatus according to Embodiment 4.
  • FIG. 13 is a diagram showing a schematic configuration of a video input processing unit of the video content recording apparatus according to the fourth embodiment.
  • FIG. 14 is a flowchart for explaining video content recording processing by the video content recording device shown in FIG. 12.
  • FIG. 15 is a block diagram showing a configuration of a main part of a video content editing apparatus according to Embodiment 5.
  • FIG. 16 is a flowchart for explaining video content editing processing by the video content editing device shown in FIG. 15.
  • FIG. 17 is a block diagram showing a configuration of a main part of a conventional stereo image display device.
  • Embodiment 1 of the present invention will be described below with reference to the drawings.
  • FIG. 1 is a block diagram showing a configuration of a main part of video content recording apparatus 100 according to Embodiment 1.
  • This video content recording device 100 corresponds to a video camera, digital camera, still camera, or other video-capturing camera used for various purposes such as industrial, consumer, surveillance, robotic, medical, analysis and measurement, entertainment, and other applications.
  • The video content recording apparatus 100 includes a first video input processing unit 121, a second video input processing unit 123, an AV control buffer processing unit 125, a CODEC processing unit 127, a dynamic device information management unit 129, a content processing unit 142, a content ancillary information processing unit 144, a static device information management unit 146, a media access processing unit 161, a menu processing unit 163, and a user interface processing unit 165.
  • the video content recording device 100 has two parts for acquiring video data, that is, a first video input processing unit 121 and a second video input processing unit 123.
  • The first video input processing unit 121 and the second video input processing unit 123 are, for example, a camera unit corresponding to the right eye and a camera unit corresponding to the left eye, respectively, for making the input video three-dimensional.
  • Note that the number of video input processing units corresponding to camera units for acquiring video data is not necessarily limited to two; there may be three, four, or more.
  • Alternatively, one video input processing unit may be subdivided so that there is only one image sensor and video signal processing circuit, with only the part that forms the subject image on the image sensor (mainly the lens) duplicated.
  • The image sensor referred to in this specification means a two-dimensional planar imaging device in which phototransistors, photodiodes, or other photoelectric elements of various materials and configurations are arranged in a two-dimensional plane, for example in a lattice matrix, staggered, or honeycomb structure.
  • one CCD area image sensor chip or a CMOS planar image sensor corresponds to this.
  • The first subject image 102 is incident on the first video input processing unit 121, and the first video data 103 is acquired; the second subject image 104 is incident on the second video input processing unit 123, and the second video data 105 is acquired.
  • The first video data 103 and the second video data 105 are images (still images or moving images) of the same subject captured from different viewpoints, namely the respective lens positions of the first video input processing unit 121 and the second video input processing unit 123.
  • The first video data 103 and the second video data 105 need not capture entirely the same subject; only part of the subject may be common to both. Having part of the subject in common corresponds to the shooting scenes (shooting areas) partially overlapping.
  • The first video data 103 and the second video data 105 captured by the first video input processing unit 121 and the second video input processing unit 123 are sent to the AV control buffer processing unit 125 and accumulated there.
  • The AV control buffer processing unit 125 stores the first video data 103 and the second video data 105 captured by the first video input processing unit 121 and the second video input processing unit 123 and, based on the operation information output by the user interface processing unit 165, performs various processes on the first video data 103 and the second video data 105.
  • the user interface processing unit 165 receives a user operation and outputs the received operation content to the AV control buffer processing unit 125 as operation information.
  • The operations performed by the user include designation of a codec method such as MPEG2 or H.264, and operation information indicating the codec method designated by the user is sent from the user interface processing unit 165 to the AV control buffer processing unit 125.
  • the AV control buffer processing unit 125 executes codec control based on the operation information output by the user interface processing unit 165 with respect to the CODEC processing unit 127.
  • the operation information is for performing three-dimensional video recording.
  • the first video data 103 and the second video data 105 stored in the AV control buffer processing unit 125 are sent to the CODEC processing unit 127.
  • the CODEC processing unit 127 performs a predetermined encoding process on the first video data 103 and the second video data 105.
  • The predetermined encoding process executed here is, for example, video compression encoding in a format such as MPEG2, MPEG4, or H.264 when the video data is moving image data.
  • When the video data is still image data, it is still image compression encoding in a format such as JPEG, PNG, GIF, or BMP.
  • the first video data 103 and the second video data 105 that have been subjected to a predetermined encoding process by the CODEC processing unit 127 are sent to the content processing unit 142.
  • the content processing unit 142 performs a streaming process on the first video data 103 and the second video data 105 that have been subjected to the encoding process by the CODEC processing unit 127.
  • The stream processing here is, for example, processing for forming a partial transport stream (hereinafter referred to as a "partial TS"). Note that the partial TS here need not consist only of moving image and still image data; it may also include audio data, various attribute data described later, management data, and other various content ancillary information. Examples of such content ancillary information will be described in detail later.
  • the media access processing unit 161 records the first video data 103 and the second video data 105 streamed by the content processing unit 142 on the information recording medium 167.
  • The information recording medium 167 is, for example, a removable card-type recording medium using a non-volatile semiconductor memory such as an SD memory card, a removable recording medium using battery-backed semiconductor memory, an optical removable recording medium using a DVD (digital versatile disc), an optical removable recording medium using a BD (Blu-ray Disc), or a magnetic non-removable (fixed) recording medium such as an HDD (hard disk drive).
  • The media access processing unit 161 includes a device processing unit for accessing the various recording media described above, and a file system processing unit for building, controlling, and using a file system on these various recording media.
  • The first video input processing unit 121 and the second video input processing unit 123 output dynamic device information, which represents various states at the time the first video data 103 and the second video data 105 were acquired, to the dynamic device information management unit 129.
  • The dynamic device information includes, for example, angle information indicating the direction in which the first video input processing unit 121 and the second video input processing unit 123 were facing when the first video data 103 and the second video data 105 were captured, lens position information indicating the position of each lens, and various other information.
  • The lens position information includes position information indicating the position of each lens within the apparatus and the height from the ground to each lens. Each of these pieces of information may be an absolute value expressed on absolute coordinate axes such as the earth's latitude and longitude, or a relative value expressed on various local coordinate axes.
  • The dynamic device information is temporarily stored in the dynamic device information management unit 129, and necessary information is selected and sent to the content ancillary information processing unit 144 as needed.
  • The content ancillary information processing unit 144 executes processing necessary for recording the dynamic device information output by the dynamic device information management unit 129 on the information recording medium 167, for example in a partial TS.
  • Note that the dynamic device information is not necessarily included in the partial TS; it may be recorded on the information recording medium 167 separately from the partial TS.
  • the static device information management unit 146 stores and manages various pieces of static device information related to shooting the first video data 103 and the second video data 105 in the video content recording apparatus 100.
  • The static device information includes, for example, the distance between the photographing lenses provided in the first video input processing unit 121 and the second video input processing unit 123, the focal length of each lens, the distortion value of each lens, and various other information.
  • The static device information management unit 146 outputs the static device information to the content ancillary information processing unit 144.
  • the content ancillary information processing unit 144 executes processing necessary to record the static device information output by the static device information management unit 146 in the information recording medium 167, for example, in a partial TS.
  • Note that the static device information is not necessarily included in the partial TS; it may be recorded on the information recording medium 167 separately from the partial TS.
  • The partial TS, the dynamic device information, and the static device information may be recorded on the information recording medium 167 as one file, or some of them may be recorded as separate files; they may be recorded in a single directory or in different directories; and they may be recorded on a single information recording medium 167 or on separate information recording media.
  • The static device information is various static parameter information related to the shooting of the first video data 103 and the second video data 105, and does not change during normal shooting.
  • Note that the static device information is not necessarily recorded on the information recording medium 167 together with or simultaneously with the partial TS or the dynamic device information; it may be recorded on the information recording medium 167 in advance, or recorded later.
  • The dynamic device information and the static device information are also information that associates each video stream included in the video content with the viewpoint from which that video stream was captured.
  • The dynamic device information and the static device information processed by the content ancillary information processing unit 144 are sent to the content processing unit 142.
  • The content processing unit 142 either embeds the dynamic device information and the static device information in the partial TS as part of the partial TS, or handles them as separate files without embedding them in the partial TS.
  • the media access processing unit 161 records the dynamic device information and the static device information in the information recording medium 167 as a part of the partial TS or as a file different from the partial TS.
  • the contents and format of the partial TS recorded on the information recording medium 167 will be described in detail later.
  • The menu processing unit 163 displays menus and guides for assisting operation, and performs processing associated therewith.
  • FIG. 2 is a diagram showing a video shooting state and a schematic configuration of the information recording medium 167 in the video content recording apparatus 100 according to the first embodiment.
  • The video content recording apparatus 100 includes two video input processing units, the first video input processing unit 121 and the second video input processing unit 123, and an image of the subject 201 is captured by these two video input processing units.
  • Note that the first video input processing unit 121 and the second video input processing unit 123 need not be two physically separate video input processing units; one video input processing unit may be moved in a time-division manner so that it is used as if it were two video input processing units.
  • In this case, it is necessary to record time information for the movement of the single video input processing unit, that is, the times at which it was used as the first video input processing unit 121 and the times at which it was used as the second video input processing unit 123.
  • This time information is also information that associates each image of the captured video stream with the position (viewpoint) from which it was captured.
  • In this case, the inter-lens distance d is not necessarily constant, and the inter-lens distance d can be changed by changing the distance the unit is moved.
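The time-division scheme above can be sketched as follows: a single camera alternates between two positions, and each captured frame is tagged with its capture time and the viewpoint it corresponds to, so frames can later be paired into stereo views. The alternation rule and function names here are illustrative assumptions, not part of the publication.

```python
# Hypothetical sketch: tag each frame of a time-division single-camera
# capture with the viewpoint it was taken from. We assume the camera
# spends the first half of each movement period at the "left" position
# and the second half at the "right" position (illustrative rule only).
def assign_viewpoints(frame_times, period=2.0):
    """Return (time, viewpoint) pairs for each frame capture time."""
    tagged = []
    for t in frame_times:
        viewpoint = "left" if (t % period) < (period / 2) else "right"
        tagged.append((t, viewpoint))
    return tagged

# Frames captured every 0.5 s with a 2 s movement period:
print(assign_viewpoints([0.0, 0.5, 1.0, 1.5]))
# → [(0.0, 'left'), (0.5, 'left'), (1.0, 'right'), (1.5, 'right')]
```

Recording such (time, viewpoint) pairs is one concrete way to realize the association between images and capture positions that the text requires.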
  • the first video input processing unit 121 includes a lens 221, an image sensor 225, and some parts not shown.
  • the second video input processing unit 123 includes a lens 223, an image sensor 227, and some parts not shown.
  • Note that two separate image sensors 225 and 227 are not necessarily required; the area of a single image sensor may be divided and used.
  • the lenses 221 and 223 collect light from the subject 201 and form an image.
  • the image of the subject 201 is formed on the image sensor 225 and the image sensor 227 in this embodiment.
  • the image sensors 225 and 227 convert the image of the subject 201 into an electric signal, and the video content recording apparatus 100 electrically processes and stores the image of the subject 201 converted into the electric signal.
  • Since the lens 221 and the lens 223 are arranged with an inter-lens distance d between them, the video input from the first video input processing unit 121 and the video input from the second video input processing unit 123 differ in viewpoint position by the distance d and also differ in viewing angle. Therefore, by using these two videos, a three-dimensional image of the subject 201 can be reproduced.
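The point that two views separated by the distance d carry three-dimensional information can be illustrated with the standard pinhole-stereo depth relation Z = f · d / disparity. This is the textbook formula for parallel stereo cameras, included here for illustration; the publication itself does not specify a reconstruction method.

```python
# Standard parallel-stereo depth relation (textbook formula, not taken
# from the patent): a scene point imaged at horizontally shifted pixel
# positions in the left and right views has depth Z = f * d / disparity,
# where f is the focal length in pixels and d is the baseline (inter-lens
# distance).
def depth_from_disparity(focal_length_px, baseline_d_mm, disparity_px):
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_length_px * baseline_d_mm / disparity_px

# A point seen 10 px apart by lenses 65 mm apart, with f = 1000 px:
print(depth_from_disparity(1000.0, 65.0, 10.0))  # → 6500.0 (mm), i.e. 6.5 m
```

This is also why the inter-lens distance d must be recorded as parameter information: without d (and the lens focal lengths), a playback device cannot recover metric depth from the two streams.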
  • The video content recording apparatus 100 records parameter information, such as the inter-lens distance d, related to the capturing of the image of the subject 201 by the first video input processing unit 121 and the second video input processing unit 123.
  • This parameter information is information specific to the device, and includes parameter data that does not change from shooting to shooting and parameter data that changes with each shooting. Of this device-specific information, the parameter data that does not change with each shooting is called static device information, and the parameter data that changes with each shooting is called dynamic device information.
  • the static device information includes, for example, the distance between lenses when video content is shot, the focal length of each lens, the distortion value of each lens, and other information.
  • The dynamic device information includes, for example, angle information indicating the direction in which the first video input processing unit 121 and the second video input processing unit 123 were facing when the first video data 103 and the second video data 105 were shot, position information indicating the positions of the first video input processing unit 121 and the second video input processing unit 123, angle information indicating the direction of the lens provided in each of the first video input processing unit 121 and the second video input processing unit 123, position information indicating the position of each lens, and various other information.
  • Note that the classification into static device information and dynamic device information is not necessarily fixed. Depending on the configuration of the video content recording apparatus 100, values such as the inter-lens distance d, the focal length of each lens, and the distortion value may be treated as dynamic device information instead of static device information.
  • The static device information and the dynamic device information are recorded on the information recording medium 167 as a single piece of video content, either forming one partial TS together with the video stream, or individually, or with some of them together and the rest individually.
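The classification rule described above, where a parameter counts as static only if it does not change during shooting, can be sketched as a small routine. The dictionary representation and the "adjustable" flag are assumptions made for illustration; the publication does not define such a data format.

```python
# Hypothetical sketch of the static/dynamic classification: a parameter
# is static device information only if it cannot change during shooting;
# adjustable parameters are treated as dynamic device information.
def classify_parameters(params):
    """params: dict of name -> (value, adjustable_during_shooting)."""
    static, dynamic = {}, {}
    for name, (value, adjustable) in params.items():
        (dynamic if adjustable else static)[name] = value
    return static, dynamic

static, dynamic = classify_parameters({
    "inter_lens_distance_mm": (65.0, True),   # e.g. a rig that can move the lenses
    "distortion_value": (0.01, False),        # fixed property of each lens
})
```

On a fixed-baseline camera the same inter-lens distance would instead be passed with `adjustable=False` and land in the static set, matching the text's remark that the classification depends on the apparatus.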
  • The partial TS including the video information of the subject 201 captured by the first video input processing unit 121 and the second video input processing unit 123, the static device information, and the dynamic device information are recorded on the information recording medium 167.
  • On the information recording medium 167, a file system having a directory structure as shown in the figure is constructed. Note that this file system and directory structure are merely an example of the embodiment, and the present invention is not limited to the configuration of this embodiment (the same applies to the other embodiments and all their explanations).
  • The video content obtained by photographing the subject 201 is named content 1; this video content is recorded in the hierarchy below the content 1 folder, and the video data is recorded as content 1 video file 1 and content 1 video file 2.
  • The static device information and dynamic device information from when this video content was shot are recorded as a content 1 management information file, a content 1 ancillary information file 1, and a content 1 ancillary information file 2.
  • Such file names, file formats, information recording methods, and the like are merely examples of embodiments, and the present invention is not limited to them.
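The layout described above, one content folder holding two viewpoint streams plus the management and ancillary information files, can be sketched as follows. The concrete path components and file names are illustrative assumptions modeled on the text; the actual media standard layout is not reproduced here.

```python
# Hypothetical sketch of the content folder layout described in the text:
# one folder per content, holding two video files (one per viewpoint) and
# the management/ancillary information files. Names are illustrative.
import os
import tempfile

def create_content_layout(root, content_name="content1"):
    folder = os.path.join(root, "ROOT", "media_standard", content_name)
    os.makedirs(folder, exist_ok=True)
    files = [
        f"{content_name}_video_file_1",       # first viewpoint stream
        f"{content_name}_video_file_2",       # second viewpoint stream
        f"{content_name}_management_info",    # management information file
        f"{content_name}_ancillary_info_1",   # static/dynamic device info
        f"{content_name}_ancillary_info_2",
    ]
    for name in files:
        open(os.path.join(folder, name), "w").close()
    return folder

with tempfile.TemporaryDirectory() as tmp:
    folder = create_content_layout(tmp)
    print(sorted(os.listdir(folder)))
```

Keeping the parameter files beside the streams in one folder is what lets another device pick up the whole content as a unit, per the portability goal stated earlier.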
  • FIG. 3 is a diagram showing the directory and file structure of the information recording medium 167 in detail.
  • the directory and file structure shown in FIG. 3 is also an example, and the present invention is not limited to such a directory and file structure.
  • the information recording medium 167 is, for example, an SD memory card
  • This SD memory card is also an example, and is not limited to an SD memory card.
  • As a folder encompassing the entire SD memory card, there is, for example, one folder named ROOT.
  • This ROOT folder corresponds to a directory that covers the entire recording area of one SD memory card.
  • Note that what is called a folder here is not substantially different from what is sometimes called a directory, and what is called a folder structure or folder configuration is likewise not substantially different from what is sometimes called a directory structure or directory configuration.
  • a media standard folder exists in a hierarchy below the ROOT folder.
  • The SD memory card has a standard that defines its folder structure (directory structure) and file system.
  • Both a method of recording information in accordance with this standard and a method of recording information in a proprietary format that does not comply with this standard can be considered.
  • the media standard folder indicates that the folders and files in the lower layers comply with the standard of the SD memory card.
  • In FIG. 3, only one media standard folder is shown, but in the hierarchy below the ROOT folder there may be multiple media standard folders, as well as folders that do not conform to the SD memory card standard.
  • In the hierarchy below this media standard folder, for example, there are a management information folder, a content 1 folder, a content 2 folder, a content 3 folder, ..., and a content n folder. Note that all of these folders do not necessarily exist; in FIG. 3, only the management information folder, the content 1 folder, and the content 2 folder are shown. Such content folders may exist in any number in the hierarchy below one media standard folder.
  • In the management information folder, information covering the entire media standard folder is collectively managed; management information files and other files are recorded there.
  • In this management information file, the number of contents stored in the media standard folder, version information of the media standard, and the like are recorded. Note that all or part of this management information file and the other files do not necessarily have to exist in this management information folder; they may exist in other folders or in the hierarchy immediately below the media standard folder.
  • Each content folder stores one content, such as video content and audio content, and multimedia content including both.
  • In this example, one content folder contains one content, and one content folder is contained in one SD memory card. However, the format is not necessarily limited to this, and other formats may be used.
  • Alternatively, a plurality of content files may be stored in each content folder, with a separate folder provided for storing content ancillary information files; the content ancillary information files for all contents may be stored in that folder.
  • a content folder may exist for each content, each content file may be stored in the content folder, and the content attached information file may be stored in a common folder or a media standard folder.
  • a content ancillary information folder exists for each content, each content ancillary information file may be stored in the content ancillary information folder, and the content file may be stored in a common folder or a media standard folder.
  • Alternatively, a content folder that stores several content files and content ancillary information files as a group may be provided.
  • a plurality of content files and content attached information files may be stored immediately below the media standard folder.
  • other formats may be used.
  • For example, one content corresponds to one program.
  • Alternatively, one content corresponds to a single shot.
  • As shown in FIG. 3, in this example, in the hierarchy below the content 1 folder, there exist the content 1 management information file, content 1 ancillary information file 1, content 1 video file 1, content 1 ancillary information file 2, and content 1 video file 2.
  • The content 1 management information file contains various management information related to content 1, in particular information common to the two video data included in content 1, data representing the relationship between these two video data, and data associating these two video data with each other, for example all or part of the static device information, the dynamic device information, and other information representing the relationship between the video input processing units when content 1 was input.
  • Content 1 video file 1 includes a stream (partial TS) obtained by encoding first video data 103 input from first video input processing unit 121.
  • Content 1 ancillary information file 1 includes the static device information and dynamic device information when the first video input processing unit 121 captured the first video data 103.
  • the content 1 video file 2 includes a stream (partial TS) obtained by encoding the second video data 105 input from the second video input processing unit 123.
  • The content 1 ancillary information file 2 includes the static device information and dynamic device information when the second video input processing unit 123 captured the second video data 105.
  • Each of these files may include information other than these.
  • Each of the content 1 management information file, content 1 ancillary information file 1, content 1 video file 1, content 1 ancillary information file 2, and content 1 video file 2 may actually be composed of multiple files, or some of them may in fact be organized as a single file.
  • Content 1 video file 1 corresponds to the video stream when viewing the subject 201 with the lens 221 as a viewpoint, and content 1 video file 2 corresponds to the video stream when viewing the subject 201 with the lens 223 as a viewpoint.
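  • The example layout described above (ROOT folder → media standard folder → management information folder and content folders) can be sketched as follows. All folder and file names here are illustrative stand-ins for the names in FIG. 3; they are not names defined by any actual SD memory card standard, and the extensions are assumptions.

```python
from pathlib import Path

# Illustrative directory layout of the embodiment (names hypothetical):
# dict keys are folders, list items are empty placeholder files.
LAYOUT = {
    "ROOT": {
        "MEDIA_STANDARD": {                  # "media standard folder"
            "MANAGEMENT": ["management_info.dat"],
            "CONTENT1": [
                "content1_management_info.dat",
                "content1_ancillary_info_1.dat",
                "content1_video_1.m2ts",     # partial TS, first video input unit
                "content1_ancillary_info_2.dat",
                "content1_video_2.m2ts",     # partial TS, second video input unit
            ],
        }
    }
}

def create_layout(base: Path, tree) -> list[Path]:
    """Recursively create the folders (dict keys) and empty files (list items)."""
    made = []
    if isinstance(tree, dict):
        for name, sub in tree.items():
            d = base / name
            d.mkdir(parents=True, exist_ok=True)
            made.append(d)
            made += create_layout(d, sub)
    else:
        for fname in tree:
            f = base / fname
            f.touch()
            made.append(f)
    return made
```

  Calling `create_layout(some_base_dir, LAYOUT)` materializes the tree; the point is only to make the nesting relationships of the folders concrete.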
  • the folder for recording video content and the structure of the file system are defined in the hierarchy below the media standard folder. Therefore, it is possible to propose standardization of the recording format of video content, and to promote the expansion of the distribution of video content. This also leads to the creation of new industrial fields for video content.
  • The static device information and the dynamic device information may be recorded together in one file, such as the content 1 management information file, content 1 ancillary information file 1, or content 1 ancillary information file 2.
  • Alternatively, the static device information and the dynamic device information may be recorded in different files: for example, the static device information may be recorded in content ancillary information file 1-S and content ancillary information file 2-S, and the dynamic device information in content ancillary information file 1-M and content ancillary information file 2-M.
  • static device information may be recorded together in a content 1 management information file, and dynamic device information may be recorded separately in content attached information file 1 and content attached information file 2.
  • Alternatively, the file for recording static device information and the file for recording dynamic device information may be combined into one file, the files recording video information may be combined into a single file, or a file recording device information and a file recording video information may be combined into a single file.
  • The embodiment described above, in which the partial TS is created by the content processing unit 142, is an example of an embodiment in which all of these are streamed and recorded as one file (partial TS).
  • As another embodiment, the content 1 management information file, the content 1 ancillary information file 1 and content 1 ancillary information file 2 that record the static device information and dynamic device information, the content 1 video file 1 that records the video stream obtained by encoding the first video data 103, and the content 1 video file 2 that records the video stream obtained by encoding the second video data 105 may be recorded in separate folders or on separate recording media; an embodiment in which management information common to these is recorded in any one folder or recording medium is also possible.
  • An embodiment of recording such common management information in all folders and recording media is possible, and an embodiment of recording it on the video content recording apparatus side is also possible.
  • the static device information is information unique to the video content recording apparatus 100 and does not change depending on the state when the subject 201 is photographed.
  • Therefore, the static device information may be recorded in advance at any time from when the information recording medium 167 is set in the video content recording apparatus 100 until the subject 201 is shot, for example when the information recording medium 167 is formatted or initialized, or at other free times.
  • In the present embodiment, the content 1 video file 1 recording the video stream obtained by encoding the first video data 103 and the content 1 video file 2 recording the video stream obtained by encoding the second video data 105 are recorded on the information recording medium 167 as separate files. However, an embodiment may also be used in which these multiple video streams are encoded as one video stream and recorded on the information recording medium 167 as one video file of one content.
  • An embodiment is also conceivable in which one image sensor is divided into a plurality of areas, for example two or more, an imaging unit centered on a lens is provided for each area of the image sensor, and a plurality of images photographed from different viewpoints (lens positions) are formed on the plurality of regions of the one image sensor.
  • In this case, this single video stream includes image information viewed from multiple viewpoints.
  • This one video stream, the static device information, and the dynamic device information may be recorded as separate files in the same folder on the same information recording medium 167, may be recorded in different folders on the same information recording medium 167, or may be recorded on different information recording media 167.
  • Alternatively, some of the one video stream, static device information, and dynamic device information may be recorded in one file and the rest in another file, and those files may be placed in the same folder, in different folders on the same information recording medium 167, or on different information recording media 167. Alternatively, all of the one video stream, static device information, and dynamic device information may be recorded as one file in one folder of one information recording medium 167.
  • When one image sensor is divided into a plurality of regions and an image forming unit centered on a lens is provided for each region, a plurality of images taken from different viewpoints are combined into one image.
  • The images with multiple viewpoints shot with the one image sensor may then be divided into video streams with different viewpoints, so that one video stream includes images from one viewpoint.
  • In that case, each video stream includes images viewed from a single viewpoint, and since there are a plurality of such video streams, the situation does not substantially differ from the case where there are a plurality of video input processing units; a detailed description is therefore omitted.
  • FIG. 4 is a flowchart for explaining video content recording processing by the video content recording apparatus shown in FIG.
  • In step S1, the static device information management unit 146 reads out the static device information stored in advance therein.
  • the static device information management unit 146 outputs the read static device information to the content-attached information processing unit 144.
  • The static device information includes the distance between the lenses provided in the first video input processing unit 121 and the second video input processing unit 123, the focal length of each lens, and position information indicating the position of each lens in the video content recording apparatus. Since these pieces of information are unique to the video content recording apparatus and do not change according to the shooting situation, they are stored in advance in the internal memory of the static device information management unit 146.
  • In step S2, the content-attached information processing unit 144 performs data conversion processing necessary for recording the static device information on the information recording medium 167.
  • the content-attached information processing unit 144 converts the data structure format of the static device information into a data structure format corresponding to the type of the information recording medium 167.
  • the content ancillary information processing unit 144 outputs the static device information after the data conversion to the content processing unit 142.
  • the content processing unit 142 outputs the static device information to the media access processing unit 161.
  • the media access processing unit 161 records the static device information output by the content processing unit 142 on the information recording medium 167.
  • In step S3, the first video input processing unit 121 acquires the first video data obtained by photographing the subject and outputs it to the AV control buffer processing unit 125. Similarly, the second video input processing unit 123 acquires the second video data obtained by photographing the subject from a viewpoint different from that of the first video input processing unit 121 and outputs it to the AV control buffer processing unit 125. Further, the first video input processing unit 121 and the second video input processing unit 123 each output their dynamic device information to the dynamic device information management unit 129, which thus acquires the dynamic device information from both units and outputs it to the content-attached information processing unit 144.
  • the dynamic device information includes shooting direction information indicating the direction in which the first video input processing unit 121 and the second video input processing unit 123 are facing, and the first video input processing.
  • Shooting position information indicating the positions of the unit 121 and the second video input processing unit 123, and subject focus information indicating the subject focus are included.
  • the shooting direction information can be obtained by a gyroscope provided in at least one of the first video input processing unit 121 and the second video input processing unit 123.
  • the shooting position information can be obtained by GPS provided in at least one of the first video input processing unit 121 and the second video input processing unit 123.
  • By providing a GPS in only one of the first video input processing unit 121 and the second video input processing unit 123, the position of the video content recording apparatus can be obtained.
  • the position of each video input processing unit can be obtained by providing GPS in both the first video input processing unit 121 and the second video input processing unit 123.
  • Subject focus information can be obtained by triangulation based on the focal length of the lens and the distance between the lenses.
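  • The triangulation mentioned above can be illustrated with the usual parallel-stereo relation Z = f·B/d (subject distance from focal length, inter-lens distance, and disparity). This is a simplified pinhole-camera sketch, not the exact computation the apparatus performs, and the numeric values are invented for illustration.

```python
def subject_distance(focal_length_mm: float,
                     baseline_mm: float,
                     disparity_mm: float) -> float:
    """Estimate subject distance by triangulation for a parallel stereo
    rig: Z = f * B / d, where d is the disparity of the subject between
    the two images, measured on the sensor."""
    if disparity_mm <= 0:
        raise ValueError("subject at infinity or invalid disparity")
    return focal_length_mm * baseline_mm / disparity_mm

# e.g. focal length 35 mm, inter-lens distance 65 mm, disparity 1.3 mm
# on the sensor gives a subject distance of about 1750 mm.
```

  The same relation is what makes the recorded inter-lens distance and focal length sufficient, together with the two video streams, to recover depth at playback time.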
  • In addition to the position on the earth acquired by GPS, the shooting position information may include information indicating the orientation of the first video input processing unit 121 and the second video input processing unit 123 with respect to the ground axis, acquired by three-axis angle measurement of the gyroscope, and relative movement distance information calculated based on an acceleration sensor and time information.
  • the shooting position information may be acquired by a star sensor that grasps the relative position based on the light of the star and its position.
  • In step S4, the AV control buffer processing unit 125 temporarily stores the first video data output by the first video input processing unit 121 and temporarily stores the second video data output by the second video input processing unit 123.
  • In step S5, the CODEC processing unit 127 encodes the first video data stored in the AV control buffer processing unit 125 and encodes the second video data stored in the AV control buffer processing unit 125.
  • the CODEC processing unit 127 encodes the first video data and the second video data in the format accepted by the user interface processing unit 165.
  • Note that the encoding format is not limited to one accepted via the user interface processing unit 165; the first video data and the second video data may be automatically encoded in a predetermined format.
  • In step S6, the content processing unit 142 streams the first video data encoded by the CODEC processing unit 127, and also streams the second video data encoded by the CODEC processing unit 127.
  • the content-attached information processing unit 144 performs data conversion processing necessary for recording the dynamic device information on the information recording medium 167.
  • the content-attached information processing unit 144 converts the data structure format of the dynamic device information into a data structure format corresponding to the type of the information recording medium 167.
  • The content ancillary information processing unit 144 outputs the converted dynamic device information to the content processing unit 142.
  • the content processing unit 142 outputs the first video stream, the second video stream, and the dynamic device information to the media access processing unit 161.
  • the media access processing unit 161 records the first video stream, the second video stream, and the dynamic device information output by the content processing unit 142 on the information recording medium 167.
  • the first video stream, the second video stream, the static device information, and the dynamic device information are recorded on the information recording medium 167 as video content.
  • the video content recording apparatus 100 described above is an embodiment of the video content recording apparatus according to the present invention.
  • the video content recording method executed by the video content recording apparatus 100 is an embodiment of the video content recording method according to the present invention.
  • the information recording medium 167 is an embodiment of the information recording medium according to the present invention.
  • In this way, the video content recording apparatus records the video content on the information recording medium, and the video content playback/editing apparatus plays back and edits the video content recorded on the information recording medium.
  • In Embodiment 2, for example, the static device information and dynamic device information recorded on the information recording medium 167 as the contents of the content 1 management information file, content 1 ancillary information file 1 and content 1 ancillary information file 2 can be read from the information recording medium 167 and changed.
  • The video content playback/editing device can then play back and display the video content recorded in content 1 video file 1 or content 1 video file 2 using the newly changed static device information and dynamic device information values.
  • the video content playback / editing apparatus can record the changed static device information and dynamic device information values in the information recording medium 167 again.
  • Because of their nature, static device information and dynamic device information are usually numeric data or text data, and are recorded on the information recording medium 167 in, for example, text format, HTML (hypertext markup language) format, XML (extensible markup language) format, or binary format. These data can be changed using, for example, word processing software if recorded in text format, a tool or software such as a browser if recorded in HTML or XML format, or dedicated software for handling the data format if recorded in binary format, and the changed data can be recorded again.
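  • As a sketch of how such ancillary information might look in XML format, the following uses Python's standard `xml.etree.ElementTree`. The element names (`contentAncillaryInfo`, `lensDistanceMm`, and so on) are hypothetical, since the embodiment does not define a schema; they only show that the data is plain structured text a browser or editor could change.

```python
import xml.etree.ElementTree as ET

def device_info_xml(static_info: dict, dynamic_info: dict) -> str:
    """Serialize static and dynamic device information as XML.
    Element names are illustrative; no schema is defined here."""
    root = ET.Element("contentAncillaryInfo")
    for tag, info in (("staticDeviceInfo", static_info),
                      ("dynamicDeviceInfo", dynamic_info)):
        sect = ET.SubElement(root, tag)
        for key, value in info.items():
            ET.SubElement(sect, key).text = str(value)
    return ET.tostring(root, encoding="unicode")

xml_text = device_info_xml(
    {"lensDistanceMm": 65, "focalLengthMm": 35},
    {"shootingDirectionDeg": 12.5, "gpsPosition": "35.0N,135.0E"},
)
```

  Editing such a file amounts to changing the text of an element (for example `lensDistanceMm`) and writing the document back, which is exactly the update path Embodiment 2 describes.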
  • FIG. 5 is a block diagram showing a configuration of a main part of video content reproduction / editing apparatus 500 according to Embodiment 2.
  • The video content playback/editing apparatus 500 shown in FIG. 5 includes a media access processing unit 501, a menu processing unit 503, a user interface processing unit 505, a content-attached information processing unit 507, a content processing unit 509, and a video output processing unit 522.
  • menu processing section 503 displays a menu screen for guiding the operation of the operator on a menu display section constituted by, for example, a liquid crystal display or a CRT display device.
  • a user interface processing unit 505 controls a menu display unit and an input unit configured by a touch switch, a button switch, or the like.
  • the user interface processing unit 505 receives an operation input from an operator (user) and executes processing contents according to the instruction.
  • When the user instructs, via the user interface processing unit 505, that the contents of the content 1 management information file, content 1 ancillary information file 1 or content 1 ancillary information file 2 recorded on the information recording medium 167 are to be updated (edited),
  • the content-attached information processing unit 507 reads the contents of the specified content 1 management information file, content 1 ancillary information file 1 or content 1 ancillary information file 2 from the information recording medium 167 via the media access processing unit 501.
  • the content-attached information processing unit 507 changes the content using tool software such as word processing software or a browser, or other methods.
  • the content-attached information processing unit 507 records the changed content on the information recording medium 167 via the media access processing unit 501 again.
  • When the contents of the content 1 management information file, content 1 ancillary information file 1 or content 1 ancillary information file 2 recorded on the information recording medium 167 are to be changed, and a 3D video is to be displayed using the changed contents together with content 1 video file 1 and content 1 video file 2, the content-attached information processing unit 507 reads the contents of the specified content 1 management information file, content 1 ancillary information file 1 or content 1 ancillary information file 2 from the information recording medium 167 via the media access processing unit 501.
  • the content ancillary information processing unit 507 changes the content using tool software such as word processing software or a browser, or other methods.
  • the content-attached information processing unit 507 sends the contents of the changed content 1 management information file, content 1 attached information file 1 or content 1 attached information file 2 to the video output processing unit 522. Further, the content processing unit 509 reads the content 1 video file 1 and the content 1 video file 2 from the information recording medium 167 via the media access processing unit 501. The content processing unit 509 sends the content 1 video file 1 and the content 1 video file 2 read from the information recording medium 167 to the video output processing unit 522.
  • The video output processing unit 522 uses the content 1 video file 1 and content 1 video file 2 read from the information recording medium 167, together with the changed contents of the content 1 management information file, content 1 ancillary information file 1 and content 1 ancillary information file 2, to create the 3D video.
  • FIG. 6 is a diagram schematically showing an example in which the distance between lenses is taken as an example of static device information and dynamic device information, and video content is reproduced by changing the distance between lenses.
  • FIG. 6 (A) is a schematic diagram showing a state when the subject 401 is photographed by the video content recording apparatus 100, and FIG. 6 (B) shows that the video content is reproduced by changing the distance between the lenses. It is a schematic diagram which shows the state of time.
  • FIGS. 6 (A) and 6 (B) show a state when the image is taken and a state when the image is reproduced as viewed from above.
  • the subject 401 is, for example, a quadrangular prism whose bottom surface is a square.
  • The video obtained by photographing the subject 401 is input to the first video input processing unit 121 and the second video input processing unit 123 of the video content recording apparatus 100.
  • the input images are a first subject image 102 and a second subject image 104, respectively.
  • The images taken by the first video input processing unit 121 and the second video input processing unit 123 correspond to the first video 421 and the second video 423, respectively.
  • Here, the first video input processing unit 121 corresponds to, for example, the left eye, and the second video input processing unit 123 corresponds to, for example, the right eye. It is assumed that the distance between the lenses at this time was d1.
  • This distance between lenses is recorded on the information recording medium 167 by the static device information management unit 146, for example.
  • the file in which the distance between the lenses is recorded is, for example, a content 1 management information file, a content 1 attached information file 1 or a content 1 attached information file 2.
  • Alternatively, the inter-lens distance may be recorded by the dynamic device information management unit 129 in the content 1 management information file, content 1 ancillary information file 1 or content 1 ancillary information file 2, or in some other file.
  • As described above, the static device information and dynamic device information recorded in the content 1 management information file, content 1 ancillary information file 1 or content 1 ancillary information file 2 can be changed and updated. Therefore, in Embodiment 2, for example, it is assumed that the distance between the lenses is changed from d1 to d2. Also assume, as an example, that d1 < d2.
  • The video stream data used for playback are the content 1 video file 1 and content 1 video file 2 as recorded; that is, the first video 421 and the second video 423 at the time of shooting are used for playback without change.
  • However, the reproduction process is executed assuming the inter-lens distance d2. It is assumed that no other parameters have been changed.
  • As a result, the reproduced virtual subject 431 is reproduced as a quadrangular prism whose shape differs from that of the actually photographed subject 401, for example one whose bottom surface is a rhombus.
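  • The distortion described above can be seen from the parallel-stereo model: with the recorded disparities fixed, a playback side that assumes baseline d2 instead of the shooting baseline d1 reconstructs every depth scaled by d2/d1, while lateral dimensions are unchanged, so a square base is no longer reproduced as a square. The following is a minimal sketch under that assumed model, not the exact reproduction process of the embodiment:

```python
def reconstructed_depth(true_depth_mm: float,
                        d1_mm: float,
                        d2_mm: float) -> float:
    """With the recorded disparities fixed (disp = f * d1 / Z), a point
    of true depth Z shot with baseline d1 is reconstructed at
    Z' = f * d2 / disp = Z * d2 / d1 when playback assumes baseline d2."""
    return true_depth_mm * d2_mm / d1_mm

# Shot with d1 = 65 mm; played back assuming d2 = 97.5 mm: the near and
# far faces of the prism, at 1000 mm and 1100 mm, reappear at 1500 mm
# and 1650 mm, so the depth extent grows from 100 mm to 150 mm while
# the width stays fixed, deforming the square base.
```

  The scale factor d2/d1 applied only along the depth axis is exactly why the virtual subject 431 no longer matches the photographed subject 401.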
  • In this way, the static device information and dynamic device information recorded in the content 1 management information file, content 1 ancillary information file 1 or content 1 ancillary information file 2 can be changed and recorded (updated) again. As a result, video content different from the original video content, or video content with editing and deformation applied, can be reproduced and displayed, and the user can view such video content.
  • The above merely shows an example of changing the static device information and dynamic device information recorded in the content 1 management information file, content 1 ancillary information file 1 or content 1 ancillary information file 2.
  • By performing interpolative calculation or predictive calculation, it is also possible to generate video information from a viewpoint between the two captured viewpoints, or from a viewpoint outside the two captured viewpoints.
  • That is, video information can be generated by calculation as if the subject had been shot from a viewpoint different from the actually shot viewpoints, and a 3D video can be generated using this video information.
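  • One simple way to realize such interpolation or extrapolation is disparity-based forward warping: each pixel is shifted by a fraction t of its disparity, where t between 0 and 1 gives an intermediate viewpoint and t outside that range an extrapolated one. The following toy one-scanline sketch is an assumed method for illustration only, not the calculation specified by the embodiment:

```python
def synthesize_view(left_row, disparity, t):
    """Toy 1-D view synthesis by forward warping: shift each pixel of the
    left scanline by t * disparity (t=0 -> left view, t=1 -> right view,
    0 < t < 1 an interpolated viewpoint, t < 0 or t > 1 an extrapolated
    one). Pixels left unfilled by the warp stay None (occlusion holes)."""
    out = [None] * len(left_row)
    for x, value in enumerate(left_row):
        xs = x - round(t * disparity[x])   # per-pixel horizontal shift
        if 0 <= xs < len(out):
            out[xs] = value
    return out
```

  A real implementation would fill the occlusion holes and blend both source views, but the shift-by-t·disparity step is the core of generating a viewpoint that was never actually photographed.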
  • FIG. 7 is a flowchart for explaining the video content playback processing by the video content playback/editing apparatus shown in FIG. 5.
  • First, in step S21, the content-attached information processing unit 507 instructs the media access processing unit 501 to read the dynamic device information and static device information (content-attached information) recorded on the information recording medium 167. At this time, the content-attached information processing unit 507 designates the dynamic device information and static device information associated with the video content to be reproduced.
  • the media access processing unit 501 reads the specified dynamic device information and static device information from the information recording medium 167 and outputs the information to the content-attached information processing unit 507.
  • the content-attached information processing unit 507 outputs the dynamic device information and the static device information output by the media access processing unit 501 to the video output processing unit 522.
  • In step S22, the content processing unit 509 instructs the media access processing unit 501 to read the first video stream recorded on the information recording medium 167.
  • the content processing unit 509 designates the first video stream associated with the video content to be reproduced.
  • the media access processing unit 501 reads the designated first video stream from the information recording medium 167 and outputs it to the content processing unit 509.
  • the content processing unit 509 outputs the first video stream output by the media access processing unit 501 to the video output processing unit 522.
  • In step S23, the content processing unit 509 instructs the media access processing unit 501 to read out the second video stream recorded on the information recording medium 167. At this time, the content processing unit 509 designates the second video stream associated with the video content to be reproduced. The media access processing unit 501 reads the designated second video stream from the information recording medium 167 and outputs it to the content processing unit 509. The content processing unit 509 outputs the second video stream output by the media access processing unit 501 to the video output processing unit 522.
  • In step S24, the video output processing unit 522 creates a three-dimensional stereoscopic image using the first video stream output by the content processing unit 509, the second video stream output by the content processing unit 509, and the dynamic device information and static device information output by the content-attached information processing unit 507.
  • For example, the video output processing unit 522 creates a three-dimensional stereoscopic image in which the first video stream and the second video stream are shifted relative to each other according to the distance between the lenses.
  • In step S25, the video output processing unit 522 outputs the created three-dimensional stereoscopic image to the stereoscopic image display unit 524.
  • the stereoscopic image display unit 524 displays the 3D stereoscopic image created by the video output processing unit 522.
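  • As one hypothetical example of the final packaging step in steps S24–S25, the two decoded streams could be combined into a side-by-side stereo frame, a common input format for stereoscopic displays. The embodiment does not specify an output format, so this is purely illustrative:

```python
def pack_side_by_side(left_frame, right_frame):
    """Pack the two decoded video frames into one side-by-side stereo
    frame (one common 3-D frame-packing format). Frames are row-major
    lists of pixel rows; the two frames must have the same height."""
    assert len(left_frame) == len(right_frame), "frame heights differ"
    return [lrow + rrow for lrow, rrow in zip(left_frame, right_frame)]
```

  Whatever packing is chosen, the device information (for example the inter-lens distance) decides how much the two views are shifted before packing, which is what step S24 describes.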
  • FIG. 8 is a flowchart for explaining the video content editing/playback processing by the video content playback/editing apparatus shown in FIG. 5.
  • In step S31, the content-attached information processing unit 507 instructs the media access processing unit 501 to read the dynamic device information and static device information (content-attached information) recorded on the information recording medium 167. At this time, the content-attached information processing unit 507 designates the dynamic device information and static device information associated with the video content to be reproduced. The media access processing unit 501 reads the specified dynamic device information and static device information from the information recording medium 167 and outputs them to the content-attached information processing unit 507.
• In step S32, the user interface processing unit 505 accepts a change instruction including the information to be changed and the change contents, from among the dynamic device information and the static device information output by the media access processing unit 501. The user interface processing unit 505 outputs the received change instruction to the content-attached information processing unit 507.
• In step S33, the content-attached information processing unit 507 changes the dynamic device information and the static device information based on the change instruction output by the user interface processing unit 505. That is, the content-attached information processing unit 507 changes the contents of the dynamic device information and the static device information to the contents according to the change instruction.
• The content-attached information processing unit 507 outputs the dynamic device information and the static device information whose contents have been changed to the video output processing unit 522.
• In step S34, the content-attached information processing unit 507 outputs the changed dynamic device information and static device information to the media access processing unit 161.
• The media access processing unit 161 records the dynamic device information and the static device information output by the content-attached information processing unit 507 on the information recording medium 167.
• The processing in steps S35 and S36 is the same as the processing in steps S22 and S23 described above.
• In step S37, the video output processing unit 522 creates a three-dimensional stereoscopic image using the first video stream output by the content processing unit 509, the second video stream output by the content processing unit 509, and the dynamic device information and static device information changed by the content-attached information processing unit 507. For example, when the lens focal length corresponding to the first video stream and the lens focal length corresponding to the second video stream are changed, the video output processing unit 522 creates a three-dimensional stereoscopic image with blurring according to the changed lens focal length.
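As a rough illustration of how blurring might scale with an edited focal length, consider the sketch below. The box-blur choice and the mapping from the focal-length difference to a blur radius are illustrative assumptions only; they are not part of the disclosure.

```python
def box_blur_row(row, radius):
    """Simple 1D box blur over a row of pixels; radius 0 leaves it unchanged."""
    if radius == 0:
        return list(row)
    n = len(row)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        window = row[lo:hi]
        out.append(sum(window) / len(window))
    return out

def blur_for_focal_change(row, recorded_focal_mm, edited_focal_mm, px_per_mm=1):
    """Blur strength grows with the difference between the recorded and the
    edited lens focal length (an assumed linear mapping)."""
    radius = round(abs(edited_focal_mm - recorded_focal_mm) * px_per_mm)
    return box_blur_row(row, radius)

row = [0, 0, 9, 0, 0]
unchanged = blur_for_focal_change(row, 35, 35)  # no focal change: no blur
blurred = blur_for_focal_change(row, 35, 36)    # changed focal length: blurred
```

A real implementation would more likely use a depth-dependent, lens-model-based blur kernel per pixel rather than a uniform box blur.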
• In step S38, the video output processing unit 522 outputs the created three-dimensional stereoscopic image to the stereoscopic image display unit 524.
• The stereoscopic image display unit 524 displays the three-dimensional stereoscopic image created by the video output processing unit 522.
• The video content playback/editing apparatus 500 described above is an embodiment of the video content playback apparatus according to the present invention.
• Similarly, the video content playback method and the video content editing method executed by the video content playback/editing apparatus 500 are embodiments of the video content playback method and the video content editing method according to the present invention.
• Note that the shooting direction information can be obtained with a gyroscope; a gyroscope provided for camera shake correction can also be used for this purpose.
• The video content recording apparatus and the video content playback/editing apparatus may further include a camera shake correction execution unit that performs camera shake correction and a switching unit that switches the camera shake correction function on and off. In this case, for example, the video content recording apparatus records video data with the camera shake correction function turned off, and the video content playback/editing apparatus plays it back with the camera shake correction function turned on.
• Alternatively, the recorded information may be played back as it is with the camera shake correction function turned off, or the camera shake may be corrected during playback with the camera shake correction function turned on.
• Alternatively, step S34 shown in FIG. 8 may be omitted: without recording the changed dynamic device information and static device information via the media access processing unit 161, a three-dimensional stereoscopic image may be created from the first and second video streams using the changed dynamic device information and static device information.
• In this case, the recorded contents on the information recording medium are not changed; the dynamic device information and the static device information are changed only during playback, and a three-dimensional stereoscopic image is displayed using the changed dynamic device information and static device information.
  • FIG. 9 is a block diagram showing a schematic configuration of a video content recording apparatus 600 according to the third embodiment of the present invention.
• Since many parts of the video content recording apparatus 600 according to the third embodiment of the present invention are the same as those of the video content recording apparatus 100 according to the first embodiment, the description of the parts that are the same as those in FIG. 1 is omitted, and only the parts different from the first embodiment are described.
• The video content recording apparatus 100 according to the first embodiment of the present invention has two (or more) video input processing units.
• In contrast, the video content recording apparatus 600 according to the third embodiment of the present invention has only one video input processing unit 621 and no further video input processing units. Instead, it has a drive unit 622 that moves the single video input processing unit 621. This is the most significant difference between the video content recording apparatus 600 according to the third embodiment and the video content recording apparatus 100 according to the first embodiment.
• As the moving mechanism of the drive unit 622, various well-known methods can be used, such as mounting the video input processing unit 621 on a belt, or using a spring and a cam.
  • FIG. 10 is a schematic diagram for explaining the video content recording apparatus according to the third embodiment.
• In FIG. 10, a first position 651 represents the position before the video input processing unit 621 moves, and a second position 652 represents the position after the video input processing unit 621 has moved.
• The distance moved from the first position 651 to the second position 652 is denoted d2.
• The moving distance d2 corresponds to the inter-lens distance.
• The video input processing unit 621 includes a lens 641 and an image sensor 625, which are substantially the same as those in the first embodiment of the present invention.
• At the first position 651, the video input processing unit 621 receives the subject image 602 from the subject 601 with the lens 641 and forms the image on the image sensor 625.
• This image is substantially the same as the image photographed by the first video input processing unit 121 in the video content recording apparatus 100 according to the first embodiment.
  • this image is electrically recorded, for example, on the information recording medium 167, as with the video content recording apparatus 100 according to the first embodiment of the present invention. Since the recording method and format are substantially the same as those of the video content recording apparatus 100 according to the first embodiment of the present invention, the description thereof is omitted.
• Next, the drive unit 622 moves the video input processing unit 621 to the second position (the position after the movement) 652. At the second position 652, the video input processing unit 621 receives the subject image 604 from the subject 601 with the lens 641 and forms the image on the image sensor 625. This image is substantially the same as the image photographed by the second video input processing unit 123 in the video content recording apparatus 100 according to the first embodiment of the present invention.
  • This image is also electrically recorded, for example, on the information recording medium 167, like the video content recording apparatus 100 according to the first embodiment of the present invention. Since the recording method and format are substantially the same as those of the video content recording apparatus 100 according to the first embodiment of the present invention, the description thereof is omitted.
• The video content recording apparatus 100 according to the first embodiment has two (or more) video input processing units, captures images from two (or more) viewpoints in parallel, and records them in parallel, for example, on the information recording medium 167.
• In contrast, the video content recording apparatus 600 according to the third embodiment has only one video input processing unit instead of two (or more) video input processing units.
• Shooting at the first position and shooting at the second position are repeatedly executed, and the video information shot at each position is recorded, for example, on the information recording medium 167 as separate video streams or as a single video stream.
• In addition, the movement distance d2, the time required for the movement, the time information of shooting at the first position, the time information of shooting at the second position, and the like are recorded and saved as management information, for example, in the content 1 management information file, the content 1 attached information file 1, or the content 1 attached information file 2.
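The time-division recording scheme above can be sketched as follows. This is purely illustrative: the data structures, field names, and the simplification that each capture takes one movement interval are assumptions, not the disclosed recording format.

```python
def record_time_division(frames, move_distance_mm, move_time_ms):
    """Simulate time-division recording: one video input unit alternates
    between a first and a second position; frames captured at each position
    go into separate streams, and management information (movement distance,
    movement time, per-capture times) is kept alongside."""
    streams = {"first": [], "second": []}
    management = {"move_distance_mm": move_distance_mm,
                  "move_time_ms": move_time_ms,
                  "capture_times": []}
    t = 0
    for i, frame in enumerate(frames):
        position = "first" if i % 2 == 0 else "second"
        streams[position].append(frame)
        management["capture_times"].append((position, t))
        t += move_time_ms  # time to move to the other position before the next shot
    return streams, management

streams, mgmt = record_time_division(["f0", "f1", "f2", "f3"],
                                     move_distance_mm=65, move_time_ms=20)
```

The two resulting streams correspond to the first and second video streams described in the text; the management dictionary stands in for the content 1 management information file.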
• FIG. 11 is a flowchart for explaining video content recording processing by the video content recording apparatus shown in FIG. 9.
• In step S41, the static device information management unit 146 reads out the static device information stored in advance therein.
• The static device information management unit 146 outputs the read static device information to the content-attached information processing unit 144.
• In step S42, the content-attached information processing unit 144 performs the data conversion processing necessary for recording the static device information on the information recording medium 167.
• The content-attached information processing unit 144 outputs the static device information obtained by the data conversion to the content processing unit 142.
• The content processing unit 142 outputs the static device information to the media access processing unit 161.
• The media access processing unit 161 records the static device information output by the content processing unit 142 on the information recording medium 167.
• In step S43, the video input processing unit 621 acquires first video data obtained by photographing the subject at the first position, and outputs the first video data to the AV control buffer processing unit 125.
  • the dynamic device information management unit 129 acquires dynamic device information from the video input processing unit 621.
  • the video input processing unit 621 outputs the dynamic device information at the time of shooting at the first position to the dynamic device information management unit 129.
  • the dynamic device information management unit 129 outputs the acquired dynamic device information to the content-attached information processing unit 144.
  • the dynamic device information further includes time information at the time of shooting at the first position.
  • the video input processing unit 621 includes a timer, a clock, or an oscillator.
  • the video input processing unit 621 outputs the shooting start time and the shooting end time as time information for a moving image, and outputs the shooting time as time information for a still image.
• In step S44, the AV control buffer processing unit 125 temporarily stores the first video data output by the video input processing unit 621.
• In step S45, the CODEC processing unit 127 encodes the first video data stored in the AV control buffer processing unit 125.
• In step S46, the content processing unit 142 converts the first video data encoded by the CODEC processing unit 127 into a stream.
• In step S47, the content-attached information processing unit 144 performs the data conversion processing necessary for recording the dynamic device information acquired at the first position on the information recording medium 167.
  • the content-attached information processing unit 144 outputs the data-converted dynamic device information to the content processing unit 142.
  • the content processing unit 142 outputs the first video stream and the dynamic device information to the media access processing unit 161.
  • the media access processing unit 161 records the first video stream and dynamic device information output by the content processing unit 142 on the information recording medium 167.
• In step S48, the drive unit 622 moves the video input processing unit 621 to a second position different from the first position.
• For example, the first position corresponds to the position of the human right eye, and the second position corresponds to the position of the human left eye. In this case, the movement distance is made equal to the distance between the human eyes, so that 3D video suited to the viewer can be played back.
• In step S49, the video input processing unit 621 acquires second video data obtained by photographing the subject at the second position, and outputs the second video data to the AV control buffer processing unit 125.
  • the dynamic device information management unit 129 acquires dynamic device information from the video input processing unit 621.
  • the video input processing unit 621 outputs the dynamic device information at the time of shooting at the second position to the dynamic device information management unit 129.
  • the dynamic device information management unit 129 outputs the acquired dynamic device information to the content-attached information processing unit 144.
  • the dynamic device information further includes time information at the time of shooting at the second position.
• In the present embodiment, the shooting start time, the shooting end time, and the shooting time at each of the first position and the second position are used as time information. However, the difference value between the shooting start time at the first position and the shooting start time at the second position, and the difference value between the shooting time at the first position and the shooting time at the second position, may be used as time information instead.
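The difference-value form of the time information can be sketched as below; the field names and the millisecond units are assumptions for illustration, not the recorded format.

```python
def to_difference_times(first_start_ms, second_start_ms,
                        first_shot_ms, second_shot_ms):
    """Store the second position's time information as differences from the
    first position's times, rather than as absolute times."""
    return {"start_diff_ms": second_start_ms - first_start_ms,
            "shot_diff_ms": second_shot_ms - first_shot_ms}

def restore_second_times(first_start_ms, first_shot_ms, diffs):
    """Recover the second position's absolute times from the differences."""
    return (first_start_ms + diffs["start_diff_ms"],
            first_shot_ms + diffs["shot_diff_ms"])

diffs = to_difference_times(1000, 1040, 1005, 1045)
second_start, second_shot = restore_second_times(1000, 1005, diffs)
```

Storing differences is typically more compact and directly exposes the inter-position timing needed for playback.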
• In step S50, the AV control buffer processing unit 125 temporarily stores the second video data output by the video input processing unit 621.
• In step S51, the CODEC processing unit 127 encodes the second video data stored in the AV control buffer processing unit 125.
• In step S52, the content processing unit 142 converts the second video data encoded by the CODEC processing unit 127 into a stream.
• In step S53, the content-attached information processing unit 144 performs the data conversion processing necessary for recording the dynamic device information acquired at the second position on the information recording medium 167.
  • the content-attached information processing unit 144 outputs the data-converted dynamic device information to the content processing unit 142.
  • the content processing unit 142 outputs the second video stream and the dynamic device information to the media access processing unit 161.
  • the media access processing unit 161 records the second video stream and dynamic device information output by the content processing unit 142 on the information recording medium 167.
  • the first video stream, the second video stream, static device information, and dynamic device information are recorded on the information recording medium 167 as video content.
• In the present embodiment, one video input processing unit is moved from the first position to the second position within the video content recording apparatus; however, a video content recording apparatus including one video input processing unit may itself be moved from the first position to the second position.
• In this case, the moving distance of the video content recording apparatus is recorded as dynamic device information.
  • FIG. 12 is a block diagram showing a configuration of main parts of video content recording apparatus 700 according to Embodiment 4.
• Since the configuration of the video content recording apparatus 700 according to the fourth embodiment is the same as that of the video content recording apparatus 100 according to the first embodiment, except that the video content recording apparatus 700 includes one video input processing unit 721, its description is omitted.
• The video input processing unit 721 includes a plurality of lenses and a single imaging element; subject images 711 to 716 are incident on the plurality of lenses, and the subject images 711 to 716 are formed on the imaging element.
• In this way, the video input processing unit 721 acquires one video data 717 including a plurality of videos taken from a plurality of viewpoints.
  • FIG. 13 is a diagram showing a schematic configuration of the video input processing unit of the video content recording apparatus according to the fourth embodiment of the present invention.
• In FIG. 13, the entire video content recording apparatus and the entire video input processing unit are not shown; instead, an outline of the more detailed configuration of the compound-eye lens 741, which is the greatest feature of the fourth embodiment of the present invention, and an outline of the image formed by the compound-eye lens 741 on the image sensor 725 are shown.
  • the video content recording apparatus 100 according to the first embodiment of the present invention has two (or two or more) video input processing units.
• The video content recording apparatus 600 according to the third embodiment of the present invention also has only one video input processing unit, but it moves to two (or more) positions in a time-division manner and shoots.
• The video content recording apparatus 700 according to the fourth embodiment of the present invention has only one video input processing unit and does not move it.
• Instead of the convex and concave lenses usually used as lenses, a compound-eye lens 741 in which a plurality of minute single lenses are arranged in a plane is used. This is the most significant difference between the video content recording apparatus according to the fourth embodiment and the video content recording apparatus 100 according to the first embodiment and the video content recording apparatus 600 according to the third embodiment.
  • the video input processing unit 721 shown in FIG. 13 includes a compound eye lens 741 and an image sensor 725.
• In FIG. 13, only six microlenses, namely microlens 751, microlens 752, microlens 753, microlens 754, microlens 755, and microlens 756, are depicted as the microlenses constituting the compound-eye lens 741, and they are shown as if arranged in a row.
  • the compound eye lens 741 shown in FIG. 13 is an example, and more or less fine lenses may be arranged in a row or in a plane.
• Each microlens is illustrated as being composed of only one convex lens; this is a simplification, and each microlens may be configured by combining a plurality of convex lenses, concave lenses, aspheric lenses, and other various lenses.
• This subject photographing method is the method used in integral photography, which is known as a glasses-free 3D stereoscopic image display method.
• In integral photography, an inverted image is converted into an upright image using a gradient-index lens or another optical technique.
• However, since an inverted image can be converted into an upright image electrically, it is not always necessary to convert it into an upright image by an optical method.
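The electrical conversion mentioned above amounts to a 180-degree rotation of each sub-image, i.e. reversing both image axes. A minimal sketch (the list-of-rows representation is an assumption; real processing would work on sensor pixel buffers):

```python
def invert(image):
    """Produce the optically inverted image: rotated 180 degrees,
    i.e. both the rows and the pixels within each row are reversed."""
    return [list(reversed(row)) for row in reversed(image)]

def to_upright(image):
    """Electrically convert an inverted image back to upright by
    reversing both axes again (a 180-degree rotation is self-inverse)."""
    return [list(reversed(row)) for row in reversed(image)]

original = [[1, 2],
            [3, 4]]
inverted = invert(original)
upright = to_upright(inverted)
```

Because the 180-degree rotation is its own inverse, applying the same axis reversal to an inverted sub-image restores the upright view.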
  • Video information captured by the compound eye lens 741 and the image sensor 725 is electrically recorded, for example, on the information recording medium 167, as in the video content recording apparatus 100 according to the first embodiment of the present invention. . Since the recording method and format are substantially the same as those of the video content recording apparatus 100 according to the first embodiment of the present invention, the description thereof is omitted.
• All or part of the video information captured by these six microlenses may be recorded on the information recording medium 167 as separate video streams, or may be recorded on the information recording medium 167 as a single video stream.
• Further, management information is recorded in files such as the content 1 management information file, the content 1 attached information file 1, and the content 1 attached information file 2.
• The content 1 management information file, the content 1 attached information file 1, and the content 1 attached information file 2 are also recorded on the information recording medium 167, either together with the video stream or individually.
• Thus, the video content playback/editing apparatus can read out the video information and the attached information from the information recording medium 167 and play back three-dimensional video.
• For example, the plurality of images formed on the image sensor 725 and recorded on the information recording medium 167 are read out as they are, and each inverted image is electrically converted to an upright image.
• In this way, a three-dimensional stereoscopic image can be reproduced without glasses.
• FIG. 14 is a flowchart for explaining video content recording processing by the video content recording apparatus shown in FIG. 12.
• In step S61, the static device information management unit 146 reads out the static device information stored in advance therein.
• The static device information management unit 146 outputs the read static device information to the content-attached information processing unit 144.
• In the fourth embodiment, the imaging element is divided into a plurality of regions, and subject images taken from different viewpoints are formed in the respective regions.
• The static device information therefore includes the position of the region corresponding to each lens on the imaging element, region information indicating the regions into which the imaging element is divided, and the number of region divisions of the imaging element.
• In the present embodiment, the first video data and the second video data are cut out from the single video data at the time of reproduction, and the 3D video is created from them.
• However, the cutting out is not limited to the time of reproduction; a plurality of video data may instead be cut out from the single video data at the time of recording.
• In that case, the content processing unit 142 refers to the static device information, cuts out the first video data and the second video data from the video data encoded by the CODEC processing unit 127, converts each image to an upright image, and converts the data into streams.
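The cutting-out of per-lens regions guided by the area-division information can be sketched as follows. The side-by-side layout, the list-of-rows frame representation, and the single division count are simplifying assumptions; the static device information described above would carry the actual region positions.

```python
def cut_out_regions(frame, cols):
    """Split one frame (a list of pixel rows) into `cols` side-by-side
    per-lens regions, as described by the area-division information in
    the static device information."""
    width = len(frame[0])
    assert width % cols == 0, "frame width must divide evenly into regions"
    step = width // cols
    return [[row[i * step:(i + 1) * step] for row in frame] for i in range(cols)]

# One 2x6 frame holding three 2x2 sub-images taken from three viewpoints.
frame = [[1, 2, 3,  4,  5,  6],
         [7, 8, 9, 10, 11, 12]]
regions = cut_out_regions(frame, cols=3)
```

Each returned region corresponds to the video data from one viewpoint, which can then be converted to an upright image and streamed individually.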
• In step S62, the content-attached information processing unit 144 performs the data conversion processing necessary for recording the static device information on the information recording medium 167.
  • the content-attached information processing unit 144 outputs the static device information obtained by data conversion to the content processing unit 142.
  • the content processing unit 142 outputs the static device information to the media access processing unit 161.
  • the media access processing unit 161 records the static device information output by the content processing unit 142 on the information recording medium 167.
• In step S63, the video input processing unit 721 acquires video data obtained by photographing the subject from a plurality of viewpoints, and outputs the video data to the AV control buffer processing unit 125.
  • the dynamic device information management unit 129 acquires dynamic device information from the video input processing unit 721.
  • the video input processing unit 721 outputs the dynamic device information to the dynamic device information management unit 129.
• In step S64, the AV control buffer processing unit 125 temporarily stores the video data output by the video input processing unit 721.
• In step S65, the CODEC processing unit 127 encodes the video data stored in the AV control buffer processing unit 125.
• In step S66, the content processing unit 142 converts the video data encoded by the CODEC processing unit 127 into a stream.
• In step S67, the content-attached information processing unit 144 performs the data conversion processing necessary for recording the dynamic device information on the information recording medium 167.
• The content-attached information processing unit 144 outputs the dynamic device information obtained by the data conversion to the content processing unit 142.
  • the content processing unit 142 outputs the video stream and the dynamic device information to the media access processing unit 161.
  • the media access processing unit 161 records the video stream and dynamic device information output by the content processing unit 142 on the information recording medium 167. At this time, a video stream, static device information, and dynamic device information are recorded on the information recording medium 167 as video content.
• Note that the static device information may include lens size information indicating the size of each lens.
• The lens size information is information indicating the lens diameter and thickness of each lens. For example, when a plurality of lenses are arranged in the central portion and the outer portion of the incident light range, the brightness of the video data changes according to the sizes of the lens arranged in the central portion and the lenses arranged in the outer portion.
• Here, the incident light range is the range of light incident on the compound-eye lens composed of the plurality of lenses. If the lens arranged in the central portion is larger than the lenses arranged in the outer portion, the video data is less likely to be washed out; if it is smaller, the video data is less likely to be dark. Image quality can therefore be improved by using this lens size information for brightness correction during playback.
  • the video content recording apparatus may record lens arrangement information representing how a plurality of lenses are arranged as static device information.
• For example, the video content playback/editing apparatus accepts a selection of video data shot by a predetermined lens among the plurality of lenses, and plays back 3D video using the video data shot by the selected lens. Further, when the video content playback/editing apparatus is a projector, playback can be performed with the same lens arrangement as during recording.
  • FIG. 15 is a block diagram showing the configuration of the main part of video content editing apparatus 800 according to Embodiment 5.
  • a video content editing apparatus 800 shown in FIG. 15 includes a media access processing unit 801, a menu processing unit 803, a user interface processing unit 805, and a content-attached information processing unit 807.
  • the configuration of video content editing apparatus 800 in the fifth embodiment is almost the same as the configuration of video content reproduction / editing apparatus 500 in the second embodiment, and thus detailed description thereof is omitted.
• On the information recording medium 167, at least one video data obtained by photographing a subject from a plurality of different viewpoints, static device information that is information unique to the video content recording apparatus, and dynamic device information that is information that changes according to the situation when the video data is acquired are recorded as video content.
  • the user interface processing unit 805 specifies one or both of static device information and dynamic device information, and accepts a change instruction for changing the contents.
• The media access processing unit 801 reads out one or both of the static device information and the dynamic device information corresponding to the change instruction received by the user interface processing unit 805 from the information recording medium 167.
  • the content-attached information processing unit 807 changes the contents of one or both of the static device information and the dynamic device information read by the media access processing unit 501 in accordance with the change instruction.
  • the content-attached information processing unit 807 records either or both of the changed static device information and dynamic device information on the information recording medium 167.
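The change-instruction handling described above can be sketched as a small metadata update. The dictionary representation, the field names (`target`, `new_value`), and the specific parameters shown are assumptions for illustration; the apparatus operates on the recorded attached-information files.

```python
def apply_change_instruction(device_info, instruction):
    """Apply a change instruction (target item and new content) to a copy
    of the static or dynamic device information, leaving the original
    recorded information untouched until it is written back."""
    changed = dict(device_info)
    changed[instruction["target"]] = instruction["new_value"]
    return changed

dynamic_info = {"lens_focal_mm": 35, "lens_distance_mm": 65}
instruction = {"target": "lens_focal_mm", "new_value": 50}
edited = apply_change_instruction(dynamic_info, instruction)
```

Writing `edited` back corresponds to step S74; keeping only `edited` in memory corresponds to the playback-only variant in which the recorded contents are not changed.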
• FIG. 16 is a flowchart for explaining video content editing processing by the video content editing apparatus shown in FIG. 15.
• In step S71, the user interface processing unit 805 accepts a change instruction including the information to be changed and the change contents, from among the dynamic device information and the static device information recorded on the information recording medium 167.
• The user interface processing unit 805 outputs the received change instruction to the content-attached information processing unit 807.
• In step S72, the content-attached information processing unit 807 instructs the media access processing unit 801 to read out the dynamic device information and static device information specified by the user interface processing unit 805.
  • the media access processing unit 801 reads the specified dynamic device information and static device information from the information recording medium 167 and outputs them to the content-attached information processing unit 807.
• In step S73, the content-attached information processing unit 807 changes the dynamic device information and the static device information based on the contents of the change instruction output by the user interface processing unit 805. That is, the content-attached information processing unit 807 changes the contents of the dynamic device information and the static device information to the contents according to the change instruction.
• The content-attached information processing unit 807 outputs the dynamic device information and the static device information whose contents have been changed to the media access processing unit 801.
• In step S74, the media access processing unit 801 records the dynamic device information and the static device information output by the content-attached information processing unit 807 on the information recording medium 167.
• The present invention can be used not only for recorded video but also to provide 3D video (moving images and still images) as source video for a photo frame; in combination with a 3D display device, it is possible to obtain and store information unique to 3D that goes beyond a single viewpoint. Such information acquisition and storage can also be applied to a robot.
• Furthermore, when 3D video (moving images and still images) is used for inspection, the viewing angle can be changed to various viewpoints, and the overlap of cells can be avoided so that the inside can be checked from another angle, which can greatly contribute to improving inspection accuracy.
• A video content recording apparatus according to the present invention is a video content recording apparatus that records video content on an information recording medium, and includes: a video data acquisition unit that acquires at least one video data obtained by photographing a subject from a plurality of different viewpoints; a static parameter information acquisition unit that acquires static parameter information that is information unique to the video content recording apparatus; a dynamic parameter information acquisition unit that acquires dynamic parameter information that is information that changes according to the situation when the video data is acquired; and a recording unit that records, on the information recording medium as the video content, the at least one video data acquired by the video data acquisition unit, the static parameter information acquired by the static parameter information acquisition unit, and the dynamic parameter information acquired by the dynamic parameter information acquisition unit in association with each other.
• A video content recording method according to the present invention is a video content recording method for recording video content on an information recording medium, and includes: a video data acquisition step of acquiring at least one video data obtained by photographing a subject from a plurality of different viewpoints; a static parameter information acquisition step of acquiring static parameter information that is information unique to the video content recording apparatus; a dynamic parameter information acquisition step of acquiring dynamic parameter information that is information that changes according to the situation when the video data is acquired; and a recording step of recording, on the information recording medium as the video content, the at least one video data acquired in the video data acquisition step, the static parameter information acquired in the static parameter information acquisition step, and the dynamic parameter information acquired in the dynamic parameter information acquisition step in association with each other.
  • A video content recording program is a video content recording program for recording video content on an information recording medium, and causes a computer to function as: a video data acquisition unit that acquires at least one video data obtained by photographing a subject from a plurality of different viewpoints; a static parameter information acquisition unit that acquires static parameter information, which is information unique to the video content recording device; a dynamic parameter information acquisition unit that acquires dynamic parameter information, which is information that changes according to the situation when the video data is acquired; and a recording unit that records the at least one video data acquired by the video data acquisition unit, the static parameter information acquired by the static parameter information acquisition unit, and the dynamic parameter information acquired by the dynamic parameter information acquisition unit on the information recording medium in association with one another as the video content.
  • A subject is photographed from a plurality of different viewpoints, and at least one video data is acquired. Then, static parameter information that is information unique to the video content recording apparatus is acquired, and dynamic parameter information that is information that changes according to the situation when the video data is acquired is acquired. Thereafter, the acquired at least one video data, the acquired static parameter information, and the acquired dynamic parameter information are associated with each other and recorded on the information recording medium as video content.
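The association described above can be sketched as a simple in-memory record. This is an illustrative structure only; the patent does not define a concrete schema, so all field names here are assumptions:

```python
from dataclasses import dataclass
from typing import List, Tuple

@dataclass
class StaticParams:            # information unique to the recording device
    lens_distance_mm: float    # distance between lenses (stereo baseline)
    focal_length_mm: float
    lens_distortion: float

@dataclass
class DynamicParams:           # information that changes per shot
    unit_position: Tuple[float, float, float]
    unit_direction_deg: float
    capture_time: float        # seconds; relevant when one camera is moved

@dataclass
class VideoContent:            # the association written to the medium
    video_files: List[str]     # one entry per viewpoint
    static: StaticParams
    dynamic: List[DynamicParams]  # one entry per viewpoint/shot

content = VideoContent(
    video_files=["view_left.mp4", "view_right.mp4"],
    static=StaticParams(65.0, 35.0, 0.01),
    dynamic=[DynamicParams((0.0, 0.0, 0.0), 0.0, 0.0),
             DynamicParams((0.065, 0.0, 0.0), 0.0, 0.0)],
)
# Each video file has its own dynamic parameters; static parameters are shared.
assert len(content.video_files) == len(content.dynamic)
```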
  • Since the static parameter information and the dynamic parameter information, which is information that changes according to the situation at the time of acquisition, are recorded on the information recording medium as video content together with the video data, the video data can be processed for three-dimensional display using the static parameter information and the dynamic parameter information, and the recorded content can be used universally by devices capable of displaying various kinds of 3D video content.
  • Since the video content for displaying 3D video recorded by the video content recording device can be used with various 3D display devices, it becomes possible to standardize the information format for displaying 3D video while contributing to the promotion of the distribution of 3D video information.
  • It is preferable that the video data acquisition unit include a photographing unit having at least one image sensor and a plurality of lenses that form a subject image on the image sensor, and that the static parameter information include at least one of the distance between the lenses when the subject is photographed, the focal length of the lens, and the distortion value of the lens.
  • The video data can be processed so that it can be displayed in three dimensions using the distance between the lenses of the photographing unit, the focal length of the lens, and the distortion value of the lens at the time the video data was acquired.
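One standard way a recorded lens distance (stereo baseline) and focal length enable three-dimensional processing is the pinhole-stereo depth relation Z = f · B / d. The sketch below assumes the focal length has already been converted to pixel units; it illustrates the principle, not the patent's own algorithm:

```python
def depth_from_disparity(baseline_m, focal_px, disparity_px):
    """Classic pinhole-stereo relation: depth Z = f * B / d.

    baseline_m   -- distance between the two lenses (a static parameter)
    focal_px     -- focal length expressed in pixels (a static parameter)
    disparity_px -- horizontal shift of a feature between the two views
    """
    if disparity_px <= 0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px

# 65 mm baseline, 800 px focal length, 20 px disparity -> 2.6 m depth
print(depth_from_disparity(0.065, 800, 20))  # 2.6
```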
  • It is preferable that the video data acquisition unit include a photographing unit having at least one image sensor and a plurality of lenses that form a subject image on the image sensor, and that the dynamic parameter information include at least one of a direction angle of the photographing unit when photographing the subject, photographing unit position information representing the position of the photographing unit, a direction angle of the lens, and lens position information representing the position of the lens.
  • According to this configuration, the video data can be processed so that the image can be displayed in three dimensions. In addition, it is possible to reproduce 3D images not only by using these values as they are but also by changing them to various other values.
  • It is preferable that the video data acquisition unit include one photographing unit and a moving unit that moves the one photographing unit to positions corresponding to the plurality of viewpoints.
  • According to this configuration, three-dimensional images can be recorded at lower cost and with a simpler configuration than providing a photographing unit at each of the positions corresponding to the plurality of viewpoints.
  • It is preferable that the dynamic parameter information acquisition unit acquire time information for specifying the shooting time at each viewpoint of the photographing unit moved by the moving unit.
  • Time information for specifying the shooting time at each viewpoint of the moved photographing unit is acquired, and this time information is recorded on the information recording medium together with the video data.
  • Using the time information, images taken by the moving photographing unit at different times can be combined into a 3D video as if they had been taken from multiple viewpoints simultaneously.
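A minimal sketch of how the stored time information might be used, assuming a static scene and hypothetical frame labels: for each pass of the moving camera, pick the recorded frame whose timestamp is closest to a common target time, so frames from different passes can be paired as if shot simultaneously.

```python
from bisect import bisect_left

def frame_at(times, frames, t):
    """Pick the recorded frame whose timestamp is closest to t.

    times  -- sorted ascending shooting times (dynamic parameter information)
    frames -- frame handles aligned with times
    """
    i = bisect_left(times, t)
    if i == 0:
        return frames[0]
    if i == len(times):
        return frames[-1]
    before, after = times[i - 1], times[i]
    return frames[i] if after - t < t - before else frames[i - 1]

# One moving camera recorded viewpoints A and B on separate passes;
# the stored shooting times let us pair frames across the passes.
pass_a = {"times": [0.0, 1.0, 2.0], "frames": ["a0", "a1", "a2"]}
pass_b = {"times": [0.1, 1.1, 2.1], "frames": ["b0", "b1", "b2"]}
pair = (frame_at(pass_a["times"], pass_a["frames"], 1.05),
        frame_at(pass_b["times"], pass_b["frames"], 1.05))
print(pair)  # ('a1', 'b1')
```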
  • It is preferable that the video data acquisition unit include one photographing unit having one image sensor and a plurality of lenses that form subject images on the one image sensor, and that the one photographing unit acquire video data shot from a different viewpoint for each of the plurality of lenses.
  • According to this configuration, the photographing unit acquires a plurality of video data shot from different viewpoints.
  • It is preferable that the dynamic parameter information acquisition unit acquire information that associates the plurality of video data acquired by the photographing unit with the viewpoint positions at which the plurality of video data were acquired.
  • It is preferable that the video content recording apparatus further include a storage unit that stores the static parameter information in advance, and that the static parameter information acquisition unit read the static parameter information from the storage unit.
  • It is preferable that the recording unit record at least one of the video data, the static parameter information, and the dynamic parameter information on the information recording medium as a file different from the rest.
  • It is preferable that the recording unit record at least one of the video data, the static parameter information, and the dynamic parameter information in a directory different from the rest.
  • Since at least one of the video data, the static parameter information, and the dynamic parameter information is recorded in a different directory, that directory can be managed individually. It also becomes possible to standardize the recording and storage format so that only the necessary information can be selected and distributed.
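A hypothetical on-medium layout following this idea, with assumed directory and file names (the patent prescribes no particular names or formats): the video, static, and dynamic information each get their own directory, so any one kind can be managed or shipped alone.

```python
import json
import os
import tempfile

# Hypothetical layout: one directory per kind of information.
root = tempfile.mkdtemp()
for sub in ("video", "static", "dynamic"):
    os.makedirs(os.path.join(root, sub))

# Static parameters: unique to the device, written once.
with open(os.path.join(root, "static", "params.json"), "w") as f:
    json.dump({"lens_distance_mm": 65.0, "focal_length_mm": 35.0}, f)

# Dynamic parameters: one entry per viewpoint/shot.
with open(os.path.join(root, "dynamic", "params.json"), "w") as f:
    json.dump([{"t": 0.0, "position": [0.0, 0.0, 0.0]},
               {"t": 0.0, "position": [0.065, 0.0, 0.0]}], f)

print(sorted(os.listdir(root)))  # ['dynamic', 'static', 'video']
```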
  • It is preferable that the recording unit record at least two of the video data, the static parameter information, and the dynamic parameter information on the information recording medium as one file.
  • It is preferable that the video data acquisition unit acquire a plurality of video data, and that the recording unit record each of the plurality of video data together with one or both of the corresponding static parameter information and dynamic parameter information as one file on the information recording medium.
  • It is preferable that the recording unit record at least two of the video data, the static parameter information, and the dynamic parameter information in one directory.
  • It is preferable that the video data acquisition unit acquire a plurality of video data, and that the recording unit record each of the plurality of video data together with one or both of the corresponding static parameter information and dynamic parameter information in one directory.
  • It is preferable that the recording unit record at least one of the video data, the static parameter information, and the dynamic parameter information on an information recording medium different from the rest.
  • An information recording medium is an information recording medium capable of recording video content, wherein the video content is recorded by the video content recording device described above.
  • Since at least one video data obtained by photographing a subject from a plurality of different viewpoints, static parameter information that is information unique to the video content recording device, and dynamic parameter information that is information that changes according to the situation when the video data was acquired are associated with one another and recorded on the information recording medium as video content, the video data can be processed for three-dimensional display using the static parameter information and the dynamic parameter information, and the medium can be used universally by devices capable of displaying various kinds of 3D video content. It also becomes possible to standardize the recording format necessary for the smooth distribution of this information and to promote and expand the distribution of this information recording medium.
  • A video content playback apparatus is a video content playback apparatus that plays back 3D video from video content recorded on an information recording medium by the video content recording apparatus, wherein the information recording medium has recorded thereon, as video content, at least one video data obtained by photographing a subject from a plurality of different viewpoints, static parameter information that is information unique to the video content recording device, and dynamic parameter information that is information that changes according to the situation when the video data was acquired, the apparatus including a reading unit that reads the at least one video data, the static parameter information, and the dynamic parameter information from the information recording medium, and a playback unit that plays back 3D video using the read at least one video data, static parameter information, and dynamic parameter information.
  • A video content playback method is a video content playback method for playing back 3D video from video content recorded on an information recording medium by a video content recording device, wherein the information recording medium has recorded thereon, as video content, at least one video data obtained by photographing a subject from a plurality of different viewpoints, static parameter information that is information unique to the video content recording device, and dynamic parameter information that is information that changes according to the situation when the video data was acquired, the method including a reading step of reading the at least one video data, the static parameter information, and the dynamic parameter information from the information recording medium, and a playback step of playing back 3D video using the read at least one video data, static parameter information, and dynamic parameter information.
  • A video content playback program is a video content playback program for playing back 3D video from video content recorded on an information recording medium by a video content recording device, wherein the information recording medium has recorded thereon, as video content, at least one video data obtained by photographing a subject from a plurality of different viewpoints, static parameter information that is information unique to the video content recording device, and dynamic parameter information that is information that changes according to the situation when the video data was acquired, the program causing a computer to function as a reading unit that reads the at least one video data, the static parameter information, and the dynamic parameter information from the information recording medium, and a reproduction unit that reproduces a three-dimensional image using the at least one video data read by the reading unit, the static parameter information, and the dynamic parameter information.
  • According to these configurations, the information recording medium has recorded thereon, as video content, at least one video data obtained by photographing the subject from a plurality of different viewpoints, static parameter information that is information unique to the video content recording device, and dynamic parameter information that is information that changes depending on the situation when the video data was acquired. The at least one video data, the static parameter information, and the dynamic parameter information are read from the information recording medium, and a 3D video is reproduced using the read at least one video data, static parameter information, and dynamic parameter information.
  • It is preferable that the video data include a plurality of video data shot from different viewpoints, and that the playback unit change the viewpoint when playing back 3D video by changing which of the plurality of video data are used. According to this configuration, it is possible to read video data viewed from a plurality of viewpoints and reproduce a 3D video with various viewpoints.
  • It is preferable that the video content reproduction apparatus further include a changing unit that changes the contents of either or both of the static parameter information and the dynamic parameter information read by the reading unit, and that the reproducing unit operate as follows: when the contents of both the static parameter information and the dynamic parameter information are changed by the changing unit, it reproduces a three-dimensional image using the at least one video data read by the reading unit, the static parameter information whose content has been changed by the changing unit, and the dynamic parameter information whose content has been changed by the changing unit; when only the content of the static parameter information is changed by the changing unit, it reproduces the three-dimensional video using the at least one video data read by the reading unit, the static parameter information whose content has been changed by the changing unit, and the dynamic parameter information read by the reading unit; and when only the content of the dynamic parameter information is changed by the changing unit, it reproduces the three-dimensional video using the at least one video data read by the reading unit, the static parameter information read by the reading unit, and the dynamic parameter information whose content has been changed by the changing unit.
  • The contents of one or both of the static parameter information and the dynamic parameter information are changed. If the contents of both are changed, 3D video is played back using the read at least one video data, the static parameter information whose contents have been changed, and the dynamic parameter information whose contents have been changed. When only the contents of the static parameter information are changed, 3D video is played back using the read at least one video data, the static parameter information whose contents have been changed, and the read dynamic parameter information. Furthermore, when only the contents of the dynamic parameter information are changed, 3D video is played back using the read at least one video data, the read static parameter information, and the dynamic parameter information whose contents have been changed.
  • It is preferable that the playback unit perform an interpolation calculation on the at least one video data read by the reading unit using the static parameter information and the dynamic parameter information, thereby playing back a 3D video as seen from a viewpoint different from that of the at least one video data recorded on the information recording medium.
  • Since the interpolation calculation using the static parameter information and the dynamic parameter information is applied to the at least one video data, a 3D video shot from a viewpoint different from that of the at least one video data recorded on the information recording medium can be played back.
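As a deliberately naive illustration of such an interpolation (real view synthesis also needs disparity or depth information), a virtual view between two recorded camera positions can be approximated by a position-weighted blend, with the weight driven by the stored camera positions:

```python
def interpolate_view(img_a, img_b, x_a, x_b, x_new):
    """Blend two views by the relative position of the requested viewpoint.

    img_a, img_b -- pixel sequences of the two recorded views
    x_a, x_b     -- camera positions of the views (dynamic parameters)
    x_new        -- position of the virtual viewpoint to synthesize
    """
    w = (x_new - x_a) / (x_b - x_a)  # 0 -> view A, 1 -> view B
    return [(1 - w) * a + w * b for a, b in zip(img_a, img_b)]

# Two 1-D "images" taken at x = 0.0 and x = 0.065; synthesize x = 0.0325.
print(interpolate_view([10, 20, 30], [30, 40, 50], 0.0, 0.065, 0.0325))
# [20.0, 30.0, 40.0]
```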
  • It is preferable that the apparatus further include an update unit that records, on the information recording medium, either or both of the static parameter information and the dynamic parameter information whose contents have been changed by the changing unit.
  • A video content editing apparatus is a video content editing apparatus that changes the content of video content recorded on an information recording medium by the video content recording apparatus, wherein the information recording medium has recorded thereon, as video content, at least one video data obtained by photographing a subject from a plurality of different viewpoints, static parameter information that is information unique to the video content recording device, and dynamic parameter information that is information that changes according to the situation when the video data was acquired, the apparatus including: a receiving unit that receives a change instruction designating either or both of the static parameter information and the dynamic parameter information and changing its contents; a reading unit that reads from the information recording medium one or both of the static parameter information and the dynamic parameter information corresponding to the change instruction received by the receiving unit; a changing unit that changes the contents of either or both of the static parameter information and the dynamic parameter information read by the reading unit according to the change instruction; and an update unit that records, on the information recording medium, either or both of the static parameter information and the dynamic parameter information changed by the changing unit.
  • According to this configuration, the information recording medium has recorded thereon, as video content, at least one video data obtained by photographing the subject from a plurality of different viewpoints, static parameter information that is information unique to the video content recording device, and dynamic parameter information that is information that changes according to the situation when the video data was acquired. A change instruction designating either or both of the static parameter information and the dynamic parameter information and changing their contents is accepted; one or both of the static parameter information and the dynamic parameter information corresponding to the accepted change instruction are read out from the information recording medium; and the contents of either or both of the read static parameter information and dynamic parameter information are changed according to the change instruction.
  • In this way, the content of either or both of the static parameter information and the dynamic parameter information is changed, and the changed static parameter information, the changed dynamic parameter information, or both are recorded on the information recording medium. It therefore becomes possible, while recording only the actually captured video data, to record information for playing back 3D video with various modifications applied to that video data, and the medium can be used universally by devices capable of displaying various kinds of 3D content.
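A minimal sketch of this edit cycle, assuming the dynamic parameter information is stored as a JSON file (an assumption; the patent fixes no file format): read the file, apply the change instruction, and record the result back to the medium.

```python
import json
import os
import tempfile

# Set up a stand-in "recording medium" holding dynamic parameter information.
path = os.path.join(tempfile.mkdtemp(), "dynamic.json")
with open(path, "w") as f:
    json.dump({"unit_direction_deg": 0.0, "position": [0.0, 0.0, 0.0]}, f)

def apply_change(path, key, value):
    """Apply one change instruction (key, value) and record the result back."""
    with open(path) as f:       # reading unit: fetch the designated information
        params = json.load(f)
    params[key] = value         # changing unit: alter the designated content
    with open(path, "w") as f:  # update unit: record it back on the medium
        json.dump(params, f)

apply_change(path, "unit_direction_deg", 15.0)
with open(path) as f:
    print(json.load(f)["unit_direction_deg"])  # 15.0
```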
  • According to the present invention, it is possible to display a 3D image with various modifications applied to a 3D image prepared in advance, such as changing its viewpoint. Furthermore, the information for displaying this 3D image can be used universally with various 3D display devices, making it possible to standardize the information format for displaying 3D images while contributing to the promotion of the distribution of 3D video information; its industrial applicability is therefore extremely high.

Abstract

The present invention relates to video content that can be used universally by devices capable of displaying various three-dimensional images, and standardizes the format of the information for displaying three-dimensional video. A first video input processing unit (121) and a second video input processing unit (123) photograph a subject from a plurality of different viewpoints and each acquire video data. A static device information management unit (146) acquires static device information as information unique to a video content recording device (100). A dynamic device information management unit (129) acquires dynamic device information as information that changes according to the state in which the video data is acquired. A content processing unit (142) associates at least one video data, the static device information, and the dynamic device information as video content and records it on an information recording medium (167).
PCT/JP2006/325397 2005-12-22 2006-12-20 Dispositif, procede et programme d'enregistrement de contenu video, dispositif, procede et programme de reproduction de contenu video, dispositif d'edition de contenu video, et support d'enregistrement d'informations WO2007072870A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005369651A JP2009060154A (ja) 2005-12-22 2005-12-22 映像コンテンツ記録方法、映像コンテンツ記録装置、映像コンテンツ再生方法及び映像コンテンツ再生装置
JP2005-369651 2005-12-22

Publications (1)

Publication Number Publication Date
WO2007072870A1 true WO2007072870A1 (fr) 2007-06-28

Family

ID=38188646

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/325397 WO2007072870A1 (fr) 2005-12-22 2006-12-20 Dispositif, procede et programme d'enregistrement de contenu video, dispositif, procede et programme de reproduction de contenu video, dispositif d'edition de contenu video, et support d'enregistrement d'informations

Country Status (2)

Country Link
JP (1) JP2009060154A (fr)
WO (1) WO2007072870A1 (fr)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011182388A (ja) * 2010-02-02 2011-09-15 Panasonic Corp 映像記録装置および映像再生装置

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH02172395A (ja) * 1988-12-26 1990-07-03 Toshiba Corp 立体静止画像記録装置
JPH08140116A (ja) * 1994-11-08 1996-05-31 Fuji Photo Film Co Ltd 写真撮影装置、画像処理装置及び立体写真作成方法
JPH09288735A (ja) * 1996-04-23 1997-11-04 Canon Inc 画像処理装置
JP2001281754A (ja) * 2000-03-31 2001-10-10 Minolta Co Ltd カメラ、画像変換装置、画像変換表示装置、立体画像表示システム及び画像変換プログラムが記録された可読記録媒体
JP2003111101A (ja) * 2001-09-26 2003-04-11 Sanyo Electric Co Ltd 立体画像処理方法、装置、およびシステム
JP2003143459A (ja) * 2001-11-02 2003-05-16 Canon Inc 複眼撮像系およびこれを備えた装置
JP2003187261A (ja) * 2001-12-14 2003-07-04 Canon Inc 3次元画像生成装置、3次元画像生成方法、立体画像処理装置、立体画像撮影表示システム、立体画像処理方法及び記憶媒体

Cited By (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2011502375A (ja) * 2007-10-10 2011-01-20 韓國電子通信研究院 ステレオスコピックデータの保存および再生のためのメタデータ構造ならびにこれを利用するステレオスコピックコンテンツファイルの保存方法
WO2013080555A1 (fr) * 2011-12-01 2013-06-06 パナソニック株式会社 Dispositif de génération de données de vidéo stéréoscopique et appareil de lecture de vidéo stéréoscopique
JP2015039083A (ja) * 2011-12-07 2015-02-26 株式会社東芝 映像処理装置、映像処理方法、送信装置および記憶媒体
WO2018159145A1 (fr) * 2017-03-03 2018-09-07 パナソニックIpマネジメント株式会社 Dispositif de capture d'image et procédé d'affichage d'image capturée
JP2018148344A (ja) * 2017-03-03 2018-09-20 パナソニックIpマネジメント株式会社 撮像装置および撮像画像の表示方法
US10986272B2 (en) 2017-03-03 2021-04-20 Panasonic Intellectual Property Management Co., Ltd. Image capturing device and captured image display method

Also Published As

Publication number Publication date
JP2009060154A (ja) 2009-03-19

Similar Documents

Publication Publication Date Title
CN101951525B (zh) 图像处理设备、图像处理方法和程序
JP4332365B2 (ja) メタデータ表示システム,映像信号記録再生装置,撮像装置,メタデータ表示方法
JP4969474B2 (ja) 復号方法、復号装置、及び復号プログラム
JP4926533B2 (ja) 動画像処理装置、動画像処理方法及びプログラム
CN101388993B (zh) 图像记录设备和方法
CN101087384B (zh) 记录方法和装置、再现方法和装置及成像装置
JP4332364B2 (ja) 映像記録システム,映像記録方法
JP4686795B2 (ja) 画像生成装置及び画像再生装置
JP4708733B2 (ja) 撮像装置
JP6576122B2 (ja) データ記録装置およびその制御方法、撮像装置
US20220174216A1 (en) Image processing device, image processing method, and program
CN101263706A (zh) 摄像装置、记录方法
JPWO2011099299A1 (ja) 映像抽出装置、撮影装置、プログラム及び記録媒体
WO2019159620A1 (fr) Dispositif d'imagerie, dispositif d'enregistrement et dispositif de commande d'affichage
WO2007072870A1 (fr) Dispositif, procede et programme d'enregistrement de contenu video, dispositif, procede et programme de reproduction de contenu video, dispositif d'edition de contenu video, et support d'enregistrement d'informations
JP2015008387A (ja) 画像処理装置、画像処理方法およびプログラム並びに撮像装置
JP4161773B2 (ja) 映像編集装置,映像編集装置の処理方法
JP6053276B2 (ja) 撮像装置およびその制御方法、画像処理装置
JP5045213B2 (ja) 撮像装置、再生装置及び記録ファイル作成方法
WO2012011525A1 (fr) Procédé pour convertir une vidéo en un flux vidéo tridimensionnel
JP5864992B2 (ja) 撮像装置およびその制御方法、並びに画像処理装置およびその制御方法
JP2015192227A (ja) 撮像装置およびその制御方法、ならびにプログラム
JP2019145918A (ja) 撮像装置、表示制御装置及び表示制御装置の制御方法
JP5911276B2 (ja) 撮像装置およびその制御方法、画像処理装置
JP6366793B2 (ja) 画像処理装置およびその制御方法、撮像装置

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06842951

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP