WO2007029686A1 - 3d image recording/reproducing system - Google Patents


Info

Publication number
WO2007029686A1
WO2007029686A1 PCT/JP2006/317531 JP2006317531W
Authority
WO
WIPO (PCT)
Prior art keywords
image
information
angle
image data
recording
Prior art date
Application number
PCT/JP2006/317531
Other languages
French (fr)
Japanese (ja)
Inventor
Ryuji Kitaura
Original Assignee
Sharp Kabushiki Kaisha
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sharp Kabushiki Kaisha
Priority to JP2007534423A (granted as patent JP4619412B2)
Publication of WO2007029686A1


Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00 Stereoscopic photography
    • G03B35/18 Stereoscopic photography by simultaneous viewing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20 Image signal generators
    • H04N13/204 Image signal generators using stereoscopic image cameras
    • H04N13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N13/10 Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106 Processing image signals
    • H04N13/156 Mixing image signals
    • H04N13/189 Recording image signals; Reproducing recorded image signals
    • H04N13/30 Image reproducers
    • H04N13/302 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
    • H04N13/31 Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using parallax barriers
    • H04N19/00 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals
    • H04N19/50 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding
    • H04N19/597 Methods or arrangements for coding, decoding, compressing or decompressing digital video signals using predictive coding specially adapted for multi-view video sequence encoding

Definitions

  • the present invention relates to a system for recording and reproducing stereoscopic images.
  • left-eye image and right-eye image with binocular parallax are prepared and projected to the left and right eyes, respectively.
  • "3D" means three-dimensional or stereoscopic, and "2D" means two-dimensional.
  • Stereoscopic image data is referred to as 3D image data, and normal two-dimensional image data as 2D image data.
  • FIG. 18 is a conceptual diagram for explaining the parallax barrier method.
  • FIG. 18 (a) is a diagram illustrating the principle of the occurrence of parallax.
  • FIG. 18 (b) is a diagram showing a screen displayed by the parallax barrier method.
  • the image display panel 100 displays an image in which the left-eye image and the right-eye image are arranged alternately every other pixel in the horizontal direction as shown in Fig. 18 (b).
  • through the parallax barrier, the left-eye image is seen only by the left eye 102 and the right-eye image only by the right eye 103, so that stereoscopic observation can be performed.
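The alternating-column arrangement of Fig. 18(b) can be sketched as follows. This is only a minimal illustration, not the patent's implementation; each image is modeled as a list of pixel rows:

```python
def interleave_columns(left, right):
    """Build the panel image of Fig. 18(b): columns alternate between
    the left-eye and right-eye images, starting with a left-eye column."""
    assert len(left) == len(right) and len(left[0]) == len(right[0])
    out = []
    for lrow, rrow in zip(left, right):
        # Even columns come from the left-eye view, odd from the right.
        out.append([lrow[x] if x % 2 == 0 else rrow[x]
                    for x in range(len(lrow))])
    return out

left = [["L0", "L1", "L2", "L3"]]
right = [["R0", "R1", "R2", "R3"]]
print(interleave_columns(left, right))  # [['L0', 'R1', 'L2', 'R3']]
```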
  • FIG. 19 is a conceptual diagram showing an example of the recording data format of such a composite image.
  • the left-eye image 104 shown in FIG. 19 (a) and the right-eye image 105 shown in FIG. 19 (b) are arranged side by side to create and record one composite image 106 shown in FIG. 19 (c).
  • By rearranging this composite image 106 during playback, it is converted into an image suitable for each display format, as shown in Fig. 18 (b).
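The side-by-side recording format of Fig. 19(c), and the recovery of the two views at playback time before rearranging them for display, can be sketched as follows (an illustrative model with images as lists of pixel rows):

```python
def make_side_by_side(left, right):
    """Record format of Fig. 19(c): left and right images side by side."""
    return [lrow + rrow for lrow, rrow in zip(left, right)]

def split_side_by_side(composite):
    """Recover the two views from the composite image at playback time."""
    w = len(composite[0]) // 2
    return [row[:w] for row in composite], [row[w:] for row in composite]

composite = make_side_by_side([[1, 2]], [[3, 4]])
print(composite)                    # [[1, 2, 3, 4]]
print(split_side_by_side(composite))  # ([[1, 2]], [[3, 4]])
```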
  • In Patent Document 1, the observer observes from an angle of 90 degrees with respect to the display surface on which the image is displayed.
  • In Patent Document 2, the display surface is arranged horizontally, and the observer observes it from an oblique direction.
  • Patent Document 2 describes a method in which, after a left-eye image and a right-eye image are captured from an obliquely upward direction with respect to an object placed on a reference plane, the perspective in the depth direction generated in each captured image is corrected. This method will be briefly described with reference to FIGS. 20 to 24.
  • FIG. 20 is a diagram showing how the left-eye image and the right-eye image are captured at this time.
  • a reference plane 108 having a horizontal width H and a vertical width V is set horizontally on a horizontal plane 107.
  • An object 109 is placed on the reference plane 108, and a camera 110 for taking a right-eye image and a camera 111 for taking a left-eye image are respectively set obliquely above the reference plane 108 and the camera interval is set.
  • the camera 110 and the camera 111 are directed toward the object 109 to take an image.
  • the lines of sight of the camera 110 and the camera 111 are set to form the same angle θ1 with the reference plane 108.
  • FIG. 21 is a diagram showing a left-eye image and a right-eye image taken at this time.
  • FIG. 21 (a) is an image for the left eye
  • FIG. 21 (b) is an image for the right eye
  • a reference plane 108 and an object 109 are captured in each image.
  • the four corner points of the reference plane 108 in FIG. 21 (a) are Pl, P2, P3, and P4, respectively
  • the four corner points of the reference plane 108 in FIG. 21 (b) are P5, P6, P7, and P8, respectively.
  • FIG. 22 is a diagram showing a state in which the perspective is corrected for the left-eye image.
  • the reference plane 108 is cut out from Fig. 21 (a) as shown in Fig. 22 (a), and the points P1, P2, P3, and P4 of the cut-out reference plane 108 are deformed and developed so as to become P9, P10, P11, and P12 of the left-eye image 113 shown in Fig. 22 (b).
  • the aspect ratio of the image in Fig. 22 (b) is set to H:V, that is, the same aspect ratio as that of the actual reference plane 108.
  • FIG. 23 is a diagram showing a state in which the perspective is corrected for the right-eye image.
  • the reference plane 108 is cut out from Fig. 21 (b) as shown in Fig. 23 (a).
  • the points P5, P6, P7, and P8 of the cut-out reference plane 108 are developed to P13, P14, P15, and P16 of the right-eye image 114 as shown in Fig. 23 (b).
  • the aspect ratio of the developed reference plane should have the same value as that of the actual reference plane.
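The corner-based development described above amounts to a projective (homography) transform mapping the four photographed corners of the reference plane onto a rectangle with the plane's real aspect ratio. The sketch below is a generic 8-unknown linear solve, assuming the corner coordinates are known; it is not code from the patent:

```python
def solve_homography(src, dst):
    """Solve for the 3x3 projective transform (last element fixed to 1)
    mapping the four corner points src[i] -> dst[i], e.g. P1..P4 -> P9..P12."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -x * u, -y * u, u])
        A.append([0, 0, 0, x, y, 1, -x * v, -y * v, v])
    n = 8
    # Gauss-Jordan elimination with partial pivoting on the 8x9 system.
    for col in range(n):
        piv = max(range(col, n), key=lambda r: abs(A[r][col]))
        A[col], A[piv] = A[piv], A[col]
        for r in range(n):
            if r != col and A[r][col] != 0:
                f = A[r][col] / A[col][col]
                A[r] = [a - f * c for a, c in zip(A[r], A[col])]
    return [A[i][n] / A[i][i] for i in range(n)] + [1.0]

def apply_homography(h, x, y):
    """Map a point (x, y) through the homography coefficients h."""
    a, b, c, d, e, f, g, k, w0 = h
    w = g * x + k * y + w0
    return (a * x + b * y + c) / w, (d * x + e * y + f) / w

# Develop a trapezoidal reference plane into an H:V = 4:2 rectangle.
src = [(0, 0), (4, 0), (3, 2), (1, 2)]
dst = [(0, 0), (4, 0), (4, 2), (0, 2)]
h = solve_homography(src, dst)
```

In a real pipeline the inverse of this transform would be used to resample every pixel of the cut-out plane, not just the corners.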
  • FIG. 24 is a diagram illustrating a state in which stereoscopic viewing is performed using the left-eye image and the right-eye image subjected to perspective correction.
  • an anaglyph image is created using the perspective-corrected image for the left eye and the image for the right eye.
  • An anaglyph image is a single image created by extracting only the R component from the RGB image for the left eye and only the G or B component from the RGB image for the right eye, and combining them. The observer can view it stereoscopically by wearing red-blue glasses and observing this image.
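The anaglyph construction just described can be sketched as follows. This is illustrative only; pixels are modeled as (R, G, B) tuples, and here both the G and B components are taken from the right-eye image:

```python
def make_anaglyph(left_rgb, right_rgb):
    """Red/blue anaglyph: R channel from the left-eye image,
    G and B channels from the right-eye image."""
    return [[(l[0], r[1], r[2]) for l, r in zip(lrow, rrow)]
            for lrow, rrow in zip(left_rgb, right_rgb)]

print(make_anaglyph([[(10, 20, 30)]], [[(40, 50, 60)]]))  # [[(10, 50, 60)]]
```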
  • the created anaglyph image is printed on the printed material 115 with the same size as the reference surface 108 at the time of photographing, and is arranged on the horizontal surface 107.
  • the observer wears the red-blue glasses so that the positions of the right eye 117 and the left eye 118 with respect to the printed material 115, and the angle θ1 between the line of sight 119 and the printed material 115, are the same as those of the cameras at the time of shooting.
  • By looking at the printed matter 115, the observer can see a stereoscopic image in which the object 116 appears raised above the printed matter 115.
  • Patent Document 1 JP 2002-125246 A
  • Patent Document 2 Japanese Patent No. 3579683
  • the reference plane does not necessarily need to be parallel to the horizontal plane; for example, when an image is taken with a digital camera, shooting information is included in the EXIF (Exchangeable Image File Format) information attached to the image.
  • the present invention has been made to solve the above-described problems. An image is taken from an oblique direction with respect to a reference plane; the angle formed by the reference plane and the line-of-sight direction of the camera at the time of shooting is recorded in the header of the captured image data as an observation angle; and the captured image data is converted into image data corrected to eliminate the perspective in the depth direction.
  • An object of the present invention is to provide a stereoscopic image recording/reproducing system in which, by presenting the observation angle in the header to an observer when the created image data is reproduced, the observer can perform stereoscopic viewing from an accurate direction.
  • the present invention is a stereoscopic image recording / reproducing system that generates, records, and reproduces stereoscopic image data from a plurality of image data corresponding to a plurality of viewpoints.
  • The system comprises 3D image input means for outputting, together with image data and control information, shooting angle information, which is information about the angle formed by the reference plane on which the subject is placed and the line-of-sight direction of the imaging device at the time of shooting; and control means for calculating, from the shooting angle information, an observation angle with respect to the display means used for stereoscopic viewing. The calculated observation angle information is recorded as 3D image control information in the stereoscopic image data together with the control information.
  • the 3D image recording means includes the imaging means, and the 3D image input means further includes shooting angle measuring means that measures the inclination of the line-of-sight direction of the imaging means, generates position information of the imaging means relative to the reference plane based on the inclination, and calculates the shooting angle from the position information.
  • the 3D image input means adds an arbitrary externally input value to the shooting angle as an offset, and uses the newly calculated value as the shooting angle information.
  • alternatively, the 3D image recording means records an arbitrary externally input value as offset angle information in the 3D image control information, and the 3D image reproduction means analyzes the offset angle information from the 3D image control information and outputs it to the display means.
  • the 3D image recording means creates the observation angle information by substituting the value of the shooting angle for the observation angle.
  • the 3D image reproduction means analyzes the observation angle information included in the 3D image control information and outputs it to the display means.
  • the 3D image reproduction means analyzes the observation angle information included in the 3D image control information, and has operating means capable of tilting the display means according to the value of the observation angle information.
  • the observation angle is recorded as observation angle information in the control information of the stereoscopic image data; the reproducing means reads the observation angle information from the control information of the stereoscopic image data and presents the observation angle to the user at the time of output.
  • the observer can perform stereoscopic viewing from an accurate direction, and thus can perform stereoscopic viewing without distortion.
  • the observation angle information is recorded in the 3D image data to be recorded, so that the management and handling of the data becomes very simple.
  • the camera tilt at the time of shooting is measured, the position of the camera relative to the reference line is estimated from the measured tilt, and the shooting angle is obtained from the camera tilt and that positional relationship, so the observation angle can be obtained easily.
  • an appropriate observation angle when the observer looks at the center of the display is displayed on the display.
  • the observer can easily know the appropriate observation angle and, as a result, can perform stereoscopic viewing from the correct direction, without distortion of the reproduced stereoscopic image.
  • according to the present invention, by adding an offset angle to the shooting angle when shooting or recording 3D image data, the observation angle, which would otherwise be uniquely determined by the shooting angle, can be set arbitrarily. Therefore, even if the shooting angle at the time of shooting is small, adding an offset angle and setting the observation angle freely makes it possible to avoid creating 3D image data that is difficult or impossible to observe depending on the display.
  • even when the offset angle is not added to the observation angle at the time of input or recording, it is possible to show the observer an appropriate angle for stereoscopic viewing by presenting the offset angle to the observer during playback.
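The offset-angle idea can be sketched as a one-line calculation. The clamping to the 0-90 degree range defined for shooting angles is an assumption for illustration, not a rule stated in the text:

```python
def observation_angle(shooting_angle, offset_angle=0.0):
    """Observation angle presented to the viewer: the shooting angle
    plus an optional offset, clamped (assumed) to the 0-90 degree range."""
    return max(0.0, min(90.0, shooting_angle + offset_angle))

# A small shooting angle of 20 degrees, raised by a 25-degree offset.
print(observation_angle(20, 25))  # 45
```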
  • FIG. 1 is a block diagram showing a configuration of a stereoscopic image recording / reproducing system according to an embodiment of the present invention.
  • FIG. 2 is a diagram showing an example of the data configuration of 3D image data.
  • FIG. 3 is a diagram illustrating an example of a left-eye image and a right-eye image.
  • FIG. 4 is a diagram showing an example of image data in 3D image data.
  • FIG. 5 is a diagram showing an example of image data in 3D image data.
  • FIG. 6 is a diagram for explaining an example of image data in which the image aspect ratio is changed.
  • FIG. 7 is a block diagram showing a configuration of 3D image input means 2.
  • FIG. 8 is a diagram showing the relationship between the camera tilt angle and the shooting angle.
  • FIG. 9 is a block diagram showing a configuration of 3D image recording means 3.
  • FIG. 10 is a flowchart for explaining the operation of the 3D image recording means 3.
  • FIG. 11 is a block diagram showing a configuration of 3D image reproduction means 4.
  • FIG. 12 is a flowchart for explaining the operation of the 3D image reproducing means 4.
  • FIG. 13 is a diagram for explaining a method of creating display image data from decoded image data.
  • FIG. 14 is a diagram for explaining a new reference line and a shooting angle when an offset angle is added to the shooting angle.
  • FIG. 15 is a diagram for explaining a new reference line and a shooting angle when an offset angle is added to the shooting angle.
  • FIG. 16 is a diagram showing an example of the data configuration of 3D image data when offset angle information is recorded.
  • FIG. 17 is a diagram showing a state of observation when the display surface is tilted.
  • FIG. 18 is a conceptual diagram for explaining the parallax barrier method.
  • FIG. 19 is a conceptual diagram showing an example of a recording data format of a composite image.
  • FIG. 20 is a diagram showing a state of taking a left-eye image and a right-eye image.
  • FIG. 21 is a diagram showing captured left-eye images and right-eye images.
  • FIG. 22 is a diagram showing how perspective is corrected for the left-eye image.
  • FIG. 23 is a diagram showing how the perspective is corrected for the right-eye image.
  • FIG. 24 is a diagram showing a state in which stereoscopic vision is performed using a perspective-corrected left-eye image and right-eye image.
  • Explanation of means appearing in the drawings: image cropping means; image correction means; image composition means; compression means; header information creation means; multiplexing means; separating means; 3D control information analysis means; display image creation means; display means
  • FIG. 1 is a block diagram showing a configuration of a stereoscopic image recording / playback system according to an embodiment of the present invention.
  • the stereoscopic image recording / reproducing system 1 includes a 3D image input unit 2, a 3D image recording unit 3, and a 3D image reproducing unit 4.
  • the 3D image input means 2 inputs a plurality of image data corresponding to a plurality of viewpoints from the outside, and generates, as control information of the image data, shooting angle information indicating the angle at which the input image data of each viewpoint was shot, the horizontal and vertical image sizes of the input image data of each viewpoint, and horizontal and vertical viewpoint number information indicating the number of viewpoints included in the 3D image data.
  • the 3D image recording means 3 records the image data input from the 3D image input means 2 as 3D image data.
  • the 3D image playback means 4 plays back the 3D image data recorded by the 3D image recording means 3.
  • the 3D image data is image data for stereoscopic viewing, and is data composed of image data and 3D control information.
  • each means will be described in detail later; first, the 3D image data and the 3D control information will be described.
  • FIG. 2 is a diagram illustrating an example of a data configuration of 3D image data.
  • the 3D image data includes a header 5 and image data 6.
  • the header 5 includes image size information of the image data 6 and 3D control information.
  • Examples of the header 5 include the headers of EXIF (Exchangeable Image File Format), AVI (Audio Video Interleaved), ASF (Advanced Streaming Format), WMV (Windows Media Video), and the MP4 file format.
  • Examples of the image data include uncompressed image data and image data compressed by a compression method such as JPEG (Joint Photographic Experts Group) or MPEG (Moving Picture Experts Group).
  • the 3D control information includes information on the configuration of the image data in the 3D image data and information for controlling the display when 3D images are displayed, and contains the numbers of horizontal and vertical viewpoints and the observation angle information.
  • the number of viewpoints in the horizontal direction and the vertical direction indicates information on the number of image data having different viewpoints included in the 3D image data.
  • a 3D image without distortion can be observed by viewing the display surface on which the 3D image is displayed from a predetermined angle. Information on this predetermined angle is taken as the observation angle information.
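As a rough model, the 3D control information described above might be represented as follows. The field names and types are illustrative assumptions, not the patent's actual header layout:

```python
from dataclasses import dataclass

@dataclass
class ThreeDControlInfo:
    """Illustrative model of the 3D control information in the header 5;
    field names and types are assumptions, not the patent's byte layout."""
    horizontal_viewpoints: int   # number of views arranged horizontally
    vertical_viewpoints: int     # number of views arranged vertically
    observation_angle: float     # recommended viewing angle in degrees

# Two views side by side (left/right), to be observed at 30 degrees.
info = ThreeDControlInfo(horizontal_viewpoints=2, vertical_viewpoints=1,
                         observation_angle=30.0)
```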
  • FIG. 3 is a diagram for explaining an example of an image for the left eye and an image for the right eye
  • FIGS. 4 and 5 are diagrams for explaining an example of the image data 6 in the 3D image data.
  • FIG. 3A shows a left-eye image
  • FIG. 3B shows a right-eye image
  • the left-eye image and the right-eye image have the same horizontal image size h and vertical image size v.
  • the left-eye image and the right-eye image are arranged side by side in viewpoint order as shown in Fig. 4 to form a single piece of image data.
  • the number of viewpoints of the image data is 2 in the horizontal direction and 1 in the vertical direction.
  • the image size of this 3D image data is 2×h horizontally and v vertically.
  • FIG. 5 is an example in which the number of horizontal viewpoints is 4 and the number of vertical viewpoints is 2, with the images of the 8 viewpoints numbered in the same manner as in the description of Fig. 4.
  • the images are arranged in raster-scan order from the upper left to the lower right, in viewpoint order 1 to 8, to form one piece of image data.
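The raster-scan arrangement of Fig. 5 can be sketched as follows (an illustrative model with images as lists of pixel rows; all views are assumed to share one size):

```python
def tile_views(views, h_count, v_count):
    """Arrange per-view images in raster-scan order, as in Fig. 5:
    view 1 at the top left, h_count views per row, v_count rows."""
    assert len(views) == h_count * v_count
    out_rows = []
    for ty in range(v_count):
        strip = views[ty * h_count:(ty + 1) * h_count]
        for y in range(len(strip[0])):
            row = []
            for img in strip:          # concatenate this scanline
                row.extend(img[y])     # across the views of the strip
            out_rows.append(row)
    return out_rows

# Four 1x1-pixel views tiled 2 across, 2 down.
print(tile_views([[[1]], [[2]], [[3]], [[4]]], 2, 2))  # [[1, 2], [3, 4]]
```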
  • the image aspect ratio of the image data can be changed at this time.
  • the image aspect ratio is information indicating the value obtained by dividing the vertical scaling factor of the image data by the horizontal scaling factor.
  • FIG. 6 is a diagram for explaining an example of image data in which the image aspect ratio is changed.
  • the image data in Fig. 4 and Fig. 5 was created without changing the image aspect ratio, so the image aspect ratio is 1.
  • Fig. 6 shows the image data of Fig. 4 with the horizontal scale reduced to 1/2 without changing the vertical scale; its horizontal size is h, its vertical size is v, and its image aspect ratio is 2.
  • In this case, the value "2" is used as the image aspect ratio information.
  • In the following description, the image aspect ratio is fixed to 1 for simplicity.
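The aspect-ratio example of Fig. 6 can be sketched as follows; dropping every other column is just one illustrative way to halve the horizontal scale:

```python
def halve_horizontal(image):
    """Drop every other column: horizontal scale becomes 1/2 while the
    vertical scale is unchanged (image modeled as a list of pixel rows)."""
    return [row[::2] for row in image]

def image_aspect_ratio(v_scale, h_scale):
    """Image aspect ratio: vertical scaling factor / horizontal scaling factor."""
    return v_scale / h_scale

print(halve_horizontal([[1, 2, 3, 4]]))   # [[1, 3]]
print(image_aspect_ratio(1, 0.5))          # 2.0
```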
  • FIG. 7 is a block diagram showing the configuration of the 3D image input means 2.
  • the 3D image input unit 2 includes an imaging unit 9, a control unit 10, and an imaging angle measurement unit 11.
  • the imaging means 9 is composed of at least one imaging device such as a CCD camera, for example, and captures external video and outputs it as an input image.
  • the control means 10 is a means for controlling the imaging means 9, for example controlling its left-right angle and position, and is realized by a CPU or the like (not shown).
  • the shooting angle measuring means 11 uses a general digital angle meter based on a liquid sensor or the like, or a general gyro sensor; since these are not directly related to the present invention, they will not be described in detail.
  • the shooting angle measuring means 11 measures the inclination angle of the imaging means 9 with respect to the horizontal plane in the imaging direction, and can determine and output the shooting angle from the measured value.
  • FIG. 8 is a diagram showing the relationship between the camera tilt angle and the shooting angle.
  • the shooting angle is defined as the angle αn (n is an integer from 1 to 4) formed by the camera's line-of-sight direction and the reference line, and its value ranges from 0 to 90 degrees.
  • the reference line is parallel to the horizontal line.
  • the camera tilt angle βn (n is an integer from 1 to 4; hereinafter "camera tilt angle") is measured using a digital angle meter or a gyro sensor.
  • the value of the angle βn ranges from 0 to less than 360 degrees. A method for calculating the shooting angle from the measured value will now be described.
  • the camera tilt angle βn is assumed to be 0 degrees when the camera is parallel to the reference line and the top and bottom of the captured image are not inverted. Viewing the camera and the reference line from the side, the camera tilt angle increases as the camera is rotated clockwise about its center, returning to the 0-degree state after one full revolution.
  • FIG. 8 (a) shows the relationship with the shooting angle α1 when the camera tilt angle β1 is not less than 0 degrees and not more than 90 degrees.
  • the shooting angle α1, which is the angle formed by the line-of-sight direction 13 of the camera 12 and the reference line 14, coincides with the camera tilt angle β1.
  • FIG. 8 (b) shows the shooting angle α2 when the camera tilt angle β2 is greater than 90 degrees and not more than 180 degrees.
  • the shooting angle α2, which is the angle formed by the line-of-sight direction 13 of the camera 12 and the reference line 14, is (180 − β2) degrees.
  • when the camera tilt angles β3 and β4 shown in FIG. 8 (c) and FIG. 8 (d) are larger than 180 degrees and smaller than 360 degrees, the image is taken from above the reference line 14 described with reference to FIGS. 8 (a) and 8 (b), so the reference line 14 is not included in the captured image. Therefore, the line 15, which is parallel to the reference line 14 and located above the camera, is used as a new reference line.
  • FIG. 8 (c) shows the shooting angle α3 when the camera tilt angle β3 is larger than 180 degrees and smaller than 270 degrees.
  • the angle formed by the reference line 15 and the line-of-sight direction 13 of the camera 12 is defined as the shooting angle α3.
  • the value of α3 at this time is (β3 − 180) degrees.
  • FIG. 8 (d) shows the shooting angle α4 when the camera tilt angle β4 is larger than 270 degrees and smaller than 360 degrees (360 degrees being the same as 0 degrees).
  • the angle formed by the reference line 15 and the line-of-sight direction 13 of the camera 12 is defined as the shooting angle α4.
  • the value of α4 is (360 − β4) degrees.
  • the shooting angle measuring means 11 measures the tilt of the camera at the time of shooting, estimates the position of the camera relative to the reference line from the measured tilt, and obtains the shooting angle from the camera tilt and that positional relationship; it can thus output the angle between the line-of-sight direction of the camera used for shooting and the reference plane containing the reference line in the shot image.
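The four cases of Fig. 8 can be summarized as one mapping from camera tilt β to shooting angle α. This is a sketch of the rules above, with the (360 − β4) form for Fig. 8(d) inferred from the symmetric pattern of the other three cases:

```python
def shooting_angle(beta):
    """Map camera tilt beta (0 <= beta < 360, clockwise, 0 = level)
    to the shooting angle alpha in [0, 90], per Fig. 8(a)-(d)."""
    beta = beta % 360.0
    if beta <= 90:        # Fig. 8(a): alpha1 = beta1
        return beta
    elif beta <= 180:     # Fig. 8(b): alpha2 = 180 - beta2
        return 180.0 - beta
    elif beta < 270:      # Fig. 8(c): alpha3 = beta3 - 180
        return beta - 180.0
    else:                 # Fig. 8(d): alpha4 = 360 - beta4 (inferred)
        return 360.0 - beta

for b in (30, 120, 200, 300):
    print(b, "->", shooting_angle(b))
```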
  • the shooting angle obtained above is information used for obtaining the observation angle necessary for stereoscopic viewing without distortion. How the observation angle is obtained will be described later.
  • here, the imaging means 9 consists of two CCD cameras and outputs left-eye image data and right-eye image data; the image data of the two viewpoints have the same image size.
  • the imaging means 9 outputs the image data captured by the two CCD cameras as input image data.
  • a rectangular reference plane of paper or the like is set parallel to the horizontal line under the object to be photographed, and shooting is performed while viewing the captured image so that the object fits within the reference plane.
  • alternatively, specific marks may be placed at positions corresponding to the four corners of the reference plane, and the rectangle with these marks as vertices may be used as a new reference plane, with shooting performed so that the object fits within it.
  • both the reference plane and the mark may be installed, and shooting may be performed so that both are included in the image to be captured.
  • the above-mentioned marks and the outer frame constituting the reference plane may be determined in advance as a predetermined image, overwritten at a predetermined position of the input image, and used as a new input image.
  • the positions of the marks, the size of the frame of the reference plane, and the position of a specific point in the reference plane may be freely input by the user from the outside.
  • the information on the mark positions and on the size and position of the frame of the reference plane may be output together with the image data, without overwriting the marks and the reference plane. This information can be used by the subsequent 3D image recording means to determine the size and position of the reference plane in the image.
  • the reference plane or the position of the mark may be set so that the aspect ratio of the actual size of the reference plane to be imaged is the same as the aspect ratio of the image data to be captured.
  • the reference plane or mark position may be set so that the aspect ratio of the actual size is a specific value.
  • photographing is performed such that the center of the photographed image is positioned on a horizontal line passing through the center of the reference plane.
  • the control means 10 outputs the horizontal image size and the vertical image size of the input image at that time, with the horizontal viewpoint number as 2 and the vertical viewpoint number as 1.
  • the photographing angle measuring means 11 outputs the photographing angle at this time as photographing angle information.
  • here, the shooting angle measuring means 11 automatically measures and outputs the shooting angle, but shooting angle input means may be installed instead, and the photographer may input a numerical value for the shooting angle from the outside.
  • the 3D image input means 2 may use, instead of the imaging means 9, any device that outputs image data, such as an image signal input device that receives a video signal or the like, an image display device that receives and displays a TV signal, an image reproducing device that reproduces a video or DVD, an image reading device such as a scanner, or an image data file reading device; it is not limited to these.
  • in that case, the shooting angle information is input by the user from the outside.
  • as described above, the 3D image input means 2 can output a plurality of image data corresponding to a plurality of viewpoints as 3D shot image data, together with the shooting angle information, the horizontal image size, the vertical image size, the number of horizontal viewpoints, and the number of vertical viewpoints.
  • for other viewpoint configurations, the shooting angle information can be calculated in the same way. If the number of viewpoints in the vertical direction is 2 or more, the shooting angle information is calculated in the same way for each set of image data in the same vertical position, and as many values as the number of vertical viewpoints are calculated and output.
  • FIG. 9 is a block diagram showing the configuration of the 3D image recording means 3.
  • the 3D image recording means 3 includes image cropping means 16 that cuts out a part of the image from the input image data and outputs the cropped image data,
  • image correction means 17 that corrects the perspective in the depth direction of the cropped image data and outputs corrected image data,
  • image composition means 18 that combines the corrected image data and outputs composite image data,
  • and compression means that compresses and encodes the composite image data into compressed encoded data.
  • the control means 22 is realized by a CPU or the like (not shown), and controls each means in the 3D image recording means 3.
  • control means 22 controls each means connected to the control means 22 using the input information, and the encoded image data of the 3D image and
  • FIG. 10 is a flowchart for explaining the operation of the 3D image recording means 3.
  • the horizontal viewpoint number information is 2
  • the vertical viewpoint number information is 1
  • the input image data is described as two image data for the left and right eyes.
• In step S1, the 3D image recording means 3 starts the 3D image recording processing and proceeds to determination step S2.
• In determination step S2, the control means 22 determines whether the input image data and the control information have been input to the 3D image recording means 3. If they have not, the process returns to determination step S2; if they have, the input image data and, as control information, the horizontal image size, the vertical image size, the horizontal viewpoint number information, the vertical viewpoint number information, and the shooting angle information of the input image data of each viewpoint are input to the 3D image recording means 3, and the process proceeds to step S3. At this time, in the 3D image recording means 3, the input image data is supplied to the image cutting means 16, and the horizontal image size, vertical image size, horizontal viewpoint number information, vertical viewpoint number information, and shooting angle information are supplied to the control means 22.
• Encoded image data is created from the input image data by the processing of steps S3 to S6 described below. The image cropping method and the image correction method performed by the image cutting means 16 and the image correcting means 17 in these steps are the same as the methods disclosed in Patent Document 2 and are not directly related to the present invention, so their detailed explanation is omitted.
• In step S3, the input image data of the left and right viewpoints are input to the image cutting means 16.
• The image cutting means 16 performs its processing on the input image data of each viewpoint.
• The image cutting means 16 obtains a specific reference plane from the input image data by image matching or the like. When marks are photographed instead of the reference plane, the marks are likewise located by matching or the like, and the inside of the rectangle containing the four marks is used as the reference plane.
• An image obtained by cutting out the reference plane is output as a cut-out image for each of the left and right viewpoints, and the process proceeds to step S4.
  • a specific area may be cut out as the reference plane, or the user may input the reference plane directly from the outside.
• Multiple different reference planes may be prepared, and the user may select from the outside which reference plane to use. Also, if the mark positions described in the explanation of the 3D image input means 2, or the size and position of the frame of the reference plane, are input, the reference plane can be obtained from that information.
• In step S4, the left and right viewpoint image data are input to the image correcting means 17.
• The image correcting means 17 performs its processing on the input image data of each viewpoint.
• The image correcting means 17 develops the reference plane of the cut-out image of FIG. 22(a) or FIG. 23(a) in the same way as in FIG. 22(b) or FIG. 23(b), using a reference plane aspect ratio, i.e., the actual aspect ratio of the reference plane.
• The aspect ratio of the reference plane may be the aspect ratio of the input image data, may be set in advance, or may be input by the user from the outside.
• The reference plane aspect ratio may also be treated as a specific value set in the stereoscopic image recording system.
• In that case, the position of the reference plane or of the marks is adjusted at the time of shooting so that the aspect ratio of the actual size of the photographed reference plane is the same as this reference plane aspect ratio.
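The cutting and correction of steps S3 and S4 amount to a projective warp that maps the four photographed corners of the reference plane onto a rectangle having the reference plane's real aspect ratio. The patent specifies no implementation, so the following is only an illustrative sketch: a direct-linear-transform homography in numpy, with invented corner coordinates.

```python
import numpy as np

def homography(src, dst):
    """Solve the 3x3 projective transform H mapping 4 src corners to 4 dst corners."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The homography coefficients form the null vector of A.
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    return vt[-1].reshape(3, 3)

def warp_point(H, x, y):
    """Apply H to one pixel coordinate."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w

# Illustrative corners of the photographed reference plane (a trapezoid, P1..P4)
src = [(120, 40), (520, 40), (600, 440), (40, 440)]
# Target rectangle with the reference plane's real aspect ratio H:V (e.g. 4:3)
dst = [(0, 0), (400, 0), (400, 300), (0, 300)]

H = homography(src, dst)
```

In a production system a library routine such as OpenCV's perspective transform would normally be used instead; the point here is only that four corner correspondences fully determine the correction.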
• The image correcting means 17 outputs the corrected image data for the left and right viewpoints, and the process proceeds to step S5.
• In step S5, the image combining means 18 performs the process of combining 3D image data from the input image data.
• The image combining means 18 is a means that arranges the input image data, i.e. the image data of each viewpoint, according to the horizontal viewpoint number information and the vertical viewpoint number information, in the same manner as described above with reference to FIG. 4, FIG. 5, etc., to create 3D image data.
  • the horizontal viewpoint number information is 2
  • the vertical viewpoint number information is 1
  • the input image data is two corrected image data for the left and right eyes.
  • the corrected image data is input to the image composition means 18.
  • the control unit 22 transmits the horizontal viewpoint number information and the vertical viewpoint number information to the image synthesizing unit 18 and controls the image synthesizing unit 18 to synthesize 3D image data.
• Image aspect ratio information is also created; here, the images are combined so that the image aspect ratio becomes 1, and the image aspect ratio information is therefore set to 1.
• The image combining means 18 outputs the created 3D image data to the compression means 19, outputs the horizontal image size, vertical image size, and image aspect ratio information of the 3D image data to the control means 22, and the process proceeds to step S6.
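For the two-viewpoint case described in this embodiment, the combining of step S5 is a simple side-by-side arrangement. A minimal numpy sketch follows; the image sizes are invented, chosen so that the combined aspect ratio comes out to 1 as in the text.

```python
import numpy as np

def compose_side_by_side(left, right):
    """Arrange two per-viewpoint images (H x W x 3) left-to-right into one 3D image."""
    if left.shape != right.shape:
        raise ValueError("viewpoint images must share one size")
    combined = np.hstack([left, right])
    h, w = combined.shape[:2]
    aspect = w / h  # recorded as the image aspect ratio information
    return combined, aspect

left = np.zeros((64, 32, 3), np.uint8)        # illustrative left-eye view
right = np.full((64, 32, 3), 255, np.uint8)   # illustrative right-eye view
img, aspect = compose_side_by_side(left, right)
```

With more viewpoints, the same idea extends to a grid laid out according to the horizontal and vertical viewpoint numbers.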
• In step S6, the compression means 19 performs the process of encoding the input image data using an encoding method such as JPEG or MPEG and outputting the encoded data.
  • the compression means 19 is composed of a general-purpose compression means and is not related to the present invention, so the configuration is omitted.
  • 3D image data is input to the compression means 19.
  • the compression means 19 encodes the input image data, outputs the encoded data, and proceeds to step S7.
• In step S7, the control means 22 transmits to the header information creating means 20 the information necessary for creating the header, including the horizontal image size and vertical image size of the entire encoded image, the horizontal viewpoint number information, the vertical viewpoint number information, the shooting angle information, and the image aspect ratio information.
• The header information creating means 20 creates and outputs the header 5 containing the 3D control information, using the information input from the control means 22, as described above.
• The 3D control information is created by substituting the shooting angle for the observation angle constituting the 3D control information; that is, the observation angle information is obtained from the shooting angle and recorded as 3D control information.
• In step S8, the multiplexing means 21 performs the process of multiplexing the input encoded data and the header.
• The multiplexing means 21 multiplexes the encoded data input from the compression means 19 and the header input from the header information creating means 20 to create multiplexed data, outputs it as 3D image data, and the process proceeds to step S9.
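Steps S7 and S8 prepend a header carrying the 3D control information to the encoded data. The patent does not define a byte layout, so the format below is purely illustrative (the magic string, field order, and types are all invented); it packs the fields named above and also shows the corresponding separation performed later at playback.

```python
import struct

# Illustrative fixed layout: magic, width, height, horizontal and vertical
# viewpoint counts as unsigned shorts, then observation angle and aspect as floats.
HEADER_FMT = "<4sHHHHff"

def multiplex(encoded, width, height, h_views, v_views, obs_angle, aspect):
    """Prepend an illustrative 3D-control header to the encoded image data."""
    header = struct.pack(HEADER_FMT, b"3DIM", width, height,
                         h_views, v_views, obs_angle, aspect)
    return header + encoded

def demultiplex(data):
    """Split 3D image data back into its control information and encoded payload."""
    size = struct.calcsize(HEADER_FMT)
    magic, width, height, hv, vv, angle, aspect = struct.unpack(HEADER_FMT, data[:size])
    assert magic == b"3DIM"
    return dict(width=width, height=height, h_views=hv, v_views=vv,
                obs_angle=angle, aspect=aspect), data[size:]

data = multiplex(b"...jpeg bytes...", 640, 480, 2, 1, 30.0, 1.0)
info, encoded = demultiplex(data)
```

Because the observation angle travels in the header rather than inside the compressed stream, the playback side can read it without decoding the image, which matches the separation performed in steps S14 to S16.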
• In step S9, the 3D image data created by the multiplexing means 21 is recorded.
  • the stereoscopic image recording / reproducing system 1 includes data recording / reproducing means (not shown) inside.
• This data recording / reproducing means records the 3D image data output from the multiplexing means 21 in the 3D image recording means 3; that is, it can record data on, or read data from, a recording medium such as removable media including cards, hard disks, optical disks, and magnetic tapes. Since the recording / reproducing means itself is a general one and its configuration is not related to the present invention, its description is omitted.
  • the data recording / reproducing means is included in the stereoscopic image recording / reproducing system 1.
  • the data recording / reproducing means may be provided outside.
• The data recording / reproducing means may be a device that can exchange data with the outside, such as an external hard disk, an optical disk recording / reproducing device, or a card reader for a general personal computer (hereinafter referred to as a "PC"), or it may be the PC itself.
• A digital video camera or digital video recorder may also be used.
  • the transmission path may be considered as the Internet, and the data recording / reproducing means may be a server connected to the Internet. Further, the data recorded in the data recording / reproducing means can be freely read by the 3D image reproducing means 4.
• In step S9, the 3D image data is recorded by the data recording / reproducing means, and the process proceeds to determination step S10.
• In determination step S10, it is determined whether the recording process of the 3D image recording means 3 is to be ended; if so, the process proceeds to step S11 and the recording process of the 3D image recording means 3 ends, and if not, the process returns to step S2 and 3D image recording continues.
• The factors for determining in determination step S10 that recording is to be ended are the same as for a normal recording device, for example a user's interruption operation, a shortage of recording-medium capacity, an interruption of power supply such as the battery running out, a failure such as a disconnection, or an accident.
  • the 3D image recording unit 3 can record 3D image data.
  • FIG. 11 is a block diagram showing the configuration of the 3D image reproduction means 4.
• The 3D image reproduction means 4 includes a separating means 23 that separates the 3D image data into a header and encoded image data, a decoding means 24 that decodes image data from the input encoded data and outputs it to the display image creating means 26, a 3D control information analyzing means 25 that analyzes the 3D control information and outputs it to the control means 27, a display image creating means 26 that creates and outputs a display image from the decoded image data, a control means 27 that controls the display image creating means 26 and the display means 28, and a display means 28 that displays the input 3D image data.
• The display means 28 is assumed to be a means for performing stereoscopic display using, for example, a parallax barrier as shown in FIG. 18.
  • FIG. 12 is a flowchart for explaining the operation of the 3D image reproduction means 4.
• In step S12, the 3D image playback means 4 starts the playback processing.
• The 3D image playback means 4 accesses the data recording / reproducing means described for the 3D image recording means 3.
• In determination step S13, it is determined whether 3D image data has been input to the 3D image playback means 4; if it has, the process proceeds to step S14, and if not, the process returns to step S13.
• In step S14, the 3D image data is input to the separating means 23. The separating means 23 separates the encoded data and the header from the input 3D image data, outputs the encoded data to the decoding means 24 and the header to the 3D control information analyzing means 25, and the process proceeds to step S15.
• In step S15, the encoded data is input to the decoding means 24. The decoding means 24 decodes the input encoded data, outputs the decoded image data to the display image creating means 26, and the process proceeds to step S16.
  • the decoding unit 24 is a unit that decodes input encoded data, such as JPEG or MPEG, and outputs decoded image data.
• The decoding means 24 is composed of a general-purpose decoding means and is not related to the present invention, so its configuration is omitted.
• In step S16, the header is input to the 3D control information analyzing means 25.
• The 3D control information analyzing means 25 analyzes the 3D control information contained in the header, outputs to the control means 27 the number of viewpoints in the horizontal direction, the number of viewpoints in the vertical direction, the image aspect ratio information, and the observation angle information contained in the 3D control information, and the process proceeds to step S17.
• In step S17, the decoded image data is input to the display image creating means 26, and the number of viewpoints in the horizontal direction, the number of viewpoints in the vertical direction, and the image aspect ratio information are input to it from the control means 27.
• The display image creating means 26 converts the decoded image data using this viewpoint-number information and creates display image data.
  • the number of horizontal viewpoints is 2
  • the number of vertical viewpoints is 1
  • the image aspect ratio information is 1.
  • FIG. 13 is a diagram for explaining a method of creating display image data from decoded image data.
  • FIG. 13 (a) shows decoded image data in which the number of viewpoints in the horizontal direction is 2, the number of viewpoints in the vertical direction is 1, and the image aspect ratio information is 1.
  • the left half of the decoded image data is image data for the left eye, and the right half is image data for the right eye.
• The images of these viewpoints are arranged horizontally in the order of the viewpoints.
• The display image creating means 26 interprets this structure from the number of horizontal viewpoints and the number of vertical viewpoints, and divides the image data of each viewpoint into vertically long strips. As shown in FIG. 13(b), the strips of the left and right images are rearranged alternately in viewpoint order to create the display image data, which is output to the display means 28, and the process proceeds to step S18.
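The strip rearrangement of step S17 for a two-view parallax barrier display can be sketched as follows. This is an illustrative numpy version only (the function name and the convention that left-eye strips occupy even columns are assumptions, not taken from the patent).

```python
import numpy as np

def interleave_for_barrier(decoded):
    """Convert side-by-side L|R image data into column-interleaved display data."""
    h, w = decoded.shape[:2]
    half = w // 2
    left, right = decoded[:, :half], decoded[:, half:]
    out = np.empty_like(decoded)
    out[:, 0::2] = left    # even columns carry the left-eye strips
    out[:, 1::2] = right   # odd columns carry the right-eye strips
    return out

# Tiny demo: left half zeros, right half ones, interleaved column by column.
decoded = np.zeros((2, 8), dtype=int)
decoded[:, 4:] = 1
display = interleave_for_barrier(decoded)
```

This produces exactly the one-pixel-wide alternating layout of FIG. 18(b), so that the barrier routes even columns to the left eye and odd columns to the right eye.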
• In step S18, the display image data is input from the display image creating means 26, and the observation angle information is input from the control means 27, to the display means 28.
• The display means 28 is composed of a display and a parallax barrier and, as shown in FIG. 18, displays the image data in 3D, and the process proceeds to determination step S19.
• The display means 28 may display the input observation angle information as a numerical value on its display.
• The observer can thereby easily know the appropriate observation angle for viewing the center of the display of the display means 28 and, as a result, can perform stereoscopic viewing from the correct direction, without the distortion that occurs when observing from an angle different from the assumed one.
• Here, the observation angle information is displayed as a numerical value on the display surface; instead, however, a horizontal slit or a lenticular sheet may be prepared on the front surface of the display means 28, and a specific image pattern that can be observed only when the observer observes from the appropriate observation angle may be displayed. The observer can thereby know the appropriate observation position more easily.
  • a plurality of directional backlights or a backlight capable of switching the angle is prepared, and a backlight switching means for switching these backlights is installed in the 3D image reproduction means 4. Then, the backlight switching means may be controlled by the control means 27 so that light is emitted only in the direction indicated by the observation angle information. By doing so, the user can know an appropriate observation position more easily.
• In determination step S19, it is determined whether the playback process of the 3D image playback means 4 is to be ended; if so, the process proceeds to step S20 and the playback process of the 3D image playback means 4 ends, and if not, the process returns to step S13 and the 3D image playback process continues.
• The factors for determining in determination step S19 that playback is to be ended are the same as for a normal playback device, for example a user's interruption operation, an interruption of power supply such as the battery running out, a failure such as a disconnection, or an accident such as the input of corrupted data.
• In step S20, the 3D image playback means 4 ends the playback process.
  • the 3D image playback means 4 can play back 3D image data for stereoscopic display.
• A display, a stand for supporting the display, and a movable means for changing the angle of the display surface may be added. The movable means may be composed of a motor or the like so that the control means 27 can automatically change the angle of the display surface according to the observation angle information.
• Here, when the observation angle information is 90 degrees the display surface is made vertical, and when it is 0 degrees the display surface is made horizontal.
• When the observation angle information is A (0 ≦ A ≦ 90) degrees, the display surface may be controlled by tilting its upper part by (90 - A) degrees from the vertical state. By automatically changing the angle of the display surface according to the observation angle information in this way, the user can observe from the appropriate observation angle without any operation, so that extremely simple stereoscopic display is possible.
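The display-surface control just described reduces to tilting by (90 - A) degrees from vertical. A trivial sketch of that mapping, with the range check the text implies (function name is illustrative):

```python
def display_tilt_from_vertical(observation_angle):
    """Tilt of the display surface's upper part, in degrees from vertical,
    for an observation angle A with 0 <= A <= 90."""
    if not 0 <= observation_angle <= 90:
        raise ValueError("observation angle must lie between 0 and 90 degrees")
    return 90 - observation_angle
```

An observation angle of 90 degrees (perpendicular viewing) thus leaves the display vertical, while small observation angles lay it nearly flat, as in the oblique-viewing arrangement of Patent Document 2.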
• Here, the display surface is tilted (90 - A) degrees from the vertical state; alternatively, when the observation angle information is not 90 degrees, the display surface may simply be made horizontal and the observation angle information displayed. Further, the movable means may be provided with a mechanism that moves up and down as well as rotating, so that the display surface is automatically lowered when it is horizontal. In this way, the user can observe from an appropriate observation angle and position.
• Since the observation angle information is recorded, transmitted, and reproduced in the header area of the image data, the management and handling of the data become very simple.
  • the reference line is a line parallel to the horizontal line when defining the shooting angle, but this need not be the case.
• The user may add, from the outside, an offset angle η to the shooting angle information output by the shooting angle measuring means 11 inside the 3D image input means 2.
• The absolute value of the offset angle η indicates the angle between the horizontal line and the new reference line obtained by adding the offset angle, and the sign of η indicates the direction in which the new reference line is inclined with respect to the camera. Here, the value of η is limited so that the angle between the new reference line and the viewing direction of the camera is a value between 0 and 90 degrees.
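The update of the shooting angle by an offset, together with the 0 to 90 degree limitation described above, can be sketched as follows. The sign convention (positive η increases the angle between the new reference line and the camera viewing direction) is an assumption for illustration; the patent's own convention is defined by its figures.

```python
def apply_offset(theta1, eta):
    """Shooting angle after adding the offset angle eta (degrees).

    Assumed convention: positive eta increases the angle between the new
    reference line and the camera viewing direction. The result must stay
    strictly between 0 and 90 degrees, mirroring the limitation on eta.
    """
    new_angle = theta1 + eta
    if not 0 < new_angle < 90:
        raise ValueError("offset leaves no valid shooting angle")
    return new_angle
```

The returned angle plays the role of θ'1 in FIG. 15 and is what the recording side stores as the observation angle.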
  • FIG. 14 and FIG. 15 are diagrams for explaining a new reference line and an imaging angle when an offset angle is added to the imaging angle.
• FIG. 14(a) shows the change in the shooting angle when η is a negative value.
• FIG. 14(b) shows the change in the shooting angle when η is a positive value.
• FIG. 15 shows only the camera 12, the camera viewing direction 13, the new reference line 29, and the shooting angle θ'1.
  • FIG. 15 (a) corresponds to FIG. 14 (a)
  • FIG. 15 (b) corresponds to FIG. 14 (b).
• FIG. 15(a) and FIG. 15(b) are the same as FIG. 8(a) except that the shooting angle is θ'1 instead of θ1. Accordingly, the operations in the 3D image recording means 3 and the 3D image reproducing means 4 are not different from those described above, and their description is omitted.
• The offset angle may also be input by the user from the outside at the 3D image recording means 3.
• In this case, the shooting angle information is updated by the control means 22 of the 3D image recording means 3 in the same way as in the 3D image input means 2, the observation angle is obtained based on the updated shooting angle information, and the 3D image data is recorded.
• By setting an offset angle to the shooting angle when shooting or when recording 3D image data, the user can freely change the observation angle, which would otherwise be uniquely determined from the shooting angle at the time of shooting.
• The degree of freedom when shooting is also increased.
• When the shooting angle is extremely small, the observation angle at the time of display, which is uniquely determined from it, is also extremely small unless an offset angle is added. With an extremely small observation angle, display may not be possible on a display with a narrow viewing angle, making observation very difficult for the user; however, since the observation angle can be set freely by adding an offset angle as described above, it is possible to prevent the creation of 3D image data that is difficult or impossible to observe on a given display.
• The offset angle η at this time may be recorded in the 3D image data as offset angle information by the 3D image recording means 3, and the 3D image reproducing means 4 may use the offset angle information during reproduction to play back the 3D image data and display it in 3D.
  • FIG. 16 is a diagram showing the structure of 3D image data when the offset angle information at this time is recorded.
  • the offset angle information may be recorded in the 3D control information in the header 5 like the number of viewpoints and the observation angle information.
• The internal configuration is the same as in FIG. 11, and the operation of each means other than the 3D control information analyzing means 25, the control means 27, and the display means 28 is as explained for FIG. 11. The description of the separating means 23, the decoding means 24, and the display image creating means 26 is therefore omitted, and only the operations of the 3D control information analyzing means 25, the control means 27, and the display means 28 are explained.
• The 3D control information analyzing means 25 analyzes the offset angle information from the 3D control information and outputs it to the control means 27.
• So that the horizontal plane in the reproduced stereoscopic image becomes parallel to the actual horizontal plane, the control means 27 may cause the display means 28 to display a message notifying the observer that the display surface should be tilted by the offset angle.
• Alternatively, the 3D image playback means 4 may be provided with a movable means for tilting the display surface, and the display surface may be tilted automatically by the offset angle.
  • FIG. 17 is a diagram showing a state of observation when the display surface is tilted at this time.
• Let the 3D image data to be played back by the 3D image playback means 4 be 3D image data created by shooting a cube 109 placed on the reference plane 108 as shown in FIG. 20 and adding an offset angle η.
• The observer tilts the display surface by the offset angle η from the line 30 parallel to the actual horizontal line, and observes so that the angle between the observer's line-of-sight direction 31 and the display surface becomes the angle θ'1.
• When 3D image data created by the 3D image recording means 3 with an offset angle added is played back by the 3D image reproducing means 4 with the display surface horizontal, the horizontal plane in the stereoscopic image is displayed tilted by the offset angle from the actual horizontal plane.
• By tilting the display surface as described above, the observer can make the reference plane 108 parallel to the line 30 parallel to the horizontal line, and can observe the cube 109 on the reference plane 108 in the same arrangement as the actual one.
• The display itself may notify the observer that it should be tilted by the offset angle, or may be tilted automatically so that the horizontal plane in the stereoscopic image becomes an actual horizontal plane.
• In either case, the stereoscopic image is displayed with the same arrangement as the reality at the time of shooting. That is, since the reference plane in the stereoscopic image displayed by the 3D image reproduction means 4 is parallel to the actual horizontal line, the observer can observe a very realistic image.
• When there are two or more viewpoints in the vertical direction, the shooting angle of the camera differs for each, so observation angle information may be obtained and recorded for as many as the number of viewpoints in the vertical direction.
  • the stereoscopic image recording / reproducing system of the present invention is not limited to the above-described embodiment, and various changes can be made without departing from the scope of the present invention.
  • photographing is performed from an oblique direction with respect to the reference plane, and the angle formed by the reference plane at the time of photographing and the line-of-sight direction of the camera is recorded as the observation angle in the header of the photographed image data.
  • image data for stereoscopic viewing from an oblique direction is created.
  • the observation angle of the header is presented to the observer, so that the observer can perform a stereoscopic view from the correct direction.

Abstract

There is provided a 3D image recording/reproducing system that enables an observer to view a 3D image stereoscopically from the correct direction. 3D image recording means (3) calculates observation angle information from imaging angle information, i.e. the angle formed by the reference surface at the time of imaging and the camera's line-of-sight direction, and records it as control information in a header (5) of the captured image data. When reproducing the image data, 3D image reproducing means (4) reads the observation angle information from the header (5) and presents the appropriate observation angle to the user via display means (28).

Description

Specification
Stereoscopic image recording/reproducing system
Technical field
[0001] The present invention relates to a system for recording and reproducing stereoscopic images.
Background art
[0002] Conventionally, various methods for displaying three-dimensional images have been proposed. Among them, the most commonly used is the so-called "two-view" method, which uses binocular parallax. That is, an image for the left eye and an image for the right eye having binocular parallax (hereinafter referred to as the "left-eye image" and the "right-eye image", respectively) are prepared and projected independently onto the left and right eyes, whereby stereoscopic viewing can be performed.
[0003] In the following description, 3D means three-dimensional or stereoscopic and 2D means two-dimensional; image data for stereoscopic viewing is referred to as 3D image data, and ordinary two-dimensional image data as 2D image data.
[0004] Here, the parallax barrier method has been proposed as one typical two-view method. FIG. 18 is a conceptual diagram for explaining the parallax barrier method. FIG. 18(a) shows the principle by which parallax occurs, while FIG. 18(b) shows a screen displayed by the parallax barrier method.
In FIG. 18(a), an image in which the left-eye image and the right-eye image shown in FIG. 18(b) are arranged alternately every other pixel in the horizontal direction is displayed on an image display panel 100, and a parallax barrier 101 having slits at intervals narrower than the interval between pixels of the same viewpoint is placed in front of the image display panel 100. The left-eye image is then observed only by the left eye 102 and the right-eye image only by the right eye 103, so that stereoscopic viewing can be performed.
[0005] A method of capturing a left-eye image and a right-eye image and combining them into a single composite image is disclosed in Patent Document 1 described later.
[0006] FIG. 19 is a conceptual diagram showing an example of the recording data format of such a composite image.
The left-eye image 104 shown in FIG. 19(a) and the right-eye image 105 shown in FIG. 19(b) are arranged side by side to create and record one composite image 106, shown in FIG. 19(c). At playback time, this composite image 106 is rearranged and converted into an image suitable for each display format, as shown in FIG. 18(b).
[0007] In Patent Document 1, the observer views the display surface on which the image is displayed from an angle of 90 degrees. Patent Document 2, by contrast, describes a method in which the display surface is placed horizontally and the observer views it from an oblique direction.
[0008] Patent Document 2 also describes a method of capturing a left-eye image and a right-eye image of an object placed on a reference plane from an obliquely upward direction, and then correcting the depth-direction perspective that arises in each captured image. This method is briefly explained below with reference to FIGS. 20 to 23.
[0009] FIG. 20 shows how the left-eye image and the right-eye image are captured in this case.
In FIG. 20, a reference plane 108 of horizontal width H and vertical width V is placed horizontally on a horizontal plane 107. An object 109 is placed on the reference plane 108, and a camera 110 for capturing the right-eye image and a camera 111 for capturing the left-eye image are arranged obliquely above the reference plane 108, with a camera interval equal to the distance between the left and right eyes. Images are then captured with the viewing directions of the cameras 110 and 111 pointed at the object 109, and with the lines of sight of both cameras making the same angle θ1 with the reference plane 108.
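The camera arrangement of [0009], two cameras separated by the interocular distance and both viewing the target at angle θ1 to the reference plane, can be expressed as coordinates. The following is only a sketch under invented conventions (target at the origin, x to the observer's right, z up, cameras standing off along negative y):

```python
import math

def stereo_camera_positions(theta1_deg, distance, eye_separation):
    """Positions of the left and right cameras, both at `distance` from the
    target point on the reference plane, looking down at theta1 degrees to it."""
    t = math.radians(theta1_deg)
    y = -distance * math.cos(t)   # horizontal stand-off from the target
    z = distance * math.sin(t)    # height above the reference plane
    half = eye_separation / 2.0
    return (-half, y, z), (half, y, z)   # (left camera, right camera)
```

At θ1 = 90 degrees both cameras sit directly above the target; at smaller θ1 they move outward and downward while keeping the same viewing angle, matching the oblique setup of FIG. 20.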
[0010] FIG. 21 shows the left-eye image and the right-eye image captured at this time.
FIG. 21(a) is the left-eye image and FIG. 21(b) is the right-eye image; the reference plane 108 and the object 109 appear in each. The four corner points of the reference plane 108 in FIG. 21(a) are denoted P1, P2, P3, and P4, and the four corner points of the reference plane 108 in FIG. 21(b) are denoted P5, P6, P7, and P8.
[0011] FIG. 22 shows how the perspective of the left-eye image is corrected.
The reference plane 108 is cut out from FIG. 21(a) as shown in FIG. 22(a), and the reference plane 108 is deformed and developed so that the points P1, P2, P3, and P4 of the cut-out reference plane 108 become the points P9, P10, P11, and P12 of the left-eye image 113, as shown in FIG. 22(b). The aspect ratio of the image in FIG. 22(b) is made H:V, that is, the same aspect ratio as the actual reference plane 108.
[0012] FIG. 23 shows how the perspective of the right-eye image is corrected.
In the same manner as in FIG. 22, the reference plane 108 is cut out of FIG. 21(b) as shown in FIG. 23(a), and the points P5, P6, P7, and P8 of the cut-out reference plane 108 are developed to P13, P14, P15, and P16 of the right-eye image 114 as shown in FIG. 23(b). The developed reference plane is given the same aspect ratio as the actual reference plane.
[0013] This produces a left-eye image 113 and a right-eye image 114 in which the perspective introduced by shooting from an oblique direction has been corrected.
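The corner-to-corner development described in paragraphs [0011] and [0012] is a projective (homography) transform. The following is a minimal illustrative sketch, not the patent's implementation: given the four corners of the reference plane in the captured image and the four corners of the target H:V rectangle, the 3x3 homography is recovered by solving the standard eight-unknown linear system (the function and point names are hypothetical).

```python
import numpy as np

def homography_from_corners(src, dst):
    """Solve for H (3x3, with h33 = 1) such that each dst point is the
    projection of the matching src point: dst ~ H @ src.
    src, dst: four (x, y) pairs, e.g. P1..P4 in the captured image and
    P9..P12 in the perspective-corrected image."""
    A, b = [], []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y]); b.append(u)
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y]); b.append(v)
    h = np.linalg.solve(np.array(A, float), np.array(b, float))
    return np.append(h, 1.0).reshape(3, 3)

def warp_point(H, pt):
    """Map one (x, y) point through the homography H."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)
```

Warping every pixel of the cut-out region through the inverse of H (sampling the source image) yields the developed reference plane at the H:V aspect ratio.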
[0014] FIG. 24 shows how stereoscopic viewing is performed using the perspective-corrected left-eye and right-eye images.
[0015] Using the perspective-corrected left-eye and right-eye images, an anaglyph image, for example, is created. An anaglyph image is a single image created by extracting only the R component from the RGB left-eye image and only the G or B component from the RGB right-eye image, and combining them. An observer wearing red-blue glasses can view this image stereoscopically.
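A minimal sketch of the channel combination just described, assuming 8-bit RGB arrays of identical size; this version takes both the G and B channels from the right-eye image, although the text permits G or B alone (the function name is hypothetical).

```python
import numpy as np

def make_anaglyph(left_rgb, right_rgb):
    """Combine the R channel of the left-eye image with the G and B
    channels of the right-eye image into one anaglyph image.
    Both inputs: uint8 arrays of shape (height, width, 3)."""
    assert left_rgb.shape == right_rgb.shape
    out = right_rgb.copy()            # G and B come from the right eye
    out[:, :, 0] = left_rgb[:, :, 0]  # R comes from the left eye
    return out
```

Viewed through red-blue glasses, the red filter passes only the left-eye R channel and the blue filter only the right-eye channels, so each eye sees its own image.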
[0016] The created anaglyph image is printed on a printed matter 115 at the same size as the reference plane 108 used at the time of shooting, and placed on the horizontal plane 107. The observer positions the right eye 117 and the left eye 118 relative to the printed matter 115 so that the angle θ1 between the line of sight 119 and the printed matter 115 is the same as that of the cameras at the time of shooting; by viewing the printed matter 115 through red-blue glasses, the observer can see a stereoscopic image in which the object 116 appears to rise above the printed matter 115.
[0017] Although the description of Patent Document 2 above uses anaglyphs for stereoscopic viewing, as long as the left-eye and right-eye images are created in the same way, stereoscopic viewing can equally be achieved with a lenticular screen, a parallax barrier, polarized glasses, or shutter glasses. Furthermore, although an example is described in which the created stereoscopic image is produced as a printed matter, stereoscopic viewing can likewise be performed on a display device such as a cathode-ray tube or a liquid crystal display.
Patent Document 1: JP 2002-125246 A
Patent Document 2: Japanese Patent No. 3579683
Disclosure of the Invention
Problems to Be Solved by the Invention
[0018] In Patent Document 2, however, the observation angle formed between the observer's line of sight and the reference plane on the printed matter is a value uniquely determined at the time of shooting. If the observer views the image without knowing this value and consequently observes it from a different observation angle, the shape of the object seen stereoscopically appears distorted.
[0019] Moreover, since the reference plane is not necessarily parallel to the horizontal plane, the observation angle cannot be obtained from camera parameters alone, such as the lens focal length, azimuth, and elevation contained in the EXIF (Exchangeable Image File Format) information attached to an image taken with a digital camera.
[0020] Furthermore, since the observation angle can be obtained only at the time of shooting, once the observation angle of a captured image is lost, it is difficult for the user to know at what angle the image should be viewed.
[0021] The present invention has been made to solve the above problems. An image is shot from an oblique direction with respect to a reference plane, the angle formed between the reference plane and the line-of-sight direction of the camera at the time of shooting is recorded in the header of the captured image data as an observation angle, and the captured image data is converted into image data corrected to remove the perspective in the depth direction, thereby creating image data for stereoscopic viewing from an oblique direction. An object of the present invention is to provide a stereoscopic image recording/reproducing system in which, when the created image data is reproduced, the observation angle in the header is presented to the observer so that the observer can perform stereoscopic viewing from the correct direction.
Means for Solving the Problems
[0022] To solve the above problems, the present invention is a stereoscopic image recording/reproducing system that generates, records, and reproduces stereoscopic image data from a plurality of image data corresponding to a plurality of viewpoints, comprising: 3D image input means for outputting shooting angle information, which is information on the angle formed between the line-of-sight direction of the imaging device at the time of shooting and the reference plane on which the subject is placed, together with image data and control information; 3D image recording means, comprising control means for calculating from the shooting angle information an observation angle with respect to display means for stereoscopically viewing a stereoscopic image, for recording the calculated observation angle information as 3D image control information in the stereoscopic image data together with the control information; and 3D image reproducing means for reproducing the stereoscopic image data, analyzing the 3D image control information, and outputting the observation angle information.
[0023] Further, the 3D image input means comprises imaging means, and further comprises shooting angle measuring means for measuring the tilt of the imaging means in its line-of-sight direction, generating position information between the imaging means and the reference plane from the tilt, and calculating the shooting angle according to the position information.
[0024] Further, the 3D image input means adds an arbitrary externally input value to the shooting angle as an offset, and uses the newly calculated value as the shooting angle information.
[0025] Further, the 3D image recording means records an arbitrary externally input value as offset angle information in the 3D image control information, and the 3D image reproducing means analyzes the offset angle information from the 3D image control information and outputs it to the display means.
[0026] Further, the 3D image recording means creates the observation angle information by substituting the value of the shooting angle for the observation angle.
[0027] Further, the 3D image reproducing means analyzes the observation angle information included in the 3D image control information and outputs the observation angle information to the display means.
[0028] Further, the 3D image reproducing means analyzes the observation angle information included in the 3D image control information, and comprises moving means capable of tilting the display means according to the value of the observation angle information.
Effects of the Invention
[0029] According to the present invention, the observation angle is recorded as observation angle information in the control information of the stereoscopic image data, the reproducing means reads the observation angle information from the control information of the stereoscopic image data, and the observation angle is presented to the user at the time of output. Since the observer can therefore perform stereoscopic viewing from the correct direction, stereoscopic viewing without distortion is possible.
[0030] Further, according to the present invention, since the observation angle information is recorded within the 3D image data to be recorded, management and handling of the data are greatly simplified.
[0031] Further, according to the present invention, the tilt of the camera at the time of shooting is measured, the position of the camera relative to the reference line is estimated from the measured tilt, and the shooting angle is obtained from the camera tilt and the positional relationship of the camera to the reference line, so that the observation angle can be obtained easily.
[0032] Further, according to the present invention, when stereoscopic viewing is performed using 3D image data recorded by the 3D image recording means, the appropriate observation angle for an observer looking at the center of the display is shown on the display as a numerical value. The observer can thus easily learn the appropriate observation angle and, as a result, perform stereoscopic viewing from the correct direction, so that stereoscopic viewing is possible without the distortion of the stereoscopic image that occurs when observing from an unintended angle.
[0033] Further, according to the present invention, by adding an offset angle to the shooting angle when shooting or recording 3D image data, the observation angle, which would otherwise be uniquely determined by the shooting angle, can be set arbitrarily. Even if the shooting angle at the time of shooting is small, freely setting the observation angle by adding an offset angle prevents the creation of 3D image data that is difficult or impossible to observe on some displays.
Alternatively, when the offset angle is not added to the observation angle at input or recording time, presenting the offset angle to the observer at reproduction time can likewise indicate to the observer the appropriate angle for stereoscopic viewing.
Brief Description of the Drawings
[0034] [FIG. 1] A block diagram showing the configuration of a stereoscopic image recording/reproducing system according to an embodiment of the present invention.
[FIG. 2] A diagram showing an example of the data structure of 3D image data.
[FIG. 3] A diagram showing an example of a left-eye image and a right-eye image.
[FIG. 4] A diagram showing an example of image data within 3D image data.
[FIG. 5] A diagram showing an example of image data within 3D image data.
[FIG. 6] A diagram for explaining an example of image data whose image aspect ratio has been changed.
[FIG. 7] A block diagram showing the configuration of the 3D image input means 2.
[FIG. 8] A diagram showing the relationship between the camera tilt angle and the shooting angle.
[FIG. 9] A block diagram showing the configuration of the 3D image recording means 3.
[FIG. 10] A flowchart for explaining the operation of the 3D image recording means 3.
[FIG. 11] A block diagram showing the configuration of the 3D image reproducing means 4.
[FIG. 12] A flowchart for explaining the operation of the 3D image reproducing means 4.
[FIG. 13] A diagram for explaining a method of creating display image data from decoded image data.
[FIG. 14] A diagram for explaining the new reference line and the shooting angle when an offset angle is added to the shooting angle.
[FIG. 15] A diagram for explaining the new reference line and the shooting angle when an offset angle is added to the shooting angle.
[FIG. 16] A diagram showing an example of the data structure of 3D image data when offset angle information is recorded.
[FIG. 17] A diagram showing observation when the display surface is tilted.
[FIG. 18] A conceptual diagram for explaining the parallax barrier method.
[FIG. 19] A conceptual diagram showing an example of the recording data format of a composite image.
[FIG. 20] A diagram showing how the left-eye image and the right-eye image are captured.
[FIG. 21] A diagram showing the captured left-eye image and right-eye image.
[FIG. 22] A diagram showing how perspective is corrected for the left-eye image.
[FIG. 23] A diagram showing how perspective is corrected for the right-eye image.
[FIG. 24] A diagram showing stereoscopic viewing using the perspective-corrected left-eye and right-eye images.
Explanation of Symbols
1 Stereoscopic image recording/reproducing system
2 3D image input means
3 3D image recording means
4 3D image reproducing means
5 Header
6 Image data
7 Left-eye image
8 Right-eye image
9 Imaging means
10, 22, 27 Control means
11 Shooting angle measuring means
12 Camera
13 Line-of-sight direction of the camera
14, 15 Reference lines
Image cropping means
Image correction means
Image composition means
Compression means
Header information creation means
Multiplexing means
Separation means
Decoding means
3D control information analysis means
Display image creation means
Display means
New reference line
Line parallel to the horizon
Line-of-sight direction of the observer
Image display panel
Parallax barrier
118 Left eye
117 Right eye
113 Left-eye image
114 Right-eye image
Composite image
107 Horizontal plane
108 Reference plane
109, 116 Object
110, 111 Camera
115 Printed matter
119 Line of sight
Best Mode for Carrying Out the Invention
[0036] Embodiments of the present invention will now be described with reference to the accompanying drawings.
[0037] FIG. 1 is a block diagram showing the configuration of a stereoscopic image recording/reproducing system according to an embodiment of the present invention.
The stereoscopic image recording/reproducing system 1 comprises 3D image input means 2, 3D image recording means 3, and 3D image reproducing means 4.
[0038] The 3D image input means 2 receives from outside a plurality of image data corresponding to a plurality of viewpoints, and generates, as control information for the image data, shooting angle information indicating the angle at which the input image data of each viewpoint was shot, the horizontal and vertical image sizes of the input image data of each viewpoint, and horizontal and vertical viewpoint count information indicating the number of viewpoints contained in the 3D image data.
The 3D image recording means 3 records the image data input from the 3D image input means 2 as 3D image data.
The 3D image reproducing means 4 reproduces the 3D image data recorded by the 3D image recording means 3.
[0039] Here, 3D image data is image data for stereoscopic viewing, composed of image data and 3D control information. Each of the above means is described later; first, the 3D image data and the 3D control information are explained.
[0040] FIG. 2 shows an example of the data structure of 3D image data.
The 3D image data consists of a header 5 and image data 6; the header 5 contains the image size information of the image data 6 and the 3D control information. Examples of the header 5 include an EXIF (Exchangeable Image File Format) header and the headers of file formats such as AVI (Audio Video Interleaved), ASF (Advanced Streaming Format), WMV (Windows Media Video), and MP4. Examples of the image data include uncompressed image data and image data compressed with a compression scheme such as JPEG (Joint Photographic Experts Group) or MPEG (Moving Picture Experts Group).
[0041] The 3D control information is information on the structure of the image data within the 3D image data and information for controlling display when a 3D image is shown; it includes the numbers of viewpoints in the horizontal and vertical directions and the observation angle information.
[0042] Here, the numbers of viewpoints in the horizontal and vertical directions indicate the number of image data of different viewpoints contained in the 3D image data.
[0043] When the observer views this 3D image data stereoscopically, a distortion-free 3D image can be observed by viewing the display surface on which the 3D image is shown from a predetermined angle. The information on this predetermined angle is the observation angle information.
[0044] FIG. 3 illustrates an example of a left-eye image and a right-eye image, and FIGS. 4 and 5 illustrate examples of the image data 6 within 3D image data.
[0045] For example, FIG. 3(a) shows the left-eye image and FIG. 3(b) the right-eye image; they share the same horizontal image size h and vertical image size v. The left-eye and right-eye images are arranged horizontally, left to right in viewpoint order as in FIG. 4, to form a single piece of image data. The number of viewpoints of this image data is 2 in the horizontal direction and 1 in the vertical direction, and its image size is 2 × h horizontally and v vertically.
[0046] FIG. 5 shows an example with 4 viewpoints in the horizontal direction and 2 in the vertical direction; as in the description of FIG. 4, the images of the eight viewpoints, numbered 1 to 8, are arranged in raster-scan order from upper left to lower right in viewpoint order to form a single piece of image data.
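The raster-order arrangement of Figs. 4 and 5 can be sketched as follows; this is an illustrative helper with a hypothetical name, assuming equally sized per-viewpoint arrays:

```python
import numpy as np

def pack_views(views, n_h, n_v):
    """Tile per-viewpoint images into one image, running left to right
    and top to bottom in viewpoint order (raster scan), as in Figs. 4
    and 5.  views: n_h * n_v arrays, each of shape (v, h, 3)."""
    assert len(views) == n_h * n_v
    rows = [np.concatenate(views[r * n_h:(r + 1) * n_h], axis=1)
            for r in range(n_v)]
    return np.concatenate(rows, axis=0)
```

For example, `pack_views([left, right], n_h=2, n_v=1)` produces the 2h-by-v image of Fig. 4, and eight views with `n_h=4, n_v=2` produce the arrangement of Fig. 5.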
[0047] The image aspect ratio of this image data can also be changed. The image aspect ratio is information expressing the value obtained by dividing the vertical scaling factor of the image data by its horizontal scaling factor.
[0048] FIG. 6 illustrates an example of image data whose image aspect ratio has been changed.
The image data of FIGS. 4 and 5 have an image aspect ratio of 1 because the aspect ratio was not changed when they were created. The image data of FIG. 6 is, for example, the image data of FIG. 4 with the vertical scale unchanged and the horizontal scale halved; its image size is h horizontally and v vertically, and its image aspect ratio is 2. In this case, the above value "2" may be recorded in the 3D control information as image aspect ratio information.
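As a small worked example of the definition in [0047] (the function name is hypothetical):

```python
def image_aspect_ratio(vertical_scale, horizontal_scale):
    """Image aspect ratio: the vertical scaling factor divided by the
    horizontal scaling factor."""
    return vertical_scale / horizontal_scale

# Fig. 6: vertical scale unchanged (1), horizontal scale halved (1/2),
# giving the aspect ratio 2 that may be recorded as control information.
ratio = image_aspect_ratio(1.0, 0.5)  # 2.0
```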
[0049] In the following description, the image aspect ratio is fixed at 1 for simplicity.
[0050] Next, the 3D image input means 2 and the operation of each of its constituent means are described with reference to the drawings.
[0051] FIG. 7 is a block diagram showing the configuration of the 3D image input means 2.
The 3D image input means 2 comprises imaging means 9, control means 10, and shooting angle measuring means 11.
[0052] The imaging means 9 is composed of at least one image sensor such as a CCD camera, and captures external video and outputs it as an input image.
[0053] The control means 10 controls the imaging means 9, for example its left-right angle and position, and is realized by a CPU or the like (not shown).
[0054] The shooting angle measuring means 11 uses a common digital inclinometer based on, for example, a liquid sensor, or a common gyro sensor; since such sensors are not themselves part of the present invention, a detailed description is omitted.
[0055] The shooting angle measuring means 11 measures the tilt angle of the imaging direction of the imaging means 9 with respect to the horizontal plane, and can obtain and output the shooting angle from the measured value.
[0056] The shooting angle is now explained with reference to the drawings.
FIG. 8 shows the relationship between the camera tilt angle and the shooting angle.
[0057] First, the shooting angle is defined as the angle αn (n is an integer from 1 to 4) formed by the line-of-sight direction of the camera and the reference line, with a value ranging from 0 to 90 degrees. Here, the reference line is a line parallel to the horizon. When the angle by which the camera is tilted, βn (n is an integer from 1 to 4; hereinafter called the "camera tilt angle"), is measured with a digital inclinometer or a gyro sensor, the obtained value of βn ranges from 0 to less than 360 degrees. The method of calculating the shooting angle from the measured value is described below.
[0058] In the following, the camera tilt angle βn is 0 degrees when the camera is parallel to the reference line and the captured image is not upside down. Viewing the camera and the reference line from the side, the camera tilt angle increases as the camera is rotated clockwise about its center, returning to the 0-degree state after one full rotation of 360 degrees.
[0059] FIG. 8(a) shows the relationship with the shooting angle α1 when the camera tilt angle β1 is between 0 and 90 degrees inclusive. In this case, the shooting angle α1, the angle formed by the line-of-sight direction 13 of the camera 12 and the reference line 14, equals the camera tilt angle β1. The black portion of the camera 12 in FIG. 8 indicates the top of the camera.
[0060] FIG. 8(b) shows the shooting angle α2 when the camera tilt angle β2 is greater than 90 degrees and at most 180 degrees. In this case, the shooting angle α2, the angle formed by the line-of-sight direction 13 of the camera 12 and the reference line 14, is (180 − β2).
[0061] When the camera tilt angle β3 or β4 is greater than 180 degrees and less than 360 degrees, as in FIGS. 8(c) and 8(d), shooting takes place above the reference line 14 described for FIGS. 8(a) and 8(b), so the reference line 14 is not contained in the captured image. Therefore, a reference line 15, parallel to the reference line 14 and located above the camera, is used as a new reference line.
[0062] FIG. 8(c) shows the shooting angle α3 when the camera tilt angle β3 is greater than 180 degrees and at most 270 degrees. The angle formed by the reference line 15 and the line-of-sight direction 13 of the camera 12 is the shooting angle α3, whose value is (β3 − 180).
[0063] FIG. 8(d) shows the shooting angle α4 when the camera tilt angle β4 is greater than 270 degrees and less than 360 degrees (360 degrees being the same as 0 degrees). As in the case of FIG. 8(c), the angle formed by the reference line 15 and the line-of-sight direction 13 of the camera 12 is the shooting angle α4, whose value is (360 − β4).
[0064] In this way, the shooting angle measuring means 11 measures the camera tilt at the time of shooting, estimates the position of the camera relative to the reference line from the measured tilt, and obtains the shooting angle from the camera tilt and the positional relationship of the camera to the reference line, thereby outputting the angle formed between the line-of-sight direction of the camera used for shooting and the reference line, and the reference plane containing it, in the captured image.
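The four cases of Fig. 8 define a piecewise mapping from the measured camera tilt angle βn (0 up to, but not including, 360 degrees) to the shooting angle αn (0 to 90 degrees). A direct sketch of that mapping (the function name is hypothetical):

```python
def shooting_angle(beta):
    """Shooting angle alpha (0..90 degrees) for camera tilt angle beta
    (degrees), following the four cases of Fig. 8.  For beta above 180
    the camera looks upward, so the angle is measured against the upper
    reference line 15 rather than line 14."""
    beta = beta % 360.0
    if beta <= 90:        # Fig. 8(a): alpha = beta
        return beta
    if beta <= 180:       # Fig. 8(b): alpha = 180 - beta
        return 180.0 - beta
    if beta <= 270:       # Fig. 8(c): alpha = beta - 180
        return beta - 180.0
    return 360.0 - beta   # Fig. 8(d): alpha = 360 - beta
```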
[0065] The shooting angle obtained above is the information used to determine the observation angle required for distortion-free stereoscopic viewing; how the observation angle is determined is described later.
[0066] In the following, for simplicity of explanation, the imaging means 9 is assumed to consist of two CCD cameras that output left-eye image data and right-eye image data, respectively, and the image size of the image data is assumed to be the same for every viewpoint.
[0067] In FIG. 7, the imaging means 9 outputs the image data captured by the two CCD cameras as input image data.
[0068] At this time, as described with reference to FIG. 20, a rectangular reference plane of paper or the like is placed under the object to be photographed so as to be parallel to the horizontal line, and shooting is performed while viewing the captured image so that the object fits within the reference plane. Alternatively, instead of the reference plane, specific marks may be placed at the positions of its four corners, the quadrangle having those marks as its vertices may be treated as a new reference plane, and shooting may be performed so that the object fits within that reference plane. Furthermore, both the reference plane and the marks may be set up at this time so that the captured image contains both.
[0069] The marks, or the outer frame constituting the reference plane, may also be predetermined as a fixed image, overwritten at a predetermined position in the input image, and output as a new input image; alternatively, the positions of the marks, the size of the frame of the reference plane, and the position of a specific point within the reference plane (for example, its center, or one of the four corner points of the frame) may be freely input by the user from outside.
[0070] Alternatively, without overwriting the marks or the reference plane, only the information on the mark positions and on the size and position of the frame of the reference plane may be output together with the image data. This information can be used by the subsequent 3D image recording means to determine the size and position of the reference plane within the image.
[0071] The positions of the reference plane or marks may be set so that the aspect ratio of the actual size of the reference plane to be photographed equals the aspect ratio of the image data to be captured, or so that it equals some other specific value.
[0072] At this time, shooting is performed so that the center of the captured image lies on the horizontal line passing through the center of the reference plane.
[0073] Simultaneously with outputting the input image data, the control means 10 outputs the horizontal image size and vertical image size of the input image at that time, together with 2 as the number of horizontal viewpoints and 1 as the number of vertical viewpoints. [0074] At the same time, the shooting angle measuring means 11 outputs the shooting angle at this time as shooting angle information.
[0075] In the above, the shooting angle is measured and output automatically by the shooting angle measuring means 11; instead of this means, however, shooting angle input means may be provided so that the photographer can input the shooting angle numerically from outside.
[0076] Here, instead of the imaging means 9, the 3D image input means 2 may use any device that outputs image data, such as an image signal input device that receives a video signal or the like, an image display device that receives and displays a TV signal, an image reproducing device that plays back video or DVDs, an image reading device such as a scanner, or an image data file reading device; it is not limited to these. In this case, as described above, the shooting angle information is input externally by the user.
[0077] As described above, the 3D image input means 2 can output, as 3D shot image data, a plurality of pieces of image data corresponding to a plurality of viewpoints, together with the shooting angle information, horizontal image size, vertical image size, number of horizontal viewpoints, and number of vertical viewpoints.
[0078] In the above, the case where the number of horizontal viewpoints is 2 and the number of vertical viewpoints is 1 has been described; however, even when the number of horizontal viewpoints is 3 or more, the shooting angle information can be calculated in the same way. When the number of vertical viewpoints is 2 or more, the shooting angle information may similarly be calculated, one for each set of image data lying in the same vertical direction; in this case, as many pieces of shooting angle information as the number of vertical viewpoints are calculated and output.
[0079] Next, the operation of the 3D image recording means 3 and of each means constituting it will be described with reference to the drawings.
[0080] FIG. 9 is a block diagram showing the configuration of the 3D image recording means 3.
[0081] The 3D image recording means 3 comprises: image cropping means 16, which crops a part of the image from the input image data and outputs the cropped image data; image correction means 17, which corrects the perspective in the depth direction of the cropped image data and outputs corrected image data; image compositing means 18, which composites the corrected image data and outputs composite image data; compression means 19, which compresses and encodes the composite image data into compression-encoded data; header information creating means 20, which creates and outputs header information from the input horizontal image size, vertical image size, horizontal viewpoint number information, vertical viewpoint number information, and shooting angle information; multiplexing means 21, which multiplexes the compression-encoded data and the header information to create 3D image data; and control means 22.
[0082] Here, the control means 22 is realized by a CPU or the like (not shown) and controls each means within the 3D image recording means 3. For example, using the input information, the control means 22 controls each of the means connected to it so as to create the encoded image data of the 3D image and a header containing the 3D control information.
[0083] Next, the operation of the 3D image recording means 3 will be described with reference to a flowchart.
[0084] FIG. 10 is a flowchart for explaining the operation of the 3D image recording means 3. In the following, for simplicity of explanation, the horizontal viewpoint number information is 2, the vertical viewpoint number information is 1, and the input image data consists of two pieces of image data for the left and right eyes.
[0085] In step S1, the 3D image recording means 3 starts the 3D image recording process and proceeds to step S2.
[0086] In determination step S2, the control means 22 determines whether input image data and control information have been input to the 3D image recording means 3. If they have not, the process returns to determination step S2; otherwise, the input image data and, as control information, the horizontal image size and vertical image size of the input image data of each viewpoint, the horizontal viewpoint number information, the vertical viewpoint number information, and the shooting angle information are input to the 3D image recording means 3, and the process proceeds to step S3. At this time, within the 3D image recording means 3, the input image data is input to the image cropping means 16, and the horizontal image size, vertical image size, horizontal viewpoint number information, vertical viewpoint number information, and shooting angle information are input to the control means 22.
[0087] Through the processing of steps S3 to S6 described below, encoded image data is created from the input image data. The cropping and correction methods performed by the image cropping means 16 and the image correction means 17 described in these steps are the same as those shown in Patent Document 2 and are not relevant to the present invention, so their detailed description is omitted.
[0088] In step S3, the input image data of the left and right viewpoints are input to the image cropping means 16, which processes the input image data of each viewpoint. The image cropping means 16 obtains a specific reference plane from the input image data by image matching or the like. When marks have been photographed instead of the reference plane, the marks are likewise obtained by matching or the like, and the interior of the quadrangle containing the four marks is taken as the reference plane.
[0089] After the reference plane has been obtained in this way, an image in which the reference plane has been cropped out, as in FIG. 22(a) and FIG. 23(a), is output as the cropped image for each of the left and right viewpoints, and the process proceeds to step S4. If there are no marks or reference plane in the image, a specific region may be cropped out as the reference plane, the user may input the reference plane directly from outside, or a plurality of different reference planes may be prepared in advance and the user may select from outside which one to use. When the mark positions, or the size and position of the frame of the reference plane, described in the explanation of the 3D image input means 2 have been input, the reference plane may be obtained from that information.
[0090] In step S4, the image data of the left and right viewpoints are input to the image correction means 17, which processes the input image data of each viewpoint. The image correction means 17 unfolds and deforms the cropped image of FIG. 22(a) or FIG. 23(a) so that, as in FIG. 22(b) or FIG. 23(b), the unfolded reference plane has the same aspect ratio as the actual size of the reference plane (hereinafter, the "reference plane aspect ratio"). In this way, the distortion in the depth direction (perspective) caused by shooting with the CCD cameras directed obliquely with respect to the reference plane is corrected. The value of the reference plane aspect ratio at this time may be the aspect ratio of the input image data, may be set in advance, or may be input externally by the user.
[0091] The value of the reference plane aspect ratio may also be treated as a specific value set in advance in the stereoscopic image recording system. In this case, the positions of the reference plane or marks are adjusted at the time of shooting so that the aspect ratio of the actual size of the reference plane to be photographed equals this reference plane aspect ratio.
[0092] In this way, the image correction means 17 outputs the corrected image data for the left and right viewpoints, and the process proceeds to step S5.
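The unfolding deformation of step S4 amounts to a projective (perspective) warp that maps the four detected corners of the reference plane onto an upright rectangle with the reference plane aspect ratio. The following is a minimal NumPy sketch of computing that transform; the corner coordinates are hypothetical, and a full implementation would also resample the image pixels through the transform (e.g. with OpenCV's `warpPerspective`).

```python
import numpy as np

def homography(src, dst):
    """Solve for the 3x3 projective transform H mapping src[i] -> dst[i]
    from four point correspondences (direct linear transform)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    # The null vector of A (smallest singular value) gives the 9 entries of H.
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def apply_h(H, pt):
    """Apply H to a 2D point in homogeneous coordinates."""
    x, y, w = H @ np.array([pt[0], pt[1], 1.0])
    return (x / w, y / w)

# Hypothetical detected corners of the reference plane (an oblique
# quadrilateral in the shot image), mapped onto an upright rectangle
# whose aspect ratio equals the reference plane aspect ratio (here 4:3).
corners = [(120, 80), (520, 95), (600, 400), (60, 390)]
target  = [(0, 0), (400, 0), (400, 300), (0, 300)]
H = homography(corners, target)
```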
[0093] In step S5, the image compositing means 18 composites 3D image data from the input image data. Here, the image compositing means 18 is a means that, based on the horizontal viewpoint number information and the vertical viewpoint number information, arranges the input image data of each viewpoint in the same manner as described with reference to FIG. 4, FIG. 5, and FIG. 6 to create image data. Here, the horizontal viewpoint number information is 2, the vertical viewpoint number information is 1, and the input image data consists of two pieces of corrected image data for the left and right eyes.
[0094] First, the corrected image data is input to the image compositing means 18. At the same time, the control means 22 transmits the horizontal and vertical viewpoint number information to the image compositing means 18 and controls it so as to composite the 3D image data. Simultaneously with compositing, image aspect ratio information is created. Here, as described with reference to FIG. 5, the images are composited so that the image aspect ratio becomes 1, and the image aspect ratio information is set to 1.
[0095] The image compositing means 18 outputs the created 3D image data to the compression means 19, outputs the horizontal image size, vertical image size, and image aspect ratio information of the 3D image data to the control means 22, and the process proceeds to step S6.
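With two horizontal viewpoints and one vertical viewpoint, the compositing of step S5 reduces to placing the left-eye image in the left half and the right-eye image in the right half of a single frame. A minimal sketch, with hypothetical array sizes:

```python
import numpy as np

def composite_side_by_side(left, right):
    """Arrange two viewpoint images (H x W x 3 arrays of equal size)
    side by side: left eye in the left half, right eye in the right."""
    assert left.shape == right.shape
    return np.concatenate([left, right], axis=1)

# Hypothetical 320x240 viewpoint images: all-black left, all-white right.
left  = np.zeros((240, 320, 3), dtype=np.uint8)
right = np.full((240, 320, 3), 255, dtype=np.uint8)
sbs = composite_side_by_side(left, right)  # 640x240 composite frame
```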
[0096] In step S6, the compression means 19 encodes the input image data by an encoding method such as JPEG or MPEG and outputs the encoded data. The compression means 19 is composed of general-purpose compression means and is not relevant to the present invention, so its configuration is omitted.
[0097] First, the 3D image data is input to the compression means 19, which encodes it, outputs the encoded data, and proceeds to step S7.
[0098] Although the data is compressed by the compression means 19 in the above description, the compression means 19 may be omitted and uncompressed data may be created instead.
[0099] In step S7, the control means 22 transmits to the header information creating means 20, as the information necessary for creating the header, information including the horizontal image size and vertical image size of the entire encoded image, the horizontal and vertical viewpoint number information, the shooting angle information, and the image aspect ratio information. Using the information input from the control means 22, the header information creating means 20 creates and outputs the header 5 containing the 3D control information, as described with reference to FIG. 2.
[0100] At this time, the 3D control information is created by substituting the shooting angle for the observation angle constituting the 3D control information. By obtaining the observation angle information from the shooting angle and recording it as 3D control information in this way, the observer can easily manage the observation angle for each piece of 3D image data.
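As a sketch of step S7, the header can be modeled as a record whose observation angle field is filled directly from the shooting angle. The field names below are hypothetical; the actual layout of the header 5 and of the 3D control information of FIG. 2 is not shown in this excerpt.

```python
from dataclasses import dataclass, asdict

@dataclass
class ThreeDControlInfo:
    """Hypothetical model of the 3D control information described in
    the text (real field layout per FIG. 2 is not reproduced here)."""
    horizontal_viewpoints: int
    vertical_viewpoints: int
    image_aspect_ratio: float
    observation_angle: float  # degrees; set from the shooting angle

def make_header(h_size, v_size, h_views, v_views, shooting_angle, aspect):
    # The observation angle is simply the recorded shooting angle.
    info = ThreeDControlInfo(h_views, v_views, aspect, shooting_angle)
    return {"horizontal_size": h_size,
            "vertical_size": v_size,
            "3d_control": asdict(info)}

hdr = make_header(640, 240, 2, 1, shooting_angle=30.0, aspect=1.0)
```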
[0101] In step S8, the multiplexing means 21 multiplexes the input encoded data and header. The multiplexing means 21 multiplexes the encoded data input from the compression means 19 and the header input from the header information creating means 20 to create multiplexed data, outputs it as 3D image data, and proceeds to step S9.
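Step S8 (multiplexing means 21) and its inverse on the reproducing side (separating means 23, step S14) can be sketched with a simple length-prefixed container. This layout is an assumption made for illustration only; the excerpt does not specify the actual multiplexed format.

```python
import json
import struct

def multiplex(header: dict, encoded: bytes) -> bytes:
    """Hypothetical container: 4-byte big-endian header length, the
    header serialized as JSON, then the encoded image data."""
    hdr_bytes = json.dumps(header).encode("utf-8")
    return struct.pack(">I", len(hdr_bytes)) + hdr_bytes + encoded

def demultiplex(data: bytes):
    """Inverse operation, as performed by the separating means 23."""
    (hlen,) = struct.unpack(">I", data[:4])
    header = json.loads(data[4:4 + hlen].decode("utf-8"))
    return header, data[4 + hlen:]

blob = multiplex({"observation_angle": 30.0}, b"\xff\xd8jpeg-bytes")
parsed, payload = demultiplex(blob)
```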
[0102] In step S9, the 3D image data created by the multiplexing means 21 is recorded.
[0103] Here, the stereoscopic image recording/reproducing system 1 internally includes data recording/reproducing means (not shown). This data recording/reproducing means can record the 3D image data output by the multiplexing means 21 in the 3D image recording means 3 onto a recording medium such as removable media (for example, a card), a hard disk, an optical disc, or magnetic tape, and can read data from the recording medium. The recording/reproducing means itself is a general one and its configuration is not related to the present invention, so its description is omitted.
[0104] In the above description, the data recording/reproducing means is included within the stereoscopic image recording/reproducing system 1, but it may also be installed externally. For example, the data recording/reproducing means may be a device that can exchange data with the outside, such as an external hard disk for a general personal computer (hereinafter, "PC"), an optical disc recording/reproducing device, or a card reader, or it may be the PC itself. Furthermore, it may be a digital video camera or a digital video recorder. The transmission path may also be regarded as the Internet, with the data recording/reproducing means being a server connected to the Internet. The data recorded in the data recording/reproducing means can be freely read out by the 3D image reproducing means 4.
[0105] In step S9, the 3D image data is recorded by the data recording/reproducing means described above, and the process proceeds to determination step S10. In determination step S10, it is determined whether the recording process of the 3D image recording means 3 is to be ended. If so, the process proceeds to step S11 and the recording process of the 3D image recording means 3 ends; otherwise, the process returns to step S2 and the 3D image recording process continues. [0106] The factors that cause determination step S10 to decide that recording should end are the same as for an ordinary recording device: for example, a user's interruption operation, insufficient capacity of the recording medium, an interruption of power supply such as a dead battery, or a fault or accident such as a broken connection.
[0107] In this way, the 3D image recording means 3 can record 3D image data.
[0108] Next, the operation of the 3D image reproducing means 4 and of each means constituting it will be described with reference to the drawings.
[0109] FIG. 11 is a block diagram showing the configuration of the 3D image reproducing means 4.
[0110] The 3D image reproducing means 4 comprises: separating means 23, which separates the 3D image data into a header and encoded image data and outputs them; decoding means 24, which decodes image data from the input encoded data and outputs it to the display image creating means 26; 3D control information analyzing means 25, which analyzes the 3D control information and outputs it to the control means 27; display image creating means 26, which creates and outputs a display image from the decoded image data; control means 27, which controls the display image creating means 26 and the display means 28; and display means 28, which displays the input 3D image data.
[0111] Here, the display means 28 is, for example, a means that performs stereoscopic display using a parallax barrier as shown in FIG. 18.
[0112] Next, the operation of the 3D image reproducing means 4 will be described using a flowchart.
[0113] FIG. 12 is a flowchart for explaining the operation of the 3D image reproducing means 4.
[0114] In step S12, the 3D image reproducing means 4 starts the reproduction process. At this time, the 3D image reproducing means 4 accesses the data recording/reproducing means described for the 3D image recording means 3, starts reading the 3D image data, and proceeds to determination step S13.
[0115] In determination step S13, it is determined whether 3D image data has been input to the 3D image reproducing means 4. If it has, the process proceeds to step S14; otherwise, it returns to step S13.
[0116] In step S14, the 3D image data is input to the separating means 23, which separates it into encoded data and a header, outputs the encoded data to the decoding means 24 and the header to the 3D control information analyzing means 25, and proceeds to step S15. [0117] In step S15, the encoded data is input to the decoding means 24, which decodes it, outputs the decoded image data to the display image creating means 26, and proceeds to step S16.
[0118] At this time, the decoding means 24 is a means that decodes the input encoded data, for example JPEG or MPEG data, and outputs decoded image data. The decoding means 24 is composed of general-purpose decoding means and is not relevant to the present invention, so its configuration is omitted.
[0119] In step S16, the header is input to the 3D control information analyzing means 25, which analyzes the 3D control information contained in the header, outputs to the control means 27 the 3D control information including the number of horizontal viewpoints, the number of vertical viewpoints, the image aspect ratio information, and the observation angle information, and proceeds to step S17.
[0120] In step S17, the decoded image data is input to the display image creating means 26. At the same time, the number of horizontal viewpoints, the number of vertical viewpoints, and the image aspect ratio information are input from the control means 27. Using this viewpoint number information, the display image creating means 26 converts the decoded image data and creates display image data. Here, the number of horizontal viewpoints is 2, the number of vertical viewpoints is 1, and the image aspect ratio information is 1.
[0121] FIG. 13 is a diagram explaining a method of creating display image data from decoded image data. FIG. 13(a) shows decoded image data in which the number of horizontal viewpoints is 2, the number of vertical viewpoints is 1, and the image aspect ratio information is 1. The left half of the decoded image data is the left-eye image data and the right half is the right-eye image data, arranged horizontally in viewpoint order.
[0122] From the numbers of horizontal and vertical viewpoints, the display image creating means 26 interprets this structure, divides the image data of each viewpoint into vertically long strips, and, starting from the leftmost strip of each viewpoint, rearranges the strips in viewpoint order (left, right, left, right, and so on) as shown in FIG. 13(b) to create the display image data, outputs it to the display means 28, and proceeds to step S18.
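For a two-view parallax barrier, the rearrangement of FIG. 13(b) can be sketched as follows. A strip width of one pixel column is an assumption here; the text only says "vertically long strips".

```python
import numpy as np

def interleave_for_barrier(sbs):
    """Convert a side-by-side decoded image (left half = left eye,
    right half = right eye) into column-interleaved display data for a
    two-view parallax barrier: left-eye strips at even pixel columns,
    right-eye strips at odd columns (one-column strips assumed)."""
    h, w, c = sbs.shape
    assert w % 2 == 0
    left, right = sbs[:, : w // 2], sbs[:, w // 2 :]
    out = np.empty_like(sbs)
    out[:, 0::2] = left    # left-eye strips at even columns
    out[:, 1::2] = right   # right-eye strips at odd columns
    return out

# Hypothetical tiny frame: black left-eye half, white right-eye half.
left  = np.zeros((4, 4, 3), dtype=np.uint8)
right = np.full((4, 4, 3), 255, dtype=np.uint8)
sbs = np.concatenate([left, right], axis=1)
display = interleave_for_barrier(sbs)
```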
[0123] In step S18, the display image data from the display image creating means 26 and the observation angle information from the control means 27 are input to the display means 28. The display means 28 is composed of a display and a parallax barrier and, as described with reference to FIG. 18(a), displays the display image data stereoscopically, and the process proceeds to determination step S19.
[0124] At this time, the display means 28 may display the input observation angle information as a numerical value on its display. This allows the observer to easily know the appropriate observation angle for viewing the center of the display of the display means 28 and, as a result, to perform stereoscopic viewing from the correct direction, free of the distortion that arises when observing from an angle different from the assumed one.
[0125] In the above, displaying the observation angle information as a numerical value on the display surface was described; alternatively, for example, a horizontal slit or a lenticular sheet may be provided on the front of the display means 28 so that a specific image pattern can be seen only when the observer views from the appropriate observation angle. This allows the observer to know the appropriate observation position more easily.
[0126] Furthermore, a plurality of directional backlights, or a backlight whose angle can be switched, may be prepared, and backlight switching means for switching between them may be installed in the 3D image reproducing means 4, with the control means 27 controlling this backlight switching means so that light is emitted only in the direction indicated by the observation angle information. In this way, the user can know the appropriate observation position more easily.
[0127] In determination step S19, it is determined whether or not to end the reproduction processing of the 3D image reproducing means 4. If it is determined that reproduction is to end, the process proceeds to step S20, where the 3D image reproducing means 4 ends the reproduction processing; otherwise, the process returns to step S13 and reproduction of the 3D image continues. The factors that cause determination step S19 to decide to end reproduction are the same as for an ordinary reproducing device: for example, an interruption operation by the user, a loss of power supply such as battery exhaustion, a failure such as a broken wire, or an accident such as corrupted input data. In step S20, the 3D image reproducing means 4 ends the reproduction processing.
[0128] In this way, the 3D image reproducing means 4 can reproduce 3D image data and display it stereoscopically.
[0129] Further, the 3D image reproducing means 4 described above may be provided with a display, a stand that supports it, and movable means for changing the angle of the display surface. The movable means may be constituted by a motor or the like so that the control means 27 can automatically change the angle of the display surface in accordance with the observation angle information.
[0130] For example, control may be performed so that the display surface is vertical when the observation angle information is 0 degrees, so that the display surface is horizontal when the observation angle information is 90 degrees, and, when the observation angle information is A (0 ≤ A ≤ 90) degrees, so that the top of the display surface is tilted (90 − A) degrees from the vertical state. By automatically changing the angle of the display surface in accordance with the observation angle information in this way, the user can observe from the appropriate observation angle without any operation, and a very simple form of stereoscopic display becomes possible.
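The tilt control of [0130] reduces to a one-line relation between the observation angle information A and the lean of the display surface. The sketch below is only an illustration of that relation, not part of the patent disclosure; the function name and the use of degrees as units are assumptions.

```python
def display_tilt_from_vertical(observation_angle_deg: float) -> float:
    """Angle, in degrees, by which to lean the top of the display
    surface away from the vertical state for observation angle
    information A: per the text, the tilt is (90 - A) degrees."""
    a = observation_angle_deg
    if not 0.0 <= a <= 90.0:
        raise ValueError("observation angle information must be 0..90 degrees")
    return 90.0 - a
```

A controller driving the motorized movable means could pass this value directly as the motor's target position.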
[0131] Furthermore, instead of tilting the display surface (90 − A) degrees from the vertical state as described above, the display surface may be made horizontal and the observation angle information displayed in every case where the observation angle information is not 90 degrees. In addition, the movable means described above may be given a mechanism that moves the display up and down as well as rotating it, so that the display surface is automatically lowered when it is made horizontal. In this way, the user can observe from the appropriate observation angle and position.
[0132] Thus, according to the stereoscopic image recording/reproducing system of the present invention, the observation angle information described above is recorded, transmitted, and reproduced in the header area of the image data, so that managing and handling the data becomes very simple.
[0133] In the above, the reference line used in defining the shooting angle was a line parallel to the horizontal; however, this need not be the case. For example, the user may externally add an offset angle η to the shooting angle information output by the shooting angle measuring means 11 inside the 3D image input means 2.
[0134] The absolute value of the offset angle η indicates the angle between the horizontal line and the new reference line obtained by adding the offset angle. When η is negative, the near part of the new reference line, as seen from the camera, comes closer to the camera; conversely, when η is positive, it moves farther away. Here, the value of η is restricted so that the angle between the new reference line and the line-of-sight direction of the camera remains between 0 and 90 degrees.
[0135] FIG. 14 and FIG. 15 are diagrams for explaining the new reference line and the shooting angle when an offset angle is added to the shooting angle. FIG. 14(a) shows the change in the shooting angle when η is negative, and FIG. 14(b) shows the change in the shooting angle when η is positive.
[0136] When the angle formed between the line-of-sight direction 13 of the camera 12 and the horizontal line is the initial shooting angle α1, the new shooting angle α′1 obtained by adding the offset angle η is α1 + η. To keep the shooting angle α′1 within 0 to 90 degrees, the value of η may be restricted so that

−α1 ≤ η ≤ (90 − α1).
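The restriction on η in [0136] can be sketched as a simple clamp. This is an illustrative reading of the constraint, not code from the patent, and all names are assumptions.

```python
def clamp_offset_angle(alpha1_deg: float, eta_deg: float) -> float:
    """Limit the user-supplied offset angle eta so that the new shooting
    angle alpha'1 = alpha1 + eta stays within 0..90 degrees, i.e.
    -alpha1 <= eta <= (90 - alpha1)."""
    return max(-alpha1_deg, min(90.0 - alpha1_deg, eta_deg))


def new_shooting_angle(alpha1_deg: float, eta_deg: float) -> float:
    """Shooting angle alpha'1 after applying the clamped offset."""
    return alpha1_deg + clamp_offset_angle(alpha1_deg, eta_deg)
```

With an initial shooting angle of 30 degrees, offsets are thus confined to the range −30 to +60 degrees.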
[0137] To simplify the explanation of the shooting angle, FIG. 15 shows only the camera 12, the camera line-of-sight direction 13, the new reference line 29, and the shooting angle α′1, omitting the horizontal line 14 of FIG. 14.
[0138] FIG. 15(a) corresponds to FIG. 14(a), and FIG. 15(b) corresponds to FIG. 14(b). These differ from the situation explained with FIG. 8(a) only in that the shooting angle α1 has become α′1, and are otherwise the same. Accordingly, the operations of the 3D image recording means 3 and the 3D image reproducing means 4 are unchanged from those described above, and their description is omitted.
[0139] In the above, the case where the offset angle η is input at the 3D image input means 2 was described; however, the user may instead be allowed to input it externally at the 3D image recording means 3. In that case, as with the 3D image input means 2, the shooting angle information is updated by the control means 22 of the 3D image recording means 3, the observation angle is obtained from the updated shooting angle information, and the 3D image data is recorded.
[0140] By adding an offset angle to the shooting angle when shooting or recording 3D image data in this way, the user can freely set the observation angle, which is otherwise uniquely determined by the shooting angle at the time of shooting, and the degree of freedom in shooting also increases.
[0141] For example, when only a very small shooting angle is possible due to factors such as the external environment, the observation angle at display time will also be small unless an offset angle is added. If the observation angle becomes extremely small, the image is very hard for the user to observe, and on a display with a small viewing angle it may not even be displayable. By adding an offset angle and setting the observation angle freely as described above, the creation of 3D image data that is hard or impossible to observe on some displays can be prevented. [0142] The offset angle η at this time may be recorded as offset angle information within the 3D image data by the 3D image recording means 3, and the 3D image reproducing means 4 may use this offset angle information at reproduction time to reproduce the 3D image data and display it stereoscopically.
[0143] FIG. 16 is a diagram showing the structure of 3D image data in which this offset angle information is recorded. As shown in FIG. 16, the offset angle information may be recorded in the 3D control information in the header 5, in the same way as the number of viewpoints and the observation angle information.
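As a rough sketch of how the 3D control information of [0143] might be carried in the header 5, the dataclass below round-trips the three values named in the text (number of viewpoints, observation angle information, offset angle information). The field names and the JSON serialization are illustrative assumptions only; FIG. 16 defines the actual layout.

```python
from dataclasses import dataclass, asdict
import json


@dataclass
class ThreeDControlInfo:
    viewpoint_count: int          # number of viewpoints
    observation_angle_deg: float  # observation angle information
    offset_angle_deg: float       # offset angle information (eta)


def write_3d_control_info(info: ThreeDControlInfo) -> bytes:
    """Serialize the 3D control information for the header area."""
    return json.dumps(asdict(info)).encode("utf-8")


def read_3d_control_info(raw: bytes) -> ThreeDControlInfo:
    """Recover the 3D control information from the header area."""
    return ThreeDControlInfo(**json.loads(raw.decode("utf-8")))
```

Recording the offset angle alongside the observation angle in this way is what lets the reproducing side later decide how far to tilt the display surface.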
[0144] When the offset angle η is recorded in the 3D image data as described above, the operation of the 3D image reproducing means 4 in reproducing this 3D image data differs slightly from that described earlier. The operation of the 3D image reproducing means 4 when reproducing 3D image data whose 3D control information contains offset angle information is described below.
[0145] The internal configuration of the 3D image reproducing means 4 in this case is unchanged from FIG. 11, and all means other than the 3D control information analyzing means 25, the control means 27, and the display means 28 operate as described for FIG. 11. The separating means 23, the decoding means 24, and the display screen creating means 26 are therefore not described again, and only the operations of the 3D control information analyzing means 25, the control means 27, and the display means 28 are explained.
[0146] In addition to the operation described for FIG. 11, the 3D control information analyzing means 25 also analyzes the offset angle information from the 3D control information and outputs it to the control means 27.
[0147] Likewise, in addition to the operation described for FIG. 11, the control means 27 may display on the display means 28 a message instructing that the display surface be tilted by the offset angle so that the horizontal plane in the reproduced stereoscopic image becomes parallel to the actual horizontal plane; alternatively, as described above, movable means for tilting the display surface may be provided in the 3D image reproducing means 4 so that the display surface is automatically tilted by the offset angle.
[0148] FIG. 17 is a diagram showing the state of observation when the display surface is tilted in this case. For example, suppose the 3D image data reproduced by the 3D image reproducing means 4 was created by shooting a cube 109 placed on a reference plane 108, as in FIG. 20, with an offset angle η added. In FIG. 17, the observer tilts the display surface by the offset angle η from the line 30 parallel to the actual horizontal line, and observes from a position at which the angle between the observer's line-of-sight direction 31 and the display surface is the angle α′1.

[0149] When 3D image data created with an offset angle by the 3D image recording means 3 is reproduced by the 3D image reproducing means 4 and displayed stereoscopically with the display surface horizontal, the horizontal plane in the displayed stereoscopic image appears tilted from the actual horizontal plane by the offset angle. However, by tilting the display surface by the offset angle in the direction that cancels this tilt and observing, as described above, the observer sees the reference plane 108 parallel to the line 30 parallel to the horizontal line, and the cube 109 on the reference plane 108 in the same arrangement as in reality. At this time, the observer may be notified to tilt the display itself by the offset angle, or the display may be tilted automatically, so that the horizontal plane in the stereoscopic image coincides with the actual horizontal plane.
[0150] In this way, by referring to the offset angle information in the 3D image data and displaying with the display surface of the 3D image reproducing means 4 tilted, the stereoscopic image is displayed in the same arrangement as the reality at the time of shooting. That is, since the reference plane in the stereoscopic image displayed by the 3D image reproducing means 4 is parallel to the actual horizontal line, the observer can observe a highly realistic image.
[0151] In the above description, for simplicity, the case where 3D image data is created from images having two parallaxes was described; however, the method of obtaining the observation angle information is unchanged for images having three or more parallaxes, so the invention can easily be applied to them as well.
[0152] As described above, when the number of viewpoints in the vertical direction increases, the shooting angle of the camera differs between them, so observation angle information may be obtained and recorded separately for each vertical viewpoint. For example, one piece of observation angle information may be recorded for viewpoints 1 to 4 in FIG. 5, and another piece of observation angle information for viewpoints 5 to 8.
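The per-row recording of [0152] amounts to a lookup table from viewpoint number to the observation angle of its vertical row. The sketch below assumes the eight-viewpoint arrangement of FIG. 5; the angle values 30 and 60 degrees are purely illustrative and not taken from the patent.

```python
def build_viewpoint_angle_table(rows):
    """Map each viewpoint number to the observation angle information
    recorded for its vertical row; `rows` is a list of
    (viewpoint_range, angle_deg) pairs."""
    table = {}
    for viewpoints, angle_deg in rows:
        for v in viewpoints:
            table[v] = angle_deg
    return table


# Viewpoints 1-4 share one observation angle, viewpoints 5-8 another.
table = build_viewpoint_angle_table([(range(1, 5), 30.0), (range(5, 9), 60.0)])
```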
[0153] The stereoscopic image recording/reproducing system of the present invention is not limited to the embodiments described above, and various modifications may of course be made without departing from the gist of the present invention.
Industrial Applicability
[0154] As described above, according to the present invention, shooting is performed from an oblique direction with respect to the reference plane; the angle formed between the reference plane at the time of shooting and the line-of-sight direction of the camera is recorded as the observation angle in the header of the shot image data; and the shot image data is converted into image data corrected to remove the perspective in the depth direction, thereby creating image data for stereoscopic viewing from an oblique direction. When the created image data is reproduced, the observation angle in the header is presented to the observer, enabling the observer to perform stereoscopic viewing from the correct direction.

Claims

[1] A stereoscopic image recording/reproducing system that generates, records, and reproduces stereoscopic image data from a plurality of image data corresponding to a plurality of viewpoints, the system comprising:

3D image input means for outputting, together with image data and control information, shooting angle information on the angle formed between the line-of-sight direction at the time of shooting by an imaging means and a reference plane on which a subject is placed;

3D image recording means comprising control means for calculating, from the shooting angle information, an observation angle with respect to a display means for stereoscopically viewing a stereoscopic image, the 3D image recording means recording the calculated observation angle information, as 3D image control information, in the stereoscopic image data together with the control information; and

3D image reproducing means for reproducing the stereoscopic image data, analyzing the 3D image control information, and outputting the observation angle information.
[2] The stereoscopic image recording/reproducing system according to claim 1, wherein the 3D image input means comprises an imaging means, and further comprises shooting angle measuring means for measuring the inclination of the line-of-sight direction of the imaging means, generating positional information between the imaging means and the reference plane from the inclination, and calculating a shooting angle in accordance with the positional information.
[3] The stereoscopic image recording/reproducing system according to claim 1 or 2, wherein an arbitrary value input from outside is added as an offset to the shooting angle, and the newly calculated value is used as the shooting angle information.
[4] The stereoscopic image recording/reproducing system according to claim 1 or 2, wherein the 3D image recording means records an arbitrary value input from outside, as offset angle information, in the 3D image control information, and the 3D image reproducing means analyzes the offset angle information from the 3D image control information and outputs it to the display means.
[5] The stereoscopic image recording/reproducing system according to any one of claims 1 to 4, wherein the 3D image recording means generates the observation angle information by substituting the value of the shooting angle into the observation angle.
[6] The stereoscopic image recording/reproducing system according to any one of claims 1 to 5, wherein the 3D image reproducing means analyzes the observation angle information contained in the 3D image control information and outputs the observation angle information to the display means.
[7] The stereoscopic image recording/reproducing system according to any one of claims 1 to 5, wherein the 3D image reproducing means analyzes the observation angle information contained in the 3D image control information and comprises movable means capable of tilting the display means in accordance with the value of the observation angle information.
PCT/JP2006/317531 2005-09-07 2006-09-05 3d image recording/reproducing system WO2007029686A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2007534423A JP4619412B2 (en) 2005-09-07 2006-09-05 Stereoscopic image recording / playback system

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2005259682 2005-09-07
JP2005-259682 2005-09-07

Publications (1)

Publication Number Publication Date
WO2007029686A1 true WO2007029686A1 (en) 2007-03-15

Family

ID=37835806

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2006/317531 WO2007029686A1 (en) 2005-09-07 2006-09-05 3d image recording/reproducing system

Country Status (2)

Country Link
JP (1) JP4619412B2 (en)
WO (1) WO2007029686A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH07298302A (en) * 1994-04-22 1995-11-10 Canon Inc Compound eye image pickup and display system
JPH08339043A (en) * 1995-06-12 1996-12-24 Minolta Co Ltd Video display device
JPH10234057A (en) * 1997-02-17 1998-09-02 Canon Inc Stereoscopic video device and computer system including the same

Family Cites Families (3)

Publication number Priority date Publication date Assignee Title
JP3667620B2 (en) * 2000-10-16 2005-07-06 株式会社アイ・オー・データ機器 Stereo image capturing adapter, stereo image capturing camera, and stereo image processing apparatus
JP4397217B2 (en) * 2002-11-12 2010-01-13 株式会社バンダイナムコゲームス Image generation system, image generation method, program, and information storage medium
JP3579683B2 (en) * 2002-11-12 2004-10-20 株式会社ナムコ Method for producing stereoscopic printed matter, stereoscopic printed matter


Cited By (13)

Publication number Priority date Publication date Assignee Title
JP2008310696A (en) * 2007-06-15 2008-12-25 Fujifilm Corp Imaging device, stereoscopic image reproducing device, and stereoscopic image reproducing program
JP2011530706A (en) * 2008-08-12 2011-12-22 アイイーイー インターナショナル エレクトロニクス アンド エンジニアリング エス.エイ. 3D-TOF camera device and position / orientation calibration method therefor
JP2020520032A (en) * 2016-04-08 2020-07-02 マックス メディア グループ, エルエルシーMaxx Media Group,Llc System, method, and software for creating a virtual three-dimensional image that is projected in front of or on an electronic display
KR102402381B1 (en) * 2016-12-19 2022-05-27 소니그룹주식회사 Information processing device, information processing method, and program
CN110073660A (en) * 2016-12-19 2019-07-30 索尼公司 Information processing equipment, information processing method and program
KR20190096976A (en) * 2016-12-19 2019-08-20 소니 주식회사 Information processing apparatus, information processing method, and program
JPWO2018116580A1 (en) * 2016-12-19 2019-11-07 ソニー株式会社 Information processing apparatus, information processing method, and program
US11106050B2 (en) 2016-12-19 2021-08-31 Sony Corporation Information processing apparatus, and information processing method
WO2018116580A1 (en) * 2016-12-19 2018-06-28 ソニー株式会社 Information processing device, information processing method, and program
JP7099326B2 (en) 2016-12-19 2022-07-12 ソニーグループ株式会社 Information processing equipment, information processing methods, and programs
US11924402B2 (en) 2016-12-19 2024-03-05 Sony Group Corporation Information processing apparatus and information processing method
CN110059670A (en) * 2019-04-29 2019-07-26 杭州雅智医疗技术有限公司 Human body Head And Face, limb activity angle and body appearance non-contact measurement method and equipment
CN110059670B (en) * 2019-04-29 2024-03-26 杭州雅智医疗技术有限公司 Non-contact measuring method and equipment for head and face, limb movement angle and body posture of human body

Also Published As

Publication number Publication date
JP4619412B2 (en) 2011-01-26
JPWO2007029686A1 (en) 2009-03-19

Similar Documents

Publication Publication Date Title
US7349006B2 (en) Image processing apparatus and method, recording medium, and program
US8218855B2 (en) Method and apparatus for receiving multiview camera parameters for stereoscopic image, and method and apparatus for transmitting multiview camera parameters for stereoscopic image
US9544498B2 (en) Method for forming images
JP4476905B2 (en) Structure of stereoscopic display image data, recording method of stereoscopic display image data, display reproduction method, recording program, and display reproduction program
US20090284584A1 (en) Image processing device
US20100039499A1 (en) 3-dimensional image creating apparatus, 3-dimensional image reproducing apparatus, 3-dimensional image processing apparatus, 3-dimensional image processing program and recording medium recorded with the program
JP2002077943A (en) Image handling system
US20090244258A1 (en) Stereoscopic display apparatus, stereoscopic display method, and program
JP2010078768A (en) Stereoscopic image capturing apparatus and stereoscopic image capturing system
JP5420075B2 (en) Stereoscopic image reproduction apparatus, parallax adjustment method thereof, parallax adjustment program, and imaging apparatus
US20110242273A1 (en) Image processing apparatus, multi-eye digital camera, and program
JP4619412B2 (en) Stereoscopic image recording / playback system
US20110193937A1 (en) Image processing apparatus and method, and image producing apparatus, method and program
JP4975256B2 (en) 3D image presentation device
EP2566166B1 (en) Three-dimensional imaging device
JP2005130310A (en) Stereoscopic vision image processing device
JP4657066B2 (en) 3D display device
US20130272677A1 (en) Image file generation device, image file reproduction device, and image file generation method
JPH0715748A (en) Picture recording and reproducing device
JP2005130312A (en) Stereoscopic vision image processing device, computer program, and parallax correction method
JP2011119825A (en) Video processor and video processing method
JP2001052192A (en) Photographing display system, three-dimensional image display method and storage medium
JP2006128899A (en) Imaging apparatus
WO2012117460A1 (en) 3-d video player device
JP2002300608A (en) Stereoscopic image device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application
DPE1 Request for preliminary examination filed after expiration of 19th month from priority date (pct application filed from 20040101)
ENP Entry into the national phase

Ref document number: 2007534423

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 06797437

Country of ref document: EP

Kind code of ref document: A1