WO2012096065A1 - Parallax image display device and parallax image display method - Google Patents

Parallax image display device and parallax image display method

Info

Publication number
WO2012096065A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
parallax
value
region
color
Application number
PCT/JP2011/077716
Other languages
French (fr)
Japanese (ja)
Inventor
矢作 宏一
敏 中村
雅子 末廣
三沢 岳志
友和 中村
Original Assignee
富士フイルム株式会社
Application filed by 富士フイルム株式会社 (FUJIFILM Corporation)
Publication of WO2012096065A1 publication Critical patent/WO2012096065A1/en

Classifications

    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00Stereoscopic photography
    • G03B35/18Stereoscopic photography by simultaneous viewing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/10Processing, recording or transmission of stereoscopic or multi-view image signals
    • H04N13/106Processing image signals

Definitions

  • The present invention relates to a parallax image display device and a parallax image display method, and in particular to a parallax image display device and parallax image display method with which the stereoscopic effect of a stereoscopic image can be confirmed on a two-dimensional display monitor before the stereoscopic image is generated.
  • A parallax value is the parallax between two or more images that are the basis of a stereoscopic image and are captured from two or more different viewpoints.
  • A method of judging the stereoscopic effect of the generated stereoscopic image from the parallax value has been considered.
  • However, the parallax value indicating the degree of parallax is merely a numerical value, and it is difficult to grasp the actual stereoscopic effect intuitively from that number.
  • A technique is also known in which a parallax map associating the parallax value with each pixel position of the first of the two or more images is generated, and a parallax image expressing the magnitude of the parallax values on the parallax map as a luminance difference or a color transition is created so that the stereoscopic effect of the stereoscopic image can be confirmed.
  • However, as shown on the right of FIG. 1, a parallax image in which the parallax value is expressed by a luminance difference or the like may not show the characteristics of the original image (left of FIG. 1) at all.
  • The subject existing in the foreground of the left original image in FIG. 1 has the largest parallax value, and the subject with the largest parallax value is displayed in a dark color close to black.
  • The farther a subject is, the smaller its disparity value, and in the right disparity image in FIG. 1 a subject is displayed with higher luminance as its disparity value becomes smaller.
  • A stereoscopic image forming system, stereoscopic image forming method, program, and storage medium are disclosed in which the right image and the left image are segmented into a plurality of striped image pieces, these striped image pieces are alternately arranged to create a so-called lenticular image, and the stereoscopic effect is confirmed with the created lenticular image.
  • Japanese Patent Laid-Open No. 2008-103820 discloses a stereoscopic image processing apparatus that generates a lenticular image for a moving image using parallax data between a right image and a left image and confirms the stereoscopic effect using the generated lenticular image.
  • However, the stereoscopic image processing apparatus disclosed in Japanese Patent Application Laid-Open No. 2008-103820 also has the problem that, in order to confirm the stereoscopic effect of a moving image converted into a lenticular image, the moving image must be reproduced on a dedicated stereoscopic image reproduction apparatus (3D monitor) compatible with lenticular images.
  • The present invention has been made to solve the above problem, and it is an object of the invention to provide a parallax image display device, a parallax image display method, and a parallax image display program with which the stereoscopic effect of a stereoscopic image can be confirmed in advance on a 2D monitor before the stereoscopic image is created.
  • A parallax image display device according to the invention includes: an acquisition unit that acquires a plurality of images taken from two or more different viewpoints; a parallax map generation unit that generates a parallax map in which a parallax value, represented by the difference in position of each corresponding region between a first image included in the acquired plurality of images and a second image different from the first image, is associated with the position of that region in the first image; a color determination unit that determines, for each region, the color of the corresponding region according to its parallax value; a parallax image creation unit that creates a parallax color image in which the color of each region of the first image has been changed to the color determined by the color determination unit; and an image synthesis unit that synthesizes the parallax color image and the first image.
  • With this configuration, a parallax color image is created in which the parallax values indicating the stereoscopic effect of the stereoscopic image generated from a plurality of images taken from two or more different viewpoints can be recognized by color.
  • Because the image synthesis unit synthesizes the parallax color image and the first image, pixels of the first image are mixed into the parallax color image by the synthesis, so that an image is generated in which the parallax value can be recognized by the color or luminance difference and, in addition, an outline of the subject can be grasped.
  • The image generated by the image synthesis unit does not require a special display device such as a display device having a lenticular lens, and is a bitmap image that can be displayed on a normal 2D monitor.
  • Therefore, the stereoscopic effect of the stereoscopic image created from the first image and the second image can be confirmed in advance on the 2D monitor before the stereoscopic image is created.
  • In the parallax image display method according to the invention, a plurality of images taken from two or more different viewpoints are acquired; a parallax map is generated in which a parallax value, represented by the position difference of each corresponding region between a first image included in the acquired plurality of images and a second image different from the first image, is associated with the position of that region in the first image; the color of the corresponding region is determined according to the parallax value for each region; a parallax color image in which the color of each region of the first image is changed to the determined color is created; and the parallax color image and the first image are synthesized.
  • The parallax image display program according to the invention causes a computer to function as: an acquisition unit that acquires a plurality of images taken from two or more different viewpoints; a parallax map generation unit that generates a parallax map in which a parallax value, represented by the position difference of each corresponding region between a first image included in the acquired plurality of images and a second image different from the first image, is associated with the position of that region in the first image; a color determination unit that determines the color of the corresponding region according to the parallax value for each region; a parallax image creation unit that creates a parallax color image in which the color of each region of the first image is changed to the color determined by the color determination unit; and an image synthesis unit that synthesizes the parallax color image and the first image.
  • Here, a region may be not only a single pixel but also a region in which several pixels are gathered in a square shape, such as 2 × 2 or 3 × 3.
  • the parallax values between the first image and the second image are associated with each region of the first image, and the degree of the associated parallax value is expressed in color.
  • The drawings include a block diagram of a parallax image display device according to the first embodiment of the present invention, a flowchart showing the processing of the parallax image display device according to the first embodiment, and a figure showing an example of two or more images taken from two or more different viewpoints.
  • FIG. 2 is a block diagram of the parallax image display device according to the first embodiment of the present invention.
  • The parallax image display device 200 includes: an image input unit 201 to which two or more images taken from two or more different viewpoints are input; an image processing unit 202 that, based on the input images, creates an image for confirming the stereoscopic effect of the stereoscopic image created from the first image (the left image in the first to seventh embodiments) and the second image taken from a viewpoint different from that of the left image (the right image in the first to seventh embodiments); an operation unit 203 for operating the parallax image display device 200; a display unit 204 that displays the processing result of the image processing unit 202; and a storage unit 205 that stores the input images and the like.
  • the image input unit 201 receives two or more images taken from two or more different viewpoints taken by a camera or the like that takes a stereoscopic image.
  • The image input unit 201 may be directly connected to a camera that captures stereoscopic images, or it may be a data reading device that reads and inputs images captured by the camera from an information storage medium (magnetic disk, optical disk, magneto-optical disk, memory card, IC card, etc.) on which those images are stored.
  • the operation unit 203 is a device for a user or the like to input an instruction to the parallax image display device 200, and input devices such as a touch panel, a keyboard, a mouse, and a pen tablet are used.
  • the display device 204 is a device such as an LCD that displays a processing process or a processing result of the image processing unit 202, and is a 2D monitor in the present embodiment.
  • the storage unit 205 is a storage device such as a RAM (Random Access Memory), a HDD (Hard Disk Drive), or a flash memory.
  • The image processing unit 202 includes: a parallax map creation unit 2021 that calculates, for each corresponding pixel between the left image included in the two or more input images and the right image taken from a viewpoint different from that of the left image, a parallax value represented by the difference in position, and creates a parallax map in which the parallax value is associated with the pixel position of the left image;
  • a parallax image creation unit 2022 that creates a parallax image expressing the magnitude of the parallax values of the created parallax map as luminance;
  • and an image synthesis unit 2023 that synthesizes the created parallax image with the left image.
  • The image processing unit 202 may be a computer having a CPU and a memory. In this case, the image processing unit 202 operates according to a program, stored in the storage unit 205, for causing the computer to function as an image processing apparatus.
  • the image obtained by combining the parallax image and the left image by the image combining unit 2023 is output to the display device 204 as the processing result of the image processing unit 202.
  • the input image, the parallax map, the parallax image, the image synthesized by the image synthesis unit 2023, and the like are stored in the storage unit 205.
  • FIG. 3 is a flowchart showing processing of the parallax image display device according to the first embodiment of the present invention.
  • First, it is determined in step 301 whether or not two or more images taken from two or more different viewpoints have been input to the image input unit.
  • FIG. 4 shows an example of two or more images captured from two or more different viewpoints and input to the image input unit: a right image (right of FIG. 4) captured from the right viewpoint and a left image (left of FIG. 4) captured from the left viewpoint.
  • A parallax map is created from the two or more images photographed from two or more different viewpoints shown in FIG. 4. In the left image and the right image in FIG. 4, it is assumed that the parallax occurs in the horizontal direction of the image and that no shift occurs in the vertical direction.
  • step 302 the parallax map creation unit 2021 shown in FIG. 2 performs stereo matching on the left image and the right image to generate a parallax map.
  • Stereo matching is a method that uses a set of two images taken from different viewpoints, specifies which region of one image corresponds to which region of the other, and estimates the three-dimensional position of each region.
  • Corresponding pixels (x2, y2) on the right image are extracted for the pixels (x1, y1) on the left image, and the map in which the resulting parallax d is stored at the pixel position of the left image is taken as the parallax map.
  • In the present embodiment, the parallax value is calculated for each pixel of the image, but the parallax value may instead be calculated for a region in which several pixels are gathered in a square shape, such as 2 × 2 or 3 × 3; when the parallax value is calculated for such a region, the processing speed can be improved.
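  • The following is a minimal sketch, in Python with NumPy, of such a stereo-matching step under the assumption of purely horizontal parallax; the function name, block size, and search range are illustrative and are not taken from the patent.

```python
import numpy as np

def block_matching_disparity(left, right, block=3, max_disp=64):
    # Hypothetical SAD block matching: for each small block of the left
    # image, search horizontally in the right image and keep the shift
    # (disparity) with the lowest sum of absolute differences.
    h, w = left.shape
    half = block // 2
    disp = np.zeros((h, w), dtype=np.float32)
    for y in range(half, h - half):
        for x in range(half, w - half):
            patch = left[y - half:y + half + 1, x - half:x + half + 1].astype(np.int32)
            best_cost, best_d = None, 0
            for d in range(0, min(max_disp, x - half) + 1):
                cand = right[y - half:y + half + 1,
                             x - d - half:x - d + half + 1].astype(np.int32)
                cost = np.abs(patch - cand).sum()
                if best_cost is None or cost < best_cost:
                    best_cost, best_d = cost, d
            disp[y, x] = best_d        # stored at the left-image pixel position
    return disp                        # this array plays the role of the parallax map
```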
  • This parallax map shows the parallax value for each pixel, but since it is merely numerical data, even when a user or the like reads it, it is not possible to recognize what kind of stereoscopic effect the parallax values will produce in the actual stereoscopic image.
  • Therefore, in step 303, so that the user or the like can recognize the stereoscopic effect of the stereoscopic image, the parallax image creation unit 2022 shown in FIG. 2 converts the parallax value of each pixel of the parallax map into a luminance value and creates a parallax image, that is, an image that represents the parallax value corresponding to each pixel as luminance.
  • In step 304, it is determined whether or not the conversion from parallax value to luminance value has been performed for all pixels. If all pixels have been converted, the parallax image file created by the conversion is stored in the storage unit 205 shown in FIG. 2 in step 305.
  • FIG. 5 is a diagram showing an example of a parallax image according to the first embodiment of the present invention.
  • In FIG. 5, a subject with a large parallax value is represented with low luminance, and a subject with a small parallax value is represented with high luminance.
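  • As a rough illustration, the luminance conversion of step 303 can be sketched as below; the linear inverse scaling is an assumption for illustration, not the patent's exact mapping.

```python
import numpy as np

def parallax_to_luminance(disparity):
    # Map the parallax map to an 8-bit luminance image: a large parallax
    # value (near subject) becomes dark, a small value becomes bright,
    # matching the appearance described for FIG. 5.
    d = disparity.astype(np.float32)
    span = max(float(d.max() - d.min()), 1e-6)
    t = (d - d.min()) / span
    return (255.0 * (1.0 - t)).astype(np.uint8)
```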
  • In step 306, the image synthesis unit 2023 shown in FIG. 2 synthesizes the left image of FIG. 4 and the parallax image pixel by pixel.
  • The image synthesis unit 2023 applies a weighting of w1 : w2, for example 1 : 9, to the pixel values of the left image and the parallax image and synthesizes the two images.
  • In step 306, the left image and the parallax image are synthesized by making the weight w2 for the parallax image heavier than the weight w1 for the left image and calculating the weighted average according to the following equation (1):
  • (a × w1 + b × w2) / (w1 + w2) … (1)
  • Here a is the pixel value of the left image, and b is the luminance value of the pixel of the parallax image corresponding to the pixel of pixel value a.
  • This weighted average value is obtained for all pixels of the parallax image and the left image.
  • In practice, each pixel of the left image has RGB values aR, aG, and aB, so the weighted average is taken for each of the RGB values of each pixel as in the following equation (2):
  • (aR × w1 + b × w2) / (w1 + w2), (aG × w1 + b × w2) / (w1 + w2), (aB × w1 + b × w2) / (w1 + w2) … (2)
  • This weighting is performed on the RGB values of each pixel of the left image; however, when the parallax value of the parallax image is calculated for a region in which several pixels are gathered in a square shape, such as 2 × 2 or 3 × 3, the RGB values are weighted for the region of the left image corresponding to that region of the parallax image.
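  • A compact sketch of this per-pixel weighted average (equations (1) and (2)) is shown below; the array shapes and the helper name are illustrative assumptions.

```python
import numpy as np

def blend_left_and_parallax(left_rgb, parallax_luma, w1=1.0, w2=9.0):
    # Equation (2) applied to every pixel: each RGB component aR, aG, aB
    # of the left image is averaged with the parallax-image luminance b
    # using the weights w1 and w2 (1 : 9 in the embodiment).
    a = left_rgb.astype(np.float32)                    # (H, W, 3)
    b = parallax_luma.astype(np.float32)[..., None]    # (H, W, 1), broadcast over RGB
    mixed = (a * w1 + b * w2) / (w1 + w2)
    return np.clip(mixed, 0, 255).astype(np.uint8)
```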
  • The image synthesis unit 2023 then determines whether or not the synthesis processing has been completed for all pixels. If it has, the image obtained by synthesizing the left image and the parallax image is displayed on the display device 204 shown in FIG. 2 in step 308, and the processing of steps 301 to 308 is completed.
  • FIG. 6 shows an example of an image obtained by synthesizing the left image and the parallax image in the first embodiment of the present invention.
  • Because the pixels of the left image are mixed in by the synthesis, the color difference component of the left image is also displayed faintly, so an outline of a subject such as a tree in the distance can be grasped from the image of FIG. 6.
  • the image obtained by combining the left image and the parallax image is a bitmap image that can be displayed on a normal 2D monitor, without requiring a special display device such as a display device having a lenticular lens.
  • the user or the like grasps the stereoscopic effect of the finally obtained stereoscopic image by visually recognizing an image obtained by combining the left image and the parallax image displayed on the screen of the 2D monitor.
  • Thus, in the first embodiment of the present invention, by creating an image in which the parallax image obtained by converting the parallax value of each pixel of the left image into luminance is synthesized with the left image, the stereoscopic effect of the stereoscopic image generated from the captured images can be grasped even on a normal 2D monitor.
  • the weights in the synthesis of the left image and the parallax image are changed according to the parallax value in the first modification and the second modification of the first embodiment described below.
  • In the first modification, for pixels related to a subject near the cross point where the optical axes of the left and right lenses of the stereoscopic camera intersect, the weight w1 of the left image is made heavier than the weight w2 of the parallax image.
  • For pixels related to a subject in front of or behind the cross point, the weight w2 of the parallax image is increased and the weight w1 of the left image is decreased as the distance from the cross point increases.
  • The absolute value of the parallax value is smallest, "0", for pixels of the subject at the cross point, and it increases with the distance from the cross point for pixels of subjects farther from the cross point.
  • Accordingly, the smaller the absolute value of the parallax value, the larger the weight w1 applied to the value obtained from the RGB values of the pixels included in the left image region corresponding to the parallax image region.
  • The ratio of the weight w1 of the left image to the weight w2 of the parallax image is given by the following formula (3).
  • m is a predetermined coefficient statistically determined through an experiment, and is a positive real number.
  • In formula (3), the ratio of the left image weight w1 to the parallax image weight w2 is expressed using the constants A and B and the coefficient m together with the absolute value of the parallax value, so that the left-image weight decreases relative to the parallax-image weight as the absolute value of the parallax value increases.
  • Since the parallax value of a pixel related to a subject at the cross point is "0" and its absolute value is minimal, formula (3) gives the weight w1 applied to the RGB values of the left-image pixel of a subject at the cross point as "A", and the weight w2 applied to the luminance value of the corresponding parallax-image pixel as "B".
  • a correspondence table in which disparity values are associated with weights may be prepared in advance, and the weights may be changed according to the disparity values according to the correspondence table (lookup table), regardless of the above equation (3).
  • In this way, for a subject near the cross point, the weight w1 of the pixels of the left image, which is the original image, is relatively heavier than the weight w2 of the pixels of the parallax image when the left image and the parallax image are synthesized, so the appearance of the subject near the cross point in the original image can be confirmed. Conversely, the farther a subject is from the cross point, the heavier the pixel weight w2 of the parallax image becomes relative to the pixel weight w1 of the left image, so the stereoscopic effect of the stereoscopic image generated from the captured images is easy to grasp.
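  • The parallax-dependent weighting of this modification can be sketched as follows; the exponential fall-off used here is only an illustrative stand-in for formula (3), whose exact form is not reproduced above, and A, B, and m are the symbols from the text.

```python
import numpy as np

def blend_with_crosspoint_weights(left_rgb, parallax_luma, disparity,
                                  A=9.0, B=1.0, m=0.5):
    # Near the cross point (|d| small) the left-image weight w1 stays close
    # to A; as |d| grows, w1 decays and the parallax-image weight w2 = B
    # dominates.  The exp(-m * |d|) decay is an assumption, not formula (3).
    d_abs = np.abs(disparity).astype(np.float32)
    w1 = (A * np.exp(-m * d_abs))[..., None]           # per-pixel left-image weight
    w2 = np.full_like(w1, B)                           # constant parallax-image weight
    a = left_rgb.astype(np.float32)
    b = parallax_luma.astype(np.float32)[..., None]
    return np.clip((a * w1 + b * w2) / (w1 + w2), 0, 255).astype(np.uint8)
```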
  • In the second modification, the parallax value is maximal for pixels of the subject closest to the camera and becomes smaller for pixels of subjects farther from the camera.
  • Accordingly, the larger the parallax value, the smaller the weight w1 applied to the value obtained from the RGB values of the pixels included in the left image region corresponding to the parallax image region, and the larger the weight w2 applied to the value obtained from the luminance values of the pixels included in the parallax image region.
  • The ratio of the weight w1 of the left image to the weight w2 of the parallax image is given by the following formula (4).
  • p is a predetermined coefficient that is statistically determined through experiments, and is a positive real number.
  • dn is the parallax value of the pixel of the subject closest to the camera.
  • Formula (4) expresses the ratio of the left image weight w1 to the parallax image weight w2 in terms of the constant A, the coefficient p, and the parallax value dn, so that the weight of the parallax image becomes relatively heavier as the parallax value of a region approaches dn, that is, as the subject is closer to the camera.
  • a correspondence table in which disparity values and weights are associated with each other may be prepared in advance, and the weights may be changed according to the disparity values according to the correspondence table, regardless of the above equation (4).
  • In this way, the closer a subject is to the camera, the heavier the pixel weight w2 of the parallax image becomes relative to the pixel weight w1 of the left image, which is the original image, when the left image and the parallax image are synthesized, which makes it easy to grasp the stereoscopic effect of the stereoscopic image generated from the captured images.
  • FIG. 7 is a flowchart showing processing of the parallax image display device according to the second embodiment of the present invention.
  • The processing of steps 701 to 705 is the same as that of steps 301 to 305 in FIG. 3 in the first embodiment of the present invention, and its description is therefore omitted.
  • In step 706, which is performed after the parallax image is created, the image synthesis unit 2023 synthesizes the color difference component of the left image with the parallax image.
  • In the first embodiment, the left image and the parallax image were combined by applying a 1 : 9 weighting to the RGB values of the pixels of both images; in the present embodiment, only the color difference component is extracted from the left image, and the extracted color difference component is combined with the parallax image.
  • The color differences in the present embodiment are R−Y and B−Y, obtained by subtracting the luminance value Y from the R and B values of the RGB pixel values, which originally include the luminance value Y.
  • The relationship between the color differences R−Y and B−Y and the luminance value Y is expressed by the following equation (5), where the color difference B−Y is written Cb and the color difference R−Y is written Cr:
  • Y = 0.29900R + 0.58700G + 0.11400B
  • Cr = 0.50000R − 0.41869G − 0.08131B … (5)
  • Cb = −0.16874R − 0.33126G + 0.50000B
  • the parallax image is a monochrome image expressed only by luminance values.
  • A pixel of the parallax image has a luminance value Y′, and the color differences of the left-image pixel corresponding to the parallax-image pixel having the luminance value Y′ are the Cr and Cb of equation (6).
  • In step 706, the image synthesis unit 2023 substitutes Y′, the luminance value of the parallax image, for the luminance value Y of the above equation (6) for each pixel, thereby synthesizing the color difference component extracted from the left image with the parallax image.
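  • A sketch of this color-difference mixing follows; it keeps the left image's Cb and Cr, substitutes the parallax luminance Y′ for Y, and converts back to RGB for display using the standard BT.601/JFIF inverse transform, which is an assumption about how the composite is rendered.

```python
import numpy as np

def mix_left_chroma_with_parallax_luma(left_rgb, parallax_luma):
    r, g, b = (left_rgb[..., i].astype(np.float32) for i in range(3))
    # Color differences of the left image, equation (5)
    cb = -0.16874 * r - 0.33126 * g + 0.50000 * b
    cr = 0.50000 * r - 0.41869 * g - 0.08131 * b
    y2 = parallax_luma.astype(np.float32)      # Y' taken from the parallax image
    # Convert (Y', Cb, Cr) back to RGB (standard JFIF inverse transform)
    out_r = y2 + 1.402 * cr
    out_g = y2 - 0.344136 * cb - 0.714136 * cr
    out_b = y2 + 1.772 * cb
    out = np.stack([out_r, out_g, out_b], axis=-1)
    return np.clip(out, 0, 255).astype(np.uint8)
```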
  • The image synthesis unit 2023 then determines whether or not the synthesis processing has been completed for all pixels. If it has, the image obtained by combining the color difference component of the left image with the parallax image is displayed on the display device 204 shown in FIG. 2 in step 708, and the processing of steps 701 to 708 is completed.
  • FIG. 8 shows an example of an image obtained by synthesizing the color difference component of the left image and the parallax image in the second embodiment of the present invention.
  • In this image, the color difference component of the left image is displayed more clearly than in the first embodiment, so the difference in color of each subject can be recognized and, as a result, the outline of each subject can be grasped.
  • In the present embodiment, the color difference is extracted from the RGB values of each pixel of the left image; however, when the parallax value of the parallax image is calculated for a region of several pixels such as 2 × 2 or 3 × 3, the color difference of the left image is likewise extracted for the region corresponding to that region of the parallax image.
  • the image obtained by synthesizing the color difference component and the parallax image of the left image does not require a special display device such as a display device including a lenticular lens, and is a bitmap image that can be displayed on a normal 2D monitor.
  • the user or the like grasps the stereoscopic effect of the finally obtained stereoscopic image by visually recognizing an image obtained by synthesizing the color difference component of the left image displayed on the screen of the 2D monitor and the parallax image.
  • Thus, in the second embodiment of the present invention, by creating an image in which the parallax image obtained by converting the parallax value of each pixel of the left image into luminance is combined with the color difference component of the left image, the stereoscopic effect of the stereoscopic image generated from the captured images can be grasped even on a normal 2D monitor.
  • The parallax image display device according to the third embodiment of the present invention differs from the parallax image display device according to the first embodiment shown in FIG. 2 in the processing performed by the image processing unit 202; the other configurations are the same.
  • FIG. 9 is a flowchart showing processing of the parallax image display device according to the third embodiment of the present invention.
  • First, it is determined in step 901 whether or not two or more images taken from two or more different viewpoints have been input to the image input unit. As in the first embodiment, the input images are the two or more images taken from two or more different viewpoints shown in FIG. 4.
  • the parallax map creation unit 2021 shown in FIG. 2 creates a parallax map from the left image and the right image by stereo matching, as in the first embodiment.
  • As before, it is assumed that the parallax occurs in the horizontal direction of the image and that no shift occurs in the vertical direction.
  • In step 903, so that the user or the like can recognize the stereoscopic effect of the stereoscopic image from the parallax map, the parallax image creation unit 2022 shown in FIG. 2 represents the parallax value corresponding to each pixel of the left image by a color, for example a color difference value, and generates an image in which the color difference value of each pixel is shown at the position of the corresponding pixel of the left image; this is used as the parallax color image.
  • Alternatively, a separate color determination unit (not shown) that determines the color of the corresponding left image region according to the parallax value of each region of the parallax map may be provided, and the parallax image creation unit 2022 may create a parallax color image in which the color of each region of the left image is changed to the color determined by the color determination unit.
  • the conversion is performed according to the correspondence table describing the correspondence between the parallax value and the color difference value.
  • For example, the color changes from red toward blue as the parallax value becomes smaller: the correspondence table associates a large parallax value with a color difference in which red is emphasized and a smaller parallax value with a color difference in which blue is emphasized.
  • Alternatively, the disparity values may be classified into predetermined ranges, a correspondence table in which a color is associated with each classified range may be provided, and the color corresponding to the disparity value of a region of the left image may be determined based on that correspondence table.
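  • One possible realization of such a parallax-to-color mapping is sketched below; the continuous red-to-blue ramp is an illustrative choice, whereas the embodiment may instead use a discrete correspondence table.

```python
import numpy as np

def parallax_to_color(disparity, d_min, d_max):
    # Large parallax values are rendered red, small ones blue, with a
    # linear transition in between; d_min and d_max bound the range.
    span = float(max(d_max - d_min, 1e-6))
    t = np.clip((disparity.astype(np.float32) - d_min) / span, 0.0, 1.0)
    h, w = disparity.shape
    color = np.zeros((h, w, 3), dtype=np.uint8)
    color[..., 0] = (255.0 * t).astype(np.uint8)          # red grows with parallax
    color[..., 2] = (255.0 * (1.0 - t)).astype(np.uint8)  # blue grows as parallax shrinks
    return color
```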
  • In step 904, the parallax image creation unit 2022 determines whether or not the conversion from parallax value to color difference value has been performed for all pixels. If all pixels have been converted, the parallax color image file created by the conversion is stored in the storage unit 205 shown in FIG. 2 in step 905.
  • In step 906, as in the first embodiment, the image synthesis unit 2023 shown in FIG. 2 applies a 1 : 9 weighting to the RGB values of each pixel of the left image and the parallax color image and combines the two images; the weight ratio may be other than 1 : 9.
  • The weighting is performed on the RGB values of each pixel of the left image; however, when the parallax value of the parallax color image is calculated for a region of several pixels such as 2 × 2 or 3 × 3, the RGB values are weighted for the region of the left image corresponding to that region.
  • The image synthesis unit 2023 then determines whether or not the synthesis processing has been completed for all pixels. If it has, the image obtained by synthesizing the left image and the parallax color image is displayed on the display device 204 shown in FIG. 2 in step 908, and the processing of steps 901 to 908 is completed.
  • FIG. 10 shows an example of an image obtained by synthesizing the left image and the parallax color image in the third embodiment of the present invention.
  • In the image of FIG. 10, the foremost region, which has the largest parallax value, is indicated in red, and the parts with small parallax values are represented in blue.
  • Because the pixels of the left image are mixed in by the synthesis and are also displayed, an outline of a subject such as a grove in the distance can be grasped.
  • the image obtained by combining the left image and the parallax color image is a bitmap image that can be displayed on a normal 2D monitor, without requiring a special display device such as a display device having a lenticular lens.
  • the user or the like grasps the stereoscopic effect of the finally obtained stereoscopic image by visually recognizing an image obtained by synthesizing the left image displayed on the screen of the 2D monitor and the parallax color image.
  • Thus, in the third embodiment of the present invention, by creating an image in which the parallax color image obtained by converting the parallax value of each pixel of the left image into a color difference is combined with the left image, the stereoscopic effect of the stereoscopic image generated from the captured images can be grasped even on a normal 2D monitor.
  • Alternatively, the image synthesis unit 2023 in FIG. 2 may extract the color difference components of the pixels included in each region of the left image and synthesize each region of the parallax color image with the color difference component of the corresponding region of the left image.
  • The image synthesis unit 2023 in FIG. 2 may also change the weight w1 of the pixels of the left image, which is the original image, and the weight w2 of the pixels of the parallax color image according to the parallax value, as in Modification 1 or Modification 2 of the first embodiment.
  • a parallax luminance image in which the parallax value of the parallax map area is expressed by luminance may be created, and the image synthesis unit 2023 in FIG. 2 may synthesize the parallax luminance image and the parallax color image.
  • Further, a resolution determination unit that determines the resolution of the corresponding left image region according to the parallax value of each region of the parallax map and a parallax resolution image creation unit that creates a parallax resolution image in which the resolution of each region of the left image is changed to the resolution determined by the resolution determination unit may be provided, and the image synthesis unit 2023 in FIG. 2 may synthesize the parallax resolution image and the parallax color image.
  • The resolution determination unit may lower the resolution of a region of the left image as the parallax value corresponding to that region in the parallax map becomes smaller, or may lower it as the absolute value of the corresponding parallax value becomes larger.
  • The resolution determination unit may also classify the parallax values into predetermined ranges and hold a correspondence table in which a resolution is associated with each classified range; the disparity value stored in the parallax map at the position corresponding to a region of the left image is treated as the disparity value of that region, and the resolution corresponding to it is determined from the correspondence table.
  • In that correspondence table, a lower resolution may be associated with ranges containing smaller parallax values, or a lower resolution may be associated with ranges whose parallax values have larger absolute values.
  • the resolution determination unit may be a sharpness determination unit that determines the sharpness of the corresponding left image region in accordance with the parallax value for each region of the parallax map.
  • A cross point image creation unit may further be provided that converts the disparity values of the regions of the disparity map into luminance values, specifies the positions in the disparity map where the absolute value of the disparity value is within a threshold, and creates a cross point image in which the specified regions are colored with a predetermined color; the image synthesis unit 2023 in FIG. 2 may then synthesize the cross point image and the parallax color image.
  • Similarly, a parallax absolute value maximum image creation unit may further be provided that converts the parallax values of the regions of the parallax map into luminance values, specifies the position of the region where the absolute value of the parallax value is maximum, and creates a parallax absolute value maximum image in which that region is colored with different colors depending on whether its parallax value is positive or negative; the image synthesis unit 2023 in FIG. 2 may then combine the parallax absolute value maximum image and the parallax color image.
  • The parallax image display device according to the fourth embodiment of the present invention differs from the parallax image display device according to the first embodiment shown in FIG. 2 in the functions of the image processing unit 202; the other configurations are the same.
  • FIG. 11 is a flowchart showing processing of the parallax image display device according to the fourth embodiment of the present invention.
  • First, it is determined in step 1101 whether or not a right image and a left image taken from two or more different viewpoints have been input to the image input unit. As in the first embodiment, the input images are the two or more images taken from two or more different viewpoints shown in FIG. 4.
  • the parallax map creation unit 2021 shown in FIG. 2 creates a parallax map from the left image and the right image by stereo matching, as in the first embodiment.
  • As before, it is assumed that the parallax occurs in the horizontal direction of the image and that no shift occurs in the vertical direction.
  • In step 1103, so that the user or the like can recognize the stereoscopic effect of the stereoscopic image, the parallax image creation unit 2022 shown in FIG. 2 converts the resolution of the left image for each pixel according to the parallax value of the parallax map. Specifically, an image is generated in which pixels related to subjects close to the camera (large parallax values) have high resolution and pixels related to subjects far from the camera (small parallax values) have low resolution; this is used as the parallax resolution image.
  • Alternatively, a resolution determination unit (not shown) that determines the resolution of the corresponding left image region according to the parallax value of each region of the parallax map may be provided separately, and the parallax image creation unit 2022 may create a parallax resolution image in which the resolution of each region of the left image is changed to the resolution determined by the resolution determination unit.
  • To change the resolution, mosaicking is used in which the image is divided into square blocks such as 3 × 3, 5 × 5, or 7 × 7 and the pixel values in each block are replaced with the average of the pixel values in that block.
  • In a region whose resolution is to be kept high (a large parallax value), the square block is made small, and in a region whose resolution is to be lowered (a small parallax value), the square block is made large. The correspondence between the parallax value and the size of the block to be mosaicked is based on, for example, a correspondence table in which the two are matched.
  • Alternatively, by making the block small where the absolute value of the parallax value is small, the pixels near the cross point can be displayed clearly.
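  • The mosaicking described above can be sketched as follows; the range-to-block-size table is a hypothetical example, and compositing one mosaicked copy per parallax range is one simple way to realize it.

```python
import numpy as np

def mosaic(img, block):
    # Replace every block x block tile with the mean of its pixels.
    out = img.astype(np.float32).copy()
    h, w, c = out.shape
    for y in range(0, h, block):
        for x in range(0, w, block):
            tile = out[y:y + block, x:x + block]
            tile[...] = tile.reshape(-1, c).mean(axis=0)
    return np.clip(out, 0, 255).astype(np.uint8)

def parallax_resolution_image(left_rgb, disparity,
                              table=((0, 3, 7), (3, 6, 5), (6, None, 3))):
    # Hypothetical table of (d_low, d_high, block size): small parallax
    # (far subjects) gets large blocks (low resolution), large parallax
    # (near subjects) gets small blocks (high resolution).
    out = np.zeros_like(left_rgb)
    for lo, hi, block in table:
        mask = (disparity >= lo) if hi is None else ((disparity >= lo) & (disparity < hi))
        out[mask] = mosaic(left_rgb, block)[mask]
    return out
```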
  • In step 1104, the parallax image creation unit 2022 determines whether or not the processing has been performed for all pixels. If it has, the parallax resolution image file, in which the resolution of the left image has been changed according to the parallax values, is stored in the storage unit 205 shown in FIG. 2 in step 1105.
  • In step 1106, which follows the creation of the parallax resolution image, the image synthesis unit 2023 synthesizes the color difference component of the left image with the parallax resolution image for each pixel.
  • The color difference is extracted from the RGB values of each pixel of the left image; however, when the parallax value is calculated for a region of several pixels such as 2 × 2 or 3 × 3, the color difference of the left image is likewise extracted for the region corresponding to that region of the parallax resolution image.
  • In step 1106, the color difference component of the left image is synthesized with the parallax resolution image; alternatively, as in the first embodiment, the first modification, or the second modification, the pixels of the left image and the pixels of the parallax resolution image may be combined.
  • The image synthesis unit 2023 then determines whether or not the synthesis processing has been completed for all pixels. If it has, the image obtained by synthesizing the color difference component of the left image and the parallax resolution image is displayed on the display device 204 shown in FIG. 2 in step 1108, and the processing of steps 1101 to 1108 is completed.
  • FIG. 12 shows an example of an image obtained by combining the color difference component of the left image and the parallax resolution image obtained by changing the resolution of the left image according to the parallax value in the fourth embodiment of the present invention.
  • In FIG. 12, the color difference component of the original left image is also displayed, so the difference in color of each subject can be recognized, which allows the outline of each subject to be grasped.
  • An image obtained by combining the color difference component of the original left image and the parallax resolution image does not require a special display device such as a display device having a lenticular lens, and can be displayed on a normal 2D monitor. It is a bitmap image.
  • The user or the like grasps the stereoscopic effect of the finally obtained stereoscopic image by visually recognizing the image, displayed on the screen of the 2D monitor, obtained by combining the color difference component of the original left image and the parallax resolution image.
  • In the present embodiment the image resolution is changed, but the sharpness of the image may be changed instead.
  • In that case, a separate sharpness determination unit (not shown) that determines the sharpness of the corresponding left image region according to the parallax value of each region of the parallax map may be provided, and the parallax image creation unit 2022 may create a parallax sharpness image in which the sharpness of each region of the left image is changed to the sharpness determined by the sharpness determination unit.
  • Gaussian blur that smoothes an image using a Gaussian function is used to change the sharpness of the image.
  • For a region whose sharpness is to be kept high (a large parallax value), the region is treated as the center pixel and the range of surrounding pixels used when computing the Gaussian-weighted average of the pixel values of the center pixel and its surroundings is made small; for a region whose sharpness is to be lowered (a small parallax value), that range of surrounding pixels is made large.
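  • A sketch of this sharpness variant is given below; it assumes SciPy's gaussian_filter is available and uses a hypothetical table mapping parallax ranges to blur radii.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def parallax_sharpness_image(left_rgb, disparity,
                             table=((0, 3, 4.0), (3, 6, 2.0), (6, None, 0.5))):
    # Hypothetical (d_low, d_high, sigma) table: far subjects (small
    # parallax) get a wide Gaussian kernel (low sharpness), near subjects
    # a narrow one (high sharpness).
    out = np.zeros_like(left_rgb)
    for lo, hi, sigma in table:
        mask = (disparity >= lo) if hi is None else ((disparity >= lo) & (disparity < hi))
        blurred = gaussian_filter(left_rgb.astype(np.float32), sigma=(sigma, sigma, 0))
        out[mask] = np.clip(blurred, 0, 255).astype(np.uint8)[mask]
    return out
```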
  • Thus, by creating an image in which the parallax resolution image obtained by converting the parallax value of each pixel of the left image into a difference in resolution is combined with the color difference component of the left image, the stereoscopic effect of the stereoscopic image generated from the captured images can be grasped even on a normal 2D monitor.
  • In a modification of the fourth embodiment, the parallax image creation unit 2022 shown in FIG. 2 changes the resolution of the left image for each pixel so that the user can recognize the stereoscopic effect of the stereoscopic image.
  • The parallax image creation unit 2022 in FIG. 2 classifies the parallax values into predetermined ranges and has a correspondence table in which a resolution corresponding to each classified range is set in advance.
  • The parallax image creation unit 2022 identifies the parallax value corresponding to each pixel of the left image from the parallax map, identifies the resolution corresponding to that parallax value by referring to the correspondence table, and changes the resolution of that pixel of the left image to the identified resolution.
  • As before, mosaicking is used in which the image is divided into square blocks of 3 × 3, 5 × 5, 7 × 7, etc. and the pixel values are replaced with the average pixel value of the block; the block is made small in regions with large parallax values, where the resolution is to be kept high, and large in regions with small parallax values, where the resolution is to be lowered.
  • For example, a pixel of the left image whose parallax value is 0 to 2 is given a mosaic block size of 5 × 5, and a pixel whose parallax value is 3 to 5 a block size of 3 × 3; in this way, the resolution of each pixel of the left image is changed by applying the block size associated with the range to which that pixel's parallax value belongs.
  • By associating a smaller block size with ranges containing larger parallax values and a larger block size with ranges containing smaller parallax values, it is possible to generate and display an image in which pixels related to subjects close to the camera have high resolution and the resolution of pixels related to subjects far from the camera is lowered stepwise according to the distance from the camera.
  • Alternatively, by associating a smaller block size with ranges whose parallax values have small absolute values and a larger block size with ranges whose parallax values have large absolute values, it is possible to display the pixels near the cross point clearly and to generate and display an image in which the resolution of subjects away from the cross point is lowered stepwise according to the distance from the cross point.
  • The above describes the case where a parallax value corresponds to each pixel of the left image in the parallax map, that is, where the parallax value is calculated for each pixel; the parallax value may instead be calculated for a region in which several pixels are gathered in a square shape, such as 2 × 2 or 3 × 3.
  • In that case as well, the parallax image creation unit 2022 in FIG. 2 classifies the parallax values into predetermined ranges and has a correspondence table in which a resolution corresponding to each classified range is set in advance; it identifies the parallax value corresponding to each region of the left image from the parallax map, identifies the resolution corresponding to that parallax value by referring to the correspondence table, and changes the resolution of that region of the left image to the identified resolution.
  • In this modification of the fourth embodiment, the processing after step 1103 in FIG. 11 is the same as in the fourth embodiment, and the color difference component or the pixels of the left image may be combined with the image whose resolution has been changed according to the parallax values, as described for step 1106 in FIG. 11.
  • Similarly, the parallax image creation unit 2022 in FIG. 2 may classify the parallax values into predetermined ranges and hold a correspondence table in which a sharpness is preset for each classified range; the disparity value corresponding to a region of the left image is identified from the disparity map, the sharpness corresponding to that disparity value is identified by referring to the correspondence table, and the sharpness of that region of the left image is changed to the identified sharpness.
  • By associating a higher sharpness with ranges containing larger parallax values and a lower sharpness with ranges containing smaller parallax values, it is possible to generate and display an image in which pixels related to subjects close to the camera are sharp and the sharpness of pixels related to subjects far from the camera is lowered according to the distance from the camera.
  • Alternatively, by associating a higher sharpness with ranges whose parallax values have small absolute values and a lower sharpness with ranges whose parallax values have large absolute values, it is possible to display the pixels near the cross point clearly and to generate and display an image in which the sharpness of subjects away from the cross point is lowered stepwise according to the distance from the cross point.
  • Gaussian blur that smoothes an image using a Gaussian function is used to change the sharpness of the image.
  • For a region whose sharpness is to be raised, the region is treated as the center pixel and the range of surrounding pixels used when the Gaussian-weighted average of the pixel values of the center pixel and its surroundings is calculated is reduced; for a region whose sharpness is to be lowered, that range of surrounding pixels is increased.
  • By changing the resolution of the image stepwise for each predetermined range of parallax values in this way, the stereoscopic effect of the stereoscopic image generated from the left image becomes easy to grasp.
  • The parallax image display device according to the fifth embodiment of the present invention differs from the parallax image display device according to the first embodiment shown in FIG. 2 in the processing performed by the image processing unit 202; the other configurations are the same.
  • FIG. 13 is a flowchart showing processing of the parallax image display device according to the fifth embodiment of the present invention.
  • The processing of steps 1301 to 1304 is the same as that of steps 301 to 304 in FIG. 3 in the first embodiment of the present invention, and its description is therefore omitted.
  • The parallax image creation unit 2022 then identifies, on the parallax image created in step 1303, pixels whose parallax value in the parallax map has an absolute value equal to or smaller than a predetermined threshold, and colors the identified pixels.
  • In the present embodiment, pixels having a parallax value of −2 or more and 2 or less are colored with a predetermined color.
  • The color may be any color, such as red or blue, as long as it can be easily distinguished from the other pixels.
  • The parallax image creation unit 2022 determines for all pixels whether the parallax value is not less than −2 and not more than 2; when this confirmation is completed, the cross point image, in which the pixels whose parallax values are −2 to 2 are colored, is displayed on the display device 204 in FIG. 2 so that the cross point is shown.
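  • The cross point image of this embodiment can be sketched as below; the threshold of 2 and the red marker color are the illustrative values mentioned in the text.

```python
import numpy as np

def cross_point_image(parallax_luma, disparity, threshold=2, color=(255, 0, 0)):
    # Paint pixels whose parallax value lies within ±threshold of the
    # cross point with a fixed color on top of the monochrome parallax image.
    out = np.repeat(parallax_luma[..., None], 3, axis=-1).astype(np.uint8)
    mask = np.abs(disparity) <= threshold
    out[mask] = color
    return out
```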
  • FIG. 14 shows a case where the cross point is in the foreground, and FIG. 15 shows a case where the cross point is in the background.
  • In FIGS. 14 and 15, a subject with a large parallax value is represented with low luminance and a subject with a small parallax value with high luminance.
  • The pixels other than the cross-point pixels are expressed in monochrome shades, but the outline of each subject is unclear and parts with the same parallax value are expressed with the same brightness, so the details of the subjects cannot be grasped at all.
  • In step 1308, the image synthesis unit 2023 shown in FIG. 2 synthesizes the left image and the cross point image for each pixel.
  • As in the first embodiment, the weighting is applied to the RGB values of each pixel of the left image; however, when the parallax value of the parallax image is calculated for a region of several pixels such as 2 × 2 or 3 × 3, the RGB values are weighted for the region of the left image corresponding to that region.
  • In step 1309, the image synthesis unit 2023 determines whether or not the synthesis processing has been completed for all pixels. If it has, the image obtained by synthesizing the left image and the cross point image is displayed on the display device 204 shown in FIG. 2 in step 1310, and the processing of steps 1301 to 1310 is completed.
  • the image obtained by synthesizing the left image and the cross-point image is a bitmap image that can be displayed on a normal 2D monitor, without requiring a special display device such as a display device having a lenticular lens.
  • the user or the like grasps the stereoscopic effect of the finally obtained stereoscopic image by visually recognizing an image obtained by synthesizing the left image and the crosspoint image displayed on the screen of the 2D monitor.
  • Thus, in the fifth embodiment, a cross point image is created from the parallax image obtained by converting the parallax value of each pixel of the left image into luminance, and by combining the created cross point image with the left image, the stereoscopic effect of the stereoscopic image generated from the captured images can be grasped even with a normal 2D monitor.
  • The parallax image display device according to the sixth embodiment of the present invention differs from the parallax image display device according to the first embodiment shown in FIG. 2 in the processing performed by the image processing unit 202; the other configurations are the same.
  • FIG. 16 is a flowchart showing processing of the parallax image display device according to the sixth embodiment of the present invention.
  • The processing of steps 1601 to 1604 is the same as that of steps 301 to 304 in FIG. 3 in the first embodiment of the present invention, and its description is therefore omitted.
  • In step 1605, based on the parallax map created in step 1602, the parallax image creation unit 2022 identifies, on the parallax image created in step 1603, the region where the absolute value of the parallax value is maximum, and determines whether the identified region has the maximum parallax value in the pop-out direction, that is, whether it is the foremost portion of the left image.
  • The parallax value is 0 at the cross point where the optical axes of the left and right lenses of the stereoscopic camera intersect, and the farther a subject is from the cross point, whether closer to the camera or farther from the camera beyond the cross point, the larger the absolute value of its parallax value becomes.
  • Here, a position where the parallax value is positive is assumed to be closer to the camera than the cross point, that is, in the foreground, and a position where the parallax value is negative is assumed to be farther from the camera than the cross point, that is, in the background.
  • When the parallax image creation unit 2022 determines that a pixel has the maximum parallax value in the pop-out direction, then in step 1606 the parallax image creation unit 2022 creates a parallax absolute value maximum image in which that pixel is colored with a predetermined color, here red.
  • the color is not limited to red, and it is sufficient that the area can be distinguished from other areas.
  • In step 1607, the parallax image creation unit 2022 determines whether or not the processing has been performed for all pixels. If it has, the parallax absolute value maximum image displaying the region where the parallax value is maximum in the pop-out direction is displayed on the display device 204 shown in FIG. 2 in step 1608.
  • FIG. 17 is a parallax absolute value maximum image displaying a pixel region having the maximum parallax value in the pop-out direction in the sixth embodiment of the present invention.
  • In FIG. 17, a subject with a large parallax value is represented with low luminance, and a subject with a small parallax value is represented with high luminance.
  • Apart from the region where the parallax value is largest in the pop-out direction, the image is expressed in monochrome shades, but the outline of each subject is unclear and parts with the same parallax value are expressed with the same brightness, so details of subjects such as trees in the distance cannot be grasped.
  • In step 1609, the image synthesis unit 2023 shown in FIG. 2 synthesizes the left image of FIG. 4 and the parallax absolute value maximum image displaying the region of pixels having the maximum parallax value in the pop-out direction.
  • As in the first embodiment, the image synthesis unit 2023 applies a weighting to the RGB values of the pixels of the left image and of the parallax absolute value maximum image and synthesizes the two images.
  • the above-described weighting is performed on the RGB value of each pixel of the left image.
  • some pixels have a parallax value of 2 ⁇ 2 to 3 ⁇ 3 of the parallax image.
  • the RGB value is weighted for the region corresponding to the region of the parallax image.
  • the image composition unit 2023 determines whether or not the composition process for all pixels is completed. If the composition process for all pixels is completed, in step 1611, the image composition unit 2023 performs a parallax with the left image in the pop-out direction. An image obtained by combining the parallax absolute value maximum image displaying the pixel region having the maximum value is displayed on the display device 204 shown in FIG. 2, and the processing of steps 1601 to 1611 is completed.
If it is determined in step 1605 that the parallax value has its maximum absolute value in the depth direction, the processing of steps 1627 to 1631 is performed. This processing is the same as the processing of steps 1607 to 1611 described above, except that in step 1626 the region having the maximum parallax value in the depth direction is colored blue, a color different from the one used in step 1606, so its description is omitted.
FIG. 18 shows a parallax absolute value maximum image displaying the region having the maximum parallax value in the depth direction in the sixth embodiment of the present invention; that region is displayed in blue.
In this way, it can be confirmed whether the region where the absolute value of the parallax value is maximum lies in the pop-out direction (in front of the cross point) or in the depth direction (behind the cross point). Furthermore, as in the fifth embodiment of the present invention described above, the cross point region can be displayed together with the region where the absolute value of the parallax value is maximum.
The seventh embodiment relates to the operation display unit of an apparatus capable of performing the processing of the first to sixth embodiments described above. The configuration of the seventh embodiment is basically the same as that shown in the block diagram of the parallax image display device according to the first embodiment of the present invention in FIG. 2; the difference is that the display device 204 is integrated into an operation display unit 500 in a touch panel format. Besides a touch panel, the operation display unit 500 may use an LCD or the like as its display unit and a pointing device such as a mouse or a pen tablet as its operation unit.
The operation display unit 500 illustrated in FIG. 19 displays the left image 501, which is the original image, on the left of the top row, and on the right the parallax image 502 created based on the parallax values calculated from the left image and the right image. Also provided are a pixel mixing button 503 for selecting "pixel mixing", which combines the left image and the parallax image by weighting their RGB values as in the first embodiment of the present invention, and a color difference mixing button 504 for selecting "color difference mixing", which synthesizes the color difference of the left image with the parallax image as in the second embodiment of the present invention.
A slider 505 for changing the weighting of the RGB values when "pixel mixing" is selected is provided at the middle right of FIG. 19. By moving the slider 505 left and right on the touch panel, the user can arbitrarily change the weighting ratio of the RGB values of the left image and the parallax image within the range of 0:10 to 10:0.
A cross point adjustment interface 506, with which the cross point of the stereoscopic image can be changed, is also provided. Using it, the user can change the position of the cross point in the front-rear direction, from the near side toward the depth, while confirming the cross point display according to the fifth embodiment of the present invention on the parallax image shown at the upper right of FIG. 19. In addition, the region having the maximum parallax value according to the sixth embodiment of the present invention can be reflected in the parallax image displayed at the upper right of FIG. 19, and a size designation button 509 is provided.
With the operation display unit 500, the user can confirm, through the processing according to the first to sixth embodiments of the present invention described above, what kind of stereoscopic effect the stereoscopic image created from the two or more images read from the image input unit 201 in FIG. 2 will have, and in some cases can obtain a desired stereoscopic effect by changing the cross point. In that case, the parallax value is adjusted for the entire parallax image, and the parallax image is created again based on the adjusted parallax values and displayed.
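The description does not say how the parallax values are adjusted when the cross point is moved. One plausible reading, shown purely as an illustrative sketch and not as the patent's method, is that shifting the cross point offsets every parallax value by a constant so that the newly chosen position gets parallax 0, after which the parallax image is regenerated from the shifted map.

    import numpy as np

    def shift_crosspoint(parallax_map, new_crosspoint_parallax):
        """Hypothetical adjustment: subtract the parallax of the new cross point
        so that position becomes parallax 0; the parallax image is then rebuilt
        from the returned map."""
        return parallax_map - new_crosspoint_parallax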
In the embodiments described above, the left image and the right image are stereo-matched using the left image as the reference to generate the parallax map, but the right image may be used as the reference instead. Alternatively, a parallax map may be generated for each of the images, treating each in turn as the first image (reference image) and the other as the second image.
The image processing unit 202 in FIG. 2 may be a computer having a CPU and a memory; in that case, the processing routines of the first to sixth embodiments may be implemented as a program and executed by the CPU.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)

Abstract

Provided is a parallax image display method whereby: a plurality of images captured from at least two different viewpoints is acquired; a parallax map is generated, said parallax map associating parallax values represented by the differences in the positions of corresponding regions between a first image and a second image captured from a different viewpoint than the first image, which are included in the plurality of acquired images, with the positions of the regions of the first image; the colors of the corresponding regions are determined according to the parallax value for each region; a parallax color image, obtained by changing the color of each region of the first image to the determined color, is created; and the parallax color image and the first image are synthesized.

Description

Parallax image display device and parallax image display method
 本発明は、視差画像表示装置及び視差画像表示方法に係り、特に、立体視用画像を生成するに先立ち、生成される立体視用画像の立体感を2次元表示のモニタで確認できる視差画像表示装置及び視差画像表示方法に関する。 The present invention relates to a parallax image display device and a parallax image display method, and in particular, parallax image display in which the stereoscopic effect of a generated stereoscopic image can be confirmed on a two-dimensional display monitor before generating the stereoscopic image. The present invention relates to a device and a parallax image display method.
 従来、3D画像の立体感を2Dモニタで把握しようとする場合、立体視用画像の元になる2以上の画像であって、異なる2以上の視点から撮影された画像の視差を視差値として数値化し、生成される立体視用画像の立体感を、その視差値から判断する方法が考えられてきた。しかしながら、視差の程度を示す視差値は単なる数値に過ぎず、視差値の数値から実際の立体感を感覚的に把握することは困難である。 Conventionally, when trying to grasp the stereoscopic effect of a 3D image on a 2D monitor, the parallax value is a parallax value of two or more images that are the basis of a stereoscopic image and are captured from two or more different viewpoints. A method of determining the stereoscopic effect of the generated stereoscopic image from the parallax value has been considered. However, the parallax value indicating the degree of parallax is merely a numerical value, and it is difficult to sensuously grasp the actual stereoscopic effect from the numerical value of the parallax value.
 そのため、視差値を2以上の画像のうちの第1の画像の画素位置に対応させた視差マップを生成し、その視差マップ上の視差値の程度を輝度差又は色彩の遷移で表現した視差画像を生成して立体視用画像の立体感を確認する技術が一般に知られている。 Therefore, a parallax image in which a parallax map in which the parallax value is associated with the pixel position of the first image of two or more images is generated, and the degree of the parallax value on the parallax map is expressed by a luminance difference or a color transition. In general, a technique for confirming the stereoscopic effect of an image for stereoscopic viewing by generating the image is known.
 しかしながら、視差値を輝度差等で表現した視差画像は、図1の右に示すように、元となった画像(図1の左)の特徴を全く示さないことがある。 However, the parallax image in which the parallax value is expressed by a luminance difference or the like may not show the characteristics of the original image (left in FIG. 1) at all as shown on the right in FIG.
 図1の右に示した視差画像では、図1の左の元の画像において手前に存在している被写体が最も視差値が大きく、この視差値が最大である被写体を黒に近い暗色で表示している。 In the parallax image shown on the right in FIG. 1, the subject existing in the foreground in the left original image in FIG. 1 has the largest parallax value, and the subject with the largest parallax value is displayed in a dark color close to black. ing.
 また、図1の左の元の画像では遠い被写体ほど視差値は小さくなっており、図1の右の視差画像では視差値が小さな被写体ほど輝度が大きくなるように表示されている。 In the original image on the left in FIG. 1, the disparity value is smaller as the subject is farther, and in the right disparity image in FIG. 1, the subject is displayed such that the luminance is larger as the subject has a smaller disparity value.
 その結果、図1の場合では、ある程度遠くの被写体では、視差値の差を輝度差で表現できる範囲を超えてしまい、遠い被写体は輝度が最大である白一色でしか視差画像では表示されなくなる。 As a result, in the case of FIG. 1, a subject far away to some extent exceeds the range in which the difference in parallax value can be expressed by a luminance difference, and the far subject can only be displayed in a parallax image with a white color having the maximum luminance.
 この図1のような場合では、視差画像によって、元の画像から生じる立体視用画像の立体感を大まかに把握することはできるものの、視差画像における視差値が大きい暗色の部分又は視差値が小さい白っぽい部分が、元の画像とどう対応しているのかが把握できないという問題があった。 In such a case as shown in FIG. 1, although the stereoscopic effect of the stereoscopic image generated from the original image can be roughly grasped from the parallax image, the dark color portion having the large parallax value or the parallax value in the parallax image is small. There was a problem that it was difficult to grasp how the whitish part corresponds to the original image.
 そのため、例えば特開2003-047027号公報には、作成した視差マップに基づき、右画像と左画像とをストライプ状の複数の画像片に切り分け、これら切り分けられたストライプ状の画像片を交互に配置した、いわゆるレンチキュラー画像を作成し、作成したレンチキュラー画像で立体視効果を確認するという立体画像形成システム、立体画像形成方法、プログラムおよび記憶媒体の発明が開示されている。 Therefore, for example, in Japanese Patent Application Laid-Open No. 2003-047027, based on the created parallax map, the right image and the left image are segmented into a plurality of striped image pieces, and these striped striped image pieces are alternately arranged. A so-called lenticular image is created, and a stereoscopic image forming system, a stereoscopic image forming method, a program, and a storage medium according to which a stereoscopic effect is confirmed with the created lenticular image are disclosed.
 また、特開2008-103820号公報には、右画像と左画像との視差データを用いて動画像についてレンチキュラー画像を生成し、この生成したレンチキュラー画像で立体視効果を確認するという立体画像処理装置の発明が開示されている。 Japanese Patent Laid-Open No. 2008-103820 discloses a stereoscopic image processing apparatus that generates a lenticular image for a moving image using parallax data between a right image and a left image, and confirms a stereoscopic effect using the generated lenticular image. The invention is disclosed.
 しかしながら、特開2003-047027号公報に開示されている技術は、作成したレンチキュラー画像を、レンチキュラーレンズ等の手段によって立体視の効果を確認するというものであるから、通常の2次元表示のモニタ(以下、「2Dモニタ」と記載)上で立体視用画像の立体感を確認できないという問題があった。 However, the technique disclosed in Japanese Patent Application Laid-Open No. 2003-047027 is to check the effect of stereoscopic vision of a created lenticular image by means such as a lenticular lens. Hereinafter, there was a problem that the stereoscopic effect of the stereoscopic image could not be confirmed on the “2D monitor”.
 また、特開2008-103820号公報に開示されている立体画像処理装置も、レンチキュラー画像化した動画像の立体視効果を確認するには、レンチキュラー画像に対応した専用の立体画像再生装置(3Dモニタ)で当該動画像を再生しなければならないという問題があった。 In addition, a stereoscopic image processing apparatus disclosed in Japanese Patent Application Laid-Open No. 2008-103820 also uses a dedicated stereoscopic image reproduction apparatus (3D monitor) corresponding to a lenticular image in order to confirm the stereoscopic effect of a moving image converted into a lenticular image. ), There is a problem that the moving image must be reproduced.
 本発明は、上記問題を解決するためになされたもので、立体視用画像の立体感の効果を、当該立体視用画像を作成する前に2Dモニタで予め確認できる視差画像表示装置、視差画像表示方法及び視差画像表示プログラムを提供することを目的とする。 The present invention has been made to solve the above problem, and a parallax image display device and a parallax image in which the stereoscopic effect of a stereoscopic image can be confirmed in advance on a 2D monitor before the stereoscopic image is created. It is an object to provide a display method and a parallax image display program.
 上記目的を達成するために、第1の態様に係る視差画像表示装置は、異なる2以上の視点から撮影された複数の画像を取得する取得部と、前記取得部により取得された複数の画像に含まれる第1の画像と該第1の画像とは異なる第2の画像との各々対応する領域毎の位置の差で表される視差値を、前記第1の画像の領域の位置に対応させた視差マップを生成する視差マップ生成部と、領域毎の視差値に応じて対応する領域の色を決定する色決定部と、前記第1の画像の各領域の色を前記色決定部において決定した色に変更した視差色画像を作成する視差画像作成部と、前記視差色画像と前記第1の画像とを合成する画像合成部と、を含んで構成されている。 To achieve the above object, a parallax image display device according to a first aspect includes an acquisition unit that acquires a plurality of images taken from two or more different viewpoints, and a plurality of images acquired by the acquisition unit. The disparity value represented by the difference in position for each corresponding region between the first image included and the second image different from the first image is made to correspond to the position of the region of the first image. A parallax map generating unit that generates a parallax map, a color determining unit that determines a color of a corresponding region according to a parallax value for each region, and a color determining unit that determines a color of each region of the first image A parallax image creation unit that creates a parallax color image that has been changed to the color, and an image synthesis unit that synthesizes the parallax color image and the first image.
 第1の態様に係る視差画像表示装置によれば、視差画像作成部が異なる2以上の視点から撮影された複数の画像から生成される立体視画像の立体感を示す視差値を色で認識できる視差色画像を作成し、画像合成部が、視差画像と第1の画像とを合成することで、第1の画像の画素が合成によって視差色画像に混合され、輝度差等によって視差値が認識できることに加えて、被写体の概略が把握できる画像を生成する。 According to the parallax image display device according to the first aspect, the parallax value indicating the stereoscopic effect of the stereoscopic image generated from a plurality of images taken from two or more different viewpoints can be recognized by color. A parallax color image is created, and the image synthesis unit synthesizes the parallax image and the first image, so that the pixels of the first image are mixed into the parallax color image by synthesis, and the parallax value is recognized by the luminance difference or the like In addition to being able to do so, an image is generated in which an outline of the subject can be grasped.
 画像合成部によって生成される画像は、レンチキュラーレンズを備えた表示機器のような特別な表示装置等を要するものではなく、通常の2Dモニタで表示できるビットマップ画像であるから、第1の画像と第2の画像から作成される立体視用画像の立体感の効果を、当該立体視用画像を作成する前に2Dモニタで予め確認することができる。 The image generated by the image composition unit does not require a special display device such as a display device having a lenticular lens, and is a bitmap image that can be displayed on a normal 2D monitor. The effect of the stereoscopic effect of the stereoscopic image created from the second image can be confirmed in advance on the 2D monitor before creating the stereoscopic image.
 第2の態様に係る視差画像表示方法は、異なる2以上の視点から撮影された複数の画像を取得し、該取得した複数の画像に含まれる第1の画像と該第1の画像とは異なる画像との各々対応する領域毎の位置の差で表される視差値を、前記第1の画像の領域の位置に対応させた視差マップを生成し、領域毎の視差値に応じて対応する領域の色を決定し、前記第1の画像の各領域の色を該決定した色に変更した視差色画像を作成し、前記視差色画像と前記第1の画像とを合成する方法である。 The parallax image display method according to the second aspect acquires a plurality of images taken from two or more different viewpoints, and the first image included in the acquired plurality of images is different from the first image. A disparity map in which a disparity value represented by a position difference for each corresponding region with an image is associated with a position of the region of the first image is generated, and a region corresponding to the disparity value for each region Is determined, a parallax color image in which the color of each region of the first image is changed to the determined color is generated, and the parallax color image and the first image are synthesized.
 第3の態様に係る視差画像表示プログラムは、コンピュータを、異なる2以上の視点から撮影された複数の画像を取得する取得部、前記取得部により取得された複数の画像に含まれる第1の画像と該第1の画像とは異なる第2の画像との各々対応する領域毎の位置の差で表される視差値を、前記第1の画像の領域の位置に対応させた視差マップを生成する視差マップ生成部、領域毎の視差値に応じて対応する領域の色を決定する色決定部、前記第1の画像の各領域の色を前記色決定部において決定した色に変更した視差色画像を作成する視差画像作成部、及び前記視差色画像と前記第1の画像とを合成する画像合成部として機能させるためのプログラムである。 The parallax image display program according to the third aspect is an acquisition unit that acquires a plurality of images taken from two or more different viewpoints, and a first image included in the plurality of images acquired by the acquisition unit. And a second image different from the first image generate a parallax map in which a parallax value represented by a position difference for each corresponding region corresponds to a position of the region of the first image A parallax map generation unit, a color determination unit that determines a color of a corresponding region according to a parallax value for each region, and a parallax color image obtained by changing the color of each region of the first image to a color determined by the color determination unit Is a program for functioning as a parallax image creation unit that creates the image, and an image synthesis unit that synthesizes the parallax color image and the first image.
 なお、領域は、1つの画素のみならず、画素が2×2乃至は3×3というように、いくつかの画素が方形状に集合した領域も含む。 Note that the region includes not only one pixel but also a region in which several pixels are gathered in a square shape, such as 2 × 2 to 3 × 3.
 以上説明したように、本発明によれば、第1の画像と第2の画像との視差値を第1の画像の領域毎に対応させ、該対応させた視差値の程度を色で表現した視差色画像に、第1の画像を合成することで、第1の画像と第2の画像から作成される立体視用画像の立体感の効果を、当該立体視用画像を作成する前に2Dモニタで予め確認できるという効果が得られる。 As described above, according to the present invention, the parallax values between the first image and the second image are associated with each region of the first image, and the degree of the associated parallax value is expressed in color. By combining the first image with the parallax color image, the stereoscopic effect of the stereoscopic image created from the first image and the second image can be reduced to 2D before creating the stereoscopic image. The effect that it can confirm beforehand with a monitor is acquired.
FIG. 1 compares an original image with a parallax image.
FIG. 2 is a block diagram of the parallax image display device according to the first embodiment of the present invention.
FIG. 3 is a flowchart showing the processing of the parallax image display device according to the first embodiment of the present invention.
FIG. 4 shows an example of two or more images taken from two or more different viewpoints.
FIG. 5 shows an example of a parallax image according to the first embodiment of the present invention.
FIG. 6 shows an example of an image obtained by combining the first image and the parallax image in the first embodiment of the present invention.
FIG. 7 is a flowchart showing the processing of the parallax image display device according to the second embodiment of the present invention.
FIG. 8 shows an example of an image obtained by combining the color difference component of the first image and the parallax image in the second embodiment of the present invention.
FIG. 9 is a flowchart showing the processing of the parallax image display device according to the third embodiment of the present invention.
FIG. 10 shows an example of an image obtained by combining the first image and the parallax color image in the third embodiment of the present invention.
FIG. 11 is a flowchart showing the processing of the parallax image display device according to the fourth embodiment of the present invention.
FIG. 12 shows an example of an image obtained by combining the color difference component of the first image with a parallax resolution image in which the resolution of the first image is changed according to the parallax values, in the fourth embodiment of the present invention.
FIG. 13 is a flowchart showing the processing of the parallax image display device according to the fifth embodiment of the present invention.
FIG. 14 is a cross point image displaying the cross point of the first image in the fifth embodiment of the present invention.
FIG. 15 is a cross point image displaying the cross point of the first image in the fifth embodiment of the present invention.
FIG. 16 is a flowchart showing the processing of the parallax image display device according to the sixth embodiment of the present invention.
FIG. 17 is a parallax absolute value maximum image displaying the region of pixels having the maximum parallax value in the pop-out direction in the sixth embodiment of the present invention.
FIG. 18 is a parallax absolute value maximum image displaying the region having the maximum parallax value in the depth direction in the sixth embodiment of the present invention.
FIG. 19 shows the operation display unit of an apparatus capable of performing the processing of the first to sixth embodiments of the present invention.
 以下、図面を参照して、本発明の実施の形態を詳細に説明する。 Hereinafter, embodiments of the present invention will be described in detail with reference to the drawings.
 [第1の実施の形態]
 図2は、本発明の第1の実施の形態に係る視差画像表示装置のブロック図である。
[First Embodiment]
FIG. 2 is a block diagram of the parallax image display device according to the first embodiment of the present invention.
 図2に示すように、本発明の第1の実施の形態に係る視差画像表示装置200は、異なる2以上の視点から撮影された2以上の画像が入力される画像入力部201と、入力された2以上の画像に基づいて第1の画像(第1~第7の実施の形態では左画像)とこの左画像とは異なる視点から撮影された第2の画像(第1~第7の実施の形態では右画像)とから作成される立体視用画像の立体感を確認するための画像を作成する画像処理部202と、視差画像表示装置200を操作するための操作部203と、画像処理部202の処理結果を表示する表示装置204と、入力された画像等を記憶する記憶部205と、を有する。 As shown in FIG. 2, the parallax image display device 200 according to the first exemplary embodiment of the present invention is input with an image input unit 201 to which two or more images taken from two or more different viewpoints are input. Based on two or more images, the first image (left image in the first to seventh embodiments) and the second image (first to seventh embodiments) taken from different viewpoints of the left image. The right image), an image processing unit 202 for creating an image for confirming the stereoscopic effect of the stereoscopic image created from the image processing unit 202, an operation unit 203 for operating the parallax image display device 200, and image processing A display unit 204 that displays a processing result of the unit 202; and a storage unit 205 that stores an input image or the like.
 画像入力部201には、立体視用画像を撮影するカメラ等が撮影した異なる2以上の視点から撮影された2以上の画像が入力される。画像入力部201には立体視用画像を撮影するカメラが直結されていてもよいが、当該カメラが撮影した画像を格納した情報記憶媒体(磁気ディスク、光ディスク、光磁気ディスク、メモリカード、ICカード等の何れか)から当該カメラが撮影した画像を読み出して入力するデータ読出装置であってもよい。 The image input unit 201 receives two or more images taken from two or more different viewpoints taken by a camera or the like that takes a stereoscopic image. The image input unit 201 may be directly connected to a camera that captures a stereoscopic image, but an information storage medium (magnetic disk, optical disk, magneto-optical disk, memory card, IC card) that stores the image captured by the camera. Etc.) may be a data reading device that reads and inputs an image captured by the camera.
 操作部203は、ユーザ等が視差画像表示装置200に対する指示を入力するための装置であり、タッチパネル、キーボード、マウス及びペンタブレット等の入力デバイスが用いられる。 The operation unit 203 is a device for a user or the like to input an instruction to the parallax image display device 200, and input devices such as a touch panel, a keyboard, a mouse, and a pen tablet are used.
 表示装置204は、画像処理部202の処理過程又は処理結果を表示するLCD等の装置であり、本実施の形態では、2Dモニタである。 The display device 204 is a device such as an LCD that displays a processing process or a processing result of the image processing unit 202, and is a 2D monitor in the present embodiment.
 記憶部205は、RAM(Random Access Memory)、HDD(Hard Disk Drive)又はフラッシュメモリ等による記憶装置である。 The storage unit 205 is a storage device such as a RAM (Random Access Memory), a HDD (Hard Disk Drive), or a flash memory.
 さらに画像処理部202は、入力された2以上の画像に含まれる左画像とこの左画像とは異なる視点から撮影された右画像との各々対応する画素毎の位置の差で表される視差を、左画像の画素の位置に対応させた視差マップを作成する視差マップ作成部2021と、作成した視差マップの視差値の大きさを輝度で示した視差画像を作成する視差画像作成部2022と、作成した視差画像を左画像と合成する画像合成部2023と、を有する。 Further, the image processing unit 202 calculates a parallax represented by a difference in position for each corresponding pixel between a left image included in two or more input images and a right image taken from a different viewpoint from the left image. A parallax map creation unit 2021 that creates a parallax map corresponding to the pixel position of the left image, a parallax image creation unit 2022 that creates a parallax image indicating the magnitude of the parallax value of the created parallax map in luminance, An image synthesis unit 2023 that synthesizes the created parallax image with the left image.
 画像処理部202は、CPU及びメモリを有するコンピュータでもよく、この場合は、記憶部205に格納されている当該コンピュータを画像処理装置として機能させるためのプログラムにしたがって動作する。 The image processing unit 202 may be a computer having a CPU and a memory. In this case, the image processing unit 202 operates according to a program for causing the computer stored in the storage unit 205 to function as an image processing apparatus.
 画像合成部2023が視差画像と左画像とを合成した画像は、画像処理部202の処理結果として表示装置204に出力される。 The image obtained by combining the parallax image and the left image by the image combining unit 2023 is output to the display device 204 as the processing result of the image processing unit 202.
 また、入力された画像、視差マップ、視差画像及び画像合成部2023が合成した画像等は、記憶部205に記憶される。 Further, the input image, the parallax map, the parallax image, the image synthesized by the image synthesis unit 2023, and the like are stored in the storage unit 205.
 次に、図3にしたがって、本発明の第1の実施の形態に係る視差画像表示装置の処理について説明する。ここで図3は、本発明の第1の実施の形態に係る視差画像表示装置の処理を示すフローチャートである。 Next, processing of the parallax image display device according to the first embodiment of the present invention will be described with reference to FIG. Here, FIG. 3 is a flowchart showing processing of the parallax image display device according to the first embodiment of the present invention.
 まず、画像入力部に異なる2以上の視点から撮影された2以上の画像が入力されたか否かがステップ301で判断される。 First, it is determined in step 301 whether or not two or more images taken from two or more different viewpoints are input to the image input unit.
 ここで、図4は、画像入力部に入力される異なる2以上の視点から撮影された2以上の画像の一例であり、右側の視点から撮影された右画像(図2の右)と左側の視点から撮影された左画像(図2の左)を示す図である。ステップ302では、図4に示した異なる2以上の視点から撮影された2以上の画像から視差マップを作成する。なお、図4の左画像と右画像は、視差が画像の横方向(水平方向)に発生しており、縦方向(垂直方向)にずれは生じていないものとする。 Here, FIG. 4 is an example of two or more images captured from two or more different viewpoints input to the image input unit, and a right image (right in FIG. 2) and a left image captured from the right viewpoint. It is a figure which shows the left image (left of FIG. 2) image | photographed from the viewpoint. In step 302, a parallax map is created from two or more images photographed from two or more different viewpoints shown in FIG. In the left image and the right image in FIG. 4, it is assumed that the parallax occurs in the horizontal direction (horizontal direction) of the image and no shift occurs in the vertical direction (vertical direction).
 このステップ302では、図2で示した視差マップ作成部2021が、左画像と右画像とに対してステレオマッチングを行って視差マップを生成する。 In step 302, the parallax map creation unit 2021 shown in FIG. 2 performs stereo matching on the left image and the right image to generate a parallax map.
 ステレオマッチングとは、異なる視点から撮影された2枚1組の画像を用い、左の画像のある領域が、右画像のどの領域に対応するかを特定して、各領域の3次元的な位置を推測する方法である。 Stereo matching uses a set of two images taken from different viewpoints, specifies which region of the left image corresponds to which region of the right image, and the three-dimensional position of each region It is a method to guess.
 具体的には、図4に示すように、例えば左画像を基準として、左画像上の画素(x1,y1)に対応する右画像上の対応画素(x2,y2)を抽出する。左画像上の画素(x1,y1)と右画像上の対応画素(x2,y2)との視差dは、d=x2-x1と計算でき、この視差dを、基準とした左画像の画素位置(x1,y1)に格納する。 Specifically, as shown in FIG. 4, for example, with the left image as a reference, corresponding pixels (x2, y2) on the right image corresponding to the pixels (x1, y1) on the left image are extracted. The parallax d between the pixel (x1, y1) on the left image and the corresponding pixel (x2, y2) on the right image can be calculated as d = x2-x1, and the pixel position of the left image with this parallax d as a reference Store in (x1, y1).
 視差dを、基準とした左画像の画素位置に格納した視差値を視差マップとする。なお、本実施の形態では、画像の1画素毎に視差値を算出しているが、画素が2×2乃至は3×3というように、いくつかの画素が方形状に集合した領域について視差値を算出してもよく、このように画素が集合した領域について視差値を算出する場合は、処理速度が向上し得ると考えられる。 The parallax value stored at the pixel position of the left image with the parallax d as a reference is taken as a parallax map. In the present embodiment, the parallax value is calculated for each pixel of the image. However, the parallax is calculated for a region in which several pixels are gathered in a square shape, such as 2 × 2 to 3 × 3. A value may be calculated, and when the parallax value is calculated for a region where pixels are gathered in this way, it is considered that the processing speed can be improved.
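For illustration only (the description defines only d = x2 - x1 stored at the left-image pixel position), a minimal block-matching version of this stereo matching could look like the following sketch. The window size, the search range, and the use of a sum-of-absolute-differences cost are assumptions, not details given in the text.

    import numpy as np

    def compute_parallax_map(left_gray, right_gray, window=5, max_disp=64):
        """Block-matching sketch: for each left-image pixel (x1, y1), search the
        same row of the right image for the best-matching block and store the
        parallax d = x2 - x1 at the left pixel position."""
        h, w = left_gray.shape
        half = window // 2
        dmap = np.zeros((h, w), dtype=np.float32)
        L = left_gray.astype(np.float32)
        R = right_gray.astype(np.float32)
        for y in range(half, h - half):
            for x1 in range(half, w - half):
                patch = L[y - half:y + half + 1, x1 - half:x1 + half + 1]
                best_cost, best_d = np.inf, 0
                for d in range(-max_disp, max_disp + 1):
                    x2 = x1 + d
                    if x2 - half < 0 or x2 + half >= w:
                        continue
                    cand = R[y - half:y + half + 1, x2 - half:x2 + half + 1]
                    cost = np.abs(patch - cand).sum()      # SAD matching cost
                    if cost < best_cost:
                        best_cost, best_d = cost, d
                dmap[y, x1] = best_d                       # parallax d = x2 - x1
        return dmap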
 この視差マップは、各画素について視差値が示されているが、単なる数値データであるから、ユーザ等が読んでも、視差値が実際の立体視用画像においてどのような立体感を生じ得るかということを認識することはできない。 This parallax map shows the parallax value for each pixel, but since it is just numerical data, what kind of stereoscopic effect the parallax value can produce in an actual stereoscopic image even when read by a user or the like? I can't recognize that.
 そこで、ステップ303では、図2で示した視差画像作成部2022が、ユーザ等が立体視用画像の立体感を認識できるように、視差マップの画素毎の視差値を輝度値に変換し、左画像の各画素に対応する視差値を輝度で表した画像である視差画像を作成する。 Therefore, in Step 303, the parallax image creation unit 2022 shown in FIG. 2 converts the parallax value for each pixel of the parallax map into a luminance value so that the user or the like can recognize the stereoscopic effect of the stereoscopic image, A parallax image, which is an image representing the parallax value corresponding to each pixel of the image in luminance, is created.
 視差値を輝度値に変換する方法は、種々考えられるが、視差値と輝度値との一対一の対応関係を記した対応表にしたがって変換するのが一般的である。 Various methods for converting the parallax value into the luminance value are conceivable. However, it is common to convert the parallax value according to a correspondence table in which a one-to-one correspondence between the parallax value and the luminance value is described.
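As a sketch only (the description says merely that a one-to-one correspondence table between parallax values and luminance values is used), the conversion could be implemented with a lookup table such as the one below. The linear scaling that builds the table, with large parallax mapped to low luminance as in the description of FIG. 5, is an assumption.

    import numpy as np

    def parallax_to_luminance(parallax_map, d_min, d_max):
        """Convert parallax values to 0-255 luminance through a lookup table.
        d_min and d_max are the integer bounds of the expected parallax values;
        the linear table here is only one possible correspondence."""
        d_range = max(d_max - d_min, 1)
        table = np.array([255 * (d_max - d) / d_range
                          for d in range(d_min, d_max + 1)], dtype=np.uint8)
        idx = np.clip(np.rint(parallax_map), d_min, d_max).astype(int) - d_min
        return table[idx]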
 ステップ304では、全画素について視差値から輝度値への変換処理が行われたか否かが判断され、全画素が変換されたのであれば、ステップ305で、当該変換によって作成された視差画像のファイルを、図2に記した記憶部205に記憶する。 In step 304, it is determined whether or not the conversion process from the parallax value to the luminance value has been performed for all the pixels. If all the pixels have been converted, the parallax image file created by the conversion in step 305 is determined. Is stored in the storage unit 205 shown in FIG.
 図5は、本発明の第1の実施の形態に係る視差画像の一例を示す図である。この図5では、視差値が大きい被写体は低輝度で、視差値が小さい被写体は高輝度で表現されている。 FIG. 5 is a diagram showing an example of a parallax image according to the first embodiment of the present invention. In FIG. 5, a subject with a large parallax value is represented with low luminance, and a subject with a small parallax value is represented with high luminance.
 図5では、画像上の全ての被写体は、モノクロームの濃淡で表現されている。しかし、各被写体の輪郭は不明瞭であり、視差値が同一な部分は同一の輝度で表現されるため、遠景の木立等の被写体の細部が全く把握できない。 In FIG. 5, all the subjects on the image are expressed in monochrome shades. However, the outline of each subject is unclear, and portions with the same parallax value are expressed with the same brightness, so that details of the subject such as trees in a distant view cannot be grasped at all.
 この図5を元の画像である図4の左画像と交互に比較することによって、元の画像と視差画像との対応を把握することも考えられる。しかし、2つの画像を見比べるのは煩雑であるし、画像を見比べて元の画像との対応を把握した上で、最終的に生成される立体視用画像の立体感を、元の画像の被写体と対比させて認識することは困難である。 It is also conceivable to grasp the correspondence between the original image and the parallax image by alternately comparing FIG. 5 with the left image of FIG. 4 which is the original image. However, comparing the two images is cumbersome, and after comparing the images to understand the correspondence with the original image, the stereoscopic effect of the stereoscopic image that is finally generated is changed to the subject of the original image. It is difficult to recognize it in contrast to
 そこで、ステップ306では、図2で示した画像合成部2023が、図4の左画像と視差画像とを画素毎に合成する。 Therefore, in step 306, the image composition unit 2023 shown in FIG. 2 synthesizes the left image and the parallax image in FIG. 4 for each pixel.
 このステップ306では、画像合成部2023は、左画像と視差画像とで、w1:w2、例えば1:9の重み付けをそれぞれの画像画素のRGB値に施して、両画像の合成を行っている。 In this step 306, the image composition unit 2023 performs weighting of w1: w2, for example, 1: 9, on the RGB values of the respective image pixels between the left image and the parallax image, and synthesizes both images.
In step 306, the left image and the parallax image are synthesized by making the weight w2 for the parallax image heavier than the weight w1 for the left image and calculating a weighted average according to the following equation (1), where a is the pixel value of the left image and b is the luminance value of the pixel of the parallax image corresponding to the pixel having the pixel value a. This weighted average is obtained for all the pixels of the parallax image and the left image.

   (a·w1 + b·w2) / (w1 + w2)    ... (1)

Here w1 < w2, and preferably w1 = 1 and w2 = 9.

In the case of a color image, each pixel of the left image has the RGB values aR, aG, and aB, so the weighted average is obtained for each of the RGB values of each pixel as in the following equation (2).

   (aR·w1 + b·w2) / (w1 + w2)
   (aG·w1 + b·w2) / (w1 + w2)   ... (2)
   (aB·w1 + b·w2) / (w1 + w2)
 なお、本実施の形態では、左画像の各1画素のRGB値について上記の重み付けを行っている。しかし、視差画像の視差値が2×2乃至は3×3というように、いくつかの画素が方形状に集合した領域について算出されている場合は、左画像のRGB値の重み付けも、視差画像の領域に対応する領域についてRGB値の重み付けをする。 In the present embodiment, the weighting is performed on the RGB value of each pixel of the left image. However, when the parallax value of the parallax image is calculated for a region where several pixels are gathered in a square shape, such as 2 × 2 to 3 × 3, the weighting of the RGB values of the left image is also the parallax image The RGB values are weighted for the area corresponding to this area.
 この2×2乃至は3×3というように、いくつかの画素が方形状に集合した領域についてRGB値の重み付けをする場合は、当該領域の各画素のRGB値の平均値を算出し、算出したRGB値の平均値に対して上記式(2)による重み付けをする。 When weighting RGB values for a region where several pixels are gathered in a square shape such as 2 × 2 to 3 × 3, the average value of the RGB values of each pixel in the region is calculated and calculated. The average value of the obtained RGB values is weighted by the above equation (2).
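As an illustrative sketch only, equations (1) and (2) amount to a per-channel weighted average of the left image and the luminance of the parallax image; the function below also includes the optional averaging of the left image's RGB values over 2×2 or 3×3 regions described above. The NumPy array layout and parameter names are assumptions.

    import numpy as np

    def pixel_mixing(left_rgb, parallax_gray, w1=1.0, w2=9.0, block=1):
        """Blend per equations (1) and (2): (a*w1 + b*w2) / (w1 + w2) on each
        RGB channel, with b taken from the parallax image's luminance.
        If block > 1, the left image's RGB values are first averaged over
        block x block regions, as described for region-based parallax values."""
        a = left_rgb.astype(np.float32)
        b = parallax_gray.astype(np.float32)[..., None]   # same b for R, G and B
        if block > 1:
            h, w, _ = a.shape
            for y in range(0, h, block):
                for x in range(0, w, block):
                    a[y:y + block, x:x + block] = a[y:y + block, x:x + block].mean(axis=(0, 1))
        out = (a * w1 + b * w2) / (w1 + w2)
        return np.clip(out, 0, 255).astype(np.uint8)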
 続くステップ307では、画像合成部2023は、全画素での合成処理が完了したか否かを判断し、全画素での合成処理が完了した場合は、ステップ308において、左画像と視差画像とを合成した画像を、図2に示した表示装置204に表示して、ステップ301~308の処理を完了する。 In the subsequent step 307, the image composition unit 2023 determines whether or not the composition processing for all pixels has been completed. If the composition processing for all pixels has been completed, in step 308, the left image and the parallax image are obtained. The synthesized image is displayed on the display device 204 shown in FIG. 2, and the processing in steps 301 to 308 is completed.
 図6は、本発明の第1の実施の形態において左画像と視差画像とを合成した画像の一例を示したものである。図6の画像には、輝度差によって視差値が認識できることに加えて、左画像の画素が合成によって混合され、左画像の色差成分も薄く表示されている。従って、図6の画像からは遠景の木立等の被写体の概略が把握できるようになっている。 FIG. 6 shows an example of an image obtained by synthesizing the left image and the parallax image in the first embodiment of the present invention. In the image of FIG. 6, in addition to recognizing the parallax value by the luminance difference, the pixels of the left image are mixed by synthesis, and the color difference component of the left image is also displayed lightly. Therefore, an outline of a subject such as a tree in a distant view can be grasped from the image of FIG.
 また、左画像と視差画像とを合成した画像は、レンチキュラーレンズを備えた表示機器のような特別な表示装置等を要するものではなく、通常の2Dモニタで表示できるビットマップ画像である。 Also, the image obtained by combining the left image and the parallax image is a bitmap image that can be displayed on a normal 2D monitor, without requiring a special display device such as a display device having a lenticular lens.
 ユーザ等は、2Dモニタの画面に表示された左画像と視差画像とを合成した画像を視認することにより、最終的に得られる立体視用画像の立体感を把握する。 The user or the like grasps the stereoscopic effect of the finally obtained stereoscopic image by visually recognizing an image obtained by combining the left image and the parallax image displayed on the screen of the 2D monitor.
 以上説明したように、本発明の第1の実施の形態によれば、左画像の各画素についての視差値を輝度に変換した視差画像と左画像とを合成した画像を作成することにより、通常の2Dモニタでも撮影画像から生成される立体視用画像の立体感を把握することができる。 As described above, according to the first embodiment of the present invention, by creating an image obtained by synthesizing the parallax image obtained by converting the parallax value for each pixel of the left image into luminance and the left image, Even in the 2D monitor, the stereoscopic effect of the stereoscopic image generated from the captured image can be grasped.
 [第1の実施の形態の変形例1]
 第1の実施の形態では、図3のステップ306の左画像と視差画像との合成において、左画像の画素のRGB値に対する重みをw1、視差画像の輝度値に対する重みをw2とし、一例としてw1:w2=1:9の重み付けをそれぞれの画像の画素のRGB値及び輝度値に施して、両画像の合成を行っていた。
[Modification 1 of the first embodiment]
In the first embodiment, in the synthesis of the left image and the parallax image in step 306 in FIG. 3, the weight for the RGB value of the pixel of the left image is w1, the weight for the luminance value of the parallax image is w2, : W2 = 1: 9 is applied to the RGB value and the luminance value of the pixels of each image to synthesize both images.
 この左画像と視差画像との合成における重み付けを、視差値に応じて変更するのが、以下で説明する第1の実施の形態の変形例1と同変形例2である。 The weights in the synthesis of the left image and the parallax image are changed according to the parallax value in the first modification and the second modification of the first embodiment described below.
 このうち、第1の実施の形態の変形例1は、立体視用カメラの左右のレンズの光軸が交差するクロスポイント付近の被写体に係る画素では左画像の重みw1を視差画像の重みw2に比して重くする。 Among these, in the first modification of the first embodiment, the weight w1 of the left image is changed to the weight w2 of the parallax image in the pixel related to the subject near the cross point where the optical axes of the left and right lenses of the stereoscopic camera intersect. Make it heavier.
 さらに、クロスポイントよりも手前又は奥の被写体に係る画素は、クロスポイントから離れているほど、視差画像の重みw2を重くしていき左画像の重みw1は軽くしていくというものである。 Furthermore, the pixel related to the subject in front of or behind the cross point increases the weight w2 of the parallax image and decreases the weight w1 of the left image as the distance from the cross point increases.
 視差値の絶対値は、クロスポイントに係る被写体の画素で最小の「0」となり、クロスポイントから離れた被写体の画素では、クロスポイントから離れるほど、視差値の絶対値は大きくなる。 The absolute value of the parallax value is the smallest “0” in the pixel of the subject related to the cross point, and the absolute value of the parallax value increases as the distance from the cross point increases in the pixel of the subject far from the cross point.
 このため、第1の実施の形態の変形例1では、視差値の絶対値が大きくなるに従って、視差画像の領域に対応する左画像の領域に含まれる画素のRGB値から得られる値に対する重みw1を軽くすると共に、視差画像の領域に含まれる画素の輝度値から得られる値に対する重みw2を重くしていく。 For this reason, in the first modification of the first embodiment, as the absolute value of the parallax value increases, the weight w1 for the value obtained from the RGB values of the pixels included in the left image area corresponding to the parallax image area , And the weight w2 for the value obtained from the luminance value of the pixel included in the parallax image area is increased.
Various methods of changing the weights according to the parallax value are conceivable. When the parallax value is d, the ratio of the weight w1 of the left image to the weight w2 of the parallax image is given by the following formula (3), where m is a predetermined coefficient determined statistically through experiments and is a positive real number, and A and B are positive real numbers satisfying A ≥ B and A + B = 10, each able to take a value from 0 to 10.

   left image weight w1 : parallax image weight w2 = A − m|d| : B + m|d|   ... (3)
 上述のように、クロスポイントに存在する被写体に係る画素の視差値は「0」で、その視差値の絶対値は最小となるから、上記式(3)を用いると、クロスポイントに存在する被写体における左画像の画素に対するRGB値への重みw1は「A」であり、クロスポイントに存在する被写体における視差画像の画素に対する輝度値への重みw2は「B」となる。 As described above, since the parallax value of the pixel related to the subject existing at the cross point is “0” and the absolute value of the parallax value is the minimum, using the above formula (3), the subject existing at the cross point The weight w1 to the RGB value for the pixel of the left image at “A” is “A”, and the weight w2 to the luminance value for the pixel of the parallax image at the subject existing at the cross point is “B”.
 上述のように、A及びBは、A≧BかつA+B=10の関係を有し、各々0~10の値をとり得る正の実数である。よって、一例として、A=9、B=1とすると、クロスポイントに存在する被写体ではw1=9、w2=1となり、左画像と視差画像とで、9:1の重み付けがそれぞれの画像画素のRGB値及び輝度値に施されて、両画像の合成が行われる。 As described above, A and B are positive real numbers that have a relationship of A ≧ B and A + B = 10 and can take values of 0 to 10, respectively. Therefore, as an example, if A = 9 and B = 1, w1 = 9 and w2 = 1 for the subject present at the cross point, and a weight of 9: 1 is assigned to each image pixel between the left image and the parallax image. The two values are synthesized by being applied to the RGB value and the luminance value.
 また、一例として、A=10、B=0とすると、クロスポイントに存在する被写体ではw1=10、w2=0となり、画像合成に際して、左画像の画素のみが使用され、視差画像の輝度値は、画像合成に際して用いられないことになる。 Also, as an example, if A = 10 and B = 0, the subject existing at the cross point is w1 = 10 and w2 = 0, and only the pixels of the left image are used for image synthesis, and the luminance value of the parallax image is This is not used for image synthesis.
 クロスポイントから離れた位置に存在する被写体に係る画素における視差値は、クロスポイントから離れるほど当該視差値の絶対値が増大する。よって、式(3)によると、相対的に左画像の重みw1は軽くなり、反対に視差画像の重みw2が相対的に重くなっていく。 The absolute value of the parallax value of the parallax value in the pixel related to the subject existing at a position away from the cross point increases as the distance from the cross point increases. Therefore, according to Expression (3), the weight w1 of the left image is relatively light, and the weight w2 of the parallax image is relatively heavy.
 また、上記の式(3)にはよらず、視差値と重み付けを対応させた対応表を予め用意し、この対応表(ルックアップテーブル)に従って視差値に応じて重み付けを変更してもよい。 Further, a correspondence table in which disparity values are associated with weights may be prepared in advance, and the weights may be changed according to the disparity values according to the correspondence table (lookup table), regardless of the above equation (3).
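For illustration only, formula (3) can be evaluated as below; the values chosen for A, B and m are placeholders (the description only constrains A ≥ B, A + B = 10, and m determined experimentally), and the clamping that keeps the weights within [0, A + B] is an added safeguard rather than part of the formula.

    def mixing_weights_variant1(d, A=9.0, B=1.0, m=0.1):
        """Formula (3): w1 : w2 = (A - m*|d|) : (B + m*|d|)."""
        w1 = A - m * abs(d)
        w2 = B + m * abs(d)
        w1 = min(max(w1, 0.0), A + B)   # safeguard, not part of formula (3)
        w2 = min(max(w2, 0.0), A + B)
        return w1, w2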
 この第1の実施の形態の変形例1によれば、クロスポイント付近の被写体においては、原画像である左画像の画素の重みw1を視差画像の画素の重みw2に比べて相対的に重くして左画像と視差画像とが合成されるので、クロスポイント付近の被写体の原画像での状態を確認することができる。また、クロスポイントから離れた位置に存在する被写体はクロスポイントから遠ざかるほど、原画像である左画像の画素の重みw1に比して、相対的に視差画像の画素の重みw2を重くして左画像と視差画像とが合成されるので、撮影画像から生成される立体視用画像の立体感の把握が容易となる。 According to the first modification of the first embodiment, in the subject near the cross point, the weight w1 of the pixel of the left image, which is the original image, is relatively heavier than the weight w2 of the pixel of the parallax image. Since the left image and the parallax image are combined, the state of the subject near the cross point in the original image can be confirmed. In addition, the farther away from the cross point the subject located at the position away from the cross point, the higher the pixel weight w2 of the parallax image compared to the left image pixel weight w1, which is the original image, and the left. Since the image and the parallax image are combined, it is easy to grasp the stereoscopic effect of the stereoscopic image generated from the captured image.
 [第1の実施の形態の変形例2]
 第1の実施の形態の変形例2は、カメラに近い被写体に係る画素では左画像の重みw1を視差画像の画素の重みw2に比して重くして、カメラから遠い被写体に係る画素ほど視差画像の重みw2を重くしていき左画像の重みw1は軽くしていくというものである。
[Modification 2 of the first embodiment]
In the second modification of the first embodiment, in the pixel related to the subject close to the camera, the weight w1 of the left image is heavier than the pixel weight w2 of the parallax image, and the pixel related to the subject farther from the camera is parallax. The weight w2 of the image is increased and the weight w1 of the left image is decreased.
 視差値は、カメラに最も近い被写体の画素で最大となり、カメラから離れた被写体の画素では、カメラから離れるほど視差値は小さくなる。 The parallax value is the maximum at the pixel of the subject closest to the camera, and the parallax value becomes smaller at the pixel of the subject far from the camera as the distance from the camera is longer.
 このため、第1の実施の形態の変形例2では、視差値が小さくなるに従って、視差画像の領域に対応する左画像の領域に含まれる画素のRGB値から得られる値に対する重みw1を軽くすると共に、視差画像の領域に含まれる画素の輝度値から得られる値に対する重みw2を重くしていく。 For this reason, in the second modification of the first embodiment, as the parallax value becomes smaller, the weight w1 for the value obtained from the RGB value of the pixel included in the region of the left image corresponding to the region of the parallax image is reduced. At the same time, the weight w2 for the value obtained from the luminance value of the pixel included in the parallax image area is increased.
There are various methods of changing the weights according to the parallax value. When the parallax value is d, the ratio of the weight w1 of the left image to the weight w2 of the parallax image is given by the following formula (4), where p is a predetermined coefficient determined statistically through experiments and is a positive real number, and dn is the parallax value of the pixel of the subject closest to the camera. A and B are positive real numbers satisfying A ≥ B and A + B = 10, each able to take a value from 0 to 10.

   left image weight w1 : parallax image weight w2 = A − p|d − dn| : B + p|d − dn|   ... (4)
 カメラに最も近い被写体の視差値はdnであるから、式(4)の|d-dn|は「0」となる。このためカメラに最も近い被写体における左画像の画素に対するRGB値への重みw1は「A」であり、カメラに最も近い被写体における視差画像の画素に対する輝度値への重みw2は「B」となる。 Since the parallax value of the subject closest to the camera is dn, | d−dn | in Expression (4) is “0”. Therefore, the weight w1 to the RGB value for the pixel of the left image in the subject closest to the camera is “A”, and the weight w2 to the luminance value for the pixel of the parallax image in the subject closest to the camera is “B”.
 上述のように、A及びBは、A≧BかつA+B=10の関係を有する各々0~10の値をとり得る正の実数である。よって、一例として、A=9、B=1とすると、カメラに最も近い被写体ではw1=9、w2=1となり、左画像と視差画像とで、9:1の重み付けがそれぞれの画像画素のRGB値及び輝度値に施されて、両画像の合成が行われる。 As described above, A and B are positive real numbers that can take values of 0 to 10 each having a relationship of A ≧ B and A + B = 10. Therefore, as an example, when A = 9 and B = 1, w1 = 9 and w2 = 1 for the subject closest to the camera, and a 9: 1 weighting is applied to the RGB of each image pixel between the left image and the parallax image. The two images are combined by applying the value and the luminance value.
 また、一例として、A=10、B=0とすると、カメラに最も近い被写体ではw1=10、w2=0となり、画像合成に際して、左画像の画素のみが使用され、視差画像の輝度値は、画像合成に際して用いられないことになる。 As an example, if A = 10 and B = 0, w1 = 10 and w2 = 0 for the subject closest to the camera, and only the pixels of the left image are used for image synthesis, and the luminance value of the parallax image is It will not be used for image composition.
 カメラから離れた位置に存在する被写体においては、上記式(4)中の|d-dn|が増大するので、相対的に左画像の重みw1は軽くなり、逆に視差画像の重みw2が相対的に重くなっていく。 In a subject that is located away from the camera, | d−dn | in the above equation (4) increases, so the weight w1 of the left image is relatively light, and the weight w2 of the parallax image is relatively small. It gets heavier.
 また、上記の式(4)にはよらず、視差値と重み付けを対応させた対応表(ルックアップテーブル)を予め用意し、この対応表に従って視差値に応じて重み付けを変更してもよい。 Further, a correspondence table (lookup table) in which disparity values and weights are associated with each other may be prepared in advance, and the weights may be changed according to the disparity values according to the correspondence table, regardless of the above equation (4).
 この第1の実施の形態の変形例2によれば、カメラに最も近い被写体においては、原画像である左画像の画素の重みw1を視差画像の画素の重みw2に比べて相対的に重くして左画像と視差画像とが合成されるので、カメラに最も近い被写体の原画像での状態を確認することができる。また、カメラから離れた位置に存在する被写体はカメラから遠ざかるほど、原画像である左画像の画素の重みw1に比して、相対的に視差画像の画素の重みw2を重くして左画像と視差画像とが合成されるので、撮影画像から生成される立体視用画像の立体感の把握が容易となる。 According to the second modification of the first embodiment, in the subject closest to the camera, the pixel weight w1 of the left image, which is the original image, is relatively heavier than the pixel weight w2 of the parallax image. Thus, since the left image and the parallax image are combined, the state of the original image of the subject closest to the camera can be confirmed. In addition, as the subject that is located farther from the camera is farther away from the camera, the pixel weight w2 of the parallax image is relatively heavier than the pixel weight w1 of the left image that is the original image. Since the parallax image is synthesized, it is easy to grasp the stereoscopic effect of the stereoscopic image generated from the captured image.
 [第2の実施の形態]
 続いて、本発明の第2の実施の形態について説明する。本発明の第2の実施の形態に係る視差画像表示装置は、図2に示した本発明の第1の実施の形態に係る視差画像表示装置とは画像合成部2023の機能が異なるが、その他の構成については同一である。
[Second Embodiment]
Next, a second embodiment of the present invention will be described. The parallax image display device according to the second embodiment of the present invention is different from the parallax image display device according to the first embodiment of the present invention shown in FIG. The configuration is the same.
 次に、図7にしたがって本発明の第2の実施の形態に係る視差画像表示装置の処理について説明する。図7は、本発明の第2の実施の形態に係る視差画像表示装置の処理を示すフローチャートである。 Next, processing of the parallax image display device according to the second embodiment of the present invention will be described with reference to FIG. FIG. 7 is a flowchart showing processing of the parallax image display device according to the second embodiment of the present invention.
 このうち、ステップ701~705までの処理は、本発明の第1の実施の形態における図3のステップ301~305までの処理と同様なので、説明を省略する。 Of these, the processing from step 701 to 705 is the same as the processing from step 301 to 305 in FIG. 3 in the first embodiment of the present invention, and thus the description thereof is omitted.
 視差画像が作成された後の工程であるステップ706では、画像合成部2023は、左画像の色差成分を視差画像に合成する。本発明の第1の実施の形態では、左画像と視差画像とで、1:9の重み付けを両画像の画素のRGB値に施して、両画像の合成を行ったが、本発明の第2の実施の形態では、左画像から色差成分のみを抽出し、抽出した色差成分を視差画像に合成している。 In step 706, which is a process after the parallax image is created, the image synthesis unit 2023 synthesizes the color difference component of the left image with the parallax image. In the first embodiment of the present invention, the left image and the parallax image are weighted 1: 9 to the RGB values of the pixels of both images, and the two images are combined. In this embodiment, only the color difference component is extracted from the left image, and the extracted color difference component is combined with the parallax image.
 本実施の形態における色差は、元々輝度値Yが含まれているRGBの各画素値のうち、R及びBの各々の画素値から輝度値Yを減算したR-Y及びB-Yである。 The color difference in the present embodiment is RY and BY obtained by subtracting the luminance value Y from the R and B pixel values among the RGB pixel values originally including the luminance value Y.
The relationship between the color differences R−Y and B−Y and the luminance value Y is expressed, for example in the YCrCb system, by the following equation (5), where the color difference B−Y is denoted Cb and the color difference R−Y is denoted Cr.

   Y  =  0.29900R + 0.58700G + 0.11400B
   Cr =  0.50000R − 0.41869G − 0.08131B   ... (5)
   Cb = −0.16874R − 0.33126G + 0.50000B

From equation (5), the following equation (6) for converting from YCrCb to RGB is obtained.

   R = Y + 1.40200Cr
   G = Y − 0.34414Cb − 0.71414Cr          ... (6)
   B = Y + 1.77200Cb
B = Y + 1.77200 Cb
 一方、視差画像は、輝度値のみで表現されたモノクロ画像である。この視差画像のある画素が輝度値Y’を有し、この輝度値Y’を有する視差画像の画素に対応する左画像の画素の色差が、式(6)中のCr及びCbであるとする。 On the other hand, the parallax image is a monochrome image expressed only by luminance values. A pixel in the parallax image has a luminance value Y ′, and the color difference between the pixels of the left image corresponding to the pixel of the parallax image having the luminance value Y ′ is Cr and Cb in Equation (6). .
 ステップ706では、画像合成部2023は、画素毎に上記式(6)の輝度値Yに、視差画像の輝度値であるY’を代入することで、左画像から抽出した色差成分と視差画像とを合成している。 In step 706, the image composition unit 2023 substitutes Y ′, which is the luminance value of the parallax image, for the luminance value Y of the above equation (6) for each pixel, thereby extracting the color difference component and the parallax image extracted from the left image. Is synthesized.
 続くステップ707では、画像合成部2023は、全画素での合成処理が完了したか否かを判断し、全画素での合成処理が完了した場合は、ステップ708において、左画像の色差成分と視差画像とを合成した画像を、図2に示した表示装置204に表示して、ステップ701~708の処理を完了する。 In the subsequent step 707, the image composition unit 2023 determines whether or not the composition processing has been completed for all pixels. If the composition processing has been completed for all pixels, in step 708, the color difference component and the parallax of the left image are determined. The image combined with the image is displayed on the display device 204 shown in FIG. 2, and the processing in steps 701 to 708 is completed.
 図8は、本発明の第2の実施の形態において左画像の色差成分と視差画像とを合成した画像の一例を示したものである。各画素の輝度の違いから視差値が認識できることに加えて、第1の実施の形態の場合よりも、左画像の色差成分が明瞭に表示されるので、各被写体の色の違いを認識でき、これによって各被写体の概略が把握できるようになっている。 FIG. 8 shows an example of an image obtained by synthesizing the color difference component of the left image and the parallax image in the second embodiment of the present invention. In addition to recognizing the parallax value from the difference in luminance of each pixel, the color difference component of the left image is displayed more clearly than in the case of the first embodiment, so that the difference in color of each subject can be recognized, As a result, the outline of each subject can be grasped.
 また、本実施の形態では、左画像の各1画素のRGB値について色差を抽出しているが、視差画像の視差値が2×2乃至は3×3というように、いくつかの画素が方形状に集合した領域について算出されている場合は、左画像から抽出する色差も、視差画像の領域に対応する領域について抽出する。 In this embodiment, the color difference is extracted for the RGB value of each pixel of the left image. However, the number of pixels of the parallax image is 2 × 2 to 3 × 3. In the case where the calculation is performed for a region gathered in a shape, the color difference extracted from the left image is also extracted for a region corresponding to the region of the parallax image.
 この2×2乃至は3×3というように、いくつかの画素が方形状に集合した領域について色差を抽出する場合は、当該領域の各画素のRGB値の平均値を算出し、算出したRGB値の平均値から色差を抽出する。 When extracting a color difference for a region where several pixels are gathered in a square shape such as 2 × 2 to 3 × 3, an average value of RGB values of each pixel in the region is calculated, and the calculated RGB Color difference is extracted from the average value.
 左画像の色差成分と視差画像とを合成した画像は、レンチキュラーレンズを備えた表示機器のような特別な表示装置等を要するものではなく、通常の2Dモニタで表示できるビットマップ画像である。 The image obtained by synthesizing the color difference component and the parallax image of the left image does not require a special display device such as a display device including a lenticular lens, and is a bitmap image that can be displayed on a normal 2D monitor.
 ユーザ等は、2Dモニタの画面に表示された左画像の色差成分と視差画像とを合成した画像を視認することにより、最終的に得られる立体視用画像の立体感を把握する。 The user or the like grasps the stereoscopic effect of the finally obtained stereoscopic image by visually recognizing an image obtained by synthesizing the color difference component of the left image displayed on the screen of the 2D monitor and the parallax image.
 以上説明したように、本発明の第2の実施の形態によれば、左画像の各画素についての視差値を輝度に変換した視差画像と左画像色差成分とを合成した画像を作成することにより、通常の2Dモニタでも撮影画像から生成される立体視用画像の立体感を把握することができる。 As described above, according to the second embodiment of the present invention, by creating an image obtained by combining the parallax image obtained by converting the parallax value for each pixel of the left image into luminance and the left image color difference component. Even in a normal 2D monitor, the stereoscopic effect of the stereoscopic image generated from the captured image can be grasped.
[Third Embodiment]
Next, a third embodiment of the present invention will be described. The parallax image display device according to the third embodiment differs from the parallax image display device according to the first embodiment shown in FIG. 2 in the function of the parallax image creation unit 2022; the other components are the same.
Next, the processing of the parallax image display device according to the third embodiment will be described with reference to FIG. 9. FIG. 9 is a flowchart showing the processing of the parallax image display device according to the third embodiment of the present invention.
First, in step 901, it is determined whether two or more images captured from two or more different viewpoints have been input to the image input unit. As in the first embodiment, the input images are the two or more images captured from two or more different viewpoints shown in FIG. 4.
Next, in step 902, the parallax map creation unit 2021 shown in FIG. 2 creates a parallax map from the left image and the right image by stereo matching, as in the first embodiment. It is assumed that the parallax between the left image and the right image in FIG. 4 occurs in the horizontal direction of the images and that there is no displacement in the vertical direction.
In step 903, the parallax image creation unit 2022 shown in FIG. 2 uses the parallax map to generate, so that the user can recognize the stereoscopic effect of the stereoscopic image, an image in which the parallax value corresponding to each pixel of the left image is represented by a color, for example a color difference value, with the color difference value of each pixel placed at the position of the corresponding pixel in the left image. This image is referred to as the parallax color image.
Alternatively, separately from the parallax image creation unit 2022, a color determination unit (not shown) that determines the color of the corresponding region of the left image according to the parallax value of each region of the parallax map may be provided, and the parallax image creation unit 2022 may create a parallax color image in which the color of each region of the left image is changed to the color determined by the color determination unit.
Various methods of converting parallax values into color difference values are conceivable; here, the conversion is performed according to a correspondence table that records the correspondence between parallax values and color difference values. In this embodiment, large parallax values are rendered in red, and the color shifts from red to blue as the parallax value decreases; the correspondence table therefore associates large parallax values with red color differences and progressively associates smaller parallax values with color differences in which blue is emphasized.
In this case, the parallax values may be classified into predetermined ranges, a correspondence table associating a color with the parallax values of each classified range may be provided, and the color corresponding to the parallax value of a region of the left image may be determined on the basis of the correspondence table.
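A sketch of such a range-based lookup is given below. The range boundaries and Cb/Cr values are illustrative assumptions, not values taken from the specification; the only constraint stated above is that large parallax maps toward red and small parallax toward blue.

```python
# Hypothetical correspondence table: parallax range -> (Cb, Cr) color difference.
# Larger parallax -> red (positive Cr), smaller parallax -> blue (positive Cb).
PARALLAX_TO_CHROMA = [
    (6, None, (-40, 90)),    # parallax >= 6      : strong red
    (3, 6,    (-20, 50)),    # 3 <= parallax < 6  : weak red
    (0, 3,    (  0,  0)),    # 0 <= parallax < 3  : neutral
    (-3, 0,   ( 50, -20)),   # -3 <= parallax < 0 : weak blue
    (None, -3, ( 90, -40)),  # parallax < -3      : strong blue
]

def chroma_for_parallax(d):
    """Return the (Cb, Cr) pair for a parallax value d using the table above."""
    for lo, hi, chroma in PARALLAX_TO_CHROMA:
        if (lo is None or d >= lo) and (hi is None or d < hi):
            return chroma
    return (0, 0)
```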
In step 904, the parallax image creation unit 2022 determines whether the conversion from parallax values to color difference values has been performed for all pixels. If all pixels have been converted, then in step 905 the file of the parallax color image created by the conversion is stored in the storage unit 205 shown in FIG. 2.
Even in a parallax color image in which the parallax values are expressed as color differences, the features of the left image may be observable to some extent. However, in order to grasp the details of subjects such as a grove of trees in the distance, it is desirable to combine the left image with the parallax color image in which the parallax is expressed as a color difference, as in the first embodiment.
In step 906, as in the first embodiment, the image composition unit 2023 shown in FIG. 2 combines the two images by applying a 1:9 weighting to the RGB values of each pixel of the left image and the parallax image, respectively; however, a weight ratio other than 1:9 may be used.
In this embodiment, the above weighting is applied to the RGB values of each single pixel of the left image. However, when the parallax values of the parallax image are calculated for square regions of several pixels, such as 2×2 or 3×3, the RGB values of the left image are also weighted for the regions corresponding to the regions of the parallax image.
When weighting the RGB values of such a square region of several pixels, such as 2×2 or 3×3, the average of the RGB values of the pixels in the region is calculated, and the weighting of equation (2) above is applied to that average.
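The weighted blend can be sketched as follows. Equation (2) itself appears earlier in the document; here it is assumed to be a simple per-channel weighted sum, and the array names are hypothetical.

```python
import numpy as np

def blend_images(left_rgb, parallax_rgb, w_left=0.1, w_parallax=0.9, block=1):
    """Blend the left image and the parallax (color) image per pixel.

    Assumes equation (2) is a per-channel weighted sum:
        out = w_left * left + w_parallax * parallax
    When the parallax values were computed for square blocks (e.g. 2x2, 3x3),
    the left image's RGB values are first averaged over each block.
    """
    left = left_rgb.astype(np.float64)
    if block > 1:
        h, w, _ = left.shape
        for y in range(0, h, block):
            for x in range(0, w, block):
                left[y:y+block, x:x+block] = left[y:y+block, x:x+block].mean(axis=(0, 1))
    out = w_left * left + w_parallax * parallax_rgb.astype(np.float64)
    return np.clip(out, 0, 255).astype(np.uint8)
```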
In the subsequent step 907, the image composition unit 2023 determines whether the composition processing has been completed for all pixels. If it has, then in step 908 the image obtained by combining the left image with the parallax color image is displayed on the display device 204 shown in FIG. 2, and the processing of steps 901 to 908 is complete.
FIG. 10 shows an example of an image obtained by combining the left image with the parallax color image in the third embodiment of the present invention. In the actual image, the dotted area in the foreground, which is the region with the largest parallax value, is rendered in red, and the recessed areas with small parallax values are rendered in blue.
In addition to the parallax values being recognizable from the color differences, the pixels of the left image are mixed in by the composition and are therefore also displayed, so that the outline of subjects such as a grove of trees in the distance can be grasped.
Furthermore, the image obtained by combining the left image with the parallax color image does not require any special display device such as a display equipped with a lenticular lens; it is a bitmap image that can be displayed on an ordinary 2D monitor.
By viewing the image obtained by combining the left image with the parallax color image on the screen of a 2D monitor, the user can grasp the stereoscopic effect of the stereoscopic image that will ultimately be obtained.
As described above, according to the third embodiment of the present invention, by creating an image that combines the left image with a parallax color image in which the parallax value of each pixel of the left image has been converted into a color difference, the stereoscopic effect of the stereoscopic image generated from the captured images can be grasped even on an ordinary 2D monitor.
Various modifications of this third embodiment, such as the following, are also possible.
First, the image composition unit 2023 in FIG. 2 may extract the color difference components of the pixels contained in each region of the left image and combine each region of the parallax color image with the color difference components of the corresponding region of the left image.
The image composition unit 2023 in FIG. 2 may also vary the weight w1 of the pixels of the left image, which is the original image, and the weight w2 of the pixels of the parallax color image according to the parallax value, as in modification 1 or modification 2 of the first embodiment.
A parallax luminance image in which the parallax values of the regions of the parallax map are expressed as luminance may also be created, and the image composition unit 2023 in FIG. 2 may combine the parallax luminance image with the parallax color image.
Further, a resolution determination unit that determines the resolution of the corresponding region of the left image according to the parallax value of each region of the parallax map, and a parallax resolution image creation unit that creates a parallax resolution image in which the resolution of each region of the left image is changed to the resolution determined by the resolution determination unit, may additionally be provided, and the image composition unit 2023 in FIG. 2 may combine the parallax resolution image with the parallax color image.
The resolution determination unit may lower the resolution of a region of the left image as the parallax value corresponding to that region in the parallax map decreases, or it may lower the resolution of a region of the left image as the absolute value of the parallax value corresponding to that region in the parallax map increases.
The resolution determination unit may also classify the parallax values into predetermined ranges and hold a correspondence table associating a resolution with the parallax values of each classified range; the parallax value associated in the parallax map with the position of a region of the left image is taken as the parallax value of that region, and the resolution corresponding to the parallax value of the region of the left image is determined on the basis of the correspondence table.
Various forms of the correspondence table are conceivable: a range containing small parallax values may be associated with a lower resolution the smaller the parallax values it contains, or a range containing parallax values of large absolute value may be associated with a lower resolution the larger the absolute values it contains.
Furthermore, the above resolution determination unit may be replaced by a sharpness determination unit that determines the sharpness of the corresponding region of the left image according to the parallax value of each region of the parallax map.
In addition, a cross point image creation unit may be further provided that converts the parallax values of the regions of the parallax map into luminance values, identifies the positions of regions in the parallax map whose parallax values have an absolute value within a threshold, and colors the identified regions with a predetermined color to create a cross point image; the image composition unit 2023 in FIG. 2 may then combine the cross point image with the parallax color image.
Furthermore, a maximum parallax absolute value image creation unit may be further provided that converts the parallax values of the regions of the parallax map into luminance values, identifies the position of the region in the parallax map whose parallax value has the largest absolute value, and colors that region with different colors depending on whether its parallax value is positive or negative to create a maximum parallax absolute value image; the image composition unit 2023 in FIG. 2 may then combine the maximum parallax absolute value image with the parallax color image.
[Fourth Embodiment]
Next, a fourth embodiment of the present invention will be described. The parallax image display device according to the fourth embodiment differs from the parallax image display device according to the first embodiment shown in FIG. 2 in the functions of the parallax image creation unit 2022 and the image composition unit 2023; the other components are the same.
Next, the processing of the parallax image display device according to the fourth embodiment will be described with reference to FIG. 11. FIG. 11 is a flowchart showing the processing of the parallax image display device according to the fourth embodiment of the present invention.
First, in step 1101, it is determined whether a right image and a left image captured from two or more different viewpoints have been input to the image input unit. As in the first embodiment, the input images are the two or more images captured from two or more different viewpoints shown in FIG. 4.
Next, in step 1102, the parallax map creation unit 2021 shown in FIG. 2 creates a parallax map from the left image and the right image by stereo matching, as in the first embodiment. It is assumed that the parallax between the left image and the right image in FIG. 4 occurs in the horizontal direction of the images and that there is no displacement in the vertical direction.
In step 1103, the parallax image creation unit 2022 shown in FIG. 2 converts the resolution of the left image pixel by pixel according to the parallax values of the parallax map so that the user can recognize the stereoscopic effect of the stereoscopic image. Specifically, an image is generated in which the resolution of pixels with large parallax values, i.e. pixels belonging to subjects close to the camera, is kept high, and the resolution of pixels with small parallax values, i.e. pixels belonging to subjects far from the camera, is lowered. This image is referred to as the parallax resolution image.
Alternatively, separately from the parallax image creation unit 2022, a resolution determination unit (not shown) that determines the resolution of the corresponding region of the left image according to the parallax value of each region of the parallax map may be provided, and the parallax image creation unit 2022 may create a parallax resolution image in which the resolution of each region of the left image is changed to the resolution determined by the resolution determination unit.
As a concrete method of changing the resolution, mosaicking is used: the image is divided into square blocks of, for example, 3×3, 5×5, or 7×7 pixels, and the pixel values in each block are replaced with the average of the pixel values in that block. In regions with large parallax values, where the resolution should be kept high, the square blocks are made small; in regions with small parallax values, where the resolution should be lowered, the square blocks are made large.
When mosaicking pixels according to the parallax value, it is preferable to start with the regions whose resolution should be kept high, i.e. the regions with small mosaic blocks, and then proceed in order to the regions with larger mosaic blocks. This is because if mosaicking were started from the regions with large blocks, pixels would immediately be smoothed over a wide area.
The correspondence between parallax values and mosaic block sizes is given, for example, by a correspondence table associating the two.
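A sketch of parallax-dependent mosaicking along these lines is shown below. The block-size table and array names are illustrative assumptions; the only constraints taken from the text are that larger parallax uses smaller blocks and that small blocks are applied before large ones.

```python
import numpy as np

# Hypothetical correspondence table: minimum parallax value -> mosaic block size.
# Larger parallax (closer subject) -> smaller block -> higher apparent resolution.
BLOCK_FOR_PARALLAX = [(6, 1), (3, 3), (0, 5), (-1000, 7)]

def block_size(d):
    for lo, size in BLOCK_FOR_PARALLAX:
        if d >= lo:
            return size
    return BLOCK_FOR_PARALLAX[-1][1]

def parallax_mosaic(left_rgb, parallax_map):
    """Create a parallax resolution image by mosaicking the left image.

    Block sizes are processed from smallest to largest, as the text recommends,
    so that wide areas are not smoothed prematurely.
    """
    out = left_rgb.astype(np.float64).copy()
    h, w, _ = out.shape
    for size in sorted({size for _, size in BLOCK_FOR_PARALLAX}):  # small first
        if size == 1:
            continue                      # block of 1 leaves pixels unchanged
        for y in range(0, h, size):
            for x in range(0, w, size):
                d = parallax_map[y:y+size, x:x+size].mean()
                if block_size(d) == size:
                    out[y:y+size, x:x+size] = out[y:y+size, x:x+size].mean(axis=(0, 1))
    return np.clip(out, 0, 255).astype(np.uint8)
```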
Pixels near the cross point can also be displayed clearly by raising the resolution of regions centered on pixels whose parallax values have a small absolute value and lowering the resolution of regions centered on pixels whose parallax values have a large absolute value.
In step 1104, the parallax image creation unit 2022 determines whether all pixels have been processed. If they have, then in step 1105 the file of the parallax resolution image, in which the resolution of the left image has been varied according to the parallax values, is stored in the storage unit 205 shown in FIG. 2.
Even in a parallax resolution image in which the parallax values are expressed as differences in resolution, the features of the left image may be observable to some extent. However, in order to grasp the details of subjects such as leaves of trees, it is desirable to combine the color difference components of the left image with the parallax resolution image, in which the parallax values are expressed as differences in resolution, as in the second embodiment.
In step 1106, which follows the creation of the parallax resolution image, the image composition unit 2023 combines the color difference components of the left image with the parallax resolution image pixel by pixel.
In this embodiment, the color difference is extracted from the RGB values of each single pixel of the left image. However, when the parallax values of the parallax image are calculated for square regions of several pixels, such as 2×2 or 3×3, the color difference extracted from the left image is also extracted for the regions corresponding to the regions of the parallax image.
When extracting the color difference for such a square region of several pixels, such as 2×2 or 3×3, the average of the RGB values of the pixels in the region is calculated, and the color difference is extracted from that average.
In step 1106 the color difference components of the left image are combined with the parallax resolution image, but the pixels of the left image may instead be combined with the pixels of the parallax resolution image, as in step 306 described in the first embodiment, its modification 1, or its modification 2.
In the subsequent step 1107, the image composition unit 2023 determines whether the composition processing has been completed for all pixels. If it has, then in step 1108 the image obtained by combining the color difference components of the original left image with the parallax resolution image is displayed on the display device 204 shown in FIG. 2, and the processing of steps 1101 to 1108 is complete.
FIG. 12 shows an example of an image, in the fourth embodiment of the present invention, obtained by combining the color difference components of the left image with a parallax resolution image in which the resolution of the left image has been varied according to the parallax values. In addition to the parallax values being recognizable from the differences in resolution, the color difference components of the original left image are also displayed, so that the color differences between subjects can be recognized and the outline of each subject can be grasped.
Furthermore, the image obtained by combining the color difference components of the original left image with the parallax resolution image does not require any special display device such as a display equipped with a lenticular lens; it is a bitmap image that can be displayed on an ordinary 2D monitor.
By viewing the image obtained by combining the color difference components of the original left image with the parallax resolution image on the screen of a 2D monitor, the user can grasp the stereoscopic effect of the stereoscopic image that will ultimately be obtained.
Although the fourth embodiment of the present invention changes the resolution of the image, the sharpness of the image may be changed instead. In this case, separately from the parallax image creation unit 2022, a sharpness determination unit (not shown) that determines the sharpness of the corresponding region of the left image according to the parallax value of each region of the parallax map may be provided, and the parallax image creation unit 2022 may create a parallax sharpness image in which the sharpness of each region of the left image is changed to the sharpness determined by the sharpness determination unit.
The sharpness of the image is changed by, for example, Gaussian blurring, which smooths the image using a Gaussian function.
In regions with large parallax values, where the sharpness should be kept high, each such region is taken as the center pixel, and the range of surrounding pixels used when computing the Gaussian-weighted average of the pixel values of the center pixel and its surrounding pixels is made small.
Conversely, in regions with small parallax values, where the sharpness should be lowered, each such region is taken as the center pixel, and the range of surrounding pixels used when computing the Gaussian-weighted average of the pixel values of the center pixel and its surrounding pixels is made large.
Pixels near the cross point can also be displayed clearly by raising the sharpness of pixel regions whose parallax values have a small absolute value and lowering the sharpness of pixel regions whose parallax values have a large absolute value.
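A sketch of parallax-dependent sharpness along these lines is shown below, using SciPy's Gaussian filter as a stand-in for the Gaussian-weighted average described above. The mapping from parallax bands to blur radii is an illustrative assumption.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def parallax_sharpness(left_gray, parallax_map, sigmas=(0.0, 1.0, 2.0, 4.0)):
    """Create a parallax sharpness image: large parallax (near subjects) stays
    sharp, small parallax (far subjects) is blurred more strongly.

    left_gray    : H x W float array (single channel, for brevity)
    parallax_map : H x W float array of parallax values
    sigmas       : assumed blur radii, from sharpest to most blurred
    """
    # Assign each pixel a band by splitting the parallax range into equal bins.
    bins = np.linspace(parallax_map.min(), parallax_map.max(), len(sigmas) + 1)[1:-1]
    levels = np.digitize(parallax_map, bins)   # 0 = smallest parallax band
    # Precompute one blurred copy per sigma and pick per pixel.
    blurred = [left_gray if s == 0 else gaussian_filter(left_gray, sigma=s)
               for s in sigmas]
    out = np.zeros_like(left_gray)
    for i, img in enumerate(blurred):
        # Larger parallax -> smaller sigma, so reverse the band order.
        mask = levels == (len(sigmas) - 1 - i)
        out[mask] = img[mask]
    return out
```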
As described above, according to the fourth embodiment of the present invention, by creating an image that combines the color difference components of the left image with a parallax resolution image in which the parallax value of each pixel of the left image has been converted into a difference in resolution, the stereoscopic effect of the stereoscopic image generated from the left image can be grasped even on an ordinary 2D monitor.
[Modification of the Fourth Embodiment]
In the fourth embodiment, in step 1103 of FIG. 11, the parallax image creation unit 2022 shown in FIG. 2 converted the resolution of the left image pixel by pixel according to the parallax values of the parallax map so that the user could recognize the stereoscopic effect of the stereoscopic image. Specifically, an image was generated in which the resolution of pixels with large parallax values, i.e. pixels belonging to subjects close to the camera, was kept high, and the resolution of pixels with small parallax values, i.e. pixels belonging to subjects far from the camera, was lowered, and this image was taken as the parallax resolution image.
In this modification of the fourth embodiment, the parallax image creation unit 2022 in FIG. 2 classifies the parallax values into predetermined ranges and holds a correspondence table in which a resolution is set in advance for each classified range.
The parallax image creation unit 2022 also identifies the parallax value corresponding to a pixel of the left image from the parallax map, refers to the above correspondence table to identify the resolution corresponding to the identified parallax value, and changes the resolution of that pixel of the left image to the identified resolution.
As a concrete method of changing the resolution, mosaicking is used, as in the fourth embodiment: the image is divided into square blocks of, for example, 3×3, 5×5, or 7×7 pixels, and the pixel values in each block are replaced with the average of the pixel values in that block. In regions with large parallax values, where the resolution should be kept high, the square blocks are made small; in regions with small parallax values, where the resolution should be lowered, the square blocks are made large.
In this modification of the fourth embodiment, however, the resolution of each pixel of the left image is changed by applying the mosaic block size corresponding to the range to which that pixel's parallax value belongs: for example, pixels of the left image with parallax values of 0 to 2 are mosaicked with 5×5 blocks, and pixels with parallax values of 3 to 5 are mosaicked with 3×3 blocks.
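Expressed as a lookup, the example values above might be written as follows; the entry beyond those two ranges and the fallback are hypothetical additions for completeness.

```python
# Correspondence table from the example above, plus an assumed entry for
# larger parallax values: (low, high) parallax range -> mosaic block size.
RANGE_TO_BLOCK = {
    (0, 2): 5,   # stated in the text
    (3, 5): 3,   # stated in the text
    (6, 99): 1,  # assumed: largest parallax kept at full resolution
}

def block_for(d):
    for (lo, hi), size in RANGE_TO_BLOCK.items():
        if lo <= d <= hi:
            return size
    return 7     # assumed fallback for parallax values outside the table
```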
When mosaicking pixels according to these ranges of parallax values, it is preferable to start with the regions whose resolution should be kept high, i.e. the regions with small mosaic blocks, and then proceed in order to the regions with larger mosaic blocks. This is because if mosaicking were started from the regions with large blocks, pixels would immediately be smoothed over a wide area.
The correspondence between a range of parallax values and the mosaic block size that determines the resolution is given by a correspondence table associating the two, as described above.
By associating ranges containing larger parallax values with smaller mosaic block sizes and ranges containing smaller parallax values with larger mosaic block sizes in this correspondence table, an image can be generated and displayed in which the resolution of pixels belonging to subjects close to the camera is high and the resolution of pixels belonging to subjects far from the camera decreases stepwise with distance from the camera.
Alternatively, by associating ranges containing parallax values of small absolute value with smaller mosaic block sizes and ranges containing parallax values of large absolute value with larger mosaic block sizes in the correspondence table, pixels near the cross point can be displayed clearly while the resolution of subjects away from the cross point decreases stepwise with distance from the cross point.
The modification of the fourth embodiment described above assumed that the parallax map associates a parallax value with each pixel of the left image, i.e. that the parallax values of the parallax image are calculated pixel by pixel; however, the parallax values may instead be calculated for square regions of several pixels, such as 2×2 or 3×3.
Even when the parallax values are calculated for such square regions of several pixels, such as 2×2 or 3×3, the parallax image creation unit 2022 in FIG. 2 classifies the parallax values into predetermined ranges and holds a correspondence table in which a resolution is set in advance for each classified range.
The parallax image creation unit 2022 also identifies the parallax value corresponding to a region of the left image from the parallax map, refers to the above correspondence table to identify the resolution corresponding to the identified parallax value, and changes the resolution of that region of the left image to the identified resolution.
The processing from step 1103 of FIG. 11 onward in this modification is the same as in the fourth embodiment, and, as described for step 1106 of FIG. 11, the color difference components or the pixels of the left image may be combined with the image whose resolution has been varied according to the parallax values.
In this modification of the fourth embodiment, the parallax image creation unit 2022 in FIG. 2 may also classify the parallax values into predetermined ranges and hold a correspondence table in which a sharpness is set in advance for each classified range, identify the parallax value corresponding to a region of the left image from the parallax map, refer to the correspondence table to identify the sharpness corresponding to the identified parallax value, and change the sharpness of that region of the left image to the identified sharpness.
By associating ranges containing larger parallax values with higher sharpness and ranges containing smaller parallax values with lower sharpness in this correspondence table, an image can be generated and displayed in which the sharpness of pixels belonging to subjects close to the camera is high and the sharpness of pixels belonging to subjects far from the camera decreases stepwise with distance from the camera.
Alternatively, by associating ranges containing parallax values of small absolute value with higher sharpness and ranges containing parallax values of large absolute value with lower sharpness in the correspondence table, pixels near the cross point can be displayed clearly while the sharpness of subjects away from the cross point decreases stepwise with distance from the cross point.
As in the fourth embodiment, the sharpness of the image is changed by, for example, Gaussian blurring, which smooths the image using a Gaussian function.
In regions whose sharpness should be kept high, each such region is taken as the center pixel, and the range of surrounding pixels used when computing the Gaussian-weighted average of the pixel values of the center pixel and its surrounding pixels is made small.
Conversely, in regions whose sharpness should be lowered, each such region is taken as the center pixel, and the range of surrounding pixels used when computing the Gaussian-weighted average of the pixel values of the center pixel and its surrounding pixels is made large.
According to this modification of the fourth embodiment, changing the resolution of the image stepwise for each predetermined range of parallax values makes it easy to grasp the overall stereoscopic effect of the stereoscopic image generated from the left image.
[Fifth Embodiment]
Next, a fifth embodiment of the present invention will be described. The parallax image display device according to the fifth embodiment differs from the parallax image display device according to the first embodiment shown in FIG. 2 in the function of the parallax image creation unit 2022; the other components are the same.
Next, the processing of the parallax image display device according to the fifth embodiment will be described with reference to FIG. 13. FIG. 13 is a flowchart showing the processing of the parallax image display device according to the fifth embodiment of the present invention.
The processing of steps 1301 to 1304 is the same as the processing of steps 301 to 304 in FIG. 3 in the first embodiment of the present invention, and a description thereof is therefore omitted.
In step 1305, the parallax image creation unit 2022 identifies, on the parallax image created in step 1303, the pixels whose parallax values in the parallax map have an absolute value equal to or smaller than a predetermined threshold, and colors the identified pixels.
For example, if the threshold is 2, pixels whose actual parallax values are between -2 and 2 inclusive are colored with a predetermined color. Green is used in this embodiment, but the color may be a red or a blue as long as it is easily distinguished from the other pixels.
In the subsequent step 1306, the parallax image creation unit 2022 determines whether it has been checked for all pixels whether the parallax value is between -2 and 2 inclusive. When this check is complete, a cross point image, in which the pixels with parallax values between -2 and 2 inclusive are colored to indicate the cross point, is displayed on the display device 204 in FIG. 2.
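A sketch of this cross point marking is shown below; the threshold of 2 and the green marker color follow the example in the text, while the array names and the grayscale rendering of the parallax image are assumptions.

```python
import numpy as np

def crosspoint_image(parallax_luma, parallax_map, threshold=2,
                     marker_rgb=(0, 255, 0)):
    """Build a cross point image: the parallax image rendered as grayscale,
    with pixels whose |parallax| <= threshold colored (green by default)."""
    out = np.repeat(parallax_luma[..., None], 3, axis=-1).astype(np.uint8)
    near_crosspoint = np.abs(parallax_map) <= threshold
    out[near_crosspoint] = marker_rgb
    return out
```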
FIGS. 14 and 15 are cross point images in which the cross point is displayed according to the fifth embodiment of the present invention.
FIG. 14 shows the case where the cross point lies in the middle ground, and FIG. 15 shows the case where the cross point lies in the foreground.
In FIGS. 14 and 15, subjects with large parallax values are rendered at low luminance and subjects with small parallax values at high luminance. Apart from the cross point pixels, the images are rendered in monochrome shades, but the outlines of the subjects are indistinct, and parts with the same parallax value are rendered at the same luminance, so the details of subjects such as a grove of trees in the distance cannot be grasped at all.
One could grasp the correspondence between the original image and the cross point image by comparing FIG. 14 or FIG. 15 alternately with the left part of FIG. 4, which is the original image. However, comparing two images in this way is cumbersome, and even after comparing the images and grasping the correspondence with the original image, it is difficult to recognize the stereoscopic effect of the stereoscopic image that will ultimately be generated in relation to the subjects of the original image.
Therefore, in step 1308, the image composition unit 2023 shown in FIG. 2 combines the left image with the cross point image pixel by pixel.
In this step 1308, as in the first embodiment, the image composition unit 2023 combines the two images by applying a 1:9 weighting to the RGB values of the pixels of the left image and the cross point image, respectively.
In this embodiment, the above weighting is applied to the RGB values of each single pixel of the left image. However, when the parallax values of the parallax image are calculated for square regions of several pixels, such as 2×2 or 3×3, the RGB values of the left image are also weighted for the regions corresponding to the regions of the parallax image.
When weighting the RGB values of such a square region of several pixels, such as 2×2 or 3×3, the average of the RGB values of the pixels in the region is calculated, and the weighting of equation (2) above is applied to that average.
In the subsequent step 1309, the image composition unit 2023 determines whether the composition processing has been completed for all pixels. If it has, then in step 1310 the image obtained by combining the left image with the cross point image is displayed on the display device 204 shown in FIG. 2, and the processing of steps 1301 to 1310 is complete.
The image obtained by combining the left image with the cross point image does not require any special display device such as a display equipped with a lenticular lens; it is a bitmap image that can be displayed on an ordinary 2D monitor.
By viewing the image obtained by combining the left image with the cross point image on the screen of a 2D monitor, the user can grasp the stereoscopic effect of the stereoscopic image that will ultimately be obtained.
As described above, according to the fifth embodiment of the present invention, by creating a cross point image from a parallax image in which the parallax value of each pixel of the left image has been converted to luminance and combining the created cross point image with the left image, the stereoscopic effect of the stereoscopic image generated from the captured images can be grasped even on an ordinary 2D monitor.
[Sixth Embodiment]
Next, a sixth embodiment of the present invention will be described. The parallax image display device according to the sixth embodiment differs from the parallax image display device according to the first embodiment shown in FIG. 2 in the function of the parallax image creation unit 2022; the other components are the same.
Next, the processing of the parallax image display device according to the sixth embodiment will be described with reference to FIG. 16. FIG. 16 is a flowchart showing the processing of the parallax image display device according to the sixth embodiment of the present invention.
In FIG. 16, the processing of steps 1601 to 1604 is the same as the processing of steps 301 to 304 in FIG. 3 in the first embodiment of the present invention, and a description thereof is therefore omitted.
In step 1605, the parallax image creation unit 2022 identifies, on the parallax image created in step 1603 and based on the parallax map created in step 1602, the region of the left image where the absolute value of the parallax value is largest, and determines whether the identified region has the maximum parallax value in the pop-out direction, that is, whether it is part of the foreground of the left image.
The parallax value is 0 at the cross point, where the optical axes of the left and right lenses of the stereoscopic image capturing camera intersect, and the absolute value of the parallax value increases with distance from the cross point, that is, the closer the position is to the camera, or the farther it is from the camera beyond the cross point.
In the sixth embodiment of the present invention, positions where the parallax value is positive are regarded as being closer to the camera than the cross point, i.e. in the foreground, and positions where the parallax value is negative are regarded as being farther from the camera than the cross point, i.e. in the background.
If, in step 1605, the parallax value of the pixel with the largest absolute parallax value is positive, the parallax image creation unit 2022 determines that the pixel has the maximum parallax value in the pop-out direction, and in step 1606 the parallax image creation unit 2022 creates a maximum parallax absolute value image in which that pixel is colored with a predetermined color, red.
The color is not limited to red; any color that allows the region to be distinguished from the other regions may be used.
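A sketch of this marking is shown below; red for the pop-out direction and blue for the depth direction follow the text, while the array names and the grayscale rendering of the parallax image are assumptions.

```python
import numpy as np

def max_parallax_image(parallax_luma, parallax_map):
    """Build a maximum parallax absolute value image: a grayscale parallax image
    with the region of largest |parallax| colored red (positive parallax,
    pop-out direction) or blue (negative parallax, depth direction)."""
    out = np.repeat(parallax_luma[..., None], 3, axis=-1).astype(np.uint8)
    max_abs = np.abs(parallax_map).max()
    region = np.abs(parallax_map) == max_abs
    # The sign of the extreme parallax decides foreground (red) vs. background (blue).
    color = (255, 0, 0) if parallax_map[region].mean() > 0 else (0, 0, 255)
    out[region] = color
    return out
```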
In the subsequent step 1607, the parallax image creation unit 2022 determines whether all pixels have been processed. If they have, then in step 1608 the maximum parallax absolute value image, in which the region with the maximum parallax value in the pop-out direction is colored red, is displayed on the display device 204 shown in FIG. 2.
FIG. 17 is a maximum parallax absolute value image, in the sixth embodiment of the present invention, displaying the pixel region with the maximum parallax value in the pop-out direction.
In FIG. 17, subjects with large parallax values are rendered at low luminance and subjects with small parallax values at high luminance. Apart from the pixel region with the maximum parallax value in the pop-out direction, the image is rendered in monochrome shades, but the outlines of the subjects are indistinct, and parts with the same parallax value are rendered at the same luminance, so the details of subjects such as a grove of trees in the distance cannot be grasped at all.
One could grasp the correspondence between the original image and the maximum parallax absolute value image displaying the pixel region with the maximum parallax value in the pop-out direction by comparing FIG. 17 alternately with the left image of FIG. 4, which is the original image. However, comparing two images in this way is cumbersome, and even after comparing the images and grasping the correspondence with the original image, it is difficult to recognize the stereoscopic effect of the stereoscopic image that will ultimately be generated in relation to the subjects of the original image.
Therefore, in step 1609, the image composition unit 2023 shown in FIG. 2 combines the left image of FIG. 4 with the maximum parallax absolute value image displaying the pixel region with the maximum parallax value in the pop-out direction.
In this step 1609, as in the first embodiment, the image composition unit 2023 combines the two images by applying a 1:9 weighting to the RGB values of the pixels of the left image and the maximum parallax absolute value image displaying the pixel region with the maximum parallax value in the pop-out direction, respectively.
In this embodiment, the above weighting is applied to the RGB values of each single pixel of the left image. However, when the parallax values of the parallax image are calculated for square regions of several pixels, such as 2×2 or 3×3, the RGB values of the left image are also weighted for the regions corresponding to the regions of the parallax image.
When weighting the RGB values of such a square region of several pixels, such as 2×2 or 3×3, the average of the RGB values of the pixels in the region is calculated, and the weighting of equation (2) above is applied to that average.
In the subsequent step 1610, the image composition unit 2023 determines whether the composition processing has been completed for all pixels. If it has, then in step 1611 the image obtained by combining the left image with the maximum parallax absolute value image displaying the pixel region with the maximum parallax value in the pop-out direction is displayed on the display device 204 shown in FIG. 2, and the processing of steps 1601 to 1611 is complete.
If, in step 1605, the region with the largest absolute parallax value has its maximum parallax value in the depth direction, the processing of steps 1627 to 1631 is performed instead. This processing is the same as that of steps 1607 to 1611 described above, except that in step 1626 the region with the maximum parallax value in the depth direction is colored blue, a different color from that used in step 1606; a description thereof is therefore omitted.
FIG. 18 is a maximum parallax absolute value image, in the sixth embodiment of the present invention, displaying the region with the maximum parallax value in the depth direction. The region where the parallax value in the depth direction is largest is displayed in blue.
In the sixth embodiment of the present invention, it can thus be determined whether the region with the largest absolute parallax value lies in the pop-out direction (in front of the cross point) or in the depth direction (behind the cross point). Furthermore, as in the fifth embodiment of the present invention described above, the cross point region can also be displayed together with the region where the absolute value of the parallax value is largest.
[Seventh Embodiment]
The seventh embodiment is an operation display unit for a device capable of the processing described in the first to sixth embodiments above.
 第7の実施の形態の構成は、基本的には、図2の本発明の第1の実施の形態に係る視差画像表示装置のブロック図に示した通りであるが、操作部203と表示部204とがタッチパネル形式で一体となっている操作表示部500である点が相違する。 The configuration of the seventh embodiment is basically the same as that shown in the block diagram of the parallax image display device according to the first embodiment of the present invention in FIG. The difference is that 204 is an operation display unit 500 integrated in a touch panel format.
 なお、本実施の形態における操作表示部500は、タッチパネル以外にも、表示部にLCD等を用い、操作部としてマウスまたはペンタブレット等のポインティングデバイスを有してもよい。 Note that the operation display unit 500 according to the present embodiment may use an LCD or the like as the display unit and a pointing device such as a mouse or a pen tablet as the operation unit, in addition to the touch panel.
 図19に示した操作表示部500は、最上段左に、元の画像である左画像501を表示し、その右隣に左画像と右画像とから算出した視差値に基づいて作成された視差画像502が表示されている。 The operation display unit 500 illustrated in FIG. 19 displays the left image 501 that is the original image on the left of the top row, and the parallax created based on the parallax value calculated from the left image and the right image on the right side An image 502 is displayed.
 図19の中段左には、画像合成を行う際に、本発明の第1の実施の形態のように、左画像と視差画像とを、RGB値に重み付けをして合成する「画素混合」を選択するための画素混合ボタン503と、本発明の第2の実施の形態のように、左画像の色差を視差画像に合成する「色差混合」を選択するための色差混合ボタン504とが設けられている。 In the middle left of FIG. 19, when performing image composition, “pixel mixing” that combines the left image and the parallax image by weighting the RGB values as in the first embodiment of the present invention. A pixel mixing button 503 for selecting and a color difference mixing button 504 for selecting “color difference mixing” for synthesizing the color difference of the left image with the parallax image as in the second embodiment of the present invention are provided. ing.
 また、図19の中段右には、「画素混合」を選択した場合に、RGB値の重み付けを変更するためのスライダー505が設けられており、ユーザは、タッチパネル上のスライダー505を左右に移動させる操作をすることにより、左画像と視差画像とのRGB値の重み付けの比率を0:10~10:0の範囲で任意に変更することができる。 In addition, a slider 505 for changing the weighting of RGB values when “Pixel mixture” is selected is provided on the middle right of FIG. 19. The user moves the slider 505 on the touch panel left and right. By performing the operation, the weighting ratio of the RGB values of the left image and the parallax image can be arbitrarily changed within the range of 0:10 to 10: 0.
 At the left of the second row from the bottom of FIG. 19 is a cross-point adjustment interface 506, an interface for changing the cross point of the stereoscopic image; the user can shift the cross point forward or backward, from the near side toward the depth, while checking the cross-point display of the fifth embodiment of the present invention on the parallax image shown at the upper right of FIG. 19.
 At the center of the second row from the bottom of FIG. 19 is a maximum parallax button 507, which reflects the region having the maximum parallax value according to the sixth embodiment of the present invention in the parallax image displayed at the upper right of FIG. 19; at the right of the second row from the bottom is a trimming button 508, an interface for trimming the image; and at the bottom row of FIG. 19 a size designation button 509 is provided for designating the size of the stereoscopic image to be printed.
 As described above, according to the parallax image display device of the seventh embodiment, the user can confirm, through the processing of the first to sixth embodiments of the present invention, what kind of stereoscopic effect an image created from two or more images read in from the image input unit 201 of FIG. 2 will have, and, if necessary, can obtain the desired stereoscopic effect by changing the cross point.
 Various methods of changing the cross point are conceivable; here, the parallax values are adjusted over the entire parallax image, and the parallax image is then regenerated from the adjusted parallax values and displayed.
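 The arithmetic of this adjustment is not detailed in the embodiments. One simple reading, sketched below in Python, is to subtract a uniform offset from every parallax value so that the chosen depth becomes the zero-parallax (cross-point) plane, after which the parallax image is regenerated from the adjusted map; the offset interpretation and the function name are assumptions made for the example.

```python
import numpy as np

def shift_cross_point(parallax_map: np.ndarray, new_cross_parallax: float) -> np.ndarray:
    """Shift the cross point by offsetting every parallax value so that regions
    whose original parallax equals `new_cross_parallax` end up at zero parallax."""
    return parallax_map - new_cross_parallax

# adjusted = shift_cross_point(parallax_map, new_cross_parallax=2.5)
# The parallax image would then be regenerated from `adjusted` and displayed.
```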
 In the first to seventh embodiments described above, the parallax map is generated by stereo matching the left image and the right image with the left image as the reference, but the right image may be used as the reference instead.
 When three or more images are acquired, one of the images may be taken as the first image (reference image) and each of the other images as a second image, and a parallax map may be generated for each pair, as in the sketch below.
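 The embodiments call for stereo matching but leave the matching algorithm open. As a rough illustration only, the following Python sketch builds a region-wise parallax map by block matching with a sum-of-absolute-differences cost, using whichever grayscale image is passed first as the reference; all names and the block/search sizes are assumptions.

```python
import numpy as np

def block_matching_parallax_map(ref: np.ndarray, other: np.ndarray,
                                block: int = 8, search: int = 32) -> np.ndarray:
    """For each block of the reference image, find the horizontal shift in the
    other image that minimizes the sum of absolute differences, and store that
    shift as the parallax value of the block."""
    h, w = ref.shape
    disparity = np.zeros((h // block, w // block), dtype=np.float32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            patch = ref[y:y + block, x:x + block].astype(np.float32)
            best_d, best_cost = 0, np.inf
            for d in range(-search, search + 1):
                xs = x + d
                if xs < 0 or xs + block > w:
                    continue
                cand = other[y:y + block, xs:xs + block].astype(np.float32)
                cost = np.abs(patch - cand).sum()
                if cost < best_cost:
                    best_cost, best_d = cost, d
            disparity[by, bx] = best_d
    return disparity

# Left image as reference (the default in the embodiments), or swap the arguments
# to use the right image; with three or more views, call once per non-reference image.
```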
 If the image processing unit 202 of FIG. 2 is a computer having a CPU and a memory, the processing routines of the first to sixth embodiments may be implemented as a program and executed by that CPU.
 The entire disclosure of Japanese Patent Application No. 2011-005011 is incorporated herein by reference. All documents, patent applications, and technical standards mentioned in this specification are incorporated herein by reference to the same extent as if each individual document, patent application, and technical standard had been specifically and individually indicated to be incorporated by reference.

Claims (29)

  1.  A parallax image display device comprising:
     an acquisition unit that acquires a plurality of images captured from two or more different viewpoints;
     a parallax map generation unit that generates a parallax map in which a parallax value, expressed as the positional difference between each pair of corresponding regions of a first image included in the plurality of images acquired by the acquisition unit and a second image different from the first image, is associated with the position of the corresponding region of the first image;
     a color determination unit that determines, for each region, a color corresponding to the parallax value of that region;
     a parallax image creation unit that creates a parallax color image in which the color of each region of the first image is changed to the color determined by the color determination unit; and
     an image synthesis unit that synthesizes the parallax color image and the first image.
  2.  The parallax image display device according to claim 1, wherein the color determination unit has a correspondence table in which the parallax values are classified into predetermined ranges and a color is associated with the parallax values of each classified range, takes the parallax value associated in the parallax map with the position of the region of the first image as the parallax value of that region of the first image, and determines the color corresponding to the parallax value of that region of the first image on the basis of the correspondence table.
  3.  The parallax image display device according to claim 1 or claim 2, wherein the image synthesis unit extracts the color difference components of the pixels included in each region of the first image and synthesizes each region of the parallax color image with the color difference components of the region of the first image corresponding to that region of the parallax color image.
  4.  The parallax image display device according to claim 1 or claim 2, wherein, when each of the parallax color image and the first image is divided into a plurality of regions, the image synthesis unit synthesizes each region of the parallax color image with the region of the first image corresponding to that region of the parallax color image while making the weight applied to the value obtained from the RGB values of the pixels included in each region of the parallax color image heavier than the weight applied to the value obtained from the RGB values of the pixels included in the corresponding region of the first image.
  5.  The parallax image display device according to claim 1 or claim 2, wherein, for the region of the parallax color image corresponding to the region of the parallax map in which the absolute value of the parallax value is the minimum, the image synthesis unit makes the weight applied to the value obtained from the RGB values of the pixels included in the corresponding region of the first image heavier than the weight applied to the value obtained from the RGB values of the pixels included in that region of the parallax color image, and, for each region of the parallax color image corresponding to a region of the parallax map in which the absolute value of the parallax value is larger than the minimum and the region of the first image corresponding to that region of the parallax color image, makes the weight applied to the value obtained from the RGB values of the pixels included in the corresponding region of the first image lighter, and the weight applied to the value obtained from the RGB values of the pixels included in that region of the parallax color image heavier, as the absolute value of the parallax value increases, thereby synthesizing each region of the parallax color image with the region of the first image corresponding to that region.
  6.  The parallax image display device according to claim 1 or claim 2, wherein, for the region of the parallax color image corresponding to the region of the parallax map in which the parallax value is the maximum, the image synthesis unit makes the weight applied to the value obtained from the RGB values of the pixels included in the corresponding region of the first image heavier than the weight applied to the value obtained from the RGB values of the pixels included in that region of the parallax color image, and, for each region of the parallax color image corresponding to a region of the parallax map in which the parallax value is smaller than the maximum and the region of the first image corresponding to that region of the parallax color image, makes the weight applied to the value obtained from the RGB values of the pixels included in the corresponding region of the first image lighter, and the weight applied to the value obtained from the RGB values of the pixels included in that region of the parallax color image heavier, as the parallax value decreases, thereby synthesizing each region of the parallax color image with the region of the first image corresponding to that region.
  7.  The parallax image display device according to claim 1 or claim 2, further comprising a parallax luminance image creation unit that creates a parallax luminance image in which the parallax value of each region of the parallax map is expressed as luminance,
     wherein the image synthesis unit synthesizes the parallax luminance image and the parallax color image.
  8.  The parallax image display device according to claim 1 or claim 2, further comprising:
     a resolution determination unit that determines, in accordance with the parallax value of each region of the parallax map, the resolution of the corresponding region of the first image; and
     a parallax resolution image creation unit that creates a parallax resolution image in which the resolution of each region of the first image is changed to the resolution determined by the resolution determination unit,
     wherein the image synthesis unit synthesizes the parallax resolution image and the parallax color image.
  9.  The parallax image display device according to claim 8, wherein the resolution determination unit lowers the resolution of the region of the first image as the parallax value associated with that region in the parallax map decreases.
  10.  The parallax image display device according to claim 8, wherein the resolution determination unit lowers the resolution of the region of the first image as the absolute value of the parallax value associated with that region in the parallax map increases.
  11.  The parallax image display device according to claim 8, wherein the resolution determination unit has a correspondence table in which the parallax values are classified into predetermined ranges and a resolution is associated with the parallax values of each classified range, takes the parallax value associated in the parallax map with the position of the region of the first image as the parallax value of that region of the first image, and determines the resolution corresponding to the parallax value of that region of the first image on the basis of the correspondence table.
  12.  The parallax image display device according to claim 11, wherein, in the correspondence table, the resolution determination unit associates a lower resolution with a range containing smaller parallax values, such that the smaller the parallax values contained in a range, the lower the associated resolution.
  13.  The parallax image display device according to claim 11, wherein, in the correspondence table, the resolution determination unit associates a lower resolution with a range containing parallax values of larger absolute value, such that the larger the absolute values of the parallax values contained in a range, the lower the associated resolution.
  14.  The parallax image display device according to claim 1 or claim 2, further comprising:
     a sharpness determination unit that determines, in accordance with the parallax value of each region of the parallax map, the sharpness of the corresponding region of the first image; and
     a parallax sharpness image creation unit that creates a parallax sharpness image in which each region of the first image is changed to the sharpness determined by the sharpness determination unit,
     wherein the image synthesis unit synthesizes the parallax sharpness image and the parallax color image.
  15.  The parallax image display device according to claim 14, wherein the sharpness determination unit lowers the sharpness of the region of the first image as the parallax value associated with that region in the parallax map decreases.
  16.  The parallax image display device according to claim 14, wherein the sharpness determination unit lowers the sharpness of the region of the first image as the absolute value of the parallax value associated with that region in the parallax map increases.
  17.  The parallax image display device according to claim 14, wherein the sharpness determination unit has a correspondence table in which the parallax values are classified into predetermined ranges and a sharpness is associated with the parallax values of each classified range, takes the parallax value associated in the parallax map with the position of the region of the first image as the parallax value of that region of the first image, and determines the sharpness corresponding to the parallax value of that region of the first image on the basis of the correspondence table.
  18.  The parallax image display device according to claim 17, wherein, in the correspondence table, the sharpness determination unit associates a lower sharpness with a range containing smaller parallax values, such that the smaller the parallax values contained in a range, the lower the associated sharpness.
  19.  The parallax image display device according to claim 17, wherein, in the correspondence table, the sharpness determination unit associates a lower sharpness with a range containing parallax values of larger absolute value, such that the larger the absolute values of the parallax values contained in a range, the lower the associated sharpness.
  20.  The parallax image display device according to claim 1 or claim 2, further comprising a cross-point image creation unit that converts the parallax values of the regions of the parallax map into luminance values, identifies the positions of the regions of the parallax map in which the absolute value of the parallax value is within a threshold, and creates a cross-point image in which the identified regions are colored with a predetermined color,
     wherein the image synthesis unit synthesizes the cross-point image and the parallax color image.
  21.  The parallax image display device according to claim 1 or claim 2, further comprising a maximum-absolute-parallax image creation unit that converts the parallax values of the regions of the parallax map into luminance values, identifies the position of the region of the parallax map in which the absolute value of the parallax value is the maximum, and creates a maximum-absolute-parallax image in which that region is colored with different colors depending on whether its parallax value is positive or negative,
     wherein the image synthesis unit synthesizes the maximum-absolute-parallax image and the parallax color image.
  22.  The parallax image display device according to any one of claims 1 to 19, further comprising:
     an operation unit that inputs instructions to the parallax map generation unit, the color determination unit, the parallax luminance image creation unit, the resolution determination unit, the parallax resolution image creation unit, the sharpness determination unit, the parallax sharpness image creation unit, the parallax image creation unit, and the image synthesis unit; and
     a display unit that displays the processing results of the parallax map generation unit, the color determination unit, the parallax luminance image creation unit, the resolution determination unit, the parallax resolution image creation unit, the sharpness determination unit, the parallax sharpness image creation unit, the parallax image creation unit, and the image synthesis unit.
  23.  The parallax image display device according to any one of claims 1 to 22, wherein each of the regions is a region containing one pixel.
  24.  A parallax image display method comprising:
     acquiring a plurality of images captured from two or more different viewpoints;
     generating a parallax map in which a parallax value, expressed as the positional difference between each pair of corresponding regions of a first image included in the acquired plurality of images and an image different from the first image, is associated with the position of the corresponding region of the first image;
     determining, for each region, a color corresponding to the parallax value of that region;
     creating a parallax color image in which the color of each region of the first image is changed to the determined color; and
     synthesizing the parallax color image and the first image.
  25.  The parallax image display method according to claim 24, wherein, when the parallax color image and the first image are synthesized and each of the parallax color image and the first image is divided into a plurality of regions, each region of the parallax color image is synthesized with the region of the first image corresponding to that region of the parallax color image while the weight applied to the value obtained from the RGB values of the pixels included in each region of the parallax color image is made heavier than the weight applied to the value obtained from the RGB values of the pixels included in the corresponding region of the first image.
  26.  The parallax image display method according to claim 24 or claim 25, wherein each of the regions is a region containing one pixel.
  27.  A parallax image display program for causing a computer to function as:
     an acquisition unit that acquires a plurality of images captured from two or more different viewpoints;
     a parallax map generation unit that generates a parallax map in which a parallax value, expressed as the positional difference between each pair of corresponding regions of a first image included in the plurality of images acquired by the acquisition unit and a second image different from the first image, is associated with the position of the corresponding region of the first image;
     a color determination unit that determines, for each region, a color corresponding to the parallax value of that region;
     a parallax image creation unit that creates a parallax color image in which the color of each region of the first image is changed to the color determined by the color determination unit; and
     an image synthesis unit that synthesizes the parallax color image and the first image.
  28.  The parallax image display program according to claim 27, wherein, when each of the parallax color image and the first image is divided into a plurality of regions, the image synthesis unit synthesizes each region of the parallax color image with the region of the first image corresponding to that region of the parallax color image while making the weight applied to the value obtained from the RGB values of the pixels included in each region of the parallax color image heavier than the weight applied to the value obtained from the RGB values of the pixels included in the corresponding region of the first image.
  29.  The parallax image display program according to claim 27 or claim 28, wherein each of the regions is a region containing one pixel.
PCT/JP2011/077716 2011-01-13 2011-11-30 Parallax image display device and parallax image display method WO2012096065A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011-005011 2011-01-13
JP2011005011 2011-01-13

Publications (1)

Publication Number Publication Date
WO2012096065A1 true WO2012096065A1 (en) 2012-07-19

Family

ID=46506982

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2011/077716 WO2012096065A1 (en) 2011-01-13 2011-11-30 Parallax image display device and parallax image display method

Country Status (1)

Country Link
WO (1) WO2012096065A1 (en)

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2014053655A (en) * 2012-09-05 2014-03-20 Panasonic Corp Image display device
JP2015046177A (en) * 2014-10-15 2015-03-12 グリー株式会社 Image display device, image display method, and program

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2001346226A (en) * 2000-06-02 2001-12-14 Canon Inc Image processor, stereoscopic photograph print system, image processing method, stereoscopic photograph print method, and medium recorded with processing program
JP2003047027A (en) * 2001-07-31 2003-02-14 Canon Inc Stereogram formation system and method therefor, and program and record medium
JP2006041811A (en) * 2004-07-26 2006-02-09 Kddi Corp Free visual point picture streaming method
JP2008082870A (en) * 2006-09-27 2008-04-10 Setsunan Univ Image processing program, and road surface state measuring system using this
JP2008103820A (en) * 2006-10-17 2008-05-01 Sharp Corp Stereoscopic image processing apparatus
JP2010506287A (en) * 2006-10-04 2010-02-25 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Image enhancement



Legal Events

Date Code Title Description
121  Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 11855290; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase (Ref country code: DE)
122  Ep: pct application non-entry in european phase (Ref document number: 11855290; Country of ref document: EP; Kind code of ref document: A1)
NENP  Non-entry into the national phase (Ref country code: JP)