WO2012014695A1 - Three-dimensional imaging device and corresponding imaging method
- Publication number
- WO2012014695A1 (PCT/JP2011/066089; JP2011066089W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- imaging
- image
- unit
- units
- video
- Prior art date
- 2010-07-27
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B35/00—Stereoscopic photography
- G03B35/08—Stereoscopic photography by simultaneous recording
- G03B35/10—Stereoscopic photography by simultaneous recording having single camera with stereoscopic-base-defining system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Definitions
- the present invention relates to a stereoscopic imaging device and an imaging method thereof.
- stereoscopic image devices have been actively developed in order to enhance the impact and sense of presence of images.
- as a technique for generating a stereoscopic image, there is known a method in which two imaging devices for the left channel (L) and the right channel (R) are arranged side by side and the subject is photographed simultaneously by the two devices.
- as a technique for displaying a stereoscopic image, there is known a method in which a left-channel (L) image and a right-channel (R) image are displayed alternately, pixel by pixel, on a single display screen, and a special optical system, such as a lenticular sheet of semicylindrical lenses arranged at a predetermined pitch, a parallax barrier with fine slits arranged at a predetermined interval, or a patterned retarder with regularly arranged fine polarizing elements, adjusts the viewing zones so that only the left-channel (L) image is visible to the viewer's left eye and only the right-channel (R) image is visible to the right eye.
- the thin color camera of Patent Document 1 includes four lenses 22a to 22d, four color filters 25a to 25d, and a detector array 24.
- the color filters 25 consist of a filter 25a that transmits red light (R), filters 25b and 25c that transmit green light (G), and a filter 25d that transmits blue light (B), so that the detector array 24 captures red, green, and blue images.
- a high-resolution composite image is formed from the two green images, to which the human visual system is most sensitive, and a full-color image is obtained by further combining the red and blue images.
- the conventional technology for capturing a stereoscopic image thus requires two imaging devices, one for the left channel (L) and one for the right channel (R), in order to obtain one stereoscopic image.
- if the imaging device described in Patent Document 1 is used for each of the left and right channels, two such devices must be arranged side by side; since the device of Patent Document 1 contains four sub-cameras, twice that number, eight sub-cameras, would be required.
- the present invention has been made in view of such circumstances, and an object thereof is to provide a stereoscopic imaging apparatus and an imaging method thereof in which an increase in apparatus scale is suppressed.
- a stereoscopic imaging apparatus according to an aspect of the present invention includes: two imaging units that capture the same subject; a parallax calculation unit that detects corresponding points between the captured videos of the two imaging units and calculates parallax information of those captured videos; and a synthesis processing unit that, taking the viewpoint of each of the two imaging units as a reference and based on the parallax information and the captured videos of the two imaging units, synthesizes videos having a larger number of pixels than the captured videos, thereby generating two systems of video with the larger number of pixels.
- each imaging unit includes an optical system that forms an image of the subject on an imaging surface, and an imaging element that generates a signal of the captured video of the subject formed on the imaging surface; in one of the imaging units, the position of the imaging element with respect to the optical system may be shifted up or down by half an imaging pixel of the imaging element relative to the other imaging unit.
- the above-described stereoscopic imaging device may include three or more imaging units; between horizontally adjacent imaging units, the position of the imaging element with respect to the optical system may be shifted up or down by half an imaging pixel of the imaging element, and between vertically adjacent imaging units, the position of the imaging element with respect to the optical system may be shifted left or right by half an imaging pixel of the imaging element.
- the four imaging units may be arranged at the vertices of a square whose sides each run horizontally or vertically; in that case, the parallax calculation unit calculates parallax information of the captured videos of two imaging units arranged at adjacent vertices of the square, and the synthesis processing unit may use the parallax information to correct the parallax of the captured videos in the horizontal direction and the vertical direction when the videos are synthesized.
- the stereoscopic imaging device described above may include at least three imaging units, and the synthesis processing unit may, taking the viewpoints of the at least three imaging units as references and based on the parallax information and the captured videos of at least two of the imaging units, synthesize videos having a larger number of pixels than the captured videos, thereby generating at least three systems of video with the larger number of pixels.
- an imaging method according to an aspect of the present invention includes: detecting corresponding points between the captured videos of two imaging units that capture the same subject and calculating parallax information of those captured videos; and, taking the viewpoint of each of the two imaging units as a reference and based on the parallax information and the captured videos of the two imaging units, synthesizing videos having a larger number of pixels than the captured videos, thereby generating two systems of video with the larger number of pixels.
- the step of generating the video with the larger number of pixels may include a step of performing parallax correction of the captured videos using the parallax information when the videos are synthesized.
- the step of generating the video with the larger number of pixels may include a step of, taking the viewpoints of at least three imaging units that capture the same subject as references and based on the parallax information and the captured videos of at least two of the imaging units, synthesizing videos having a larger number of pixels than the captured videos and generating at least three systems of video with the larger number of pixels.
- FIG. 1 is an overview diagram showing a stereoscopic imaging apparatus 10 according to a first embodiment of the present invention. FIG. 2 is a schematic block diagram showing the configuration of the stereoscopic imaging apparatus 10 in that embodiment. FIG. 3 is a diagram showing an arrangement example of the imaging lens and the imaging element.
- an overview of the stereoscopic imaging apparatus 10 according to the first embodiment of the present invention is shown in FIG. 1, and a schematic block diagram showing its functional configuration is shown in FIG. 2.
- the stereoscopic imaging device 10 includes an imaging unit 101 and an imaging unit 102, arranged side by side in the x-axis direction of the drawings, together with a parallax calculation unit 21 and a high-resolution synthesis processing unit 20.
- the imaging unit 101 includes an imaging lens 11-1 and an imaging element 12-1.
- the imaging unit 102 includes an imaging lens 11-2 and an imaging element 12-2. Note that the imaging unit 101 and the imaging unit 102 are arranged so that their optical axes are parallel so as to capture the same subject.
- the imaging lens 11 forms an image of light from the subject on the imaging element 12.
- the imaging element 12 is a CMOS image sensor or the like; it photoelectrically converts the image formed on it by the optical system and outputs the result as a video signal.
- the video signal output from the imaging device 12-1 of the imaging unit 101 is referred to as a video signal R
- the video signal output from the imaging device 12-2 of the imaging unit 102 is referred to as a video signal L.
- two systems of video signals (the video signal R and the video signal L) output by the imaging unit 101 and the imaging unit 102 are input to the parallax calculation unit 21 and the high-resolution synthesis processing unit 20.
- the parallax calculation unit 21 searches for corresponding points between the two input video signals and, from the search result, calculates R-reference parallax data RS based on the viewpoint of the imaging unit 101 and L-reference parallax data LS based on the viewpoint of the imaging unit 102, and outputs them to the high-resolution synthesis processing unit 20.
- the high-resolution synthesis processing unit 20 synthesizes the two input video signals based on the parallax data (disparity information) and outputs a right-eye video signal RC and a left-eye video signal LC.
- FIG. 3 is a diagram illustrating an arrangement example of the imaging lens and the imaging element.
- the x axis is taken in the horizontal direction (lateral direction)
- the y axis is taken in the vertical direction (up and down direction)
- the z axis is taken in the depth direction. That is, FIG. 3 shows the arrangement of the imaging lens and the imaging element when the stereoscopic imaging device 10 is viewed from the front.
- the imaging lens 11-1 and the imaging lens 11-2 are arranged at the same position in the y-axis direction.
- the image pickup device 12-1 is shifted from the image pickup device 12-2 by py / 2 in the y-axis direction (vertical direction).
- py is the length of the pixel in the image sensor 12 in the y-axis direction. That is, the image pickup device 12-1 and the image pickup device 12-2 are arranged so as to be shifted in the y-axis direction (vertical direction) by half the pixel height of the image pickup device 12.
- in other words, in the imaging unit 101, the position of the imaging element with respect to the imaging lens is shifted upward by half an imaging pixel of the imaging element, as compared with the imaging unit 102.
- the image sensor 12-1 may be arranged to be shifted by py / 2 below the image sensor 12-2 in the y-axis direction (vertical direction). In that case, the arrangement order of the pixels when the composition processing is performed in the high-resolution composition processing unit 20 described later is reversed.
- FIG. 4 is a diagram illustrating another arrangement example of the imaging lens and the imaging element.
- the x axis is taken in the horizontal direction (lateral direction)
- the y axis is taken in the vertical direction (up and down direction)
- the z axis is taken in the depth direction.
- the imaging lens 11-1 is arranged to be shifted downward by py / 2 in the y-axis direction (vertical direction) from the imaging lens 11-2.
- the image sensor 12-1 and the image sensor 12-2 are disposed at the same position in the y-axis direction. That is, the image pickup lens 11-1 and the image pickup lens 11-2 are arranged so as to be shifted in the y-axis direction (vertical direction) by half the pixel height of the image pickup element 12.
- in this arrangement as well, in the imaging unit 101 the position of the imaging element 12 with respect to the imaging lens 11 is effectively shifted by half an imaging pixel of the imaging element 12, as compared with the imaging unit 102.
- conversely, the imaging lens 11-1 may be arranged shifted upward by py/2 from the imaging lens 11-2 in the y-axis direction (vertical direction); in that case, the arrangement order of the pixels in the synthesis processing performed by the high-resolution synthesis processing unit 20 described later is reversed.
- FIG. 5 is a schematic block diagram illustrating the configuration of the parallax calculation unit 21.
- the parallax calculation unit 21 calculates parallax data from the video signal R output from the imaging unit 101 in FIG. 1 and the video signal L output from the imaging unit 102.
- the parallax calculation unit 21 includes coordinate conversion units 31 and 32, a right camera parameter storage unit 30R, a left camera parameter storage unit 30L, and a corresponding point search unit 33.
- the right camera parameter storage unit 30R holds camera parameters including internal parameters, such as the focal length and lens distortion parameters specific to the imaging unit 101, and external parameters representing the positional relationship between the two imaging units 101 and 102.
- the left camera parameter storage unit 30L holds camera parameters specific to the imaging unit 102.
- the coordinate conversion unit 31 geometrically converts (coordinate-converts) the video represented by the video signal R output from the imaging unit 101 by a known method, so as to place the videos of the imaging unit 101 and the imaging unit 102 on the same plane and thereby make the epipolar lines parallel. At this time, the coordinate conversion unit 31 uses the camera parameters stored in the right camera parameter storage unit 30R.
- likewise, the coordinate conversion unit 32 geometrically converts the video represented by the video signal L output from the imaging unit 102 by a known method, so as to place the videos of the imaging unit 101 and the imaging unit 102 on the same plane and parallelize the epipolar lines. At this time, the coordinate conversion unit 32 uses the camera parameters stored in the left camera parameter storage unit 30L.
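The patent specifies the rectification only as "a known method". As one concrete illustration, here is a minimal sketch using OpenCV's standard stereo rectification, assuming the stored camera parameters take the usual intrinsics/distortion/relative-pose form; all variable names are illustrative, not from the patent:

```python
import cv2

def parallelize_epipolar_lines(img_r, img_l, K_r, dist_r, K_l, dist_l, R, T):
    """Warp both views so their epipolar lines become parallel, playing the
    role of coordinate conversion units 31 and 32.
    K_*: 3x3 intrinsics and dist_*: distortion coefficients (internal
    parameters); R, T: relative pose of the two cameras (external parameters)."""
    size = (img_r.shape[1], img_r.shape[0])
    R1, R2, P1, P2, Q, _, _ = cv2.stereoRectify(
        K_r, dist_r, K_l, dist_l, size, R, T)
    map_rx, map_ry = cv2.initUndistortRectifyMap(K_r, dist_r, R1, P1, size, cv2.CV_32FC1)
    map_lx, map_ly = cv2.initUndistortRectifyMap(K_l, dist_l, R2, P2, size, cv2.CV_32FC1)
    rect_r = cv2.remap(img_r, map_rx, map_ry, cv2.INTER_LINEAR)
    rect_l = cv2.remap(img_l, map_lx, map_ly, cv2.INTER_LINEAR)
    return rect_r, rect_l
```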
- the corresponding point search unit 33 searches for corresponding pixels between the two videos whose epipolar lines have been parallelized by the coordinate conversion unit 31 and the coordinate conversion unit 32, and obtains parallax data representing the parallax between the viewpoint of the imaging unit 101 and the viewpoint of the imaging unit 102.
- the corresponding point search unit 33 is composed of blocks that calculate two types of parallax data. One is an R reference parallax calculation unit 34, and the other is an L reference parallax calculation unit 35.
- the R-reference parallax calculation unit 34 takes the epipolar-parallelized video of the imaging unit 101 as the base video and the epipolar-parallelized video of the imaging unit 102 as the reference video, and searches the reference video for the pixel corresponding to each pixel of the base video, thereby calculating the R-reference parallax data RS.
- the L-reference parallax calculation unit 35 takes the epipolar-parallelized video of the imaging unit 102 as the base video and the epipolar-parallelized video of the imaging unit 101 as the reference video, and searches the reference video for the pixel corresponding to each pixel of the base video, thereby calculating the L-reference parallax data LS.
- the R-reference parallax calculation unit 34 and the L-reference parallax calculation unit 35 perform the same corresponding-point search except that the roles of base video and reference video are swapped.
- FIG. 6 is a diagram illustrating the reference image RG.
- FIG. 7 is a diagram illustrating the base image BG.
- the epipolar lines are parallelized in both the reference image RG and the base image BG.
- a method of moving the pixel of interest on the base image BG will be described with reference to FIG. 7.
- the R-reference parallax calculation unit 34 sets a block centered on the pixel of interest on the base image BG (hereinafter, the base block of interest BB) and moves it pixel by pixel to the right along the line, starting from the upper-left end of the base image BG (base start block BS). When the base block of interest BB reaches the right end of a line, it is moved pixel by pixel to the right along the next line down, starting from its left end. This is repeated until the block at the lower-right corner of the base image BG (base end block BE) is reached.
- for each position of the base block of interest BB, the R-reference parallax calculation unit 34 first sets the reference block of interest RB at the block on the reference image RG (reference start block RS) having the same coordinates as the coordinates (x, y) of the base block of interest BB on the base image BG shown in FIG. 7, and thereafter moves the reference block of interest pixel by pixel to the right along the line, as shown in FIG. 6, until the end of the search range is reached.
- the search range is a value corresponding to the maximum parallax of the photographed subject, and the shortest distance to the subject for which parallax data can be calculated is determined by the set search range.
- the R-reference parallax calculation unit 34 performs this search of the reference image for every base block of interest BB on the base image BG illustrated in FIG. 7.
- FIG. 9 is a diagram illustrating the configuration of the base block of interest BB.
- the base block of interest BB is a block of M horizontal by N vertical pixels centered on the pixel of interest on the base image BG.
- FIG. 8 is a diagram illustrating the configuration of the reference block of interest RB.
- the reference block of interest RB is a block of M horizontal by N vertical pixels centered on the search pixel on the reference image RG.
- within each block, let the pixel value at block coordinates (i, j), where i runs in the horizontal direction and j in the vertical direction, be R(i, j) for the reference block of interest RB and T(i, j) for the base block of interest BB.
- the R-reference parallax calculation unit 34 calculates a similarity for each combination of the base block of interest BB and a reference block of interest RB, and determines which reference block of interest RB is similar to each base block of interest BB.
- SAD (Sum of Absolute Differences) is used as the similarity measure.
- as in the similarity evaluation formula shown in equation (1), SAD takes the absolute value of the difference between R(i, j) and T(i, j) for every pixel of the block and sums them, giving the score SSAD:

  $\mathrm{SSAD} = \sum_{j=0}^{N-1} \sum_{i=0}^{M-1} \left| R(i, j) - T(i, j) \right|$  (1)
- among the reference blocks of interest RB within the search range on the reference image RG, the R-reference parallax calculation unit 34 determines that the one giving the smallest SSAD value in equation (1) is the block similar to the given base block of interest BB. The pixel at the center of that reference block of interest RB is then taken as the pixel corresponding to the pixel of interest at the center of the base block of interest BB.
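For concreteness, here is a minimal NumPy sketch of the SSAD score of equation (1); the cast to a wider integer type keeps uint8 differences from wrapping:

```python
import numpy as np

def ssad(base_block, ref_block):
    """Equation (1): sum of absolute differences between the base block
    T(i, j) and a reference block R(i, j) of the same M x N size."""
    diff = ref_block.astype(np.int32) - base_block.astype(np.int32)
    return int(np.abs(diff).sum())
```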
- the processing operation for searching for a corresponding pixel by the L reference parallax calculation unit 35 is substantially the same as that of the R reference parallax calculation unit 34, but the search range is different.
- for the L-reference parallax calculation unit 35, the reference start block RS is the block separated by the search range to the left of the same coordinates as the base block of interest BB, and the reference end block RE is the block at the same coordinates as the base block of interest BB.
- next, the flow of the above parallax data calculation will be described step by step.
- the R-reference parallax calculation unit 34 first sets the base block of interest at the head of the base image BG (FIG. 7), that is, at the base start block BS (step S900). It then reads all pixel values of the base block of interest BB from the base image BG (step S901). Next, the R-reference parallax calculation unit 34 sets the reference block of interest RB at the block in the reference image RG (FIG. 6) having the same coordinates as the base block of interest BB, that is, at the head of the reference image RG (reference start block RS) (step S902).
- the R-reference parallax calculation unit 34 reads the pixel values of the reference block of interest RB from the reference image RG (step S903), and calculates and stores the SSAD value between the read pixel values of the base block of interest BB and the reference block of interest RB according to equation (1) (step S904).
- the R-reference parallax calculation unit 34 then determines whether the search range has been exhausted (step S905). If it has not, the unit moves the reference block of interest one pixel to the right along the line direction (step S906) and performs steps S903 and S904 again. Steps S903 to S906 are repeated while the reference block of interest RB remains within the search range, so that all SSAD values within the search range are calculated. From these results, the R-reference parallax calculation unit 34 detects the reference block of interest RB with the smallest SSAD value (step S907). Note that the block with the minimum SSAD value found in step S907 is not necessarily a correct match.
- the parallax cannot be detected correctly when the base block of interest BB contains no pattern (texture) or contour that can serve as a feature, or when the search area on the reference image RG falls in an occlusion region. Whether the parallax has been detected correctly can therefore be judged from how small the minimum SSAD value is.
- the R-reference parallax calculation unit 34 compares the minimum SSAD value with a threshold (step S908). When the SSAD value is equal to or below the threshold (that is, when the similarity is high), it outputs, as the parallax data of the pixel of interest, the difference between the x coordinate of the center of the base block of interest BB (the pixel of interest on the base image BG) and the x coordinate of the center of the detected reference block of interest RB (the corresponding pixel on the reference image RG) (step S909).
- otherwise, the R-reference parallax calculation unit 34 determines that the parallax could not be detected, and outputs 0 or another distinctive value as the parallax data to serve as an error flag (step S910).
- the R-reference parallax calculation unit 34 then determines whether the base block of interest BB has reached the base end block BE, that is, whether the processing has finished (step S911). If it has not, the base block of interest BB is moved one pixel to the right along the line direction (step S912) and steps S901 to S910 are performed again; if it has, the processing ends. In this way, steps S901 to S910 are repeated until the base block of interest BB reaches the end block of the base image BG, yielding parallax data for every pixel on the base image BG.
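Putting steps S900 to S912 together, here is a sketch of the R-reference search over a whole grayscale base image, reusing ssad() from the sketch above; the acceptance threshold is an illustrative assumption, not a value from the patent:

```python
import numpy as np  # ssad() from the previous sketch is assumed in scope

def r_reference_disparity(base_img, ref_img, M=9, N=9, search=64, thresh=None):
    """For each pixel of the base image, scan the reference image rightward
    within the search range (steps S903-S906), keep the block with minimum
    SSAD (step S907), and emit an error value of 0 when even the best match
    is too dissimilar (steps S908-S910)."""
    h, w = base_img.shape
    ry, rx = N // 2, M // 2
    if thresh is None:
        thresh = 20 * M * N  # illustrative per-block SSAD acceptance threshold
    disparity = np.zeros((h, w), dtype=np.int32)
    for y in range(ry, h - ry):
        for x in range(rx, w - rx - search):
            base_block = base_img[y - ry:y + ry + 1, x - rx:x + rx + 1]
            best_d, best_s = 0, None
            for d in range(search):  # move RB one pixel right along the line
                ref_block = ref_img[y - ry:y + ry + 1,
                                    x + d - rx:x + d + rx + 1]
                s = ssad(base_block, ref_block)
                if best_s is None or s < best_s:
                    best_d, best_s = d, s
            disparity[y, x] = best_d if best_s <= thresh else 0
    return disparity
```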
- in the above, the pixel on the reference image similar to the pixel of interest on the base image is searched for using the SAD similarity evaluation function, but the parallax data may be obtained by any technique that searches for similar pixels between the base image and the reference image.
- FIG. 11 is a schematic block diagram showing the functional configuration of the high-resolution synthesis processing unit 20.
- the high-resolution synthesis processing unit 20 includes a left-eye synthesis unit 908 that generates the left-eye video signal LC, a right-eye synthesis unit 909 that generates the right-eye video signal RC, a right camera parameter storage unit 902R, and a left camera parameter storage unit 902L.
- each of the left-eye synthesis unit 908 and the right-eye synthesis unit 909 includes an alignment correction processing unit 901, a correction processing unit 903, and a synthesis processing unit 906.
- since the basic operation of the left-eye synthesis unit 908 and the right-eye synthesis unit 909 is the same except for the combination of inputs and parallax data, only the operation of the left-eye synthesis unit 908 is described here; the description of the right-eye synthesis unit 909 is omitted.
- the video signal R of the imaging unit 101 is input to the alignment correction processing unit 901, and the video signal L of the imaging unit 102 is input to the correction processing unit 903.
- the alignment correction processing unit 901 first corrects the lens distortion of the video represented by the video signal R, based on the camera parameters describing the lens distortion of the imaging unit 101 stored in the right camera parameter storage unit 902R. It then performs alignment, based on the L-reference parallax data LS input from the parallax calculation unit 21 and the camera parameters describing the position and orientation of the imaging unit 101 stored in the right camera parameter storage unit 902R, so that each pixel of the distortion-corrected video captures the same subject position as the pixel of the same coordinates in the video of the imaging unit 102 corrected by the correction processing unit 903.
- however, no correction is applied for the fact that the position of the imaging element 12-1 with respect to the imaging lens 11-1 is shifted upward by half a pixel of the imaging element 12-1. That is, the pixel at coordinates (x, y) in the output of the alignment correction processing unit 901 captures the subject position lying between the pixels at coordinates (x, y) and (x, y - 1) in the output of the correction processing unit 903.
- the correction processing unit 903 corrects the lens distortion of the video represented by the video signal L based on the camera parameters indicating the lens distortion state stored in the left camera parameter storage unit 902L.
- the alignment correction processing unit 901 and the correction processing unit 903 perform parallelization of epipolar lines using camera parameters and correction of parallax using L reference parallax data LS.
- the parallelization of the epipolar line can be performed by a known method.
- the alignment correction processing unit 901 shifts each pixel of the epipolar-parallelized video derived from the video signal R by the amount of parallax indicated by the L-reference parallax data LS. For example, if the L-reference parallax data LS at coordinates (x, y) is d, the pixel at coordinates (x + d, y) is moved to coordinates (x, y).
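A minimal sketch of this per-pixel shift (a disparity-compensated warp toward the viewpoint of the imaging unit 102), assuming a dense disparity map ls of the same height and width as the video; pixels with no valid source are simply left at zero here:

```python
import numpy as np

def warp_to_left_view(img, ls):
    """Move the pixel at (x + d, y) to (x, y), where d = ls[y, x] is the
    L-reference parallax at those coordinates."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            d = int(ls[y, x])
            if 0 <= x + d < w:
                out[y, x] = img[y, x + d]
    return out
```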
- in the right-eye synthesis unit 909, the video signal R and the camera parameters of the imaging unit 101 are input to the correction processing unit 903, while the video signal L, the camera parameters of the imaging unit 102, and the R-reference parallax data RS are input to the alignment correction processing unit 901.
- in this case, each pixel is moved to the right by the amount of parallax indicated by the R-reference parallax data RS.
- FIG. 12 illustrates the principle of the synthesis: the horizontal axis indicates spatial extent in the y-axis direction, and the vertical axis indicates the light amplitude (light intensity).
- the graph denoted by reference numeral 40a shows the distribution of the light of the subject image formed by the imaging lenses 11-1 and 11-2 that is incident on a certain vertical column of pixels of the imaging elements 12-1 and 12-2 of the imaging units 101 and 102.
- the graph denoted by reference numeral 40e indicates the distribution of the output of the alignment correction processing unit 901 corresponding to the pixel on which the light of the graph 40a is incident among the pixels of the imaging element 12-1 of the imaging unit 101.
- a graph denoted by reference numeral 40f indicates an output distribution of the correction processing unit 903 corresponding to the pixel on which the light of the graph 40a is incident among the pixels of the imaging element 12-2 of the imaging unit 102.
- the graph of reference numeral 40g shows the distribution of the output of the synthesis processing unit 906 with respect to the distribution of reference numerals 40e and 40f.
- the relationship between the graphs will be described ignoring the effects of correction by the alignment correction processing unit 901 and the correction processing unit 903.
- the solid line is the boundary line of the pixel of the imaging element 12-1 of the imaging unit 101
- the broken line is the boundary line of the pixel of the imaging element 12-2 of the imaging unit 102.
- reference numerals 40b and 40c in the figure are pixels of the imaging unit 101 and the imaging unit 102, respectively, and the relative positional relationship is shifted by an offset indicated by an arrow 40d.
- the offset is preferably set to be half the size of the pixels (reference numerals 40b and 40c) of the imaging elements 12-1 and 12-2. A half-pixel offset makes it possible to generate the highest definition image.
- since the imaging elements 12-1 and 12-2 integrate the light intensity in units of pixels, capturing the subject image indicated by reference numeral 40a with the imaging element 12-1 yields a video signal with the light intensity distribution indicated by reference numeral 40e, and capturing it with the imaging element 12-2 yields a video signal with the light intensity distribution indicated by reference numeral 40f.
- the synthesis processing unit 906 synthesizes a video by alternately arranging the output of the alignment correction processing unit 901 and the output of the correction processing unit 903 in the y-axis direction, and thereby generates a left-eye video signal LC whose resolution in the y-axis direction is doubled compared with the outputs of the imaging units 101 and 102.
- in view of the fact that, in the imaging unit 101, the position of the imaging element 12-1 with respect to the imaging lens 11-1 is shifted upward by half an imaging pixel compared with the imaging unit 102, the pixel b of the correction processing unit 903 (derived from the imaging unit 102) having the same coordinates as a pixel a of the alignment correction processing unit 901 (derived from the imaging unit 101) is placed below that pixel a. That is, taking the origin at the upper left of the image, the x axis in the rightward direction, and the y axis in the downward direction, the pixel at coordinates (x, y) output by the alignment correction processing unit 901 becomes the pixel at coordinates (x, 2y) output by the synthesis processing unit 906, and the pixel at coordinates (x, y) output by the correction processing unit 903 becomes the pixel at coordinates (x, 2y + 1) output by the synthesis processing unit 906.
- by combining the two videos in this way, the synthesis processing unit 906 can reproduce a high-definition distribution (reference numeral 40g) close to the original subject distribution of graph 40a.
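A minimal sketch of this row interleaving, assuming both inputs have already been aligned as described above (function and variable names are illustrative):

```python
import numpy as np

def interleave_rows(upper, lower):
    """Double the vertical resolution: row y of `upper` (the half-pixel-up
    view) goes to output row 2y, row y of `lower` to output row 2y + 1."""
    h, w = upper.shape[:2]
    out = np.zeros((2 * h, w) + upper.shape[2:], dtype=upper.dtype)
    out[0::2] = upper
    out[1::2] = lower
    return out

# Left-eye output: the aligned video from imaging unit 101 supplies the even
# rows and the corrected video from imaging unit 102 the odd rows, e.g.:
# lc = interleave_rows(aligned_r, corrected_l)
```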
- the synthesis processing unit 906 of the right-eye synthesis unit 909 operates in the same manner as that of the left-eye synthesis unit 908, except that the pixel d of the alignment correction processing unit 901 (derived from the imaging unit 102) is placed below the pixel c of the correction processing unit 903 (derived from the imaging unit 101) having the same coordinates.
- the above combining operation is executed by the left eye combining unit 908 and the right eye combining unit 909.
- the left-eye synthesis unit 908 outputs the left-eye video signal LC, a high-definition video signal as captured from the position of the imaging unit 102 (that is, as seen by the left eye), and the right-eye synthesis unit 909 outputs the right-eye video signal RC, a high-definition video signal as captured from the position of the imaging unit 101 (that is, as seen by the right eye).
- the left-eye video signal LC and the right-eye video signal RC output from the stereoscopic imaging device 10 thus have twice the resolution of the video signals R and L output from the imaging units 101 and 102. The stereoscopic imaging device 10 can therefore generate stereoscopic video with twice the resolution at a device scale comparable to that of a stereoscopic imaging device whose output resolution equals that of the video signal of the imaging unit 101.
- FIG. 13 is an overview diagram of the stereoscopic imaging apparatus 111 according to a second embodiment of the present invention.
- the stereoscopic imaging apparatus 111 according to this embodiment differs from the first embodiment in that the number of imaging units is increased from two (reference numerals 101, 102) to four (reference numerals 101, 102, 103, 104). That is, the stereoscopic imaging device 111 includes an imaging unit 101, an imaging unit 102, an imaging unit 103, and an imaging unit 104.
- the imaging unit 101 includes an imaging lens 11-1 and an imaging element 12-1.
- the imaging unit 102 includes an imaging lens 11-2 and an imaging element 12-2, the imaging unit 103 includes an imaging lens 11-3 and an imaging element 12-3, and the imaging unit 104 includes an imaging lens 11-4 and an imaging element 12-4.
- FIG. 14 is a schematic block diagram illustrating a functional configuration of the stereoscopic imaging device 111 according to the present embodiment.
- the stereoscopic imaging device 111 includes imaging units 101, 102, 103, 104, a parallax calculation unit 21, and a multi-view high-resolution composition processing unit 121.
- the video signals R and L output from the imaging unit 101 and the imaging unit 102 are input to the multi-view high resolution synthesis processing unit 121 and the parallax calculation unit 21.
- the video signal R ′ output from the imaging unit 103 and the video signal L ′ output from the imaging unit 104 are input to the multi-view high-resolution composition processing unit 121.
- the processing of the parallax calculation unit 21 is the same as in the first embodiment; it calculates the R-reference parallax data and the L-reference parallax data and outputs them to the multi-view high-resolution synthesis processing unit 121.
- the multi-view high-resolution synthesis processing unit 121 synthesizes the four input video signals based on the two sets of parallax data, and outputs a right-eye video signal RC′ and a left-eye video signal LC′.
- FIG. 15 is a diagram illustrating the arrangement of the imaging lens and the imaging element in the present embodiment.
- the x axis is taken in the horizontal direction (lateral direction)
- the y axis is taken in the vertical direction (up and down direction)
- the z axis is taken in the depth direction. That is, FIG. 15 shows the arrangement of the imaging lens and the imaging element when the stereoscopic imaging device 111 is viewed from the front.
- the imaging lens 11-1 and the imaging lens 11-2 are arranged at the same position in the y-axis direction.
- the imaging lens 11-3 and the imaging lens 11-4 are disposed at the same position in the y-axis direction.
- the imaging lens 11-1 and the imaging lens 11-3 are disposed at the same position in the x-axis direction.
- the imaging lens 11-2 and the imaging lens 11-4 are disposed at the same position in the x-axis direction.
- the distance Dx from the center of the imaging lens 11-1 to the center of the imaging lens 11-2 is equal to the distance Dy from the center of the imaging lens 11-1 to the center of the imaging lens 11-3.
- the imaging units 101 to 104 are arranged at the vertices of a square in which each side is along either the horizontal or vertical direction.
- the image sensor 12-1 is arranged to be shifted by py / 2 in the y-axis direction (vertical direction) from the image sensor 12-2. Further, the image sensor 12-3 is arranged so as to be shifted by py / 2 in the y-axis direction (vertical direction) from the image sensor 12-4.
- py is the length of the pixel in the image sensor 12 in the y-axis direction.
- the image sensor 12-1 is arranged to be shifted to the left by px / 2 in the x-axis direction (lateral direction) from the image sensor 12-3.
- the image sensor 12-2 is arranged to be shifted to the left by px / 2 in the x-axis direction (lateral direction) from the image sensor 12-4.
- px is the length of the pixel in the x-axis direction in the image sensor.
- in other words, in the imaging unit 101 the position of the imaging element with respect to the imaging lens is shifted upward by half an imaging pixel of the imaging element, as compared with the imaging unit 102.
- likewise, in the imaging unit 103 the position of the imaging element with respect to the imaging lens is shifted upward by half an imaging pixel of the imaging element, as compared with the imaging unit 104.
- similarly, in the imaging unit 101 the position of the imaging element with respect to the imaging lens is shifted to the left by half an imaging pixel of the imaging element, as compared with the imaging unit 103.
- and in the imaging unit 102 the position of the imaging element with respect to the imaging lens is shifted to the left by half an imaging pixel of the imaging element, as compared with the imaging unit 104.
- here the example in which the imaging elements are displaced has been shown, but the imaging lenses may be displaced instead, as in FIG. 4 described in the first embodiment.
- the multi-view high-resolution synthesis processing unit 121 includes a left-eye multi-view synthesis unit 130 that generates a left-eye high-definition video, a right-eye multi-view synthesis unit 132 that generates a right-eye high-definition video, and a camera parameter storage unit 902 that holds the camera parameters of the imaging units 101 to 104.
- the left-eye synthesis unit 130 and the right-eye synthesis unit 132 each include an alignment correction processing unit 901, a correction processing unit 903, a vertical/horizontal alignment correction processing unit 904, a vertical alignment correction processing unit 905, and a multi-view synthesis processing unit 131. Since the two synthesis units operate identically except for the combination of input video signals and parallax data, only the operation of the left-eye synthesis unit 130 is described here.
- the video signal R output from the imaging unit 101 is input to the alignment correction processing unit 901. As in the first embodiment, the alignment correction processing unit 901 performs correction and alignment of the video represented by the video signal R, based on the camera parameters of the imaging unit 101 stored in the camera parameter storage unit 902 and the L-reference parallax data LS, and generates a video from the viewpoint of the imaging unit 102.
- the video signal L output from the imaging unit 102 is input to the correction processing unit 903. Similar to the first embodiment, the correction processing unit 903 performs correction processing of the video represented by the video signal L based on the camera parameters of the imaging unit 102 stored in the camera parameter storage unit 902.
- the video signal R ′ output from the imaging unit 103 is input to the vertical / horizontal alignment correction processing unit 904.
- the vertical/horizontal alignment correction processing unit 904 performs correction and alignment of the video represented by the video signal R′, based on the camera parameters of the imaging unit 103 stored in the camera parameter storage unit 902 and the L-reference parallax data LS, and generates a video from the viewpoint of the imaging unit 102.
- since the distance Dx between the center of the imaging lens 11-1 of the imaging unit 101 and the center of the imaging lens 11-2 of the imaging unit 102 is equal to the distance Dy between the center of the imaging lens 11-1 of the imaging unit 101 and the center of the imaging lens 11-3 of the imaging unit 103, the L-reference parallax data LS can also be used as vertical parallax data. That is, the vertical/horizontal alignment correction processing unit 904 performs alignment by applying the L-reference parallax data LS in both the vertical direction and the horizontal direction.
- specifically, when the L-reference parallax data LS at coordinates (x, y) is d, the pixel value at coordinates (x, y) in the video aligned by the vertical/horizontal alignment correction processing unit 904 is the pixel value at coordinates (x + d, y - d) in the video obtained by correcting, with the camera parameters, the video represented by the video signal output from the imaging unit 103.
- the video signal L ′ output from the imaging unit 104 is input to the vertical alignment correction processing unit 905.
- the vertical alignment correction processing unit 905 performs correction and alignment of the video represented by the video signal L′, based on the camera parameters of the imaging unit 104 stored in the camera parameter storage unit 902 and the L-reference parallax data LS, and generates a video from the viewpoint of the imaging unit 102. That is, the vertical alignment correction processing unit 905 performs alignment by applying the L-reference parallax data LS in the vertical direction.
- specifically, when the L-reference parallax data LS at coordinates (x, y) is d, the pixel value at coordinates (x, y) in the video aligned by the vertical alignment correction processing unit 905 is the pixel value at coordinates (x, y - d) in the video obtained by correcting, with the camera parameters, the video represented by the video signal output from the imaging unit 104.
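A sketch of these two alignment warps, generalizing the earlier horizontal warp; reusing the horizontal disparity d as a vertical shift is valid only because the baselines satisfy Dx = Dy, as noted above:

```python
import numpy as np

def warp_diagonal(img, ls):
    """Unit 904 (video from imaging unit 103): out[y, x] = img[y - d, x + d]."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            d = int(ls[y, x])
            if 0 <= x + d < w and 0 <= y - d < h:
                out[y, x] = img[y - d, x + d]
    return out

def warp_vertical(img, ls):
    """Unit 905 (video from imaging unit 104): out[y, x] = img[y - d, x]."""
    h, w = img.shape[:2]
    out = np.zeros_like(img)
    for y in range(h):
        for x in range(w):
            d = int(ls[y, x])
            if 0 <= y - d < h:
                out[y, x] = img[y - d, x]
    return out
```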
- next, the operation of the multi-view synthesis processing unit 131 in the left-eye synthesis unit 130 and the right-eye synthesis unit 132 will be described.
- the multi-view synthesis processing unit 131 performs high-resolution synthesis using the four systems of video signals obtained by the four imaging units 101, 102, 103, and 104. The multi-view synthesis processing unit 131 of the left-eye synthesis unit 130 generates and outputs a left-eye video signal LC′, a signal representing video from the viewpoint of the imaging unit 102.
- the multi-view synthesis processing unit 131 of the right-eye synthesis unit 132 generates and outputs a right-eye video signal RC ′ that is a signal representing video from the viewpoint of the imaging unit 101.
- the four-view high-resolution synthesis follows the same principle as described for the light intensity distribution of FIG. 12. More concretely, assume the resolution of each of the four imaging units 101, 102, 103, and 104 is VGA (640 × 480); the following describes high-resolution synthesis to Quad-VGA (1280 × 960 pixels), four times the number of pixels.
- as described above, the imaging element 12-1 of the imaging unit 101 is arranged shifted upward by half a pixel with respect to the imaging element 12-2 of the imaging unit 102, and the imaging element 12-2 of the imaging unit 102 is arranged shifted to the left by half a pixel with respect to the imaging element 12-4 of the imaging unit 104.
- the multi-view synthesis processing unit 131 takes the pixels G11, G21, G31, and G41 having the same coordinates in, respectively, the corrected video MR derived from the imaging unit 101, the corrected video ML derived from the imaging unit 102, the corrected video MR′ derived from the imaging unit 103, and the corrected video ML′ derived from the imaging unit 104, and arranges them as follows: the pixel G31 is placed to the right of the pixel G11, the pixel G21 below the pixel G11, and the pixel G41 to the right of the pixel G21.
- in the left-eye synthesis unit 130, the corrected video MR derived from the imaging unit 101 is the video generated by the alignment correction processing unit 901, the corrected video ML derived from the imaging unit 102 is the video generated by the correction processing unit 903, the corrected video MR′ derived from the imaging unit 103 is the video generated by the vertical/horizontal alignment correction processing unit 904, and the corrected video ML′ derived from the imaging unit 104 is the video generated by the vertical alignment correction processing unit 905.
- in the right-eye synthesis unit 132, on the other hand, the corrected video MR derived from the imaging unit 101 is the video generated by the correction processing unit 903, the corrected video ML derived from the imaging unit 102 is the video generated by the alignment correction processing unit 901, the corrected video MR′ derived from the imaging unit 103 is the video generated by the vertical alignment correction processing unit 905, and the corrected video ML′ derived from the imaging unit 104 is the video generated by the vertical/horizontal alignment correction processing unit 904.
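A minimal sketch of this 2 × 2 arrangement, with MR, ML, MR′, and ML′ as defined above; four VGA inputs yield one Quad-VGA output:

```python
import numpy as np

def interleave_quad(mr, ml, mr_p, ml_p):
    """For each source coordinate (x, y), place G11 (MR) at (2x, 2y),
    G31 (MR') to its right, G21 (ML) below it, and G41 (ML') diagonally."""
    h, w = mr.shape[:2]
    out = np.zeros((2 * h, 2 * w) + mr.shape[2:], dtype=mr.dtype)
    out[0::2, 0::2] = mr    # G11
    out[0::2, 1::2] = mr_p  # G31, right of G11
    out[1::2, 0::2] = ml    # G21, below G11
    out[1::2, 1::2] = ml_p  # G41, right of G21
    return out

# e.g. four VGA inputs of shape (480, 640) give an output of shape (960, 1280)
```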
- the above combining operation is executed by the left eye combining unit 130 and the right eye combining unit 132.
- the left-eye synthesizing unit 130 outputs a high-definition image obtained by synthesizing four images captured from the position of the imaging unit 102 (that is, viewed from the left eye).
- the right-eye combining unit 132 outputs a high-definition image obtained by combining four images captured from the position of the imaging unit 101 (that is, viewed from the right eye). That is, an image having a resolution four times that of the output of the imaging unit is output.
- the left-eye video signal LC′ and the right-eye video signal RC′ output from the stereoscopic imaging device 111 have four times the resolution of the video signals output from the individual imaging units 101, 102, 103, and 104, and twice the resolution of a video signal obtained by combining the video signal output from the imaging unit 101 and the video signal output from the imaging unit 103.
- that is, the stereoscopic imaging device 111 can generate stereoscopic video with twice the resolution at a device scale equivalent to that of a stereoscopic imaging device whose resolution equals that of the combined video signal of the imaging units 101 and 103.
- in the above, the example in which the video signals input to the parallax calculation unit 21 are the two video signals output by the imaging units 101 and 102 has been described, but the number of input video signals may be increased. In that case, high-definition video is output for a plurality of viewpoints beyond those for the right eye and the left eye, and as a result multi-view stereoscopic video can be generated without loss of resolution.
- a program for realizing the functions of the parallax calculation unit 21 and the high-resolution synthesis processing unit 20 of FIG. 2, or of the parallax calculation unit 21 and the multi-view high-resolution synthesis processing unit 121 of FIG. 14, may be recorded on a computer-readable recording medium, and the processing of each unit may be performed by reading the program recorded on that recording medium into a computer system and executing it.
- the “computer system” includes an OS and hardware such as peripheral devices.
- the “computer-readable recording medium” means a portable medium such as a flexible disk, a magneto-optical disk, a ROM, or a CD-ROM, or a storage device such as a hard disk incorporated in a computer system. Furthermore, the “computer-readable recording medium” also includes media that hold the program dynamically for a short time, such as a communication line used when the program is transmitted via a network such as the Internet or a communication line such as a telephone line, and media that hold the program for a certain period, such as the volatile memory inside a computer system serving as a server or a client in that case.
- the program may be one that realizes a part of the functions described above, or one that realizes the functions described above in combination with a program already recorded in the computer system.
- the present invention can be applied to a thin color camera that generates a high-definition image using a plurality of cameras.
Abstract
A three-dimensional imaging device is provided with: two imaging units that capture images of the same subject; a parallax calculation unit that detects corresponding points between the videos captured by the two imaging units and calculates parallax information for the captured videos of the two imaging units; and a synthesis processor that, taking the viewpoint of each of the two imaging units as a reference and based on the parallax information and the videos captured by the two imaging units, combines videos having more pixels than the captured videos and generates two systems of video with the larger number of pixels.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-168145 | 2010-07-27 | ||
JP2010168145A JP5088973B2 (ja) | 2010-07-27 | 2010-07-27 | Stereoscopic imaging device and imaging method thereof
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012014695A1 true WO2012014695A1 (fr) | 2012-02-02 |
Family
ID=45529911
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/066089 WO2012014695A1 (fr) | 2011-07-14 | Three-dimensional imaging device and corresponding imaging method
Country Status (2)
Country | Link |
---|---|
JP (1) | JP5088973B2 (fr) |
WO (1) | WO2012014695A1 (fr) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2013061440A (ja) * | 2011-09-13 | 2013-04-04 | Canon Inc | Imaging apparatus and method for controlling imaging apparatus |
CN102752616A (zh) * | 2012-06-20 | 2012-10-24 | Sichuan Changhong Electric Co., Ltd. | Method for converting binocular stereoscopic video into multi-view stereoscopic video |
JP6376474B2 (ja) * | 2013-05-29 | 2018-08-22 | NEC Corporation | Multi-eye imaging system, method for synthesizing acquired images, and program |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0682608A (ja) * | 1992-07-14 | 1994-03-25 | Nippon Telegr & Teleph Corp <Ntt> | Optical element, optical-axis changing element using the same, and projection display device |
JPH0815616A (ja) * | 1994-06-30 | 1996-01-19 | Olympus Optical Co Ltd | Stereoscopic endoscope imaging device |
JP2008078772A (ja) * | 2006-09-19 | 2008-04-03 | Oki Electric Ind Co Ltd | Stereo video processing apparatus and program for stereo video processing method |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0686332A (ja) * | 1992-09-04 | 1994-03-25 | Canon Inc | Compound-eye imaging method |
JP4621214B2 (ja) * | 2007-01-17 | 2011-01-26 | Japan Broadcasting Corporation | Stereoscopic image capturing position adjustment device, stereoscopic image capturing position adjustment method, program therefor, and stereoscopic image capturing system |
JP4958233B2 (ja) * | 2007-11-13 | 2012-06-20 | Tokyo Denki University | Multi-view image creation system and multi-view image creation method |
- 2010-07-27: JP application JP2010168145A, granted as patent JP5088973B2 (ja); status: not_active, Expired - Fee Related
- 2011-07-14: WO application PCT/JP2011/066089, published as WO2012014695A1 (fr); status: active, Application Filing
Also Published As
Publication number | Publication date |
---|---|
JP5088973B2 (ja) | 2012-12-05 |
JP2012029199A (ja) | 2012-02-09 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5238429B2 (ja) | Stereoscopic video imaging apparatus and stereoscopic video imaging system | |
CN102164298B (zh) | Method of acquiring element images based on stereo matching in a panoramic imaging system | |
JP5679978B2 (ja) | Stereoscopic image alignment apparatus, stereoscopic image alignment method, and program therefor | |
JP5472328B2 (ja) | Stereo camera | |
JP5982751B2 (ja) | Image processing apparatus, image processing method, and program | |
EP1836859B1 (fr) | Automatic conversion of monoscopic video to stereoscopic video | |
JP5320524B1 (ja) | Stereo imaging device | |
JP5204350B2 (ja) | Imaging apparatus, playback apparatus, and image processing method | |
CN102986233B (zh) | Image pickup apparatus | |
JP5308523B2 (ja) | Stereoscopic image display apparatus | |
US8130259B2 (en) | Three-dimensional display device and method as well as program | |
KR20110124473A (ko) | Apparatus and method for generating three-dimensional images for multi-view video | |
JP5814692B2 (ja) | Imaging apparatus, control method therefor, and program | |
WO2012029298A1 (fr) | Image capture device and image processing method | |
WO2014145856A1 (fr) | Systems and methods for stereo imaging using camera arrays | |
JP2013192229A (ja) | 2D/3D digital information acquisition and display device | |
JP2007529960A (ja) | 3D information acquisition and display system for personal electronic devices | |
US20130286170A1 (en) | Method and apparatus for providing mono-vision in multi-view system | |
KR20150003576A (ko) | Apparatus and method for generating or reproducing a three-dimensional image | |
JP5088973B2 (ja) | Stereoscopic imaging device and imaging method thereof | |
TWI462569B (zh) | Three-dimensional image camera and related control method | |
KR102112491B1 (ko) | Method for describing object points of the object space and arrangement for its execution | |
KR20110025083A (ko) | Apparatus and method for displaying stereoscopic images in a stereoscopic image system | |
JP5741353B2 (ja) | Image processing system, image processing method, and image processing program | |
JP5704885B2 (ja) | Imaging device, imaging method, and imaging control program | |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 11812287; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 11812287; Country of ref document: EP; Kind code of ref document: A1 |