WO2013031392A1 - 3d imaging device - Google Patents


Info

Publication number
WO2013031392A1
Authority
WO
WIPO (PCT)
Prior art keywords
imaging
stereoscopic
image
monocular
unit
Prior art date
Application number
PCT/JP2012/067786
Other languages
French (fr)
Japanese (ja)
Inventor
Katsutoshi Izawa (井澤 克俊)
Junji Hayashi (林 淳司)
Tomoyuki Kawai (河合 智行)
Yoichi Sawachi (沢地 洋一)
Original Assignee
Fujifilm Corporation (富士フイルム株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Fujifilm Corporation (富士フイルム株式会社)
Publication of WO2013031392A1 publication Critical patent/WO2013031392A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/239Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/204Image signal generators using stereoscopic image cameras
    • H04N13/207Image signal generators using stereoscopic image cameras using a single 2D image sensor
    • H04N13/218Image signal generators using stereoscopic image cameras using a single 2D image sensor using spatial multiplexing
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N13/00Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N13/20Image signal generators
    • H04N13/286Image signal generators having separate monoscopic and stereoscopic modes
    • GPHYSICS
    • G03PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
    • G03BAPPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
    • G03B35/00Stereoscopic photography
    • G03B35/08Stereoscopic photography by simultaneous recording

Definitions

  • the present invention relates to a stereoscopic imaging apparatus, and more particularly to a stereoscopic imaging apparatus provided with a plurality of optical systems.
  • stereoscopic display devices have been developed for displaying a stereoscopic image.
  • the cross point of a stereoscopic image is adjusted according to conditions such as the size of the display device and the distance between the viewer and the display device.
  • stereoscopic cameras capable of acquiring stereoscopic image data have been used.
  • stereoscopic image data is acquired using two (left and right) lenses and one image sensor having different sensitivity depending on the incident angle.
  • the imaging device described in Document 3 includes a plurality of optical systems, and can switch between monocular imaging and compound-eye imaging so that a stereoscopic image can be acquired.
  • Patent Document 1 adjusts the cross point according to the conditions at the time of image reproduction.
  • however, when the cross point shifts at the moment the shooting mode is switched, the viewer is given an unnatural feeling.
  • the settable shooting modes are limited, and images cannot be acquired in shooting modes arbitrarily selected from various shooting modes.
  • the present invention has been made based on such circumstances, and an object thereof is to provide a stereoscopic imaging apparatus capable of displaying an image with a natural feeling when the shooting mode is switched. It is another object of the present invention to provide a stereoscopic imaging apparatus that can acquire various images according to a user's request.
  • A stereoscopic imaging apparatus according to a first aspect of the present invention includes: a plurality of imaging units that capture images of a subject, the plurality of imaging units including at least one monocular stereoscopic imaging unit having an imaging element with a plurality of pixel groups, each pixel group photoelectrically converting a light beam that has passed through a different area of a single imaging optical system; an image generation unit that generates a stereoscopic image of the subject from the imaging signals of the plurality of imaging units; and a cross-point control unit that controls the cross point of the stereoscopic image.
  • The image generation unit can construct a stereoscopic image from a plurality of viewpoint images obtained by photographing with the monocular stereoscopic imaging unit (the monocular stereoscopic imaging function), and from viewpoint images obtained by photographing with a plurality of the imaging units (the compound-eye stereoscopic imaging function).
  • When switching from shooting with the compound-eye stereoscopic imaging function to shooting with the monocular stereoscopic imaging function, and when switching from shooting with the monocular stereoscopic imaging function to shooting with the compound-eye stereoscopic imaging function, the cross-point control unit controls the image generation unit so that the cross point of the displayed stereoscopic image does not change before and after the switching.
  • In monocular stereoscopic imaging, the in-focus point in the image becomes the cross point.
  • In compound-eye stereoscopic imaging, on the other hand, the in-focus position in the image generally differs from the cross point. Therefore, when monocular stereoscopic imaging and compound-eye stereoscopic imaging are switched to each other, the cross point changes before and after the switching, so that the viewer may feel the discomfort of the subject suddenly jumping forward or retracting backward. This sense of incongruity is felt particularly strongly when the pop-out state of the main subject, which is usually in focus, changes.
  • Therefore, in the first aspect, a cross-point control unit is provided, and when switching from imaging with the compound-eye stereoscopic imaging function to imaging with the monocular stereoscopic imaging function, and from imaging with the monocular stereoscopic imaging function to imaging with the compound-eye stereoscopic imaging function, the image generation unit is controlled so that the cross point of the stereoscopic image displayed on the image display unit does not change before and after the switching.
  • the “cross point” means a point where parallax is zero in a stereoscopic image.
  • a subject that is in front of the cross point appears to protrude (front) from the screen, and a subject that is behind the cross point appears to be retracted (back) from the screen.
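As an illustrative aside (not part of the patent disclosure), the relationship between the cross point and a horizontal image shift can be sketched in NumPy. The helper name `set_cross_point` and the toy 1-row images are hypothetical; the point is only that shifting one viewpoint image by a subject's disparity places that subject at zero parallax, i.e. on the screen plane.

```python
import numpy as np

def set_cross_point(left, right, disparity_at_target):
    """Shift the right viewpoint image horizontally so that a target with the
    given disparity (in pixels) ends up with zero parallax, i.e. becomes the
    cross point of the displayed stereoscopic pair."""
    shifted = np.roll(right, disparity_at_target, axis=1)
    return left, shifted

# Toy 1-row example: a "subject" at column 5 in the left view appears at
# column 3 in the right view, so its disparity is +2 pixels.
left = np.zeros((1, 10)); left[0, 5] = 1.0
right = np.zeros((1, 10)); right[0, 3] = 1.0
_, right_shifted = set_cross_point(left, right, 2)
# After shifting, the subject occupies the same column in both views
# (zero parallax): it lies on the screen plane rather than in front/behind it.
```

Subjects whose residual disparity after the shift is positive would appear in front of the screen, and those with negative residual disparity behind it, matching the definition above.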
  • In the stereoscopic imaging apparatus according to the first aspect, while imaging with the compound-eye stereoscopic imaging function continues, the cross-point control unit detects a first in-focus area in one viewpoint image among the viewpoint images constituting the stereoscopic image obtained by imaging with the compound-eye stereoscopic imaging function, detects a first corresponding area corresponding to the detected first in-focus area in another viewpoint image among those viewpoint images, and calculates a first positional deviation amount between the one viewpoint image and the other viewpoint image based on the detected first corresponding area.
  • Then, the cross point of the stereoscopic image obtained by imaging with the compound-eye stereoscopic imaging function may be shifted in advance, by the first positional deviation amount, so as to coincide with the first in-focus area, which is the cross point of the stereoscopic image obtained by photographing with the monocular stereoscopic imaging function.
  • Thereby, the cross point does not change suddenly when actually switching from the compound-eye stereoscopic imaging function to the monocular stereoscopic imaging function, and the image can be displayed with a natural feeling.
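The detect-and-align procedure above (find the in-focus area in one view, locate the corresponding area in the other view, measure the positional deviation, then shift) can be sketched as follows. This is a minimal illustration assuming grayscale NumPy images and a plain sum-of-absolute-differences search; the function name `find_displacement` and all parameters are hypothetical, not from the patent.

```python
import numpy as np

def find_displacement(view_a, view_b, top, left, h, w, search=8):
    """Locate the patch of view_a at (top, left, h, w) inside view_b by
    minimising the sum of absolute differences over horizontal shifts,
    returning the horizontal misregistration amount in pixels."""
    patch = view_a[top:top + h, left:left + w].astype(float)
    best_sad, best_dx = None, 0
    for dx in range(-search, search + 1):
        l = left + dx
        if l < 0 or l + w > view_b.shape[1]:
            continue  # candidate window falls outside the image
        sad = np.abs(view_b[top:top + h, l:l + w].astype(float) - patch).sum()
        if best_sad is None or sad < best_sad:
            best_sad, best_dx = sad, dx
    return best_dx

# Synthetic pair: the right view is the left view shifted 3 px to the right.
rng = np.random.default_rng(0)
left_img = rng.random((32, 64))
right_img = np.roll(left_img, 3, axis=1)

# Treat an 8x8 window at (8, 20) as the detected in-focus area.
dx = find_displacement(left_img, right_img, 8, 20, 8, 8)
# Shifting the right view back by -dx makes the in-focus patch coincide in
# both views, i.e. the cross point is moved onto the detected in-focus area.
aligned = np.roll(right_img, -dx, axis=1)
```

Performing this computation continuously while compound-eye shooting is in progress is what allows the shift to be applied "in advance", so the displayed cross point is already correct at the instant of switching.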
  • In the stereoscopic imaging apparatus according to the first aspect, while shooting with the monocular stereoscopic imaging function continues, the cross-point control unit detects the second in-focus area, which is the cross point of the stereoscopic image, in one viewpoint image, and detects a second corresponding area corresponding to it in another viewpoint image obtained when shooting with the compound-eye stereoscopic imaging function is performed.
  • A second positional deviation amount between the one viewpoint image and the other viewpoint image is calculated based on the detected second corresponding area, and the already stored positional deviation amount is updated with and stored as this second positional deviation amount.
  • When the monocular stereoscopic imaging function is switched to the compound-eye stereoscopic imaging function, the image generation unit may be controlled to construct a stereoscopic image from the one viewpoint image and an image obtained by shifting the other viewpoint image by the stored second positional deviation amount, so that the cross point of the stereoscopic image obtained by shooting with the compound-eye stereoscopic imaging function matches the second in-focus area, which is the cross point of the stereoscopic image obtained by shooting with the monocular stereoscopic imaging function.
  • That is, the second positional deviation amount is calculated and updated in advance while shooting with the monocular stereoscopic imaging function continues, and when switching from the monocular stereoscopic imaging function to the compound-eye stereoscopic imaging function is performed, a stereoscopic image is constructed after shifting the viewpoint image newly obtained with the compound-eye stereoscopic imaging function by the second positional deviation amount.
  • an imaging function automatic switching unit that automatically switches between a monocular stereoscopic imaging function and a compound-eye stereoscopic imaging function.
  • For example, the imaging function automatic switching unit operates the monocular stereoscopic imaging function while the in-focus position is closer than a predetermined distance, switches to the compound-eye stereoscopic imaging function when the in-focus position becomes farther than the predetermined distance, and operates the cross-point control unit at the time of switching. Conversely, while the compound-eye stereoscopic imaging function is operating and the in-focus position becomes closer than the predetermined distance, the user may be prompted to switch to the monocular stereoscopic imaging function, and the switch may be made when the user so instructs.
  • In the stereoscopic imaging apparatus according to the first aspect, during operation of the compound-eye stereoscopic imaging function, the image generation unit may add the pixel signals of the pixel groups of the imaging element included in the monocular stereoscopic imaging unit for every pixel position, and perform pixel signal addition processing that uses the addition result as the pixel signal at each pixel position.
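The pixel signal addition processing described above can be sketched in a few lines. This is an illustrative NumPy sketch, not the patent's implementation: the arrays stand in for the per-position signals of the two pixel groups (main and sub-pixels), and the widened dtype is an assumption made to avoid 8-bit overflow.

```python
import numpy as np

def add_pixel_signals(main_pixels, sub_pixels):
    """For each pixel position, add the main-pixel and sub-pixel signals and
    use the sum as the output pixel signal. Because the two half-aperture
    signals are summed, the result approximates a full-aperture 2D image with
    a better signal-to-noise ratio than either half-aperture image alone."""
    return main_pixels.astype(np.uint16) + sub_pixels.astype(np.uint16)

main = np.array([[100, 120], [90, 110]], dtype=np.uint8)  # one pupil half
sub  = np.array([[ 98, 122], [92, 108]], dtype=np.uint8)  # other pupil half
combined = add_pixel_signals(main, sub)
```

Summing N independent signals grows the signal N-fold but the noise only by roughly the square root of N, which is why the addition reduces the relative noise of the generated image.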
  • A stereoscopic imaging apparatus according to a second aspect of the present invention includes: a plurality of imaging units that capture images of a subject, the plurality of imaging units including at least one monocular stereoscopic imaging unit having an imaging element with a plurality of pixel groups, each pixel group photoelectrically converting a light beam that has passed through a different area of a single imaging optical system; an image generation unit that generates a stereoscopic image of the subject from the imaging signals of the plurality of imaging units; and an imaging mode setting unit that sets an imaging mode based on a user instruction input.
  • According to the number of imaging units to be used for imaging among the plurality of imaging units and the number of viewpoint images to be acquired, the imaging mode setting unit sets an imaging mode from among imaging modes including a two-dimensional imaging mode and a monocular stereoscopic imaging mode using at least one monocular stereoscopic imaging unit, and a two-dimensional imaging mode and a compound-eye stereoscopic imaging mode using at least one monocular stereoscopic imaging unit together with an imaging unit other than that monocular stereoscopic imaging unit among the plurality of imaging units.
  • By changing the number of imaging units used for shooting and the number of viewpoint images to be acquired (the number of viewpoints), various imaging modes differing in the size of the parallax, the number of viewpoints, and the like can be set. For example, the two-dimensional imaging mode or the stereoscopic imaging mode can be selected, and within the stereoscopic imaging mode, the monocular stereoscopic imaging mode or the compound-eye stereoscopic imaging mode can be selected.
  • the stereoscopic imaging device according to the second aspect can acquire various images according to the user's request.
  • In the stereoscopic imaging apparatus according to the second aspect, based on a user instruction input, in the two-dimensional imaging mode using at least one monocular stereoscopic imaging unit, or in the compound-eye stereoscopic imaging mode using at least one monocular stereoscopic imaging unit and an imaging unit other than that monocular stereoscopic imaging unit among the plurality of imaging units, the image generation unit may add the pixel signals of the pixel groups of the imaging element included in the at least one monocular stereoscopic imaging unit for every pixel position, and perform pixel signal addition processing that uses the addition result as the pixel signal at each pixel position. In this way, the amount of noise in an image generated by adding a plurality of pixel signals can be reduced, and still more various images can be acquired according to the user's request.
  • In the stereoscopic imaging apparatus according to the second aspect, the number of the plurality of imaging units may be two, and the imaging unit other than the at least one monocular stereoscopic imaging unit may also be a monocular stereoscopic imaging unit.
  • the stereoscopic imaging apparatus may further include a stereoscopic image display unit that displays the generated stereoscopic image.
  • According to the stereoscopic imaging device of the present invention, it is possible to display an image with a natural feeling when the shooting mode is switched, and it is possible to acquire various images according to a user's request.
  • FIG. 1 is a block diagram showing a configuration of a stereoscopic imaging apparatus 10 according to the first embodiment of the present invention.
  • FIG. 2 is an image diagram showing the external appearance of the stereoscopic imaging apparatus 10.
  • FIG. 3 is a diagram illustrating a configuration of an imaging element used in the monocular stereoscopic imaging unit.
  • FIG. 4 is a diagram showing the main and sub-pixels of the image sensor shown in FIG. 3 one by one.
  • FIG. 5A is a diagram showing a configuration of a normal CCD.
  • FIG. 5B is a diagram showing an example of a configuration of a monocular 3D sensor.
  • FIG. 5C is a diagram illustrating another example of the configuration of the monocular 3D sensor.
  • FIG. 6 is a block diagram illustrating a main part of the stereoscopic imaging apparatus according to the first embodiment.
  • FIG. 7 is a flowchart showing the cross point control at the time of switching from the compound eye stereoscopic imaging function to the monocular stereoscopic imaging function.
  • FIG. 8A is a conceptual diagram illustrating a relationship between a cross point and a focal point during compound eye stereoscopic imaging.
  • FIG. 8B is a conceptual diagram illustrating a relationship between a cross point and a focal point during monocular stereoscopic imaging.
  • FIG. 9 is another conceptual diagram showing the cross point control at the time of switching from the compound eye stereoscopic imaging function to the monocular stereoscopic imaging function.
  • FIG. 10 is a flowchart showing the cross point control at the time of switching from the monocular stereoscopic imaging function to the compound eye stereoscopic imaging function.
  • FIG. 11 is a conceptual diagram showing a relationship between a cross point and a focal point during monocular / compound eye stereoscopic imaging.
  • FIG. 12 is a flowchart showing a process for automatically switching between the monocular / compound-eye stereoscopic imaging function.
  • FIG. 13 is a table showing shooting modes that can be set in the stereoscopic imaging apparatus according to the first embodiment.
  • FIG. 14 is a conceptual diagram illustrating a procedure for setting a shooting mode in the stereoscopic imaging apparatus according to the first embodiment.
  • FIG. 15 is a conceptual diagram illustrating pixel signal addition processing in the stereoscopic imaging apparatus according to the first embodiment.
  • FIG. 16 is a flowchart illustrating a procedure for selecting a shooting mode in consideration of pixel signal addition processing.
  • FIG. 17 is a block diagram illustrating a main part of a stereoscopic imaging apparatus according to the second embodiment of the present invention.
  • FIG. 18 is a table showing shooting modes that can be set in the stereoscopic imaging apparatus according to the second embodiment.
  • FIG. 1 is a block diagram illustrating an embodiment of the stereoscopic imaging apparatus 10 according to the present invention.
  • FIG. 2 is an image diagram illustrating an external appearance of the stereoscopic imaging apparatus 10.
  • the stereoscopic imaging apparatus 10 displays a captured image on a liquid crystal monitor (LCD) 30 or records it on a memory card 54 (hereinafter also referred to as “media”).
  • The overall operation of the apparatus is centrally controlled by a central processing unit (CPU) 40.
  • the stereoscopic imaging device 10 is provided with operation units 38 such as a shutter button, a mode dial, a playback button, a MENU / OK key, a cross key, and a BACK key.
  • A signal from the operation unit 38 is input to the CPU 40, and the CPU 40 controls each circuit of the stereoscopic imaging device 10 based on the input signal; for example, it performs lens drive control, aperture drive control, shooting operation control, image processing control, image data recording/reproduction control, display control of the liquid crystal monitor 30 for stereoscopic display, and the like.
  • the shutter button is an operation button for inputting an instruction to start shooting, and is configured by a two-stroke switch having an S1 switch that is turned on when half-pressed and an S2 switch that is turned on when fully pressed.
  • The mode dial is selection means for selecting among a 2D shooting mode, a 3D shooting mode, an auto shooting mode, a manual shooting mode, scene positions such as person, landscape, and night view, a macro mode, a moving image mode, and the parallax-priority shooting mode according to the present invention.
  • the playback button is a button for switching to a playback mode in which a still image or a moving image of a stereoscopic image (3D image) or a planar image (2D image) that has been recorded is displayed on the liquid crystal monitor 30.
  • The MENU/OK key is an operation key having both a function as a menu button for instructing display of a menu on the screen of the liquid crystal monitor 30 and a function as an OK button for instructing confirmation and execution of the selected contents.
  • The cross key is an operation unit for inputting instructions in four directions (up, down, left, and right), and functions as a button (cursor movement operation means) for selecting an item from a menu screen or instructing selection of various setting items from each menu.
  • the up / down key of the cross key functions as a zoom switch for shooting or a playback zoom switch in playback mode
  • the left / right key functions as a frame advance (forward / reverse feed) button in playback mode.
  • the BACK key is used to delete a desired object such as a selection item, cancel an instruction content, or return to the previous operation state.
  • Image light representing the subject passes through the photographing lenses 12 (12-1, 12-2), each including a focus lens and a zoom lens, and the diaphragms 14 (14-1, 14-2), and reaches the solid-state imaging devices 16 (16-1, 16-2; hereinafter referred to as "monocular 3D sensors"), which are phase-difference image sensors.
  • the photographing lenses 12 (12-1, 12-2) are driven by a lens driving unit 36 (36-1, 36-2) controlled by the CPU 40, and focus control, zoom control, and the like are performed.
  • The diaphragms 14 (14-1, 14-2) are each composed of, for example, five diaphragm blades, and are driven by diaphragm driving units 34 (34-1, 34-2) controlled by the CPU 40.
  • Aperture control is performed in six steps in 1 AV increments, from an aperture value of F1.4 to F11.
  • The CPU 40 controls the diaphragms 14 (14-1, 14-2) via the diaphragm driving units 34 (34-1, 34-2), and, via the CCD control units 32 (32-1, 32-2), controls the charge accumulation time (shutter speed) in the monocular 3D sensors 16, the readout of image signals from the monocular 3D sensors 16, and the like.
  • FIG. 3 is a diagram illustrating a configuration example of the monocular 3D sensor 16.
  • the monocular 3D sensor 16 includes odd-line pixels (main pixels) and even-line pixels (sub-pixels) arranged in a matrix.
  • image signals for the two surfaces photoelectrically converted by these main and sub-pixels can be read independently.
  • On the odd lines (1, 3, 5, ...), lines with the GRGR... pixel arrangement and lines with the BGBG... pixel arrangement are provided alternately.
  • The pixels on the even lines (2, 4, 6, ...) are likewise arranged in alternating GRGR... and BGBG... lines, shifted in the line direction by a half pitch with respect to the odd-line pixels.
  • FIG. 4 is a diagram showing the photographing lens 12 (photographing optical system), the diaphragm 14, and the main pixel PDa and sub-pixel PDb of the monocular 3D sensor 16, and FIGS. 5A to 5C are enlarged views of the main part of FIG. 4.
  • the light beam passing through the exit pupil enters the normal CCD pixel (photodiode PD) via the microlens L without being restricted.
  • The monocular 3D sensor 16 shown in FIG. 5B includes a microlens L that collects the light beam that has passed through the photographing lens 12, photodiodes PD (the main pixel PDa and the sub-pixel PDb) that receive the light beam that has passed through the microlens L, and a light shielding member 16A that partially shields the light receiving surface of the photodiodes PD.
  • the right half or the left half of the light receiving surfaces of the main pixel PDa and the subpixel PDb is shielded by the light shielding member 16A. That is, the light shielding member 16A functions as a pupil division member.
  • the monocular 3D sensor 16 having the above-described configuration is configured such that the main pixel PDa and the sub-pixel PDb have different regions (right half and left half) where the light beam is limited by the light shielding member 16A.
  • Alternatively, as shown in FIG. 5C, the microlens L and the photodiodes PD (PDa, PDb) may be relatively shifted in the left-right direction (pupil division direction) without providing the light shielding member 16A, so that the optical axis Ic of the microlens L is displaced from the centers Pc of the photodiodes PDa and PDb, thereby limiting the light beam incident on each photodiode. Further, by providing one microlens for two pixels (main pixel and sub-pixel), the light flux incident on each pixel may be limited.
  • the signal charge accumulated in the monocular 3D sensor 16 (16-1, 16-2) is read out as a voltage signal corresponding to the signal charge based on the readout signal applied from the CCD controller 32.
  • The voltage signals read from the monocular 3D sensors 16 (16-1, 16-2) are applied to the analog signal processing units 18 (18-1, 18-2), where the R, G, and B signals are sampled and held, amplified by a gain designated by the CPU 40 (corresponding to the ISO sensitivity), and then applied to the A/D converters 20 (20-1, 20-2).
  • The A/D converters 20 (20-1, 20-2) sequentially convert the input R, G, and B signals into digital R, G, and B signals and output them to the image input controllers 22 (22-1, 22-2).
  • The first photographing lens 12-1, the first diaphragm 14-1, the first monocular 3D sensor 16-1, the first analog signal processing unit 18-1, the first A/D converter 20-1, the first image input controller 22-1, the first CCD control unit 32-1, the first aperture driving unit 34-1, and the first lens driving unit 36-1 constitute the first imaging unit 11-1.
  • Likewise, the second photographing lens 12-2, the second diaphragm 14-2, the second monocular 3D sensor 16-2, the second analog signal processing unit 18-2, the second A/D converter 20-2, the second image input controller 22-2, the second CCD control unit 32-2, the second diaphragm driving unit 34-2, and the second lens driving unit 36-2 constitute the second imaging unit 11-2.
  • The digital signal processing unit 24 performs predetermined signal processing on the digital image signals input via the image input controllers 22, such as offset processing, gain control processing including white balance correction and sensitivity correction, gamma correction processing, synchronization processing, YC processing, and sharpness correction.
  • The EEPROM 46 is a non-volatile memory that stores a camera control program, defect information of the monocular 3D sensors 16, various parameters and tables used for image processing, program diagrams, a plurality of parallax-priority program diagrams according to the present invention, and the like.
  • The main image data read from the odd-line main pixels of the monocular 3D sensor 16 is processed as left viewpoint image data, and the sub-image data read from the even-line sub-pixels is processed as right viewpoint image data.
  • the left viewpoint image data and right viewpoint image data (3D image data) processed by the digital signal processing unit 24 are input to the VRAM 50.
  • the VRAM 50 includes an A area and a B area each storing 3D image data representing a 3D image for one frame.
  • In the VRAM 50, 3D image data representing a 3D image for one frame is rewritten alternately into the A area and the B area, and the written 3D image data is read from whichever of the two areas is not currently being rewritten.
  • The 3D image data read from the VRAM 50 is encoded by the video encoder 28 and output to the stereoscopic display liquid crystal monitor 30 provided on the back of the camera, whereby a 3D subject image is displayed on the display screen of the liquid crystal monitor 30.
  • the liquid crystal monitor (LCD) 30 is a stereoscopic display unit that can display a stereoscopic image (a left viewpoint image and a right viewpoint image) as a directional image having a predetermined directivity by a parallax barrier, but is not limited thereto.
  • the left viewpoint image and the right viewpoint image may be viewed separately by using a lenticular lens or by wearing dedicated glasses such as polarized glasses or liquid crystal shutter glasses.
  • In the present embodiment, the case where the stereoscopic imaging apparatus 10 includes the liquid crystal monitor 30 capable of displaying a stereoscopic image has been described; however, the stereoscopic imaging apparatus 10 may not include the liquid crystal monitor 30, and a stereoscopic image may instead be viewed on another stereoscopic image display device using the image data recorded on the memory card 54.
  • When the shutter button of the operation unit 38 is half-pressed, the AF operation and the AE operation are started, and the focus lens in the photographing lens 12 is controlled via the lens driving unit 36 so as to come to the in-focus position.
  • the image data output from the A / D converter 20 when the shutter button is half-pressed is taken into the AE detection unit 44.
  • the AE detection unit 44 integrates the G signals of the entire screen or integrates the G signals that are weighted differently in the central portion and the peripheral portion of the screen, and outputs the integrated value to the CPU 40.
  • The CPU 40 calculates the brightness of the subject (shooting EV value) from the integrated value input from the AE detection unit 44, determines the aperture value of the diaphragm 14 and the electronic shutter (shutter speed) of the monocular 3D sensor 16 according to a predetermined program diagram based on the shooting EV value, controls the diaphragm 14 via the diaphragm driving unit 34 based on the determined aperture value, and controls the charge accumulation time in the monocular 3D sensor 16 via the CCD control unit 32 based on the determined shutter speed.
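The AE computation above (integrating the G signal with different weights for the centre and the periphery, then deriving a brightness value) can be sketched as follows. This is an illustrative NumPy sketch only: the 2x centre weight, the half-size centre window, and the calibration constant are hypothetical placeholders, not values from the patent.

```python
import numpy as np

def weighted_g_integral(g, center_weight=2.0):
    """Integrate the G signal over the screen, weighting the central portion
    more heavily than the periphery (weighting scheme is illustrative)."""
    h, w = g.shape
    weights = np.ones((h, w))
    weights[h // 4: 3 * h // 4, w // 4: 3 * w // 4] = center_weight
    return float((g * weights).sum())

def shooting_ev(integral, num_pixels, calibration=1.0):
    """Map the mean weighted G level to a log2 (EV-like) brightness value;
    the calibration constant stands in for sensor-specific factors."""
    mean = max(integral / num_pixels, 1e-6)  # guard against log2(0)
    return float(np.log2(mean / calibration))

flat = np.ones((8, 8))          # uniformly lit scene
bright_centre = flat.copy()
bright_centre[2:6, 2:6] = 2.0   # brighter main subject in the centre
```

With centre weighting, a bright main subject raises the integral (and hence the EV estimate) more than equally bright periphery would, which is the intended bias toward exposing for the subject.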
  • the AF processing unit 42 is a part that performs contrast AF processing.
  • A high-frequency component of the image data in a predetermined focus area is extracted from at least one of the left viewpoint image data and the right viewpoint image data, and an AF evaluation value indicating the in-focus state is calculated by integrating the high-frequency component.
  • AF control is performed by controlling the focus lens in the photographing lens 12 so that the AF evaluation value is maximized.
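The contrast-AF evaluation value can be sketched as below. This is an illustrative aside, not the patent's implementation: a simple horizontal difference filter stands in for the high-pass extraction, and the focus-area coordinates are hypothetical parameters.

```python
import numpy as np

def af_evaluation_value(image, top, left, h, w):
    """Contrast-AF figure of merit: extract a high-frequency component in the
    focus area (here a crude horizontal difference filter) and integrate its
    absolute value. The focus lens position that maximizes this value is
    taken as the in-focus position."""
    area = image[top:top + h, left:left + w].astype(float)
    high_freq = np.diff(area, axis=1)  # adjacent-pixel differences
    return float(np.abs(high_freq).sum())

# A sharp (high-contrast) patch scores higher than a defocused, flat one.
sharp = np.tile([0.0, 1.0], (8, 8))   # 8x16 image of alternating columns
blurred = np.full((8, 16), 0.5)       # uniform grey, no high frequencies
```

In practice the lens is stepped through positions and this value is evaluated at each step, with the maximum taken as the in-focus position.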
  • Alternatively, phase-difference AF processing may be performed. In this case, the phase difference between the image data corresponding to the main pixels and the sub-pixels in a predetermined focus area of the left viewpoint image data and the right viewpoint image data is detected, and the defocus amount is obtained based on information indicating the phase difference.
  • AF control is performed by controlling the focus lens in the taking lens 12 so that the defocus amount becomes zero.
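The phase-difference detection step can be sketched as a shift search between the main-pixel and sub-pixel signals. This is an illustrative sketch under simplifying assumptions (1-D signals, a normalised correlation search with a hypothetical `max_shift` window); converting the detected shift into a physical defocus amount would additionally require optical constants not modelled here.

```python
import numpy as np

def phase_shift(main_line, sub_line, max_shift=5):
    """Estimate the phase difference between the main-pixel and sub-pixel
    signals in the focus area by finding the integer shift that maximises
    their overlap correlation. The sign and magnitude of this shift map to a
    defocus amount, and AF drives the lens until the shift becomes zero."""
    n = len(main_line)
    best_shift, best_score = 0, -np.inf
    for s in range(-max_shift, max_shift + 1):
        lo, hi = max(0, s), min(n, n + s)
        a = main_line[lo:hi]           # compare main[i] with sub[i - s]
        b = sub_line[lo - s:hi - s]
        score = float((a * b).sum()) / max(len(a), 1)  # length-normalised
        if score > best_score:
            best_score, best_shift = score, s
    return best_shift
```

For example, two Gaussian intensity profiles offset by two pixels yield a nonzero shift, while identical profiles yield zero, i.e. the in-focus condition.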
  • The two pieces of image data temporarily stored in the memory 48 are appropriately read out by the digital signal processing unit 24, where predetermined signal processing including generation processing (YC processing) of luminance data and color difference data is performed.
  • the YC processed image data (YC data) is stored in the memory 48 again. Subsequently, the two pieces of YC data are respectively output to the compression / decompression processing unit 26 and subjected to predetermined compression processing such as JPEG (joint photographic experts group), and then stored in the memory 48 again.
  • A multi-picture file (MP file: a file in a format in which a plurality of images are connected) is generated from the two pieces of YC data (compressed data) stored in the memory 48, and the MP file is read by the media controller 52 and recorded in the memory card 54.
  • MP file a file in a format in which a plurality of images are connected
  • the stereoscopic imaging device 10 having the configuration shown in FIG. 1 includes a first imaging unit 11-1 and a second imaging unit 11-2 that image a subject, and a CPU 40 provided as a control unit for controlling the first imaging unit 11-1 and the second imaging unit 11-2.
  • Both the first imaging unit 11-1 and the second imaging unit 11-2 include an image sensor (monocular 3D sensors 16-1 and 16-2) having a plurality of pixel groups that photoelectrically convert light beams that have passed through different areas of the exit pupil of the photographing lens 12 (12-1, 12-2).
  • the stereoscopic imaging device 10 can display, on the liquid crystal monitor 30, a stereoscopic image composed of a viewpoint image (image information) obtained by the first imaging unit 11-1 and a viewpoint image (image information) obtained by the second imaging unit 11-2, and can also display, on the liquid crystal monitor 30, a stereoscopic image composed of a plurality of viewpoint images (image information) obtained by one imaging unit, having a plurality of pixel groups, among the first imaging unit 11-1 and the second imaging unit 11-2.
  • in the following description, the imaging unit on the right side is referred to as the “first imaging unit” and the imaging unit on the left side as the “second imaging unit”.
  • FIGS. 8A and 8B are conceptual diagrams showing the relationship between the cross point and the focus during monocular / compound eye stereoscopic imaging.
  • the cross point means a point where parallax is zero in a stereoscopic image.
  • the left and right image data can be shifted electronically to set the cross point freely; therefore, as in the example of FIG. 8A, the cross point (object B) and the focal point (object A) often differ.
  • the stereoscopic imaging apparatus 10 can perform the following crosspoint control. Such cross-point control is particularly effective when shooting continuously, that is, when shooting moving images and acquiring so-called through images. Note that whether or not to perform crosspoint control may be determined by a user input via the operation unit 38.
  • FIG. 7 is a flowchart showing the cross point control at the time of switching from the compound eye stereoscopic imaging function to the monocular stereoscopic imaging function.
  • FIG. 9 is a conceptual diagram showing the state of the cross point control.
  • when the compound eye stereoscopic photographing mode (compound eye 3D mode) is set, left and right viewpoint images are acquired in S104.
  • here it is assumed that the left viewpoint image (L in FIG. 9A) is acquired by the left channel of the first imaging unit 11-1, and the right viewpoint image (R in FIG. 9A) is acquired by the right channel of the second imaging unit 11-2.
  • the contrast AF process is performed using the left viewpoint image L acquired in the left channel of the first imaging unit 11-1.
  • the cross point is object B and the focal point is object A, as in FIG. 8A.
  • in S106, an in-focus area is detected in the left viewpoint image L; here the region S1 where the contrast is maximum is taken as the in-focus area.
  • in S108, a region corresponding to the region S1 is detected in the right viewpoint image R; this is the area S2 in (c) of FIG. 9.
  • This corresponding area detection can be performed by various methods such as a correlation method and template matching.
  • next, the amount (shift amount ΔX) by which the right viewpoint image R must be moved so that the cross point becomes object A is calculated.
  • the right viewpoint image R is then moved by ΔX (the right viewpoint image R′ in (d) of FIG. 9), and the stereoscopic image composed of the left viewpoint image L and the right viewpoint image R′ is displayed on the liquid crystal monitor 30.
  • the three-dimensional image may be recorded on the memory card 54.
  • the cross point of the stereoscopic image obtained by the compound eye stereoscopic imaging is object A, which matches the in-focus area.
  • Such processing is performed while the compound eye stereoscopic photography is continued, and the state where the cross point and the in-focus point coincide with each other is maintained until switching from the compound eye stereoscopic photography to the monocular stereoscopic photography.
  • Such processing may be performed at predetermined time intervals, for example, 100 msec intervals.
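Steps S106 to S112 above can be sketched as follows: find the maximum-contrast block S1 in the left image, locate the corresponding block S2 in the right image (the patent mentions correlation methods and template matching; sum-of-squared-differences matching along the same row is used here), and return the shift that aligns the in-focus area. Block size, the row-constrained search (assuming rectified images), and the sign convention of the returned shift are all illustrative assumptions.

```python
import numpy as np

def find_focus_block(img, block=4):
    """Return (row, col) of the block with maximum contrast (std deviation),
    standing in for the in-focus region S1."""
    h, w = img.shape
    best, best_c = (0, 0), -1.0
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            c = img[y:y + block, x:x + block].std()
            if c > best_c:
                best, best_c = (y, x), c
    return best

def match_block(right, template, row, block=4):
    """Search the same row of the right image for the template (SSD),
    standing in for the corresponding region S2."""
    w = right.shape[1]
    best_x, best_err = 0, float("inf")
    for x in range(0, w - block + 1):
        err = np.sum((right[row:row + block, x:x + block] - template) ** 2)
        if err < best_err:
            best_x, best_err = x, err
    return best_x

def crosspoint_shift(left, right, block=4):
    """Shift amount dX that moves the right image so the cross point
    falls on the in-focus area."""
    y, x1 = find_focus_block(left, block)
    tpl = left[y:y + block, x1:x1 + block]
    x2 = match_block(right, tpl, y, block)
    return x1 - x2
```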
  • FIG. 10 is a flowchart showing the procedure of such crosspoint control
  • FIG. 11 is a conceptual diagram showing the state of the crosspoint control.
  • in monocular stereoscopic photography, the stereoscopic image is composed of two viewpoint images with small parallax, as shown in FIG. 11.
  • the cross point and the focal point coincide (object A in FIG. 11A).
  • the viewpoint image obtained in this way is displayed on the liquid crystal monitor 30 as a stereoscopic image.
  • the in-focus area is detected in the image acquired by the first imaging unit 11-1.
  • the cross point coincides with the focal point at the time of monocular stereoscopic photography, as shown in (b) and (c) of FIG. 11.
  • the focus area may be detected by one of the left and right viewpoint images.
  • a viewpoint image by the second imaging unit 11-2 is also acquired in S210 ((d) in FIG. 11).
  • an area corresponding to the detected in-focus area is detected in the viewpoint image acquired by the second imaging unit 11-2 (S212).
  • the corresponding area can be detected by an algorithm such as a correlation method as described above.
  • a deviation amount between the viewpoint image acquired by the first imaging unit 11-1 and the viewpoint image acquired by the second imaging unit 11-2 is calculated ((e) in FIG. 11).
  • This shift amount is an image shift amount for causing the cross point and the in-focus area to coincide with each other in the stereoscopic image obtained when switching to the compound eye stereoscopic shooting.
  • Such processing is performed while monocular stereoscopic photography continues, and the calculation, update, and recording of the deviation amount are continued until switching from monocular stereoscopic photography to compound eye stereoscopic photography.
  • Such processing may be performed at predetermined time intervals, for example, 100 msec intervals.
  • in S218, the viewpoint image acquired by the second imaging unit 11-2 is shifted by the above deviation amount ((f) in FIG. 11), and is displayed on the liquid crystal monitor 30 as a stereoscopic image together with the viewpoint image acquired by the first imaging unit 11-1 (S220).
  • since the viewpoint image acquired by the second imaging unit 11-2 is shifted by the stored deviation amount, in the stereoscopic image obtained when switching to compound eye stereoscopic shooting the cross point coincides with the in-focus area, just as in monocular stereoscopic shooting. Therefore, even when switching from monocular stereoscopic photography to compound eye stereoscopic photography, the viewer does not feel that the cross point has changed suddenly, and the image can be displayed with a natural feeling when the shooting mode is switched.
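The monocular-side control above (S210–S220) amounts to a small state machine: keep the deviation amount updated while monocular shooting continues, then apply the stored value the moment compound-eye shooting starts. A minimal sketch, assuming a `measure_shift` function in the style of the cross-point calculation (the class and method names are hypothetical):

```python
import numpy as np

class CrosspointController:
    """Keeps the stored shift updated during monocular shooting (S210-S216)
    and applies it when switching to compound-eye shooting (S218-S220)."""

    def __init__(self, measure_shift):
        self._measure = measure_shift  # e.g. a crosspoint_shift-style function
        self._stored = 0

    def update(self, view1, view2):
        # S210-S216: recompute and record the deviation amount periodically
        self._stored = self._measure(view1, view2)

    def on_switch_to_compound(self, view2):
        # S218: shift the second unit's image by the stored amount so the
        # cross point does not jump at the moment of switching
        return np.roll(view2, self._stored, axis=1)
```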
  • for short-distance shooting, the cross-point adjustment function needs to be activated only when switching to long-distance shooting; for long-distance shooting, however, the function must always be active (so that the cross point is adjusted continuously), not only for switching to short-distance shooting but also to handle a user's request to change between monocular and compound-eye stereoscopic shooting.
  • first, the user selects whether to automatically switch between monocular and compound-eye stereoscopic photography (S302). If automatic switching is not selected, the user is allowed to select either monocular or compound eye stereoscopic photographing (S304). If YES in S302, the process proceeds to S306, where the in-focus position is detected; if it is closer than a predetermined threshold (for example, 70 cm) (YES in S306), monocular stereoscopic shooting is performed (S308). This determination is repeated at a predetermined time interval (for example, 100 msec) (S310).
  • after compound eye stereoscopic photographing with the cross point automatic adjustment function activated is started in S316, whether or not to switch to monocular stereoscopic photographing is determined at a predetermined time interval (S318) in S320. If the in-focus distance is shorter than the predetermined threshold in S320, or if a change to monocular stereoscopic photography is requested by a user instruction via the operation unit 38, the process returns to S308 and monocular stereoscopic photography is performed. If neither condition is satisfied in S320, the process returns to S316 to continue compound eye stereoscopic photographing with the cross point adjustment function active.
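The switching decision in this flow (focus distance against a threshold, overridden by a user request) reduces to a small pure function that the periodic timer can call. A hedged sketch: the 0.7 m default mirrors the 70 cm example in the text, while the function name and the `user_request` string values are assumptions for illustration.

```python
def select_mode(focus_distance_m, user_request=None, threshold_m=0.7):
    """Decide monocular vs compound-eye stereoscopic shooting.

    user_request models a change request via the operation unit and, when
    present, takes priority over the distance check.
    """
    if user_request in ("monocular", "compound"):
        return user_request
    if focus_distance_m < threshold_m:
        return "monocular"
    # long distance: compound-eye, with cross-point auto adjustment kept active
    return "compound"
```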
  • both the first imaging unit 11-1 and the second imaging unit 11-2 have a monocular stereoscopic imaging function. Therefore, if left and right viewpoint images are acquired by both of the two imaging units, a total of four viewpoints can be obtained. With more viewpoints, improved performance can be expected when detecting corresponding points and measuring the amount of parallax.
  • the first imaging unit 11-1 acquires the left viewpoint image
  • the second imaging unit 11-2 acquires the right viewpoint image.
  • alternatively, the left viewpoint image may be acquired by the second imaging unit 11-2 and the right viewpoint image by the first imaging unit 11-1; in the latter case the obtained parallax is slightly smaller.
  • if the pixel values are obtained by performing pixel addition as indicated by the dotted line in FIG. 15, two viewpoint images with reduced noise can be obtained.
  • a stereoscopic image composed and displayed from viewpoint images with small parallax gives less sense of presence than one with large parallax, but it is less tiring to view, and when displayed on a 3D (three-dimensional) TV it has the advantage of being seen as a normal 2D (two-dimensional) image, not as a double image, by a viewer who is not wearing glasses.
  • the number of viewpoints, the amount of parallax, the amount of noise, and the like differ depending on the combination of the number of imaging units used for shooting, the number of viewpoint images to be acquired, and the presence or absence of pixel addition; by selecting an appropriate combination, an image matching the user's request can be acquired.
  • the table shown in FIG. 13 summarizes such shooting modes that can be set by the stereoscopic imaging apparatus 10.
  • FIG. 14 is a diagram showing an example of a specific procedure for setting the shooting mode shown in FIG.
  • the interface shown in FIG. 14 can be displayed on the liquid crystal monitor 30, and the shooting mode can be set by a user instruction input via the operation unit 38.
  • the stereoscopic imaging apparatus 10 first displays the screen shown in FIG. 14A on the liquid crystal monitor 30 and prompts the user to input the number of viewpoints (1, 2, or 4).
  • depending on the input number of viewpoints, the stereoscopic imaging apparatus 10 prompts the user for further input: for one viewpoint, which of the first and second imaging units 11-1 and 11-2 is to be used ((b) of FIG. 14); for two viewpoints, whether the two viewpoint images are to be acquired by monocular stereoscopic imaging or by compound eye stereoscopic photography ((c) of FIG. 14).
  • the shooting mode [7] or [8] is set depending on whether the first stereoscopic imaging unit 11-1 or the second stereoscopic imaging unit 11-2 is used.
  • the stereoscopic imaging device 10 prompts input of the amount of parallax as shown in FIG. 14. Then, one of the shooting modes [2] to [4] is set according to the input amount of parallax ((e) of FIG. 14).
  • the shooting mode setting is not limited to the above example.
  • the first imaging unit 11-1 may be selected unconditionally, without offering the first/second choice, when one viewpoint is selected. This is because the imaging unit on the side opposite the shutter button is expected to be relatively less prone to "finger over lens" (a user's finger covering the lens).
  • FIG. 15 is a conceptual diagram regarding addition of pixel signals.
  • the amount of noise can be reduced by adding the pixel signals of the two pixels of the image sensor 16 as indicated by the dotted line in FIG.
  • on the other hand, pixel addition may dull high-frequency signals and lower the resolution. Therefore, during two-viewpoint stereoscopic imaging, the stereoscopic imaging device 10 can automatically select, according to the brightness of the screen, the contrast of the signal, and the like, between "acquiring the viewpoint images by compound eye stereoscopic imaging, with the pixel signals of two pixels added in each imaging unit to form each viewpoint image" and "shooting by monocular stereoscopic imaging without adding pixel signals".
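The pixel-signal addition (dotted line in FIG. 15) can be illustrated with a short numeric sketch. Averaging is shown here, which is the addition up to a constant scale factor; with uncorrelated noise of standard deviation σ in each pixel group, the combined signal has noise of about σ/√2. The synthetic scene and noise levels are assumptions for demonstration only.

```python
import numpy as np

def add_pixel_groups(main_px, sub_px):
    """Combine the two pixel groups' signals at each pixel position
    (averaging: addition up to a scale factor). Uncorrelated noise is
    reduced by roughly 1/sqrt(2)."""
    return (main_px.astype(float) + sub_px.astype(float)) / 2.0

# noise check on synthetic data: a flat scene plus independent sensor noise
rng = np.random.default_rng(42)
scene = np.full((64, 64), 100.0)
main_view = scene + rng.normal(0.0, 4.0, scene.shape)
sub_view = scene + rng.normal(0.0, 4.0, scene.shape)
added = add_pixel_groups(main_view, sub_view)
```

The trade-off noted in the text is visible in the same framework: averaging also smooths any high-frequency difference between the two pixel groups, which is why the device may prefer the no-addition mode for bright, high-contrast scenes.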
  • FIG. 16 is a flowchart showing an example of such an automatic selection process of the stereoscopic shooting mode.
  • when automatic selection is started (S400), it is determined in S402 whether the luminance (Bv value) of the entire screen is equal to or greater than a predetermined threshold. If it is (YES in S402), the process proceeds to S406, where it is determined whether the response of the contrast extraction filter is equal to or greater than a predetermined threshold. If YES in S406, monocular stereoscopic shooting is performed with the shooting mode set to [5] or [6] (S408); if NO, compound eye stereoscopic shooting is performed with the shooting mode set to [3] (S404).
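The FIG. 16 decision can be sketched as a two-threshold check: bright, high-contrast scenes keep full resolution (monocular shooting without pixel addition), and otherwise pixel addition with compound-eye shooting suppresses noise. The threshold values and mode labels below are hypothetical placeholders, not values from the patent.

```python
def auto_select_mode(bv, filter_response, bv_thresh=5.0, contrast_thresh=0.3):
    """S402/S406 logic: select the two-viewpoint stereoscopic shooting mode
    from scene brightness (Bv) and contrast-filter response."""
    if bv >= bv_thresh and filter_response >= contrast_thresh:
        return "monocular_no_addition"   # shooting mode [5] or [6]
    return "compound_with_addition"      # shooting mode [3]
```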
  • the embodiment of the present invention is not limited to such a mode.
  • in the stereoscopic imaging device of the present invention, at least one of the plurality of imaging units need only be a monocular stereoscopic imaging unit, and the types of the other imaging units are not particularly limited. For example, a normal imaging unit (an imaging unit that is not a monocular stereoscopic imaging unit) may be used, or an imaging device that can separate the left and right incident light may be used.
  • FIG. 17 is a block diagram illustrating a main part of the stereoscopic imaging apparatus 10 ′ according to the second embodiment.
  • the first imaging unit 11-1 is a monocular stereoscopic imaging unit
  • the second imaging unit 11-2 ′ is an imaging unit having a normal sensor 17. Since the configuration other than this is the same as that of the stereoscopic imaging apparatus 10 according to the first embodiment, the same reference numerals as those of the stereoscopic imaging apparatus 10 are used, and detailed description thereof is omitted.
  • in the stereoscopic imaging apparatus 10′, as in the stereoscopic imaging apparatus 10 according to the first embodiment, both monocular stereoscopic imaging and compound eye stereoscopic imaging are possible, and the cross point control at the time of switching between monocular and compound-eye stereoscopic imaging described above, as well as automatic activation of the cross point adjustment function, can be performed.
  • the second imaging unit is a normal imaging unit, and thus the shooting modes that can be set are different from those in the stereoscopic imaging device 10 according to the first embodiment.
  • the shooting mode can be set according to the number of viewpoints, presence / absence of pixel addition, and the like as in the case of the stereoscopic imaging apparatus 10.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
  • Studio Devices (AREA)

Abstract

A 3D imaging device (10) pertaining to one embodiment of the present invention is provided with first and second imaging units (11-1, 11-2), both of which are monocular 3D imaging units. The 3D imaging device (10) is further provided with a cross-point control unit (40), which controls an image generation unit so that the cross-point of the 3D image displayed on an image display unit (30) does not change before and after switching, both when switching from imaging by means of a multi-eye 3D imaging function to the monocular 3D imaging function and when switching from imaging by means of the monocular 3D imaging function to the multi-eye 3D imaging function. As a result, large changes in the pop-out state resulting from changes in the cross-point are suppressed, and an image can be displayed with a natural feeling when switching imaging modes.

Description

Stereoscopic Imaging Device
The present invention relates to a stereoscopic imaging apparatus, and more particularly to a stereoscopic imaging apparatus provided with a plurality of optical systems.
In recent years, stereoscopic display devices for displaying stereoscopic images have been developed. For example, in the stereoscopic display device described in Patent Document 1, the cross point of a stereoscopic image is adjusted according to conditions such as the size of the display device and the distance between the viewer and the display device. Stereoscopic cameras capable of acquiring stereoscopic image data have also come into use. For example, the stereoscopic camera described in Patent Document 2 acquires stereoscopic image data using two (left and right) lenses and a single image sensor whose sensitivity differs depending on the incident angle, and the imaging device described in Patent Document 3 includes a plurality of optical systems and can switch between monocular imaging and compound-eye imaging so that a stereoscopic image can be acquired.
Patent Document 1: WO 2004/082297; Patent Document 2: JP 2007-279512 A; Patent Document 3: JP 2011-030123 A
However, the technique described in Patent Document 1 adjusts the cross point according to conditions at the time of image reproduction; when stereoscopic images shot in different shooting modes are switched and displayed, the cross point shifts and gives the user an unnatural impression. Further, in the techniques described in Patent Documents 2 and 3, the settable shooting modes are limited, and images cannot be acquired in a shooting mode arbitrarily selected from diverse shooting modes.
The present invention has been made in view of such circumstances, and an object thereof is to provide a stereoscopic imaging apparatus capable of displaying an image with a natural feeling when the shooting mode is switched. Another object of the present invention is to provide a stereoscopic imaging apparatus that can acquire diverse images according to a user's request.
To achieve the above object, a stereoscopic imaging apparatus according to a first aspect of the present invention comprises: a plurality of imaging units that image a subject, including at least one monocular stereoscopic imaging unit having an imaging element with a plurality of pixel groups each of which photoelectrically converts light beams that have passed through different areas of a single imaging optical system; an image generation unit that generates a stereoscopic image of the subject from the imaging signals of the plurality of imaging units; and a cross-point control unit that controls the cross point of the stereoscopic image. The image generation unit has a monocular stereoscopic imaging function that composes a stereoscopic image from a plurality of viewpoint images obtained by shooting with the monocular stereoscopic imaging unit, and a compound-eye stereoscopic imaging function that composes a stereoscopic image from a viewpoint image obtained by at least one monocular stereoscopic imaging unit and a viewpoint image obtained by an imaging unit other than the monocular stereoscopic imaging unit among the plurality of imaging units. The cross-point control unit controls the image generation unit so that, when switching from shooting with the compound-eye stereoscopic imaging function to shooting with the monocular stereoscopic imaging function, and when switching from shooting with the monocular stereoscopic imaging function to shooting with the compound-eye stereoscopic imaging function, the cross point of the displayed stereoscopic image does not change before and after the switching.
In a stereoscopic image obtained by the monocular stereoscopic imaging function, the in-focus location in the image is the cross point, whereas in a stereoscopic image obtained by the compound-eye stereoscopic imaging function, the in-focus location and the cross point differ. Therefore, when switching between monocular and compound-eye stereoscopic imaging, the cross point changes before and after the switch, and the viewer may experience the uncomfortable sensation of a subject suddenly popping forward or receding backward. This sense of incongruity is felt particularly strongly when the pop-out state of the main subject, which is usually in focus, changes. Therefore, in the stereoscopic imaging device according to the first aspect of the present invention, a cross-point control unit is provided, and the image generation unit is controlled so that the cross point of the stereoscopic image displayed on the image display unit does not change before and after switching from shooting with the compound-eye stereoscopic imaging function to shooting with the monocular stereoscopic imaging function, or from shooting with the monocular stereoscopic imaging function to shooting with the compound-eye stereoscopic imaging function. This suppresses large changes in the pop-out state due to cross-point changes, and an image can be displayed with a natural feeling when the shooting mode is switched.
In the present invention, a "cross point" means a point at which the parallax in a stereoscopic image is zero. During stereoscopic image playback, a subject in front of the cross point appears to pop out of the screen (toward the viewer), and a subject behind the cross point appears to recede behind the screen.
As shown in a second aspect of the present invention, in the stereoscopic imaging device according to the first aspect, the cross-point control unit may control the image generation unit so that, while shooting with the compound-eye stereoscopic imaging function continues, the cross point of the stereoscopic image obtained by the compound-eye stereoscopic imaging function coincides with a first in-focus area, which is the cross point of the stereoscopic image that would be obtained by the monocular stereoscopic imaging function, by: detecting the first in-focus area in one of the viewpoint images constituting the stereoscopic image obtained by shooting with the compound-eye stereoscopic imaging function; detecting, in another of the viewpoint images constituting the stereoscopic image, a first corresponding area corresponding to the detected first in-focus area; calculating, based on the detected first corresponding area, a first positional deviation amount between the one viewpoint image and the other viewpoint image; and moving the one viewpoint image or the other viewpoint image by the calculated first positional deviation amount.
In the second aspect, so that the cross point of the stereoscopic image does not shift when switching from the compound-eye stereoscopic imaging function to the monocular stereoscopic imaging function, the cross point of the stereoscopic image obtained by the compound-eye stereoscopic imaging function is shifted in advance, while compound-eye shooting continues, so as to coincide with the first in-focus area, which is the cross point of the stereoscopic image obtained by the monocular stereoscopic imaging function. As a result, the cross point does not change suddenly and greatly when the compound-eye stereoscopic imaging function is actually switched to the monocular stereoscopic imaging function, and the image can be displayed with a natural feeling.
As shown in a third aspect of the present invention, in the stereoscopic imaging device according to the first or second aspect, the cross-point control unit may control the image generation unit so that the cross point of the stereoscopic image obtained by shooting with the compound-eye stereoscopic imaging function coincides with a second in-focus area, which is the cross point of the stereoscopic image obtained by shooting with the monocular stereoscopic imaging function, by: while shooting with the monocular stereoscopic imaging function continues, detecting, in one of the viewpoint images constituting the stereoscopic image obtained by shooting with the monocular stereoscopic imaging function, the second in-focus area that is the cross point of that stereoscopic image; detecting, in another viewpoint image obtained by an imaging unit that is used for shooting with the compound-eye stereoscopic imaging function but is not the imaging unit used for shooting with the monocular stereoscopic imaging function, a second corresponding area corresponding to the detected second in-focus area; calculating, based on the detected second corresponding area, a second positional deviation amount between the one viewpoint image and the other viewpoint image; updating and storing the already stored positional deviation amount with the second positional deviation amount; and, when switching from the monocular stereoscopic imaging function to the compound-eye stereoscopic imaging function occurs, composing a stereoscopic image from the one viewpoint image and an image obtained by shifting the other viewpoint image by the stored second positional deviation amount.
In the third aspect, while shooting with the monocular stereoscopic imaging function continues, the second positional deviation amount for the viewpoint image is calculated and updated in advance, and when switching to the compound-eye stereoscopic imaging function occurs, the viewpoint image newly obtained by the switch is shifted by that second positional deviation amount before the stereoscopic image is composed. As a result, the cross point does not change suddenly and greatly when the monocular stereoscopic imaging function is actually switched to the compound-eye stereoscopic imaging function, and the image can be displayed with a natural feeling.
As shown in a fourth aspect of the present invention, the stereoscopic imaging device according to any one of the first to third aspects may further comprise an imaging-function automatic switching unit that automatically switches between the monocular stereoscopic imaging function and the compound-eye stereoscopic imaging function. The imaging-function automatic switching unit operates the monocular stereoscopic imaging function while the in-focus position is closer than a predetermined distance, and switches to the compound-eye stereoscopic imaging function and operates the cross-point control unit when the in-focus position becomes farther than the predetermined distance; it operates the compound-eye stereoscopic imaging function while the in-focus position is farther than the predetermined distance, and switches to the monocular stereoscopic imaging function when the in-focus position becomes closer than the predetermined distance or when the user instructs switching to the monocular stereoscopic imaging function.
Comparing monocular and compound-eye stereoscopic imaging, either function can generally be used for long-distance shooting, but for short-distance shooting, monocular stereoscopic imaging with its short baseline length is advantageous over compound-eye stereoscopic imaging, so shooting with the monocular stereoscopic imaging function is preferable. It is therefore preferable to switch between the monocular and compound-eye stereoscopic imaging functions in consideration of the shooting distance and a user's change request as in the above aspect, and to perform the cross-point control described above in response to the switching of the monocular/compound-eye stereoscopic imaging function. With such control, even when the monocular and compound-eye stereoscopic imaging functions are switched automatically, shooting can be performed without giving the user the impression that the cross point has changed abruptly upon switching, and the captured stereoscopic image can be displayed with a natural feeling.
As a fifth aspect of the present invention, in the stereoscopic imaging device according to any one of the first to fourth aspects, the image generation unit may, while the compound-eye stereoscopic imaging function is operating, perform pixel-signal addition processing in which, for each pixel position of the imaging element of the monocular stereoscopic imaging unit, the pixel signals of the pixel groups constituting the plurality of pixel groups are added and the result of the addition is used as the pixel signal at that pixel position. Adding a plurality of pixel signals in this way reduces the amount of noise in the generated image, so that a variety of images can be acquired in response to the user's requirements.
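The pixel-signal addition of the fifth aspect can be sketched minimally as below. The two input arrays stand in for the main-pixel group and sub-pixel group of the monocular 3D sensor; the function and data layout are illustrative, not taken from the patent.

```python
# Illustrative sketch of pixel-signal addition: at each pixel position the
# main-pixel and sub-pixel signals are summed, yielding one lower-noise image.
def add_pixel_groups(main_pixels, sub_pixels):
    """main_pixels / sub_pixels: 2D lists of equal size, one value per
    pixel position. Returns the per-position sum."""
    return [[m + s for m, s in zip(mrow, srow)]
            for mrow, srow in zip(main_pixels, sub_pixels)]
```

Summing the two half-aperture signals restores roughly the full-aperture signal at each position, which is why noise in the generated image decreases.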
To achieve the above object, a stereoscopic imaging device according to a sixth aspect of the present invention includes: a plurality of imaging units that image a subject, including at least one monocular stereoscopic imaging unit having an imaging element that includes a plurality of pixel groups each photoelectrically converting a light flux that has passed through a different region of a single imaging optical system; an image generation unit that generates a stereoscopic image of the subject from the imaging signals of the plurality of imaging units; and an imaging mode setting unit that sets an imaging mode based on a user's instruction input. The imaging mode setting unit sets one imaging mode, based on a user's instruction input that includes the number of imaging units to be used for shooting among the plurality of imaging units and the number of viewpoint images to be acquired, from among imaging modes that include a two-dimensional imaging mode and a monocular stereoscopic imaging mode using at least one monocular stereoscopic imaging unit, and a two-dimensional imaging mode and a compound-eye stereoscopic imaging mode using at least one monocular stereoscopic imaging unit together with an imaging unit other than that monocular stereoscopic imaging unit among the plurality of imaging units.
In the stereoscopic imaging device according to the sixth aspect, various imaging modes differing in the magnitude of parallax, the number of viewpoints, and so on can be set by varying the number of imaging units used for shooting among the plurality of imaging units and the number of viewpoint images to be acquired (the number of viewpoints). For example, a two-dimensional imaging mode or a stereoscopic imaging mode can be selected, and within the stereoscopic imaging modes a monocular stereoscopic imaging mode or a compound-eye stereoscopic imaging mode can be selected. In this way, the stereoscopic imaging device according to the sixth aspect can acquire a variety of images in response to the user's requirements.
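One way to picture the mode setting of the sixth aspect is a lookup from (number of imaging units, number of viewpoint images) to an imaging mode. The mapping below is purely hypothetical; the modes actually selectable in the embodiment are tabulated in FIG. 13, which this sketch does not reproduce.

```python
# Hypothetical mapping from the user's instruction input (number of imaging
# units used, number of viewpoint images acquired) to an imaging mode.
def set_imaging_mode(num_units, num_viewpoints):
    modes = {
        (1, 1): "2D (monocular stereoscopic imaging unit)",
        (1, 2): "monocular stereoscopic imaging",
        (2, 1): "2D (two imaging units)",          # assumed combination
        (2, 2): "compound-eye stereoscopic imaging",
    }
    if (num_units, num_viewpoints) not in modes:
        raise ValueError("unsupported combination of units and viewpoints")
    return modes[(num_units, num_viewpoints)]
```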
Note that in the stereoscopic imaging device according to the sixth aspect as well, cross-point control and automatic switching of the monocular/compound-eye stereoscopic imaging function may be performed in the same manner as in the stereoscopic imaging devices according to the first to fifth aspects.
As a seventh aspect of the present invention, in the stereoscopic imaging device according to the sixth aspect, the image generation unit may, based on a user's instruction input, perform pixel-signal addition processing in the two-dimensional imaging mode using at least one monocular stereoscopic imaging unit and in the compound-eye stereoscopic imaging mode using at least one monocular stereoscopic imaging unit together with an imaging unit other than that monocular stereoscopic imaging unit among the plurality of imaging units: for each pixel position of the imaging element of the at least one monocular stereoscopic imaging unit, the pixel signals of the pixel groups constituting the plurality of pixel groups are added, and the result of the addition is used as the pixel signal at that pixel position. Adding a plurality of pixel signals in this way reduces the amount of noise in the generated image, so that an even wider variety of images can be acquired in response to the user's requirements.
As an eighth aspect of the present invention, in the stereoscopic imaging device according to any one of the first to seventh aspects, the number of imaging units may be two, and the imaging unit other than the at least one monocular stereoscopic imaging unit may also be a monocular stereoscopic imaging unit.
As a ninth aspect of the present invention, the stereoscopic imaging device according to any one of the first to eighth aspects may further include a stereoscopic image display unit that displays the generated stereoscopic image.
As described above, according to the stereoscopic imaging device of the present invention, an image can be displayed with a natural feel when the shooting mode is switched, and a variety of images can be acquired in response to the user's requirements.
FIG. 1 is a block diagram showing the configuration of a stereoscopic imaging device 10 according to a first embodiment of the present invention.
FIG. 2 is an image diagram showing the external appearance of the stereoscopic imaging device 10.
FIG. 3 is a diagram showing the configuration of an imaging element used in a monocular stereoscopic imaging unit.
FIG. 4 is a diagram showing one main pixel and one sub-pixel of the imaging element shown in FIG. 3.
FIG. 5A is a diagram showing the configuration of an ordinary CCD.
FIG. 5B is a diagram showing an example of the configuration of a monocular 3D sensor.
FIG. 5C is a diagram showing another example of the configuration of a monocular 3D sensor.
FIG. 6 is a block diagram showing the main part of the stereoscopic imaging device according to the first embodiment.
FIG. 7 is a flowchart showing cross-point control when switching from the compound-eye stereoscopic imaging function to the monocular stereoscopic imaging function.
FIG. 8A is a conceptual diagram showing the relationship between the cross point and the in-focus point during compound-eye stereoscopic imaging.
FIG. 8B is a conceptual diagram showing the relationship between the cross point and the in-focus point during monocular stereoscopic imaging.
FIG. 9 is another conceptual diagram showing cross-point control when switching from the compound-eye stereoscopic imaging function to the monocular stereoscopic imaging function.
FIG. 10 is a flowchart showing cross-point control when switching from the monocular stereoscopic imaging function to the compound-eye stereoscopic imaging function.
FIG. 11 is a conceptual diagram showing the relationship between the cross point and the in-focus point during monocular/compound-eye stereoscopic imaging.
FIG. 12 is a flowchart showing the process of automatic switching between the monocular and compound-eye stereoscopic imaging functions.
FIG. 13 is a table showing the shooting modes that can be set in the stereoscopic imaging device according to the first embodiment.
FIG. 14 is a conceptual diagram showing the procedure for setting a shooting mode in the stereoscopic imaging device according to the first embodiment.
FIG. 15 is a conceptual diagram showing pixel-signal addition processing in the stereoscopic imaging device according to the first embodiment.
FIG. 16 is a flowchart showing the procedure for selecting a shooting mode in consideration of pixel-signal addition processing.
FIG. 17 is a block diagram showing the main part of a stereoscopic imaging device according to a second embodiment of the present invention.
FIG. 18 is a table showing the shooting modes that can be set in the stereoscopic imaging device according to the second embodiment.
Embodiments of an imaging device according to the present invention will now be described in detail with reference to the accompanying drawings.
<First Embodiment>
[Overall configuration of the imaging device]
FIG. 1 is a block diagram showing an embodiment of the stereoscopic imaging device 10 according to the present invention, and FIG. 2 is an image diagram showing the external appearance of the stereoscopic imaging device 10.
The stereoscopic imaging device 10 displays captured images on a liquid crystal monitor (LCD) 30 or records them on a memory card 54 (hereinafter also referred to as "media"), and the operation of the entire device is centrally controlled by a central processing unit (CPU) 40.
The stereoscopic imaging device 10 is provided with an operation unit 38 comprising a shutter button, a mode dial, a playback button, a MENU/OK key, a cross key, a BACK key, and the like. Signals from the operation unit 38 are input to the CPU 40, and the CPU 40 controls the circuits of the stereoscopic imaging device 10 based on these input signals, performing, for example, lens drive control, aperture drive control, shooting operation control, image processing control, recording/playback control of image data, and display control of the liquid crystal monitor 30 for stereoscopic display.
The shutter button is an operation button for inputting an instruction to start shooting, and is configured as a two-stage stroke switch having an S1 switch that turns ON when half-pressed and an S2 switch that turns ON when fully pressed. The mode dial is selection means for selecting a 2D shooting mode, a 3D shooting mode, an auto shooting mode, a manual shooting mode, scene positions such as portrait, landscape, and night scene, a macro mode, a movie mode, and a parallax-priority shooting mode according to the present invention.
The playback button is a button for switching to a playback mode in which still images or movies of captured and recorded stereoscopic images (3D images) or planar images (2D images) are displayed on the liquid crystal monitor 30. The MENU/OK key is an operation key combining the function of a menu button for issuing a command to display a menu on the screen of the liquid crystal monitor 30 and the function of an OK button for issuing commands such as confirming and executing a selection. The cross key is an operation unit for inputting instructions in four directions (up, down, left, and right), and functions as buttons (cursor movement operation means) for selecting an item from a menu screen and for instructing the selection of various setting items from each menu. The up/down keys of the cross key also function as a zoom switch during shooting or a playback zoom switch in playback mode, and the left/right keys function as frame advance (forward/reverse) buttons in playback mode. The BACK key is used to delete a desired object such as a selected item, to cancel an instruction, or to return to the immediately preceding operation state.
In shooting mode, image light representing the subject is focused, via the photographing lenses 12 (12-1, 12-2), which include a focus lens and a zoom lens, and the apertures 14 (14-1, 14-2), onto the light receiving surfaces of solid-state imaging elements 16 (16-1, 16-2; hereinafter "monocular 3D sensors"), which are phase-difference image sensors. The photographing lenses 12 (12-1, 12-2) are driven by lens drive units 36 (36-1, 36-2) controlled by the CPU 40, which perform focus control, zoom control, and so on. Each aperture 14 (14-1, 14-2) consists of, for example, five aperture blades and is driven by an aperture drive unit 34 (34-1, 34-2) controlled by the CPU 40; the aperture is controlled, for example, in six steps in 1 AV increments from an aperture value of F1.4 to F11.
The CPU 40 also controls the apertures 14 (14-1, 14-2) via the aperture drive units 34 (34-1, 34-2), and, via the CCD control units 32 (32-1, 32-2), controls the charge accumulation time (shutter speed) of the monocular 3D sensors 16, the readout of image signals from the monocular 3D sensors 16, and so on.
<Configuration example of the monocular 3D sensor>
FIG. 3 is a diagram showing a configuration example of the monocular 3D sensor 16.
As shown in FIGS. 3(a) to 3(c), the monocular 3D sensor 16 has odd-line pixels (main pixels) and even-line pixels (sub-pixels), each arranged in a matrix, and the image signals for the two frames photoelectrically converted by these main and sub-pixels can be read out independently.
As shown in FIG. 3, on the odd lines (1, 3, 5, ...) of the monocular 3D sensor 16, among the pixels provided with R (red), G (green), and B (blue) color filters, lines with a GRGR... pixel arrangement and lines with a BGBG... pixel arrangement are provided alternately. The pixels on the even lines (2, 4, 6, ...) likewise alternate between lines with a GRGR... arrangement and lines with a BGBG... arrangement, and the even-line pixels are arranged shifted in the line direction by half a pitch relative to the odd-line pixels.
FIG. 4 is a diagram showing the photographing lens 12 (photographing optical system), the aperture 14, and one main pixel PDa and one sub-pixel PDb of the monocular 3D sensor 16, and FIGS. 5A to 5C are enlarged views of the main part of FIG. 4.
As shown in FIG. 5A, in a pixel (photodiode PD) of an ordinary CCD, the light flux passing through the exit pupil enters via the microlens L without restriction.
In contrast, the monocular 3D sensor 16 shown in FIG. 5B comprises microlenses L that condense the light flux that has passed through the photographing lens 12, photodiodes PD (main pixels PDa and sub-pixels PDb) that receive the light flux that has passed through the microlenses L, and light shielding members 16A that partially shield the light receiving surfaces of the photodiodes PD. In this example, the right half or the left half of the light receiving surface of each main pixel PDa or sub-pixel PDb is shielded by the light shielding member 16A; that is, the light shielding member 16A functions as a pupil division member.
In the monocular 3D sensor 16 configured as above, the regions (right half, left half) in which the light flux is restricted by the light shielding member 16A differ between the main pixels PDa and the sub-pixels PDb. The configuration is not limited to this, however. For example, without providing the light shielding member 16A, the microlens L and the photodiodes PD (PDa, PDb) may be shifted relative to each other in the left-right direction (pupil division direction), as shown in FIG. 5C, so that the optical axis Ic of the microlens L is offset from the optical axes Pc of the photodiodes PDa and PDb, thereby restricting the light flux incident on each photodiode PD. Alternatively, a single microlens may be provided for two pixels (a main pixel and a sub-pixel) so that the light flux incident on each pixel is restricted.
Returning to FIG. 1, the signal charges accumulated in the monocular 3D sensors 16 (16-1, 16-2) are read out as voltage signals corresponding to the signal charges based on readout signals applied from the CCD control units 32. The voltage signals read out from the monocular 3D sensors 16 (16-1, 16-2) are applied to the analog signal processing units 18 (18-1, 18-2), where the R, G, and B signals of each pixel are sampled and held, amplified by a gain (corresponding to the ISO sensitivity) designated by the CPU 40, and then applied to the A/D converters 20 (20-1, 20-2). The A/D converters 20 (20-1, 20-2) sequentially convert the input R, G, and B signals into digital R, G, and B signals and output them to the image input controllers 22 (22-1, 22-2).
The first photographing lens 12-1, first aperture 14-1, first monocular 3D sensor 16-1, first analog signal processing unit 18-1, first A/D converter 20-1, first image input controller 22-1, first CCD control unit 32-1, first aperture drive unit 34-1, and first lens drive unit 36-1 constitute a first imaging unit 11-1. Likewise, the second photographing lens 12-2, second aperture 14-2, second monocular 3D sensor 16-2, second analog signal processing unit 18-2, second A/D converter 20-2, second image input controller 22-2, second CCD control unit 32-2, second aperture drive unit 34-2, and second lens drive unit 36-2 constitute a second imaging unit 11-2.
The digital signal processing unit 24 performs predetermined signal processing on the digital image signals input via the image input controllers 22, including offset processing, gain control processing including white balance correction and sensitivity correction, gamma correction processing, synchronization (demosaicing) processing, YC processing, and sharpness correction.
The EEPROM 46 is a nonvolatile memory that stores a camera control program, defect information for the monocular 3D sensors 16, various parameters and tables used for image processing and the like, program diagrams, a plurality of parallax-priority program diagrams according to the present invention, and so on.
Here, as shown in FIGS. 3(b) and 3(c), the main image data read out from the odd-line main pixels of the monocular 3D sensor 16 is processed as left-viewpoint image data, and the sub-image data read out from the even-line sub-pixels is processed as right-viewpoint image data.
The left-viewpoint image data and right-viewpoint image data (3D image data) processed by the digital signal processing unit 24 are input to the VRAM 50. The VRAM 50 includes an A region and a B region, each of which stores 3D image data representing one frame of a 3D image. In the VRAM 50, the 3D image data representing one frame of a 3D image is rewritten alternately in the A region and the B region, and the written 3D image data is read out from whichever of the A and B regions is not currently being rewritten. The 3D image data read out from the VRAM 50 is encoded by the video encoder 28 and output to the stereoscopic-display liquid crystal monitor 30 provided on the back of the camera, whereby the 3D subject image is displayed on the display screen of the liquid crystal monitor 30.
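The alternating use of the A and B regions of the VRAM 50 is a standard double-buffering scheme, which can be sketched as follows (class and method names are illustrative, not from the patent):

```python
# Minimal sketch of the A/B double-buffering described for the VRAM 50:
# one region is rewritten with the newest frame while the other is read
# out for display, so the reader never sees a partially written frame.
class DoubleBuffer:
    def __init__(self):
        self.regions = {"A": None, "B": None}
        self.writing = "A"  # region currently designated for rewriting

    def write_frame(self, frame):
        self.regions[self.writing] = frame
        # After the write completes, the roles of the two regions swap.
        self.writing = "B" if self.writing == "A" else "A"

    def read_frame(self):
        # Read from the region that is NOT being rewritten.
        reading = "B" if self.writing == "A" else "A"
        return self.regions[reading]
```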
The liquid crystal monitor (LCD) 30 is a stereoscopic display unit capable of displaying stereoscopic images (a left-viewpoint image and a right-viewpoint image) by means of a parallax barrier as directional images each having a predetermined directivity. The display is not limited to this, however; it may use a lenticular lens, or may allow the left-viewpoint image and the right-viewpoint image to be viewed separately through dedicated glasses such as polarized glasses or liquid crystal shutter glasses. Although the present embodiment describes the case where the stereoscopic imaging device 10 includes a liquid crystal monitor 30 capable of displaying stereoscopic images, the stereoscopic imaging device 10 may instead omit the liquid crystal monitor 30, and the stereoscopic images may be viewed on a separate stereoscopic image display device using the stereoscopic image data recorded on the memory card 54.
When the shutter button of the operation unit 38 is pressed to the first stage (half-pressed), the AF operation and the AE operation are started, and the focus lens in the photographing lens 12 is controlled via the lens drive unit 36 so as to move to the in-focus position. The image data output from the A/D converter 20 while the shutter button is half-pressed is taken into the AE detection unit 44.
The AE detection unit 44 integrates the G signals of the entire screen, or integrates G signals weighted differently for the central portion and the peripheral portion of the screen, and outputs the integrated value to the CPU 40. The CPU 40 calculates the brightness of the subject (shooting EV value) from the integrated value input from the AE detection unit 44, determines the aperture value of the aperture 14 and the electronic shutter (shutter speed) of the monocular 3D sensor 16 in accordance with a predetermined program diagram based on this shooting EV value, controls the aperture 14 via the aperture drive unit 34 based on the determined aperture value, and controls the charge accumulation time of the monocular 3D sensor 16 via the CCD control unit 32 based on the determined shutter speed.
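The AE computation above can be sketched as follows. The center-weighting factor and the EV conversion are illustrative assumptions (the patent specifies neither); only the structure, integrating G signals with optional center weighting and deriving a brightness value, follows the text.

```python
import math

# Illustrative sketch of the AE detection: integrate G signals, optionally
# weighting the central portion of the screen more heavily, then derive a
# brightness (EV-like) value from the integral.
def integrate_g(g_values, center_mask=None, center_weight=2.0):
    """g_values: flat list of G pixel values; center_mask: booleans marking
    the central portion (None = uniform integration over the whole screen)."""
    if center_mask is None:
        return sum(g_values)
    return sum(g * (center_weight if c else 1.0)
               for g, c in zip(g_values, center_mask))

def ev_from_integral(integral, calibration=1.0):
    # Brightness roughly doubles per EV step, hence log2 (assumed formula).
    return math.log2(max(integral, 1e-9) * calibration)
```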
The AF processing unit 42 is a part that performs contrast AF processing. In the contrast AF processing, high-frequency components of the image data within a predetermined focus area are extracted from at least one of the left-viewpoint image data and the right-viewpoint image data, and an AF evaluation value indicating the in-focus state is calculated by integrating these high-frequency components. AF control is performed by controlling the focus lens in the photographing lens 12 so that this AF evaluation value is maximized. Phase-difference AF processing may be performed instead; in this case, the phase difference between the image data corresponding to the main pixels and the sub-pixels within the predetermined focus area of the left-viewpoint image data and the right-viewpoint image data is detected, and the defocus amount is obtained based on information indicating this phase difference. AF control is then performed by controlling the focus lens in the photographing lens 12 so that this defocus amount becomes zero.
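The contrast-AF evaluation can be sketched minimally as below; approximating the high-frequency component by absolute differences of neighboring pixels is an assumption for illustration, standing in for whatever high-pass filter the AF processing unit 42 actually uses.

```python
# Illustrative sketch of contrast AF: the high-frequency content inside the
# focus area is approximated by summed absolute neighbor differences, and
# the lens position maximizing this AF evaluation value is taken as in-focus.
def af_evaluation(focus_area_row):
    """focus_area_row: 1D list of pixel values inside the focus area."""
    return sum(abs(b - a) for a, b in zip(focus_area_row, focus_area_row[1:]))

def best_lens_position(rows_by_position):
    """rows_by_position: dict mapping lens position -> focus-area pixel row."""
    return max(rows_by_position,
               key=lambda p: af_evaluation(rows_by_position[p]))
```

A sharply focused row has larger pixel-to-pixel differences than a blurred one, so its evaluation value is higher.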
When the AE and AF operations are completed and the shutter button is pressed to the second stage (fully pressed), two frames of image data, a left-viewpoint image (main image) and a right-viewpoint image (sub-image) corresponding to the main pixels and sub-pixels output from the A/D converter 20 in response to the press, are input from the image input controller 22 to the memory (SDRAM) 48 and temporarily stored.
The two frames of image data temporarily stored in the memory 48 are read out as appropriate by the digital signal processing unit 24, where predetermined signal processing including generation of luminance data and color difference data (YC processing) is performed. The YC-processed image data (YC data) is stored in the memory 48 again. The two frames of YC data are then each output to the compression/decompression processing unit 26, subjected to predetermined compression processing such as JPEG (Joint Photographic Experts Group), and stored in the memory 48 once more.
From the two frames of YC data (compressed data) stored in the memory 48, a multi-picture file (MP file: a file in a format in which a plurality of images are concatenated) is generated; the MP file is read out by the media controller 52 and recorded on the memory card 54.
The stereoscopic imaging device 10 configured as shown in FIG. 1 includes the first imaging unit 11-1 and the second imaging unit 11-2, which image the subject, and the CPU 40 as a control unit that controls the first imaging unit 11-1 and the second imaging unit 11-2. Both the first imaging unit 11-1 and the second imaging unit 11-2 have an imaging element (monocular 3D sensor 16-1, 16-2) including a plurality of pixel groups, each of which photoelectrically converts a light flux that has passed through a different region of the exit pupil of the photographing lens 12 (12-1, 12-2). The stereoscopic imaging device 10 has a compound-eye stereoscopic shooting function, in which the viewpoint image (image information) obtained by the first imaging unit 11-1 and the viewpoint image (image information) obtained by the second imaging unit 11-2 are displayed on the liquid crystal monitor 30 as a stereoscopic image, and a monocular stereoscopic shooting function, in which a plurality of viewpoint images (plural pieces of image information) obtained by one imaging unit having a plurality of pixel groups, of the first imaging unit 11-1 and the second imaging unit 11-2, are displayed on the liquid crystal monitor 30 as a stereoscopic image. In the stereoscopic imaging device 10 according to the first embodiment, as shown in the image diagram of FIG. 2, the imaging unit on the right side as seen from the subject (the left side as seen from the user) is referred to as the "first imaging unit", and the imaging unit on the left side as seen from the subject (the right side as seen from the user) is referred to as the "second imaging unit".
 [Cross-point control when switching between the monocular and compound-eye stereoscopic imaging functions]
 Next, cross-point control at the time of switching between the monocular and compound-eye stereoscopic imaging functions in the stereoscopic imaging device 10 according to the first embodiment will be described. FIGS. 8A and 8B are conceptual diagrams showing the relationship between the cross point and the focus in monocular and compound-eye stereoscopic imaging. In the present invention, the cross point means the point in a stereoscopic image where the parallax is zero.
 In general, during compound-eye stereoscopic imaging the cross point can be set freely by electronically shifting the left and right image data; consequently, as in the example of FIG. 8A, the cross point (object B) and the in-focus point (object A) often differ. In contrast, during monocular stereoscopic imaging the stereoscopic effect is produced by splitting the blur circle of the image into left and right halves, so the location where no blur occurs (the in-focus point; object A) automatically becomes the cross point (object A), as in the example of FIG. 8B. Therefore, when switching between monocular and compound-eye stereoscopic imaging, the cross point may appear to the viewer to shift abruptly. From this viewpoint, the stereoscopic imaging device 10 according to the first embodiment is configured to perform the following cross-point control. Such cross-point control is particularly effective when shooting continuously, that is, during movie shooting and so-called through-image (live view) acquisition. Whether or not to perform cross-point control may be determined by user input via the operation unit 38.
 <When switching from the compound-eye stereoscopic imaging function to the monocular stereoscopic imaging function>
 FIG. 7 is a flowchart showing cross-point control when switching from the compound-eye stereoscopic imaging function to the monocular stereoscopic imaging function, and FIG. 9 is a conceptual diagram illustrating that control. When the compound-eye stereoscopic shooting mode (compound-eye 3D mode) is set by operating the operation unit 38 in S102, left and right viewpoint images are acquired in S104. Here, the left viewpoint image (L in FIG. 9(a)) is acquired by the left channel of the first imaging unit 11-1, and the right viewpoint image (R in FIG. 9(a)) is acquired by the right channel of the second imaging unit 11-2. It is also assumed here that contrast AF processing is performed using the left viewpoint image L acquired by the left channel of the first imaging unit 11-1. In the example of FIG. 9(a), as in FIG. 8A, the cross point is object B and the in-focus point is object A.
 In S106, the in-focus region is detected in the left viewpoint image L. Here, as shown in FIG. 9(b), it is assumed that region S1 is in focus (has maximum contrast). In S108, the region corresponding to region S1 is detected in the right viewpoint image R; in the example of FIG. 9 this is region S2 in FIG. 9(c). This corresponding-region detection can be performed by various methods such as a correlation method or template matching, but for reliable detection it is preferable to take, as the detection target, the region corresponding to the AF area in the left viewpoint image L.
 Then, in S110, the amount by which the right viewpoint image R must be moved so that the cross point becomes object A (shift amount ΔX) is calculated; in S112 the right viewpoint image R is shifted by ΔX (right viewpoint image R' in FIG. 9(d)), and a stereoscopic image composed of the left viewpoint image L and the right viewpoint image R' is displayed on the liquid crystal monitor 30. This stereoscopic image may also be recorded on the memory card 54.
 Through the processing up to S112, the cross point of the stereoscopic image obtained by compound-eye stereoscopic imaging becomes object A, which coincides with the in-focus region. This processing is performed while compound-eye stereoscopic shooting continues, so that the state in which the cross point and the in-focus point coincide is maintained until shooting is switched from compound-eye to monocular stereoscopic shooting. The processing may be performed at predetermined time intervals, for example every 100 msec.
 When shooting is switched from compound-eye to monocular stereoscopic shooting (YES in S116), monocular stereoscopic shooting is performed using a monocular stereoscopic imaging unit; here, the first imaging unit 11-1, which is a monocular stereoscopic imaging unit, is used. At the time of the switch, the processing up to S114 has maintained the state in which the cross point coincides with the in-focus region, and, as described above, the cross point and the in-focus point coincide in monocular stereoscopic shooting. Consequently, even when shooting is switched from compound-eye to monocular stereoscopic shooting, the viewer does not perceive an abrupt change of the cross point, and the image can be displayed with a natural feel when the shooting mode is switched.
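 The core of the S106-S112 procedure — detect the in-focus region by contrast, locate its counterpart in the other viewpoint image, and derive the shift amount ΔX — can be sketched as follows. This is an illustrative sketch only, not part of the original disclosure: it assumes grayscale images as NumPy arrays, and the block-variance contrast measure and SSD search are simplified stand-ins for the contrast AF and the correlation/template-matching methods the text mentions.

```python
import numpy as np

def find_focus_region(img, block=16):
    """Return (row, col) of the block with maximum local contrast (S106).
    Contrast is approximated by the variance of pixel values in each block."""
    h, w = img.shape
    best, best_rc = -1.0, (0, 0)
    for r in range(0, h - block + 1, block):
        for c in range(0, w - block + 1, block):
            v = img[r:r + block, c:c + block].var()
            if v > best:
                best, best_rc = v, (r, c)
    return best_rc

def match_region(template, img, row, block=16):
    """Find the column in `img` (same row band) that best matches `template`
    by sum of squared differences (S108); returns the matched column."""
    h, w = img.shape
    best, best_c = None, 0
    band = img[row:row + block]
    for c in range(0, w - block + 1):
        ssd = float(((band[:, c:c + block] - template) ** 2).sum())
        if best is None or ssd < best:
            best, best_c = ssd, c
    return best_c

def cross_point_shift(left, right, block=16):
    """Compute the horizontal shift dX (S110): moving the right image by dX
    places the region matching the left image's in-focus region S1 at the
    same column, so the cross point lands on the in-focus object."""
    r, c = find_focus_region(left, block)
    template = left[r:r + block, c:c + block].astype(float)
    c2 = match_region(template, right.astype(float), r, block)
    return c - c2  # positive dX -> shift the right image to the right
```

In S112, shifting the right viewpoint image by this dX (with edge columns cropped or padded) would produce R' of FIG. 9(d).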
 <When switching from the monocular stereoscopic imaging function to the compound-eye stereoscopic imaging function>
 Next, cross-point control when switching from the monocular stereoscopic imaging function to the compound-eye stereoscopic imaging function will be described. FIG. 10 is a flowchart showing the procedure of this cross-point control, and FIG. 11 is a conceptual diagram illustrating it. When the monocular stereoscopic shooting mode (monocular 3D mode) is set by operating the operation unit 38 in S202, left and right viewpoint images are acquired in S204. Here, the left viewpoint image is acquired by the left channel of the first imaging unit 11-1 and the right viewpoint image by the right channel of the first imaging unit 11-1. Because this is monocular stereoscopic shooting, the stereoscopic image is composed of two viewpoint images with small parallax, as shown in FIG. 11(b), and the cross point and the in-focus point coincide (both are object A in FIG. 11(a)). In S206, the viewpoint images thus obtained are displayed on the liquid crystal monitor 30 as a stereoscopic image.
 In S208, the in-focus region is detected in the image acquired by the first imaging unit 11-1. As described above, the cross point and the in-focus point coincide during monocular stereoscopic shooting, so the in-focus region can be detected by correlating the left and right channels and detecting, as the cross point, the location where the parallax is smallest, as shown in FIGS. 11(b) and 11(c). The in-focus region need only be detected in one of the left and right viewpoint images. In addition, in order to perform processing such as the corresponding-region detection described below, a viewpoint image from the second imaging unit 11-2 is also acquired in S210 (FIG. 11(d)).
 Next, the region corresponding to the detected in-focus region is detected in the viewpoint image acquired by the second imaging unit 11-2 (S212). As described above for compound-eye stereoscopic shooting, this corresponding-region detection can be performed by an algorithm such as a correlation method. Then, based on the detected corresponding region, the shift amount between the viewpoint image acquired by the first imaging unit 11-1 and the viewpoint image acquired by the second imaging unit 11-2 is calculated (FIG. 11(e)), and the stored value is updated (S214). This shift amount is the amount by which an image must be shifted so that, in the stereoscopic image obtained when shooting is switched to compound-eye stereoscopic shooting, the cross point coincides with the in-focus region just as in monocular stereoscopic shooting. This processing is performed while monocular stereoscopic shooting continues, and the calculation, updating, and recording of the shift amount continue until shooting is switched from monocular to compound-eye stereoscopic shooting. The processing may be performed at predetermined time intervals, for example every 100 msec.
 When shooting is switched from monocular to compound-eye stereoscopic shooting (YES in S216), the viewpoint image acquired by the second imaging unit 11-2 is shifted by the stored shift amount in S218 (FIG. 11(f)) and displayed on the liquid crystal monitor 30 as a stereoscopic image together with the viewpoint image acquired by the first imaging unit 11-1 (S220). Because the viewpoint image acquired by the second imaging unit 11-2 is shifted by the stored amount, the stereoscopic image obtained on switching to compound-eye stereoscopic shooting has its cross point coincident with the in-focus region, just as in monocular stereoscopic shooting. Therefore, even when shooting is switched from monocular to compound-eye stereoscopic shooting, the viewer does not perceive an abrupt change of the cross point, and the image can be displayed with a natural feel when the shooting mode is switched.
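 The detection of S208 — correlating the left and right channels of the monocular pair and taking the smallest-parallax location as the in-focus region — could look like the following sketch. It is illustrative only and not part of the original disclosure: a block-wise SSD search stands in for the correlation the text mentions, and untextured blocks are skipped because they carry no focus information.

```python
import numpy as np

def min_parallax_region(left, right, block=16, max_disp=8):
    """Locate the in-focus region in a monocular stereo pair (S208): for
    each textured block, estimate the horizontal disparity between the
    left and right channels by SSD search, and return the block whose
    absolute disparity is smallest (cross point == in-focus point)."""
    h, w = left.shape
    best = None  # (|disparity|, row, col)
    for r in range(0, h - block + 1, block):
        for c in range(max_disp, w - block - max_disp + 1, block):
            tpl = left[r:r + block, c:c + block].astype(float)
            if tpl.var() == 0:
                continue  # untextured block: no focus information
            # search disparities -max_disp..max_disp in the right channel
            ssd = [float(((right[r:r + block, c + d:c + d + block] - tpl) ** 2).sum())
                   for d in range(-max_disp, max_disp + 1)]
            d_best = int(np.argmin(ssd)) - max_disp
            key = (abs(d_best), r, c)
            if best is None or key < best:
                best = key
    return best  # (|disparity|, row, col) of the most in-focus block
```

The per-block disparity computed on the way is also the kind of quantity stored and updated as the shift amount in S214.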
 [Automatic activation of cross-point adjustment]
 Cross-point control at the time of switching between monocular and compound-eye stereoscopic shooting has been described above. Next, automatic activation of cross-point adjustment will be described.
 Comparing monocular and compound-eye stereoscopic imaging: for long-distance shooting either function can generally be used, whereas for short-distance shooting monocular stereoscopic imaging, with its short baseline length, is advantageous over compound-eye stereoscopic imaging, so shooting with the monocular stereoscopic imaging function is preferable. Accordingly, during short-distance shooting the cross-point adjustment function needs to be activated only when shooting switches to long distance; during long-distance shooting, however, it must be kept active at all times (cross-point adjustment performed continuously), so that the device can respond not only when shooting switches to short distance but also when the user requests a change between monocular and compound-eye stereoscopic shooting.
 Specifically, as in the flowchart of FIG. 12 showing the automatic activation process for cross-point adjustment, the user is first asked whether monocular/compound-eye stereoscopic shooting should be switched automatically (S302); if NO, the user selects either monocular or compound-eye stereoscopic shooting (S304). If YES in S302, the process proceeds to S306, where the in-focus position is detected; if it is closer than a predetermined threshold (for example, 70 cm) (YES in S306), monocular stereoscopic shooting is performed (S308). This determination is made continuously at a predetermined time interval (for example, 100 msec) (S310). When the in-focus position becomes farther than the predetermined threshold and short-distance shooting no longer holds (NO in S312), compound-eye stereoscopic shooting is selected (S314), and the automatic cross-point adjustment function is activated in preparation for a return to short-distance shooting, making the cross point coincide with the in-focus position (S316).
 If NO in S306, that is, if the in-focus position is at or beyond the predetermined threshold, compound-eye stereoscopic shooting is selected, and the automatic cross-point adjustment function is activated in preparation for a change to short-distance shooting, making the cross point coincide with the in-focus position (S316).
 After the state of compound-eye stereoscopic shooting with the automatic cross-point adjustment function activated is entered in S316, whether to switch to monocular stereoscopic shooting is determined at a predetermined time interval (S318, S320). If the in-focus distance is shorter than the predetermined threshold in S320, or if a change to monocular stereoscopic shooting is requested by a user instruction via the operation unit 38, the process returns to S308 and monocular stereoscopic shooting is performed. If neither condition is satisfied in S320, the process returns to S316 and the state of compound-eye stereoscopic shooting with automatic cross-point adjustment continues.
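 The S306-S320 loop reduces to a small state machine. A minimal sketch, assuming the 70 cm threshold given as an example in the text; the class and attribute names are illustrative, not part of the disclosure:

```python
from dataclasses import dataclass

NEAR_THRESHOLD_CM = 70.0  # example threshold from the text

@dataclass
class ModeController:
    """Minimal state machine for S306-S320: picks the monocular or
    compound-eye mode from the focus distance and reports whether the
    automatic cross-point adjustment must stay active (it is needed
    only while in compound-eye mode)."""
    mode: str = "monocular"

    def tick(self, focus_cm: float, user_wants_monocular: bool = False) -> str:
        # Called at the predetermined interval (e.g. every 100 msec).
        if focus_cm < NEAR_THRESHOLD_CM or user_wants_monocular:
            self.mode = "monocular"   # S308: short-distance shooting
        else:
            self.mode = "compound"    # S314: long-distance shooting
        return self.mode

    @property
    def crosspoint_adjust_active(self) -> bool:
        # S316: keep cross point == in-focus position while compound-eye
        return self.mode == "compound"
```

Calling `tick` periodically reproduces the loop: the adjustment function stays armed whenever the device is in compound-eye mode, so a switch back to monocular mode never moves the cross point.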
 Through the above processing, even when monocular/compound-eye stereoscopic shooting is switched automatically, shooting is possible without giving the user the impression that the cross point has shifted abruptly, and images can be displayed with a natural feel.
 [Setting the shooting mode]
 As described above, in the stereoscopic imaging device 10 according to the first embodiment, both the first imaging unit 11-1 and the second imaging unit 11-2 have the monocular stereoscopic imaging function. Therefore, if left and right viewpoint images are acquired by both imaging units, a total of four viewpoints is obtained. A larger number of viewpoints promises improved performance, for example when finding corresponding points to measure the amount of parallax.
 When four viewpoints are not needed, the first imaging unit 11-1 acquires the left viewpoint image and the second imaging unit 11-2 acquires the right viewpoint image. Alternatively, the second imaging unit 11-2 may acquire the left viewpoint image and the first imaging unit 11-1 the right viewpoint image; in this latter case the obtained parallax is slightly smaller. Also, in either the first imaging unit 11-1 or the second imaging unit 11-2, two viewpoint images with reduced noise can be obtained by adding pixel values as indicated by the dotted lines in FIG. 15.
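 The pixel addition just mentioned can be sketched as follows. Since the exact pairing shown by the dotted lines of FIG. 15 is not reproduced in the text, the sketch assumes (as a hypothetical interpretation) that each horizontal pair of left/right phase-difference pixels is combined into one value; averaging rather than raw summation is used here only to keep the output in the input range:

```python
import numpy as np

def add_pixel_pairs(raw):
    """Combine each horizontal pair of phase-difference pixels (the
    dotted-line pairing of FIG. 15, as interpreted here) into a single
    lower-noise value by averaging the two samples."""
    h, w = raw.shape
    assert w % 2 == 0, "expects an even number of columns (L/R pixel pairs)"
    paired = raw.reshape(h, w // 2, 2)
    # Averaging two independent samples reduces noise std by ~1/sqrt(2),
    # at the cost of halving horizontal sampling (resolution loss).
    return paired.mean(axis=2)
```

The trade-off in the comment is exactly the one the specification raises later: less noise, but dulled high-frequency detail.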
 If the first imaging unit 11-1 or the second imaging unit 11-2 alone separates the light incident from the left and the light incident from the right, two (left and right) viewpoint images with small parallax can be obtained. A stereoscopic image composed and displayed from such small-parallax viewpoint images offers less of a sense of presence than a large-parallax stereoscopic image, but has advantages such as being less tiring to the eyes and, when displayed on a 3D (three-dimensional) television, appearing as an ordinary 2D (two-dimensional) image (not a double image) to viewers not wearing glasses.
 Of course, when a single imaging unit is used, only one viewpoint image may be acquired without separating the left and right incident light, that is, a two-dimensional image may be obtained.
 In this way, depending on the combination of the number of imaging units used for shooting and the number of viewpoint images acquired, and on whether pixel addition is performed, an image that meets the user's requirements can be acquired from among a variety of images differing in the number of viewpoints, the magnitude of parallax, the amount of noise, and so on. The table shown in FIG. 13 summarizes the shooting modes that can be set on the stereoscopic imaging device 10.
 FIG. 14 shows an example of a specific procedure for setting the shooting modes shown in FIG. 13. The stereoscopic imaging device 10 displays the interface shown in FIG. 14 on the liquid crystal monitor 30, and the shooting mode can be set by user instruction input via the operation unit 38.
 The stereoscopic imaging device 10 first displays the screen shown in FIG. 14(a) on the liquid crystal monitor 30 and prompts the user to input the number of viewpoints (1, 2, or 4). For four viewpoints, the shooting mode is determined as mode [1] shown in FIG. 13 (FIG. 14(e)). For one or two viewpoints, the stereoscopic imaging device 10 prompts the user for further input: for one viewpoint, which of the first and second imaging units 11-1, 11-2 to use (FIG. 14(b)); for two viewpoints, whether to acquire the two viewpoint images by monocular stereoscopic imaging or by compound-eye stereoscopic shooting (FIG. 14(c)). For one viewpoint, the shooting mode becomes [7] or [8] depending on whether the first imaging unit 11-1 (shooting from the left viewpoint) or the second imaging unit 11-2 (shooting from the right viewpoint) is used. For two viewpoints with monocular stereoscopic imaging, the shooting mode becomes [5] or [6] depending on whether the first imaging unit 11-1 or the second imaging unit 11-2 is used.
 For two viewpoints with compound-eye stereoscopic imaging (that is, using both the first and second optical systems), the stereoscopic imaging device 10 prompts for the amount of parallax as shown in FIG. 14(d), and sets one of shooting modes [2] to [4] according to the input amount of parallax (FIG. 14(e)).
 The setting of the shooting mode is not limited to the above example. For instance, for one viewpoint, the first imaging unit 11-1 may be selected unconditionally without asking the user to choose between the first and second imaging units. This is because using the imaging unit on the side opposite the shutter button is expected to suffer relatively less from finger blocking (the user's finger covering the lens).
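 The dialog flow of FIG. 14 amounts to a lookup from the user's answers to a mode number of FIG. 13. The following is a hypothetical sketch only: the FIG. 13 table itself is not reproduced in the text, so the mapping encodes just what the description states, and the parallax labels "small"/"medium"/"large" are assumed names for the FIG. 14(d) choices:

```python
def select_mode(viewpoints, unit=None, method=None, parallax=None):
    """Map the FIG. 14 dialog inputs to a FIG. 13 mode number (assumed
    mapping): [1] = 4 viewpoints; [2]-[4] = compound-eye 2-viewpoint by
    parallax amount; [5]/[6] = monocular 2-viewpoint by imaging unit;
    [7]/[8] = 1 viewpoint by imaging unit."""
    if viewpoints == 4:
        return 1
    if viewpoints == 1:
        return 7 if unit == 1 else 8
    if viewpoints == 2:
        if method == "monocular":
            return 5 if unit == 1 else 6
        # compound-eye: parallax "small"/"medium"/"large" -> [2]/[3]/[4]
        return {"small": 2, "medium": 3, "large": 4}[parallax]
    raise ValueError("viewpoints must be 1, 2, or 4")
```

A default such as always choosing unit 1 for one viewpoint (the finger-blocking variation above) would simply pre-fill the `unit` argument.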
 [Automatic selection of the monocular/compound-eye shooting mode]
 FIG. 15 is a conceptual diagram of pixel-signal addition. As indicated by the dotted lines in FIG. 15, the amount of noise can be reduced by adding the pixel signals of two pixels of the image sensor 16; on the other hand, high-frequency signals may be dulled and resolution may drop. The stereoscopic imaging device 10 therefore allows automatic selection, during two-viewpoint stereoscopic imaging and according to the screen brightness, signal contrast, and so on, between "acquire viewpoint images by compound-eye stereoscopic imaging and use the result of adding the pixel signals of two pixels as each imaging unit's viewpoint image" and "shoot by monocular stereoscopic imaging without adding pixel signals".
 FIG. 16 is a flowchart showing an example of such automatic selection of the stereoscopic shooting mode. When the process starts (S400), it is determined in S402 whether the luminance of the entire screen (Bv value) is at or above a predetermined threshold. If it is (YES in S402), the process proceeds to S406, where it is determined whether the response of a contrast extraction filter is at or above a predetermined threshold. If YES in S406, monocular stereoscopic shooting is performed with the shooting mode set to [5] or [6] (S408); if NO, compound-eye stereoscopic shooting is performed with the shooting mode set to [3] (S404). On the other hand, if the luminance of the entire screen (Bv value) is below the predetermined threshold (NO in S402), compound-eye stereoscopic shooting is performed with the shooting mode set to [3] (S404). Selecting the stereoscopic shooting mode in this way makes it possible to shoot in accordance with the user's requirements regarding the amount of noise and image resolution.
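 The branch logic of FIG. 16 can be expressed compactly. A minimal sketch, not part of the original disclosure; the numeric thresholds are placeholders, since the text only speaks of "predetermined thresholds":

```python
def auto_select_stereo_mode(bv, contrast_response,
                            bv_threshold=5.0, contrast_threshold=0.2):
    """Decision logic of S402/S406 in FIG. 16. Returns a
    (mode_number, description) pair; threshold values are placeholders."""
    if bv >= bv_threshold and contrast_response >= contrast_threshold:
        # Bright, high-detail scene: keep resolution, skip pixel addition
        return (5, "monocular, no pixel addition")        # S408 ([5] or [6])
    # Dark or low-contrast scene: favor noise reduction via pixel addition
    return (3, "compound-eye with 2-pixel addition")      # S404
```

A dark scene (low Bv) or one with little fine detail (low filter response) thus falls through to the noise-reducing compound-eye mode [3], matching both NO branches of the flowchart.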
 <Second Embodiment>
 In the first embodiment described above, both the first and second imaging units are monocular stereoscopic imaging units, but embodiments of the present invention are not limited to this configuration. In the stereoscopic imaging device of the present invention, it suffices that at least one of the plurality of imaging units is a monocular stereoscopic imaging unit; the types of the other imaging units are not particularly limited. For example, an ordinary imaging unit (an imaging unit that is not a monocular stereoscopic imaging unit), lacking an image sensor capable of separating the left and right incident light, may be used.
 FIG. 17 is a block diagram showing the main part of a stereoscopic imaging device 10' according to the second embodiment. In the stereoscopic imaging device 10', the first imaging unit 11-1 is a monocular stereoscopic imaging unit, and the second imaging unit 11-2' is an imaging unit having an ordinary sensor 17. The rest of the configuration is the same as that of the stereoscopic imaging device 10 according to the first embodiment, so the same reference numerals are used and detailed description is omitted.
 The stereoscopic imaging device 10' is likewise capable of monocular and compound-eye stereoscopic shooting, like the stereoscopic imaging device 10 according to the first embodiment, and can perform the cross-point control at mode switching and the automatic activation of cross-point adjustment described above. However, because the second imaging unit of the stereoscopic imaging device 10' is an ordinary imaging unit, the shooting modes that can be set differ from those of the stereoscopic imaging device 10 according to the first embodiment. The shooting modes that can be set on the stereoscopic imaging device 10' are shown in the table of FIG. 18. As with the stereoscopic imaging device 10, the shooting mode can be set according to the number of viewpoints, the presence or absence of pixel addition, and so on.
 While the present invention has been described above using embodiments, the technical scope of the present invention is not limited to the scope described in the above embodiments. It will be apparent to those skilled in the art that various modifications and improvements can be made to the above embodiments, and it is apparent from the claims that embodiments incorporating such modifications or improvements can also fall within the technical scope of the present invention.
 The order of execution of processes such as operations, procedures, steps, and stages in the devices, systems, programs, and methods shown in the claims, specification, and drawings may be realized in any order, unless expressly indicated by terms such as "before" or "prior to", and unless the output of an earlier process is used by a later process. Even where operation flows in the claims, specification, and drawings are described using "first", "next", and the like for convenience, this does not mean that execution in that order is essential.
 10, 10'...stereoscopic imaging device; 11-1...first imaging unit; 11-2, 11-2'...second imaging unit; 16, 16-1, 16-2...solid-state image sensor (monocular 3D sensor); 17...ordinary sensor; 30...liquid crystal monitor; 38...operation unit

Claims (9)

  1.  A stereoscopic imaging device comprising:
     a plurality of imaging units that image a subject, the plurality of imaging units including at least one monocular stereoscopic imaging unit having an image sensor that includes a plurality of pixel groups, each pixel group photoelectrically converting light beams that have passed through a different region of a single photographing optical system;
     an image generation unit that generates a stereoscopic image of the subject from imaging signals of the plurality of imaging units; and
     a cross-point control unit that controls a cross point of the stereoscopic image,
     wherein the image generation unit has:
     a monocular stereoscopic imaging function of constructing the stereoscopic image from a plurality of viewpoint images obtained by shooting with the monocular stereoscopic imaging unit; and
     a compound-eye stereoscopic imaging function of constructing the stereoscopic image from a viewpoint image obtained by the at least one monocular stereoscopic imaging unit and a viewpoint image obtained by an imaging unit, among the plurality of imaging units, other than the monocular stereoscopic imaging unit, and
     wherein, when shooting is switched from the compound-eye stereoscopic imaging function to the monocular stereoscopic imaging function and when shooting is switched from the monocular stereoscopic imaging function to the compound-eye stereoscopic imaging function, the cross-point control unit controls the image generation unit so that the cross point of the stereoscopic image does not change before and after the switching.
  2.  The stereoscopic imaging device according to claim 1, wherein, while shooting with the compound-eye stereoscopic imaging function continues, the crosspoint control unit:
      detects a first in-focus region in one viewpoint image among the viewpoint images constituting the stereoscopic image obtained by shooting with the compound-eye stereoscopic imaging function;
      detects, in another viewpoint image among the viewpoint images constituting the stereoscopic image, a first corresponding region corresponding to the detected first in-focus region;
      calculates a first displacement amount between the one viewpoint image and the other viewpoint image based on the detected first corresponding region; and
      shifts the one viewpoint image or the other viewpoint image by the calculated first displacement amount,
      thereby controlling the image generation unit so that the crosspoint of the stereoscopic image obtained by shooting with the compound-eye stereoscopic imaging function coincides with the first in-focus region, which is the crosspoint of the stereoscopic image obtained by shooting with the monocular stereoscopic imaging function.
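The alignment procedure in claim 2 (detect the in-focus region in one viewpoint image, locate the corresponding region in the other, compute the displacement between them, and shift one image so the crosspoint lands on the in-focus region) can be sketched roughly as follows. This is an illustrative sketch only: the SAD-based search, the function names, and the zero-padded shift are assumptions, not the patented implementation.

```python
import numpy as np

def find_disparity(left, right, box, search=16):
    """Locate the in-focus patch of `left` (bounded by `box`) inside
    `right` by minimizing the sum of absolute differences (SAD), and
    return the horizontal displacement in pixels."""
    y0, y1, x0, x1 = box                      # first in-focus region in `left`
    patch = left[y0:y1, x0:x1].astype(np.float64)
    best_dx, best_cost = 0, np.inf
    for dx in range(-search, search + 1):     # scan for the corresponding region
        xs, xe = x0 + dx, x1 + dx
        if xs < 0 or xe > right.shape[1]:
            continue
        cost = np.abs(right[y0:y1, xs:xe] - patch).sum()
        if cost < best_cost:
            best_cost, best_dx = cost, dx
    return best_dx

def shift_horizontal(img, dx):
    """Shift a viewpoint image horizontally by dx pixels, zero-padding
    the vacated columns, so the crosspoint moves onto the focus region."""
    out = np.zeros_like(img)
    if dx >= 0:
        out[:, dx:] = img[:, :img.shape[1] - dx]
    else:
        out[:, :dx] = img[:, -dx:]
    return out
```

In a device, the box would come from the autofocus unit's focus-area result; here it is simply passed in by the caller.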
  3.  The stereoscopic imaging device according to claim 1 or 2, wherein, while shooting with the monocular stereoscopic imaging function continues, the crosspoint control unit:
      detects, in one viewpoint image among the viewpoint images constituting the stereoscopic image obtained by shooting with the monocular stereoscopic imaging function, a second in-focus region that is the crosspoint of that stereoscopic image;
      detects, in another viewpoint image obtained by an imaging unit that is used when shooting with the compound-eye stereoscopic imaging function but is not used when shooting with the monocular stereoscopic imaging function, a second corresponding region corresponding to the detected second in-focus region;
      calculates a second displacement amount between the one viewpoint image and the other viewpoint image based on the detected second corresponding region;
      updates an already stored displacement amount with the second displacement amount and stores it; and
      when switching from the monocular stereoscopic imaging function to the compound-eye stereoscopic imaging function, constructs a stereoscopic image from the one viewpoint image and an image obtained by shifting the other viewpoint image by the stored second displacement amount,
      thereby controlling the image generation unit so that the crosspoint of the stereoscopic image obtained by shooting with the compound-eye stereoscopic imaging function coincides with the second in-focus region, which is the crosspoint of the stereoscopic image obtained by shooting with the monocular stereoscopic imaging function.
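Claim 3's bookkeeping (continuously overwrite a stored displacement while the monocular function runs, then apply it once at the moment of switching) might look roughly like this. The class, its method names, and the wrap-around `np.roll` shift are illustrative assumptions, not the patent's design; a real device would pad or crop rather than wrap.

```python
import numpy as np

class CrosspointCache:
    """Keeps the most recently measured monocular-to-compound
    displacement so that, at the moment of switching, the compound-eye
    pair can be aligned without recomputing the correspondence."""

    def __init__(self):
        self.stored_dx = 0  # the "already stored displacement amount"

    def update(self, dx):
        # Called each frame while the monocular 3D function runs:
        # overwrite the stored displacement with the newly measured one.
        self.stored_dx = dx

    def apply_on_switch(self, one_view, other_view):
        # On switching to the compound-eye function, shift the other
        # viewpoint image by the stored displacement and pair the images.
        # np.roll wraps the shifted-out columns around; this is a
        # simplification for the sketch.
        shifted = np.roll(other_view, self.stored_dx, axis=1)
        return one_view, shifted
```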
  4.  The stereoscopic imaging device according to any one of claims 1 to 3, further comprising an automatic imaging function switching unit that automatically switches between the monocular stereoscopic imaging function and the compound-eye stereoscopic imaging function, wherein the automatic imaging function switching unit:
      operates the monocular stereoscopic imaging function while the in-focus position is closer than a predetermined distance, and, when the in-focus position becomes farther than the predetermined distance, switches to the compound-eye stereoscopic imaging function and operates the crosspoint control unit; and
      operates the compound-eye stereoscopic imaging function while the in-focus position is farther than the predetermined distance, and switches to the monocular stereoscopic imaging function when the in-focus position becomes closer than the predetermined distance or when the user gives an instruction to switch to the monocular stereoscopic imaging function.
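The automatic switching rule of claim 4 reduces to a simple threshold test on the in-focus distance plus a user override. A hypothetical sketch; the 2-meter threshold and all names here are invented placeholders, not values from the patent:

```python
def select_imaging_function(focus_distance_m, threshold_m=2.0,
                            user_wants_monocular=False):
    """Pick the stereoscopic imaging function from the in-focus
    distance: monocular 3D for near subjects, compound-eye 3D for far
    ones, with a user instruction able to force monocular mode."""
    if user_wants_monocular or focus_distance_m < threshold_m:
        return "monocular"   # near subject: single-lens 3D keeps parallax moderate
    return "compound"        # far subject: two lenses give a wider baseline
```

On each switch the crosspoint control unit of claims 2 and 3 would then be invoked so the crosspoint stays put across the transition.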
  5.  The stereoscopic imaging device according to any one of claims 1 to 4, wherein, during operation of the compound-eye stereoscopic imaging function, the image generation unit performs pixel signal addition processing that, for each pixel position of the imaging element of the monocular stereoscopic imaging unit, adds the pixel signals of the pixel groups constituting the plurality of pixel groups and uses the result of the addition as the pixel signal at that pixel position.
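The pixel signal addition of claim 5 amounts to summing, per pixel position, the signals of the phase-separated pixel groups of the monocular 3D sensor, which turns its two-viewpoint output into an ordinary single-viewpoint image. A sketch under the assumption that the two pixel groups arrive as equally shaped arrays (the readout layout is an assumption, not the patent's exact sensor format):

```python
import numpy as np

def add_pixel_groups(group_a, group_b):
    """Sum the left-pupil and right-pupil pixel groups per pixel
    position to recover a normal 2D image from a monocular 3D sensor."""
    assert group_a.shape == group_b.shape
    # Use a wider dtype so the sum does not overflow 8/10/12-bit samples.
    return group_a.astype(np.uint16) + group_b.astype(np.uint16)
```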
  6.  A stereoscopic imaging device comprising:
      a plurality of imaging units that image a subject, the plurality of imaging units including at least one monocular stereoscopic imaging unit having an imaging element that includes a plurality of pixel groups, each pixel group photoelectrically converting light beams that have passed through a different region of a single photographing optical system;
      an image generation unit that generates a stereoscopic image of the subject from imaging signals of the plurality of imaging units; and
      an imaging mode setting unit that sets an imaging mode based on a user's instruction input,
      wherein the imaging mode setting unit sets one imaging mode, based on the user's instruction input including the number of imaging units to be used for shooting among the plurality of imaging units and the number of viewpoint images to be acquired, from among imaging modes including:
      a two-dimensional imaging mode and a monocular stereoscopic imaging mode using the at least one monocular stereoscopic imaging unit; and
      a two-dimensional imaging mode and a compound-eye stereoscopic imaging mode using the at least one monocular stereoscopic imaging unit and an imaging unit other than the at least one monocular stereoscopic imaging unit among the plurality of imaging units.
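One way to picture claim 6's mode setting is a mapping from the user's instruction input (how many imaging units to use, how many viewpoint images to acquire) onto the four listed modes. The concrete mapping below is an assumption for illustration; the claim only requires that one such mode be set from that input.

```python
def set_imaging_mode(n_units, n_viewpoints):
    """Map the user's instruction input to one of the four imaging
    modes named in claim 6 (an illustrative mapping, not the patent's)."""
    if n_units == 1:
        # Only the monocular 3D unit is used.
        return "2D (monocular unit)" if n_viewpoints == 1 else "monocular 3D"
    # The monocular 3D unit plus at least one other imaging unit.
    return "2D (multiple units)" if n_viewpoints == 1 else "compound-eye 3D"
```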
  7.  The stereoscopic imaging device according to claim 6, wherein, based on a user's instruction input, in the two-dimensional imaging mode using the at least one monocular stereoscopic imaging unit and in the compound-eye stereoscopic imaging mode using the at least one monocular stereoscopic imaging unit and an imaging unit other than the at least one monocular stereoscopic imaging unit among the plurality of imaging units, the image generation unit performs pixel signal addition processing that, for each pixel position of the imaging element of the at least one monocular stereoscopic imaging unit, adds the pixel signals of the pixel groups constituting the plurality of pixel groups and uses the result of the addition as the pixel signal at that pixel position.
  8.  The stereoscopic imaging device according to any one of claims 1 to 7, wherein the number of the plurality of imaging units is two, and the imaging unit other than the at least one monocular stereoscopic imaging unit is also a monocular stereoscopic imaging unit.
  9.  The stereoscopic imaging device according to any one of claims 1 to 8, further comprising a stereoscopic image display unit that displays the generated stereoscopic image.
PCT/JP2012/067786 2011-08-30 2012-07-12 3d imaging device WO2013031392A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2011-187559 2011-08-30
JP2011187559 2011-08-30

Publications (1)

Publication Number Publication Date
WO2013031392A1 true WO2013031392A1 (en) 2013-03-07

Family

ID=47755899

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2012/067786 WO2013031392A1 (en) 2011-08-30 2012-07-12 3d imaging device

Country Status (1)

Country Link
WO (1) WO2013031392A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2002185844A (en) * 2001-11-07 2002-06-28 Olympus Optical Co Ltd Camera system
JP2005250396A (en) * 2004-03-08 2005-09-15 Fuji Photo Film Co Ltd Personal digital assistant with camera
JP2010154310A (en) * 2008-12-25 2010-07-08 Fujifilm Corp Compound-eye camera, and photographing method
WO2011024423A1 * 2009-08-28 2011-03-03 Panasonic Corporation Control device for stereoscopic image display and imaging device for stereoscopic images

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2015001788A1 * 2013-07-05 2015-01-08 Nikon Corporation Imaging device
CN105359519A (en) * 2013-07-05 2016-02-24 株式会社尼康 Imaging device
JPWO2015001788A1 * 2013-07-05 2017-02-23 Nikon Corporation Imaging device
CN105359519B (en) * 2013-07-05 2017-07-04 株式会社尼康 Camera head

Similar Documents

Publication Publication Date Title
JP5595499B2 (en) Monocular stereoscopic imaging device
JP5425554B2 (en) Stereo imaging device and stereo imaging method
JP5722975B2 (en) Imaging device, shading correction method for imaging device, and program for imaging device
JP5269252B2 (en) Monocular stereoscopic imaging device
JP5788518B2 (en) Monocular stereoscopic photographing apparatus, photographing method and program
US20110234767A1 (en) Stereoscopic imaging apparatus
JP5469258B2 (en) Imaging apparatus and imaging method
JP2011199755A (en) Image pickup device
JP2011259168A (en) Stereoscopic panoramic image capturing device
WO2013031349A1 (en) Imaging device and imaging method
JP2011022501A (en) Compound-eye imaging apparatus
JP5449551B2 (en) Image output apparatus, method and program
US9077979B2 (en) Stereoscopic image capture device and method
JP5160460B2 (en) Stereo imaging device and stereo imaging method
JP2010237582A (en) Three-dimensional imaging apparatus and three-dimensional imaging method
WO2012043003A1 (en) Three-dimensional image display device, and three-dimensional image display method
JP5580486B2 (en) Image output apparatus, method and program
WO2013031392A1 (en) 3d imaging device
JP2012124650A (en) Imaging apparatus, and imaging method
JP5351298B2 (en) Compound eye imaging device
JP2010200024A (en) Three-dimensional image display device and three-dimensional image display method
JP2011077680A (en) Stereoscopic camera and method for controlling photographing

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 12827546; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 12827546; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)