WO2011096251A1 - Stereo Camera - Google Patents
Stereo Camera
- Publication number
- WO2011096251A1 (PCT/JP2011/050319)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- images
- camera
- stereo camera
- corresponding point
- Prior art date
Classifications
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01C—MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
- G01C11/00—Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
- G01C11/04—Interpretation of pictures
- G01C11/06—Interpretation of pictures by comparison of two or more pictures of the same area
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B35/00—Stereoscopic photography
- G03B35/08—Stereoscopic photography by simultaneous recording
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B35/00—Stereoscopic photography
- G03B35/08—Stereoscopic photography by simultaneous recording
- G03B35/10—Stereoscopic photography by simultaneous recording having single camera with stereoscopic-base-defining system
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/593—Depth or shape recovery from multiple images from stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V10/00—Arrangements for image or video recognition or understanding
- G06V10/10—Image acquisition
- G06V10/12—Details of acquisition arrangements; Constructional details thereof
- G06V10/14—Optical characteristics of the device performing the acquisition or on the illumination arrangements
- G06V10/147—Details of sensors, e.g. sensor lenses
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
- G06V20/647—Three-dimensional objects by matching two-dimensional images to three-dimensional objects
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/25—Image signal generators using stereoscopic image cameras using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2213/00—Details of stereoscopic systems
- H04N2213/005—Aspects relating to the "3D+depth" image format
Definitions
- the present invention relates to a stereo camera, and more particularly to a stereo camera provided with a pair of cameras having different camera parameters.
- a stereo camera having a pair of cameras can generate a distance image including distance information to the subject based on the parallax of images obtained by both cameras.
- Conventionally, the pair of cameras is composed of cameras having the same camera parameters.
- Patent Document 1 discloses a configuration in which a three-dimensional image including distance information is captured by two lenses arranged in the horizontal direction, and a two-dimensional image is captured by another different lens.
- A configuration is disclosed in which a 3D image is acquired up to a certain range and a 2D image is acquired beyond that range.
- The two lenses for 3D images are composed of lenses with the same characteristics.
- Patent Document 2 discloses a system that creates an image for stereo viewing viewed from an arbitrary viewpoint based on an image obtained by a stereo camera and distance information.
- Image processing for creating the stereo-viewing image is performed on the assumption that the two cameras constituting the stereo camera have the same camera parameters.
- As described above, the paired cameras are conventionally composed of cameras having the same camera parameters. However, if both are high-performance cameras, there is a problem that the manufacturing cost increases.
- The present invention has been made to solve the above-described problems, and an object thereof is to provide a stereo camera capable of generating a stereo-view image from an arbitrary viewpoint while suppressing the manufacturing cost.
- A first aspect of the stereo camera according to the present invention comprises: a first imaging unit that captures a first image; a second imaging unit that has camera parameters different from those of the first imaging unit and captures a second image; a distance information acquisition unit that acquires distance information including parallax information by associating the pixels of the first and second images with each other; and a left-right image generation unit that generates a stereoscopic image based on one of the first and second images and the distance information.
- The distance information acquisition unit determines a corresponding point search region, in which corresponding points are searched, based on shooting information at the time of shooting the first and second images.
- The distance information acquisition unit performs template matching between the first and second images, using one as a template, and determines the region with the highest degree of matching as the corresponding point search region.
- When the shooting magnifications of the first and second images differ, the distance information acquisition unit creates the template by performing resolution conversion processing on the image serving as the template.
- The distance information acquisition unit extracts object candidate regions by object recognition from the first and second images, compares the obtained candidate regions with each other, and determines the region with the highest degree of matching as the corresponding point search region.
- The distance information acquisition unit sets, on one of the first and second images, the optical axis center of the other image, aligns it with the optical axis center of the other image, converts the one image into an image size suited to the image sensor that captured the other image, and superimposes the two images to determine the corresponding point search region.
- The distance information acquisition unit determines the corresponding point search region from the ratio of the vertical angles of view centered on the optical axis centers of the first and second images.
- The distance information acquisition unit determines a one-dimensional region along an epipolar line as the corresponding point search region for the first and second images.
- For each of the corresponding point search regions of the first and second images, the distance information acquisition unit sets the reference position in the region serving as the standard side of the corresponding point search to the position of a sub-pixel smaller than the pixel size, and performs the corresponding point search.
- The distance information acquisition unit performs the corresponding point search using, of the corresponding point search regions of the first and second images, the image having the larger number of pixels as the standard image.
- According to an eleventh aspect of the stereo camera of the present invention, the distance information acquisition unit changes the sampling interval of corresponding points according to the degree of zoom when zoom shooting is performed at the time of shooting the first and second images.
- the distance information acquisition unit decreases the sampling interval of corresponding points as the zoom magnification increases.
- The distance information acquisition unit changes the sampling interval of corresponding points according to the smaller number of pixels in the corresponding point search regions of the first and second images.
- The corresponding point search is performed with the aspect ratio of the window used for the corresponding point search set so that the aspect ratio becomes isotropic on the object plane.
- The first imaging unit has a higher resolution than the second imaging unit; the left-right image generation unit uses the first image as a main image, generates a new second image by shifting the image of the corresponding point search region in the second image according to the parallax information, and uses the first image and the new second image as the left and right images.
- If the number of pixels in the corresponding point search region of the second image is less than the number of pixels in the corresponding point search region of the first image, the left-right image generation unit supplements pixel information from the first image.
- In another aspect, the first imaging unit has a higher resolution than the second imaging unit, and the left-right image generation unit generates the left and right images when the second image has a higher magnification than the first image.
- In another aspect, the first imaging unit has a higher resolution than the second imaging unit, and, when the first image has a higher magnification than the second image, the left-right image generation unit inserts the first image into the second image to generate a new first image; the new first image and the second image are used as the left and right images.
- When the first image is a zoomed image, the left-right image generation unit generates a new first image in which the baseline length is changed.
- At that time, the image size is changed so that the subject remains within the image even when the baseline length is changed.
- the lens of the second imaging unit is a foveal lens.
- the lens of the second imaging unit is an anamorphic lens.
- In another aspect, the first imaging unit has a higher resolution than the second imaging unit, and the stereo camera further includes a sensor that senses that the stereo camera is placed so that the arrangement of the first and second imaging units is in a horizontal position parallel to a horizontal plane. When the horizontal position is detected, the operation of the distance information acquisition unit is stopped, the left-right image generation unit gives the image information of the first image to the second image to create a new second image, and the first image and the new second image are used as the left and right images.
- A stereo-view image can be obtained even with a stereo camera having two imaging units with different camera parameters, and the stereo-view image can be generated at low cost.
- the corresponding point search area can be determined based on shooting information when shooting the first and second images, for example, zoom magnification.
- the corresponding point search area can be determined by template matching.
- the corresponding point search area can be determined by template matching even when the shooting magnifications of the first and second images are different.
- the corresponding point search area can be determined by object recognition.
- the corresponding point search area can be easily determined by using the optical axis center.
- the corresponding point search area can be determined based on the ratio of the angle of view in the vertical direction around the optical axis center of the first and second images.
- the corresponding point search area can be determined by a one-dimensional area along the epipolar line for the first and second images.
- Since the corresponding point search is performed at sub-pixel reference positions, the sampling intervals can be matched even when the magnifications of the standard image and the reference image differ.
- With the stereo camera of the present invention, the corresponding point search can be performed using the image having the larger number of pixels as the standard image.
- Since the sampling interval of corresponding points is changed according to the degree of zoom, the time spent searching for corresponding points can be reduced when zoom is not used.
- Since the sampling interval of corresponding points is changed according to the smaller number of pixels in the corresponding point search region, the case where the number of pixels differs between the first image and the second image due to zooming can be handled.
- Since the aspect ratio of the window used for the corresponding point search is set so that the aspect ratio becomes isotropic on the object plane, a drop in matching accuracy in the corresponding point search can be prevented.
- The second image becomes a high-quality image at the same level as the first image, and both images can be used as stereo-view images for three-dimensional display without any problem.
- the accuracy of the portion that can be viewed in stereo can be increased.
- With the stereo camera according to the present invention, three-dimensional display without a sense of incongruity is possible by generating a new first image in which the baseline length is changed according to the zoom magnification.
- an image having a large amount of information at the center can be obtained by using the foveal lens as the lens of the second imaging unit.
- an image having a wide angle of view in one direction can be obtained by using an anamorphic lens as the lens of the second imaging unit.
- a stereo view image can be obtained by simple processing.
- FIG. 1 to FIG. 3 are diagrams for explaining how to use the stereo camera according to the present invention.
- FIG. 1 shows an example in which the stereo camera VC1 is used vertically, and FIG. 2 shows an example in which the stereo camera VC1 is used horizontally.
- The stereo camera VC1 has a configuration in which the main camera MC and the sub camera SC are arranged apart from each other by the baseline length L, with the main camera MC and the sub camera SC parallel to one side of the camera housing.
- the main camera MC and the sub camera SC can be referred to as an imaging unit in the stereo camera VC1.
- The main camera MC is a high-resolution digital camera system having a so-called zoom lens with variable focal length, such as a high-definition broadcast compatible lens (HDTV lens).
- the sub camera SC is a low-resolution, single-focus digital camera system such as a small unit camera or a micro camera unit (MCU) mounted on a mobile phone or the like.
- a zoom lens may be used as the lens of the sub camera SC, but high resolution is not required.
- a variable magnification lens such as a foveal lens (having a characteristic of greatly compressing the image at the edge compared to the central image), a fish-eye lens, or an anamorphic lens may be used.
- An image with a large amount of information in the center can be obtained by using a foveal lens or a fisheye lens, and an image having a wide angle of view in one direction can be obtained by using an anamorphic lens.
- the state in which the stereo camera VC1 is arranged so that the arrangement of the main camera MC and the sub camera SC is perpendicular to the horizontal plane is referred to as vertical installation.
- a state in which the stereo camera VC1 is arranged so that the arrangement of the main camera MC and the sub camera SC is parallel to the horizontal plane is referred to as horizontal placement.
- The stereo camera VC2 shown in FIG. 3 is the same as the stereo camera VC1 in that the main camera MC and the sub camera SC are separated by the baseline length L, but differs in that the main camera MC and the sub camera SC are arranged so as to be inclined with respect to the sides of the camera housing.
- FIG. 3 shows a state in which the main camera MC and the sub camera SC are arranged so as to be inclined from the vertical with respect to the horizontal plane; this state is referred to as vertical placement, and the state rotated 90 degrees from it is referred to as horizontal placement. The cameras may also be arranged in other orientations.
- FIG. 4 is a diagram schematically showing the concept of generating a stereo-view image with the stereo camera VC1 shown in FIG. 1. Part (a) of FIG. 4 shows a high-resolution two-dimensional image obtained by the main camera MC, and part (b) shows a low-resolution two-dimensional image obtained by the sub camera SC.
- Although the two cameras capture the same subject PS and background BG, since the magnifications of the lenses differ, images with different sizes and angles of view of the subject PS are obtained.
- Part (c) of FIG. 4 shows an image created from the images of parts (a) and (b), and part (d) shows an image obtained by the main camera MC. Since these two images are used for three-dimensional display, they are called stereo-view images.
- Using such a stereo-view image, the stereo camera VC1 performs three-dimensional display on the display so that the subject PS can be viewed stereoscopically, as shown in part (e) of FIG. 4.
- FIG. 5 is a view showing a photographed image when the stereo camera VC1 is used in a horizontal position, and is a schematic view of the subject PS and the background BG as viewed from above.
- the vertical axis indicates the distance to the subject PS and the background BG.
- The horizontal axis indicates the horizontal length with the optical axis of the main camera MC as the origin, together with the horizontal angles of view when shooting with the main camera MC and the sub camera SC.
- For the main camera MC, an example is shown in which shooting is performed at three different magnifications. The image taken at the lowest magnification (the widest angle of view) is referred to as the first main camera image, and its angle of view is indicated by line L1.
- The image taken at a higher magnification is referred to as the second main camera image, and its angle of view is indicated by line L2.
- The image taken at the highest magnification is referred to as the third main camera image, and its angle of view is indicated by line L3.
- the sub camera SC does not have a zoom function, there is only one angle of view, and the angle of view is indicated by a line L4.
- FIG. 6 shows images obtained from the shot illustrated in FIG. 5. Part (a) of FIG. 6 shows the first main camera image taken by the main camera MC, part (b) shows the second main camera image, and part (c) shows the third main camera image.
- FIG. 7 is a block diagram showing a configuration of the stereo camera VC1.
- the main camera MC and the sub camera SC are connected to the shooting information acquisition unit 1, and shooting information is given to the shooting information acquisition unit 1 together with image data obtained by each camera.
- one of the image data obtained by the main camera MC and the sub camera SC is given to the main image acquisition unit 2 via the photographing information acquisition unit 1 as a main image.
- the image data obtained by the main camera MC and the sub camera SC is given to the distance information obtaining unit 3 via the photographing information obtaining unit 1, and the distance information is obtained.
- the distance information acquisition unit 3 is configured to receive known camera information from the camera information storage unit 6 for acquiring distance information.
- the main image output from the main image acquisition unit 2 and the distance information output from the distance information acquisition unit 3 are given to the left and right image generation unit 4 to generate a stereoscopic image. Then, the obtained stereoscopic image (left and right images) is given to the three-dimensional display unit 5 such as a liquid crystal screen and displayed three-dimensionally.
- the shooting information acquisition unit 1 acquires shooting information when acquiring an image as shown in FIG.
- the shooting information acquired at this time includes parameters that may vary during shooting, such as zoom magnification, focal length, and angle of view. However, not all information on zoom magnification, focal length, and angle of view is required, and if there is any one piece of information, the other information can be calculated.
- The angle of view (θ) in each direction can be calculated by the following formula (1):
- θ = 2 tan⁻¹(x / 2f)  ... (1)
- where f represents the focal length and x represents the dimension (h, w, d) of the light receiving unit.
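As a numerical check of formula (1), here is a minimal sketch; the function and variable names are illustrative and not from the patent:

```python
import math

def angle_of_view_deg(sensor_dim, focal_length):
    """Full angle of view theta = 2 * arctan(x / (2 * f)) -- formula (1),
    where x is the light-receiving-unit dimension and f is the focal length
    (both in the same unit, e.g. millimetres)."""
    return math.degrees(2.0 * math.atan(sensor_dim / (2.0 * focal_length)))

# Example: a 36 mm-wide light receiving unit with a 36 mm focal length
# gives a horizontal angle of view of about 53.13 degrees.
```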
- FIG. 8 is a flowchart showing a processing procedure from image acquisition to distance information acquisition.
- a first image and a second image are obtained, respectively.
- the first image is an image obtained by the main camera MC
- the second image is an image obtained by the sub camera SC.
- the shooting information acquisition unit 1 acquires camera information at the time of shooting with the main camera MC and the sub camera SC (step S3).
- the known camera information is acquired from the camera information storage unit 6 (step S4).
- the known camera information includes fixed shooting information in the sub camera SC, information on the dimensions of the light receiving unit such as a CCD sensor, and pixel arrangement intervals (pixel pitch).
- the distance information acquisition unit 3 sets a corresponding point search area for performing corresponding point search for the first and second images (step S5).
- The distance information acquisition unit 3 performs corresponding point search processing for each pixel of the first and second images (step S6), and acquires distance information by calculating the distance based on the matched pixels (step S7).
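The excerpt does not spell out how distance is computed from the matched pixels in step S7; in the standard pinhole stereo model the distance follows Z = f·B/d from the focal length, the baseline length L, and the disparity. A hedged sketch under that assumption (names and units are illustrative):

```python
def depth_from_disparity(focal_length_px, baseline, disparity_px):
    """Distance Z = f * B / d, in the units of the baseline B.
    focal_length_px: focal length expressed in pixels,
    baseline:        baseline length L between the two cameras,
    disparity_px:    horizontal shift between the matched pixels."""
    if disparity_px <= 0:
        raise ValueError("disparity must be positive for a finite distance")
    return focal_length_px * baseline / disparity_px

# Example: f = 1000 px, baseline 60 mm, disparity 10 px -> Z = 6000 mm.
```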
- FIG. 9 is a conceptual diagram illustrating the corresponding point search area setting process.
- Part (a) of FIG. 9 shows the first main camera image taken by the main camera MC, part (b) shows the second main camera image, and part (c) shows the third main camera image.
- Part (d) of FIG. 9 shows the images obtained by the sub camera SC when the images of parts (a) to (c) are acquired.
- In the first main camera image of part (a), a region R1 surrounded by a broken line is a corresponding point search region; the entire sub camera image shown in part (d) of FIG. 9 corresponds to the region R1.
- In the sub camera image, a region R2 surrounded by a broken line is a corresponding point search region; the entire second main camera image shown in part (b) of FIG. 9 corresponds to the region R2.
- Likewise, a region R3 surrounded by a broken line is a corresponding point search region; the entire third main camera image shown in part (c) of FIG. 9 corresponds to the region R3.
- the process of determining the corresponding point search area is the corresponding point search area setting process.
- the following first to sixth methods can be employed.
- The first method determines the region by template matching: resolution conversion processing is performed on one of the first image obtained by the main camera MC and the second image obtained by the sub camera SC to create multiple template images, pattern matching with the other image is then performed, and the area with the highest degree of matching is determined as the corresponding point search region.
- FIG. 10 shows a conceptual diagram of the first method.
- Parts (a) to (c) of FIG. 10 show the first, second, and third template images G11, G12, and G13 having different resolutions created from the first image, and part (d) of FIG. 10 shows the sub camera image to be compared.
- FIG. 11 is a diagram showing the concept of resolution conversion processing, in which the first template image G11, the second template image G12, and the third template image G13 are shown hierarchically.
- the resolution conversion process is a process for reducing a large image by reducing the resolution.
- When the first image is a zoomed image, the template images are created using the first image.
- When the first image is not zoomed, the template images are created using the second image; in this case, it is assumed that the magnification of the second image is higher than the magnification of the unzoomed first image.
- FIG. 10 shows an example in which the second template image G12 shown in part (b) and the region R1 of the sub camera image in part (d) are determined to have the highest degree of coincidence; this region R1 is the corresponding point search region. Since template matching is a well-known technique, its description is omitted.
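The first method can be sketched as follows. This is a hedged illustration, not the patent's actual implementation: it uses normalized cross-correlation for the matching score and block averaging for the resolution conversion, and all function names are invented.

```python
import numpy as np

def ncc(a, b):
    """Normalized cross-correlation between two equally sized patches."""
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def match_template(image, template):
    """Exhaustive search; returns (best score, (row, col) of best window)."""
    ih, iw = image.shape
    th, tw = template.shape
    best_score, best_pos = -2.0, (0, 0)
    for r in range(ih - th + 1):
        for c in range(iw - tw + 1):
            s = ncc(image[r:r + th, c:c + tw], template)
            if s > best_score:
                best_score, best_pos = s, (r, c)
    return best_score, best_pos

def downscale(image, factor):
    """Resolution conversion by block averaging (shape assumed divisible)."""
    h, w = image.shape
    return image.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def best_region(image, base_template, factors=(1, 2, 4)):
    """Match templates at several resolutions (cf. template images G11-G13)
    against the other image; keep the position and template size with the
    highest matching score -- that window plays the role of region R1."""
    best = None
    for f in factors:
        tpl = downscale(base_template, f)
        if tpl.shape[0] > image.shape[0] or tpl.shape[1] > image.shape[1]:
            continue
        score, pos = match_template(image, tpl)
        if best is None or score > best[0]:
            best = (score, pos, tpl.shape)
    return best
```

In practice an optimized routine (e.g. a library template matcher) would replace the exhaustive double loop; the sketch only shows the multi-resolution structure of the method.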
- the second method is to determine a region by object recognition for the first image obtained by the main camera MC and the second image obtained by the sub camera SC.
- That is, object candidate regions in the first and second images are determined using pattern recognition, the largest object region among the candidate regions is identified, and the corresponding point search region is determined based on the size of that object region.
- the largest object area in the object candidate area can be specified by calculating the total number of pixels for each object area in each image and performing comparison between the object areas.
- FIG. 12 is a diagram illustrating an object identified by object recognition.
- Part (a) of FIG. 12 shows the subject PS in the first image obtained by the main camera MC; the width of the widest portion of the subject PS is denoted a, and the widths of the regions outside its two sides are denoted c and d.
- Part (b) of FIG. 12 shows the subject PS in the second image obtained by the sub camera SC; here, the width of the widest portion of the subject PS is denoted b.
- Based on the ratio of the dimension b to the dimension a in the first image, the widths X and Y of the outer edges of the corresponding point search region, that is, of the regions outside the two sides of the subject PS, are determined.
- FIG. 13 shows the corresponding point search area determined in this way. That is, the (a) part of FIG. 13 shows the first image obtained by the main camera MC, and the (b) part of FIG. 13 shows the second image obtained by the sub camera SC. A region R1 surrounded by a broken line in the second image is a corresponding point search region.
- Note that the outer edges of the corresponding point search region can be determined only in the parallax direction; the examples shown in FIGS. 12 and 13 assume that the stereo camera VC1 is used horizontally.
- In that case, the horizontal outer edges of the corresponding point search region can be determined, while the vertical outer edges are defined by the outer edges of the image.
- When the stereo camera VC1 is used vertically, the vertical outer edges of the corresponding point search region can be determined, while the horizontal outer edges are defined by the outer edges of the image.
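The ratio computation of the second method (FIGS. 12 and 13) can be sketched as follows; the function name is illustrative, and the scaling X = c·(b/a), Y = d·(b/a) is an assumption consistent with the description:

```python
def search_region_margins(a, b, c, d):
    """Margins X and Y outside the subject in the second image, scaled from
    the first image's outside widths c and d by the subject-size ratio b/a
    (a: subject width in the first image, b: subject width in the second)."""
    scale = b / a
    return c * scale, d * scale

# Example: if the subject appears half as wide in the second image
# (b/a = 0.5), the outside margins shrink by the same factor.
```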
- the third method uses the object region specified by the object recognition described in the second method as a template, and determines the region by template matching of the first method.
- Since the size of the template is reduced, the time required for matching can be shortened.
- As in the first method, a template image is created using the first image when the first image is a zoomed image, and using the second image otherwise.
- The fourth method converts one of the first image obtained by the main camera MC and the second image obtained by the sub camera SC so that the optical axis centers of the two images coincide, then converts the one image into an image size suited to the image sensor of the camera that captured the other image and superimposes the two images, thereby determining the corresponding point search region.
- FIG. 14 shows a conceptual diagram of the fourth method.
- The angles of view of the main camera MC and the sub camera SC are indicated by lines LM and LS, and the first image G1 and the second image G2 acquired by the main camera MC and the sub camera SC, respectively, are shown.
- the optical axis center shown in the second image G2 represents the optical axis center OL of the first image.
- the center of the optical axis is shown as a line for convenience, but is actually a point.
- The figure shows the optical axis center OL of the first image, drawn on the second image G2 obtained by the sub camera SC, aligned with the optical axis center shown in the first image G1 obtained by the main camera MC.
- The optical axis center OL of the first image on the second image G2 is obtained by calibration. That is, where the optical axis center of the first image G1 is located on the second image G2 is uniquely determined by calibration at the time of product shipment; given that information, it is easy to obtain the optical axis center OL of the first image on the second image G2.
- an epipolar line which will be described later with reference to FIG. 18 can also be uniquely determined by calibration.
- FIG. 15 shows a conceptual diagram of a process of superimposing by converting to a suitable image size.
- Part (a) of FIG. 15 shows the first image G1, and part (b) of FIG. 15 shows the second image G2 converted into an image size that matches the image sensor of the first image G1.
- By superimposing the two, the overlapping area is defined as the corresponding point search region R1.
- In this conversion, the image is normalized to the size of the matched image sensor. For example, since the size of the image sensor is included in the known camera information and the shooting angle of view is acquired as shooting information, the angle of view per pixel is known. Thus the size of the second image obtained by the sub camera SC can be matched to the size of the first image obtained by the main camera MC.
- for example, suppose the pixel arrangement interval (pixel pitch) of one image sensor is 0.1 and an image is captured up to 100 degrees in the horizontal direction using a 100 × 100 (pixel) image sensor, so that the angle of view per pixel is 1 degree.
- suppose the pixel pitch of the other image sensor is 0.2 and a 200 × 200 (pixel) image sensor is used to photograph up to a horizontal angle of view of 50 degrees, so that the angle of view per pixel is 0.25 degrees.
- extending the latter sensor to a field angle of 100 degrees corresponds to a virtual 400 × 400 (pixel) image sensor, and the images are scaled to this common virtual sensor size. This is image normalization.
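As a sketch of this normalization, the angle of view per pixel can be computed from the camera information and used to size the common virtual sensor. The function names are illustrative, and the numbers follow the example above:

```python
def degrees_per_pixel(fov_deg, width_px):
    # angular resolution of a camera: field of view divided by pixel count
    return fov_deg / width_px

def normalized_width(fov_deg, width_px, target_fov_deg):
    # pixels a virtual sensor needs to cover target_fov_deg at this
    # camera's angular resolution
    return round(target_fov_deg / degrees_per_pixel(fov_deg, width_px))

# main camera: 100 x 100 pixels covering 100 degrees -> 1.0 deg/pixel
# sub camera: 200 x 200 pixels covering 50 degrees -> 0.25 deg/pixel
# extending the sub camera to 100 degrees needs a virtual 400 x 400 sensor
print(normalized_width(50, 200, 100))  # 400
```

Both images scaled to this virtual sensor share a common angle of view per pixel, so they can be superimposed directly.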
- the fourth method described above is an effective method when the stereo camera VC1 is used horizontally.
- the fifth method limits the corresponding point search area using the ratio of the vertical angles of view of the first image obtained by the main camera MC and the second image obtained by the sub camera SC, and searches for corresponding points within the area thus determined.
- the vertical field angles of the main camera MC and the sub camera SC are indicated by lines LM and LS.
- the ratio between the distance S1 from the optical axis center OL to the line LM at a position relatively close to the stereo camera VS1 and the distance S2 between the lines LM and LS is the same as the ratio between the distance S11 from the optical axis center OL to the line LM at a position relatively far from the stereo camera VS1 and the distance S12 between the lines LM and LS.
- the corresponding point search area is limited for the first and second images by utilizing the fact that the ratio of the angle of view is the same according to the distance from the camera.
- FIG. 17 is a conceptual diagram in which a region is limited by the ratio of the angle of view.
- part (a) of FIG. 17 shows the position of the optical axis center OL in the vertical direction with respect to the first image G1. The vertical ratio is calculated with respect to this position, and the vertical extent of the second image G2 is limited to the region R1 shown in part (b) of FIG. 17 so as to have the same ratio; this region R1 becomes the corresponding point search region.
- the position in the vertical direction of the optical axis center OL is uniquely determined by the center of the angle of view in the vertical direction.
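The fifth method can be sketched as follows: since the first camera's vertical field of view occupies a fixed fraction of the second camera's field of view regardless of distance, the search band in the second image is that fraction of its height, centered on the row of the optical axis center. Names and numbers are illustrative:

```python
def vertical_search_band(height2_px, fov1_deg, fov2_deg, axis_row):
    # rows of the second image that the first image's vertical field of
    # view can occupy, centered on the projected optical axis center
    band = round(height2_px * fov1_deg / fov2_deg)
    top = max(0, axis_row - band // 2)
    bottom = min(height2_px, top + band)
    return top, bottom

# second image 400 px tall with a 100-degree vertical FOV; first camera
# covers 50 degrees, optical axis center projected at row 200
print(vertical_search_band(400, 50, 100, 200))  # (100, 300)
```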
- in the sixth approach, the corresponding point search area is determined using the epipolar line. That is, when a feature point in three-dimensional space is photographed with two cameras, the point, the lens centers of the two cameras, and the projections of the feature point in the two image planes obtained by the cameras all lie on one plane. This plane is called an epipolar plane, and the line of intersection between the epipolar plane and each image is called an epipolar line. The point where the epipolar lines intersect in each image is called an epipole.
- if the two cameras have been calibrated and the geometric relationship between them is known, then given a point in one image, the epipolar plane and the epipolar line on the other image are determined; even though the corresponding point itself is not yet known, it is constrained to lie on that epipolar line. Therefore, the corresponding point may be found by a one-dimensional search along the epipolar line.
- FIG. 18 shows the lens centers CL1 and CL2 of the two cameras, the epipolar plane EP defined by the feature point P, and the two image planes IP1 and IP2 obtained by the respective cameras.
- the diagonal lines shown in the image planes IP1 and IP2 are epipolar lines EL, and the two points shown on the epipolar lines EL are epipoles.
- in this way, the search range can be limited compared with using the whole two-dimensional plane as the corresponding point search region, and the efficiency of the corresponding point search can be improved.
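The epipolar constraint can be sketched with a fundamental matrix F, known from calibration: for a point x in one image, l' = F x gives the line a·x + b·y + c = 0 in the other image along which the one-dimensional search runs. The matrix below, corresponding to an idealized purely vertical baseline, is an illustrative assumption, not the patent's calibration data:

```python
import numpy as np

def epipolar_line(F, point):
    # l' = F x : coefficients (a, b, c) of the line a*x + b*y + c = 0
    # on which the corresponding point must lie in the other image
    x = np.array([point[0], point[1], 1.0])
    return F @ x

# fundamental matrix for an idealized purely vertical camera offset
F = np.array([[0.0, 0.0, 1.0],
              [0.0, 0.0, 0.0],
              [-1.0, 0.0, 0.0]])

a, b, c = epipolar_line(F, (7.0, 3.0))
# the line is x = 7: the corresponding point search is one-dimensional
# along that column
print(a, b, c)  # 1.0 0.0 -7.0
```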
- the main image may be the first image or the second image.
- the main image may be the one with a larger number of pixels in the corresponding point search area, or the main image may be the one with a wider angle of view of the acquired image.
- a point (corresponding point) on the reference image corresponding to an arbitrary attention point on the base image is searched for and obtained.
- the reference image is an image compared against the standard image. Specifically, in a stereo image, one of a pair of images taken at the same time is the standard image and the other is the reference image; in a time-series image, among images captured by the same camera, the temporally preceding image is the standard image and the temporally subsequent image is the reference image. A template is set around the point of interest on the standard image, a window on the reference image corresponding to the template is searched for, and the corresponding point is obtained from the found window.
- FIG. 19 is a conceptual diagram illustrating the corresponding point search process.
- part (a) of FIG. 19 shows the second image obtained by the sub camera SC as the reference image, and part (b) of FIG. 19 shows the first image obtained by the main camera MC as the standard image.
- the attention point is shown as OP, and the corresponding point is shown as AP.
- in the corresponding point search, the standard image is sampled pixel by pixel in order to obtain the corresponding point on the reference image for each target point on the standard image.
- when the magnifications of the two images differ, however, the sampling intervals differ significantly and information on the reference image is lost, making it difficult to obtain accurate distance information. By adopting the first to fourth methods described below, accurate distance information can be obtained.
- in the first method, the position of the point of interest on the standard image is set at sub-pixel precision, finer than the pixel size, and the corresponding point on the reference image is then obtained for it.
- FIG. 20 is a diagram schematically showing an example in which subpixels are set on the standard image.
- one pixel is divided into three in the horizontal direction, and three subpixels are set.
- the pixel division is not limited to this, and it is also possible to divide the pixel more finely. Thereby, even when the magnification is different between the standard image and the reference image, the sampling intervals can be matched.
- the corresponding point search is performed by cutting out areas of a certain extent from the two images and computing a correlation value between them; such an area is called a window.
- ordinarily the window is generated in units of pixels; however, since the center of gravity of the search template is a point of interest with a sub-pixel position, the template is not an area aligned to pixel units, and some pixels are only partly included in the window while others are not included at all.
- FIG. 20 shows an example in which the search template TP is set around the sub-pixel SP that is the point of interest when the point of interest is a sub-pixel level position.
- Japanese Patent Application Laid-Open No. 2001-195597 discloses a method that, after calculating correlation values between images, interpolates the correlation values between pixels with a linear or curved equation based on the positional relationship between the position having the highest correlation value and the surrounding correlation values, thereby estimating the peak position and peak value of the correlation.
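The interpolation idea can be sketched with a three-point parabolic fit through the best correlation value and its two neighbours, a common formulation; the exact equations in the cited publication may differ:

```python
def subpixel_peak_offset(c_prev, c_peak, c_next):
    # fit a parabola through the best correlation value and its two
    # neighbours; return the peak offset in (-0.5, 0.5) pixels
    denom = c_prev - 2.0 * c_peak + c_next
    if denom == 0.0:
        return 0.0
    return 0.5 * (c_prev - c_next) / denom

# samples of a parabola whose true peak lies +0.25 pixels from the
# integer maximum
print(subpixel_peak_offset(-1.5625, -0.0625, -0.5625))  # 0.25
```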
- the second method compares the number of pixels in the corresponding point search area for the first image obtained by the main camera MC and the second image obtained by the sub camera SC, and uses the image with the larger number of pixels as the standard image.
- FIG. 21 shows an example in which the image with the larger number of pixels is used as the standard image; because the number of pixels is large, more attention points OP can be set.
- in the third method, the interval for searching for corresponding points is normally set coarsely, and when information indicating that zoom shooting has been performed is acquired as camera information, the sampling interval of the corresponding point search is changed according to the degree of zooming. This is because, when the magnification of the lens is increased by zooming and the corresponding point search area is reduced, it is necessary to search for corresponding points at smaller intervals.
- as a result, the time spent searching for corresponding points can be reduced when the zoom is not used.
- the fourth method compares the number of pixels in the corresponding point search area for the first image obtained by the main camera MC and the second image obtained by the sub camera SC, and sets the sampling interval to match the smaller number of pixels.
- FIG. 22 (a) shows a second image obtained by the sub camera SC
- FIG. 22 (b) shows a first image obtained by the main camera MC.
- a region R1 surrounded by a broken line is a corresponding point search region, and this corresponds to the entire first image.
- the zoomed first image has a large number of pixels in its corresponding point search area, while the second image has a small number of pixels in its corresponding point search area; therefore, the sampling interval is adjusted as shown in FIG. 23.
- FIG. 23 (b) shows the first image obtained by the main camera MC, but the sampling points PP are not provided for each pixel, but are provided for every two pixels.
- the sampling point PP may be set based on zoom magnification information as camera information.
- if the sampling interval is set according to the smaller number of pixels, some information is lost, but by setting the sampling interval so that it does not become too wide, the accuracy of the distance information can be maintained to a certain extent.
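The interval matching of the fourth method can be sketched as choosing a stride from the linear sizes of the two search regions; names and numbers are illustrative:

```python
def sampling_stride(base_region_px, other_region_px):
    # sample the denser (e.g. zoomed) image every k-th pixel so that
    # both corresponding point search regions yield comparable numbers
    # of sampling points
    return max(1, round(base_region_px / other_region_px))

# zoomed first image: search region 200 px wide; second image: the
# matching region is 100 px wide -> place a sampling point every two
# pixels, as in the example of FIG. 23 (b)
print(sampling_stride(200, 100))  # 2
```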
- <Modification> As described above, when a variable magnification lens (a lens whose magnification varies across the field), such as a foveal lens, a fisheye lens, or an anamorphic lens, is used as the lens of the sub camera SC, there is a possibility that the sizes of the corresponding point search areas of the first image obtained by the main camera MC and the second image obtained by the sub camera SC differ extremely.
- in such a case, the aspect ratio of the window used for the corresponding point search may be set so that the window is isotropic when projected onto the object plane. This prevents the matching accuracy of the corresponding point search from deteriorating.
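This compensation can be sketched as stretching the window by the lens's per-axis magnifications so that its footprint on the object plane stays square; the magnification values below are illustrative assumptions:

```python
def window_shape(base_px, mag_x, mag_y):
    # an anamorphic lens magnifies the two axes differently; scale the
    # window by the same factors so that the patch it covers on the
    # object plane remains isotropic
    return round(base_px * mag_x), round(base_px * mag_y)

# lens that magnifies 2x horizontally and 1x vertically
print(window_shape(16, 2.0, 1.0))  # (32, 16)
```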
- in the corresponding point search, a correlation value (similarity) between the template and each of a plurality of windows set in the reference image is calculated, and whether the template and a window correspond to each other is determined based on the correlation value.
- methods of calculating the correlation value include the SAD (Sum of Absolute Differences) method, the SSD (Sum of Squared Differences) method, and the NCC (Normalized Cross-Correlation) method.
- the SAD method uses a function that obtains the sum of the absolute differences between the luminance values of the template and the window, and the correlation value is obtained for each template and window using this function.
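A minimal sketch of SAD-based matching follows; the arrays, names, and the one-dimensional search loop are illustrative, not taken from the patent:

```python
import numpy as np

def sad(template, window):
    # sum of absolute differences of luminance values; smaller = more similar
    return int(np.abs(template.astype(np.int64) - window.astype(np.int64)).sum())

def best_match_column(template, row_strip):
    # slide the template along a strip of the reference image and keep
    # the position with the smallest SAD
    w = template.shape[1]
    scores = [sad(template, row_strip[:, x:x + w])
              for x in range(row_strip.shape[1] - w + 1)]
    return int(np.argmin(scores))

template = np.array([[10, 20], [30, 40]], dtype=np.uint8)
strip = np.array([[0, 10, 20, 0], [0, 30, 40, 0]], dtype=np.uint8)
print(best_match_column(template, strip))  # 1
```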
- there is also a correlation value calculation method that is more robust than the SAD method and the like.
- this method is a method of performing similarity calculation using a signal having only a phase component in which an amplitude component is suppressed from a frequency decomposition signal of an image pattern. By using this method, it is possible to realize a robust correlation value calculation that is not easily affected by differences in shooting conditions of the left and right cameras in a stereo image, noise, and the like.
- the method of calculating the frequency-resolved signal of the image pattern is, for example, fast Fourier transform (FFT), discrete Fourier transform (DFT), discrete cosine transform (DCT), discrete sine transform (DST), wavelet transform, Hadamard transform, etc. It has been known.
- such a method that uses only the phase component is called the phase-only correlation method (POC method).
- in the POC method, a template is set on the standard image and windows of the same size are set on the reference image. Then, a correlation value (POC value) between the template and each window is calculated, and the window corresponding to the template is determined from the correlation values.
- the template of the standard image and the window of the reference image are each subjected to two-dimensional discrete Fourier transform, normalized, synthesized, and then subjected to two-dimensional inverse discrete Fourier transform. In this way, a POC value that is a correlation value is obtained. Further, since the POC value is obtained discretely for each pixel in the window, the correlation value for each pixel can be obtained.
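The POC computation above can be sketched with NumPy FFTs: transform both patches, normalize the cross-power spectrum so only phase remains, and inverse-transform. The small epsilon guarding against division by zero is an implementation detail, not from the patent:

```python
import numpy as np

def poc_surface(f, g):
    # 2-D DFT of each patch, normalized (phase-only) cross-power
    # spectrum, inverse DFT: the result peaks at the translation of f
    # relative to g
    F = np.fft.fft2(f)
    G = np.fft.fft2(g)
    cross = F * np.conj(G)
    cross /= np.abs(cross) + 1e-12
    return np.real(np.fft.ifft2(cross))

rng = np.random.default_rng(0)
patch = rng.random((32, 32))
shifted = np.roll(patch, (3, 5), axis=(0, 1))  # shift down 3, right 5

r = poc_surface(shifted, patch)
peak = tuple(int(i) for i in np.unravel_index(np.argmax(r), r.shape))
print(peak)  # (3, 5)
```

Because the amplitude is suppressed, the peak location depends only on the phase difference, which is what makes the method robust to brightness differences between the two cameras.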
- the disparity information can be obtained from the relationship between the attention point and the corresponding point obtained by the corresponding point search described above, and the distance information can be obtained from the disparity information.
- the stereo camera SV1 may be used vertically as described with reference to FIG.
- when the parallax information is horizontal parallax, the distance information is easy to calculate; in the case of vertical parallax, it is necessary to convert it into horizontal parallax by three-dimensional reconstruction.
- assume that the two cameras C1 and C2 are arranged in the vertical direction so as to have the base line length L, and that the focal length f, the number of pixels of the image sensor CD, and the size μ of one pixel are the same for the two cameras.
- the three-dimensional position (X, Y, Z) is then calculated by the following mathematical formulas (3) and (4), where (x, y) is the position at which the object is projected on the image sensor CD of the reference camera C1.
- Equation (2) may be converted into Equation (5).
- parallelization (rectification) needs to be performed before the processing of equations (2) to (4).
- three-dimensional reconstruction and parallelization are common techniques; they are described, for example, in the 2005 graduation thesis "Study on movement trajectory extraction" by Kosuke Kondo, Department of Electronics and Information Communication, Faculty of Science and Engineering, Waseda University.
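Equations (2) to (4) are reproduced as images in the original and are not shown here; the following is the standard parallel-axis pinhole triangulation that such formulas typically express, using the symbols defined above (base line length L, focal length f, pixel size μ). It is a hedged reconstruction, not a verbatim copy of the patent's equations, and the numeric values are illustrative:

```python
def triangulate(x_px, y_px, disparity_px, f, L, mu):
    # standard parallel-axis stereo relations (hedged reconstruction):
    #   Z = f * L / (mu * d)
    #   X = x * mu * Z / f
    #   Y = y * mu * Z / f
    Z = f * L / (mu * disparity_px)
    X = x_px * mu * Z / f
    Y = y_px * mu * Z / f
    return X, Y, Z

# f = 10 mm, base line L = 100 mm, pixel size mu = 0.01 mm, disparity 10 px
X, Y, Z = triangulate(100, 50, 10, f=10.0, L=100.0, mu=0.01)
print(Z)  # 10000.0 (mm)
```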
- FIG. 25 is a flowchart for explaining a procedure for generating left and right images in the left and right image generating unit 4.
- when a main image and distance information are input in steps S21 and S22, an image generation target is first determined in step S23.
- left and right images are generated in step S24.
- the first to fourth methods described below can be employed.
- the first method is to generate a new second image having the same high image quality as the first image based on the distance information from the second image obtained by the sub camera SC.
- FIG. 26 is a diagram conceptually showing the first method.
- parts (a) and (b) of FIG. 26 show a first image obtained by the main camera MC and a second image obtained by the sub camera SC, respectively.
- the first image is the main image, and distance information is acquired from the second image.
- the main image is shown in part (c) of FIG. 26, and the region where the distance information was obtained is schematically shown in part (d) of FIG. 26.
- the second image is shifted according to the parallax value, and a new second image is generated.
- attention is paid only to a region that can be viewed stereoscopically (a region in which parallax information is obtained by obtaining corresponding points), and right and left images are formed only by the region. That is, as shown in part (d) of FIG. 26, attention is paid only to a region where distance information (disparity information) is obtained from the second image, and the image in that region is shifted according to the value of the parallax.
- at this time, by supplementing pixel information from the corresponding first image, a new second image having the same level of resolution as the first image is obtained.
- FIG. 26 (e) shows a first image
- FIG. 26 (f) shows a new second image.
- the position of the subject PS is shifted according to the parallax value, and a stereoscopic image can be obtained from the first image and the new second image.
- since a new second image with high resolution is generated, a high-quality stereoscopic image can be obtained.
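The shifting step of the first method can be sketched as forward-warping the pixels for which parallax was obtained; positions left empty would then be supplemented from the corresponding first image. The arrays and interface are illustrative:

```python
import numpy as np

def shift_by_disparity(image, disparity, mask):
    # forward-warp: move each pixel with known disparity horizontally;
    # positions left empty are candidates for filling from the other view
    out = np.zeros_like(image)
    h, w = image.shape
    for y in range(h):
        for x in range(w):
            if mask[y, x]:
                nx = x - int(disparity[y, x])
                if 0 <= nx < w:
                    out[y, nx] = image[y, x]
    return out

img = np.array([[10, 20, 30, 40]])
disp = np.ones_like(img)
mask = np.ones_like(img, dtype=bool)
print(shift_by_disparity(img, disp, mask).tolist())  # [[20, 30, 40, 0]]
```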
- FIG. 27 is a diagram showing the first and second images when the processing for obtaining a stereoscopic image is not performed.
- FIG. 27A shows a first main camera image taken by the main camera MC.
- FIG. 27 (b) shows the second main camera image.
- FIG. 27 (c) shows the third main camera image.
- These main camera images are images with different magnifications.
- the second images obtained by the sub camera SC when the images of parts (a) to (c) were acquired are also shown.
- since the processing for obtaining a stereoscopic image is not performed, the first image and the second image are completely different from each other, and a stereoscopic image cannot be obtained from them.
- FIG. 28 shows first and second images when the first method is applied to obtain a stereoscopic image.
- FIG. 28A shows the first main camera image taken by the main camera MC
- FIG. 28B shows the second main camera image
- FIG. 28C shows the third main camera image.
- the main camera images are images with different magnifications
- the second images corresponding to these images are also images with different magnifications.
- each new second image is an image including a shift corresponding to the parallax with respect to the corresponding first image.
- in this way, the second image obtained by the sub camera SC, whose resolution and the like originally do not reach those of the main camera MC, becomes a high-quality image at the same level as the first image, and by forming a stereoscopic image from both images, three-dimensional display without a sense of incongruity becomes possible.
- the first method described above is a method of constructing a stereo image only with a stereoscopically viewable area.
- as a modification, a method may be adopted in which the second image is used as the main image and a stereoscopic image is produced only for the area covered by the first image.
- FIG. 29 is a diagram showing the first and second images when processing for obtaining a stereo image is not performed, and in FIG. 29 (a), the first main camera image taken by the main camera MC is shown.
- FIG. 29 (b) shows the second main camera image
- FIG. 29 (c) shows the third main camera image.
- These main camera images are images with different magnifications.
- the second images obtained by the sub camera SC when the images of parts (a) to (c) were acquired are also shown.
- since the processing for obtaining a stereoscopic image is not performed, the first image and the second image are completely different from each other, and a stereoscopic image cannot be obtained from them.
- FIG. 30 shows first and second images when a modification of the first method is applied to obtain a stereoscopic image.
- FIG. 30A shows an image taken by the main camera MC as the first image; it has the same angle of view as the second image taken by the sub camera SC shown in FIG. 29D. Therefore, in this case, the new first image shown in FIG. 30A is generated by shifting the subject PS by the parallax value in the second image shown in FIG. 29D.
- the part indicated as the region R10 uses the data of the second image shown in the part (d) of FIG. 29.
- the image shown in part (e) of FIG. 30 uses the data of the first image shown in part (b) of FIG. 29 for the part indicated as region R11, and the other areas use the data of the second image. Then, by shifting the subject PS by the parallax value in the image shown in part (e) of FIG. 30, the new first image shown in part (b) of FIG. 30 is generated.
- similarly, the image shown in part (f) of FIG. 30 uses the data of the first image shown in part (c) of FIG. 29 for the part indicated as region R12, and the other areas use the data of the second image. Then, by shifting the subject PS by the parallax value in the image shown in part (f) of FIG. 30, the new first image shown in part (c) of FIG. 30 is generated.
- when the second image is used as the main image and the first image has a wider angle of view (part (a) of FIG. 29), the stereoscopic image is as shown in part (d) of FIG. 30 and the entire region can be displayed three-dimensionally; otherwise, only the portion covered by the first image becomes an image that can be displayed three-dimensionally. For this reason, the accuracy of the stereoscopically displayable area is improved.
- when an image as shown in part (b) of FIG. 30 is to be obtained, then for an image PSX of the subject PS not included in the region R11, the image information on the side opposite to the direction in which the region R11 is shifted may, for example, be copied like a mirror, as shown in part (b) of FIG. 31, to obtain a complete image of the subject PS.
- the image information of the neighboring peripheral image PH may be copied like a mirror to the position where the image PSX is located.
- image information may be moved instead of copying.
- the hole may be filled by performing processing such as blurring the image.
- alternatively, by recognizing the entire subject PS as an object and including it in the region R11 as shown in part (c) of FIG. 31, it is possible to prevent only a part of the subject PS from being separated.
- when the image PSX of the subject PS not included in the region R11 is separated, the image PSX may simply be deleted.
- the image information on the side opposite to the direction in which the region R11 is shifted may be copied like a mirror, or the image information may be moved.
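The mirror-copy fill described above can be sketched per scan line; this is a simplified one-dimensional version, and the boundary handling is an assumption:

```python
import numpy as np

def fill_hole_mirror(row, start, end):
    # fill the empty span [start, end) by mirroring the pixels just
    # before it, like reflecting the image at the hole's left edge
    width = end - start
    source = row[start - width:start]
    row[start:end] = source[::-1]
    return row

line = np.array([1, 2, 3, 4, 0, 0])
print(fill_hole_mirror(line, 4, 6).tolist())  # [1, 2, 3, 4, 4, 3]
```

Moving (rather than copying) the information, or blurring the filled span as the text suggests, would be small variations on the same loop.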
- the second method uses, of the first image obtained by the main camera MC and the second image obtained by the sub camera SC, the one with the wider angle of view as the main image, and fits only the stereoscopically viewable region into the main image.
- FIG. 32 is a diagram conceptually showing the second method.
- parts (a) and (b) of FIG. 32 show a first image obtained by the main camera MC and a second image obtained by the sub camera SC, respectively.
- the second image with a wide angle of view is used as the main image, and distance information is acquired from the first image.
- the main image is shown in part (d) of FIG. 32, and the area where the distance information is obtained is schematically shown in part (c) of FIG. 32.
- FIG. 32 (e) shows a new first image obtained in this way.
- the area indicated by area R21 in part (e) of FIG. 32 is an area into which a first image corresponding to the area where the distance information shown in part (c) of FIG. 32 is obtained is fitted.
- in the second method, a new first image is generated by fitting the stereoscopically viewable region into the main image having a wide angle of view, so accurate stereoscopic viewing is possible only in the area indicated by the region R22 in FIG. 32.
- FIG. 33 is a diagram showing the first and second images when the process for obtaining the stereo image is not performed.
- FIG. 33A shows a first main camera image taken by the main camera MC.
- FIG. 33 (b) shows the second main camera image.
- FIG. 33 (c) shows the third main camera image.
- These main camera images are images with different magnifications.
- the second images obtained by the sub camera SC when the images of parts (a) to (c) were acquired are also shown.
- since the processing for obtaining a stereoscopic image is not performed, the first image and the second image are completely different from each other, and a stereoscopic image cannot be obtained from them.
- FIG. 34 shows first and second images when the second method is applied in order to obtain a stereoscopic image.
- FIG. 34 (a) shows an image taken by the main camera MC as the first image; it has a wider angle of view than the second image taken by the sub camera SC shown in FIG. 33 (d). Therefore, in this case, the first image is used as the main image, and the second image is fitted into the first image to generate the new second image shown in part (d) of FIG. 34.
- the region R23 is the region into which the second image taken by the sub camera SC shown in part (d) of FIG. 33 is fitted; the part other than the region R23 is the area of the first image, which is the main image.
- the region R23 corresponds to a region where distance information is obtained.
- the second image captured by the sub camera SC shown in part (e) of FIG. 33 has a wider angle of view than the first image captured by the main camera MC shown in part (b) of FIG. 33. Therefore, in this case, the second image is used as the main image, and the first image is fitted into the second image to generate the new first image shown in part (b) of FIG. 34.
- the region R24 is the region into which the first image taken by the main camera MC shown in part (b) of FIG. 33 is fitted; the part other than the region R24 is the area of the second image, which is the main image.
- the region R24 corresponds to a region where distance information is obtained.
- similarly, the second image taken by the sub camera SC shown in part (f) of FIG. 33 has a wider angle of view than the first image taken by the main camera MC shown in part (c) of FIG. 33, so the second image is used as the main image, and the first image is fitted into the second image to generate the new first image shown in part (c) of FIG. 34.
- the region R25 is the region into which the first image taken by the main camera MC shown in part (c) of FIG. 33 is fitted; the part other than the region R25 is the area of the second image, which is the main image.
- the region R25 corresponds to a region where distance information is obtained.
- in the second method, of the first image obtained by the main camera MC and the second image obtained by the sub camera SC, the one with the wider angle of view is used as the main image, and only the stereoscopically viewable region is fitted into it.
- as a modification, a method may be adopted in which only the region that can be viewed stereoscopically is configured as a stereoscopic image, and for the other regions an image is generated so that pseudo three-dimensional data is displayed.
- FIG. 35 is a diagram conceptually showing a first modification of the second method.
- parts (a) and (b) of FIG. 35 show a first image obtained by the main camera MC and a second image obtained by the sub camera SC, respectively.
- the second image having a wide angle of view is used as the main image, and distance information is acquired from the first image.
- the main image is shown in part (d) of FIG. 35, and the area where the distance information is obtained is schematically shown in part (c) of FIG. 35.
- FIG. 35 (e) shows a new first image obtained in this way.
- a region indicated by a region R31 is a region into which the first image is fitted.
- part (f) of FIG. 35 schematically shows the area where distance information is not obtained; this portion is displayed as a pseudo three-dimensional region.
- the region indicated by R32 in the second image shown in part (h) of FIG. 35 is a region where stereoscopic viewing is possible. Note that since the regions other than those for which distance information was actually calculated also have parallax, those regions must also be shifted when creating a stereoscopic image.
- that is, the image in the region R31 is shifted by its parallax amount, and the region other than the region R31 is also shifted by its parallax amount. In the example of part (g) of FIG. 35, the shift is to the left as viewed in the figure; as a result, a region NR where no valid data exists appears at the right end of the figure.
- creation of pseudo three-dimensional data is disclosed in, for example, Japanese Patent Application Laid-Open No. 2005-151534.
- This document discloses a method of creating depth estimation data from a two-dimensional image to which no distance information is given, and generating a pseudo-stereoscopic image using the depth estimation data. It is disclosed that depth estimation data is created using a plurality of basic depth models each indicating a depth value.
- the obtained distance information may be used to determine an appropriate one from a plurality of basic depth models (curved surface models) for generating a pseudo stereoscopic image.
- <Modification 2 of the second method> In the second method, of the first image obtained by the main camera MC and the second image obtained by the sub camera SC, the one having the wider angle of view is set as the main image, and the area where distance information is actually acquired is fitted into the main image as an area that can be stereoscopically viewed; alternatively, a configuration may be used in which pseudo parallax information is given to the areas other than the area where the distance information is actually acquired.
- FIG. 36 is a diagram conceptually showing a second modification of the second method.
- FIG. 36 shows the first and second images when the second modification of the second method is applied to obtain a stereoscopic image.
- FIG. 36 (a) shows an image taken by the main camera MC as the first image; it has a wider angle of view than the second image taken by the sub camera SC shown in FIG. 33 (d). Accordingly, in this case, the first image is used as the main image, and the second image is fitted into the first image to generate the new second image shown in part (d) of FIG. 36.
- the region R23 is the region into which the second image taken by the sub camera SC shown in part (d) of FIG. 33 is fitted, and the portion other than the region R23 is a region of the first image to which parallax information is given in a pseudo manner.
- that is, the region R23 is a region where distance information is actually obtained, while the other regions also have pseudo parallax information, so the portion indicated by the region R231 becomes an area that can be viewed in stereo.
- the second image captured by the sub camera SC shown in part (e) of FIG. 33 has a wider angle of view than the first image captured by the main camera MC shown in part (b) of FIG. 33. Therefore, in this case, the second image is set as the main image, and the first image is fitted into the second image to generate the new first image shown in part (b) of FIG. 36.
- the region R24 is the region into which the first image taken by the main camera MC shown in part (b) of FIG. 33 is fitted, and the portion other than the region R24 is a region of the second image to which parallax information is given in a pseudo manner.
- that is, the region R24 is a region where distance information is actually obtained, while the other regions also have pseudo parallax information, so the portion indicated by the region R241 becomes an area that can be viewed in stereo.
- similarly, the second image captured by the sub camera SC shown in part (f) of FIG. 33 has a larger angle of view than the first image taken by the main camera MC shown in part (c) of FIG. 33, so the second image is used as the main image, and the first image is fitted into the second image to generate the new first image shown in part (c) of FIG. 36.
- the region R25 is the region into which the first image taken by the main camera MC shown in part (c) of FIG. 33 is fitted, and the portion other than the region R25 is a region of the second image to which parallax information is given in a pseudo manner.
- that is, the region R25 is a region where distance information is actually obtained, while the other regions also have pseudo parallax information, so the portion indicated by the region R251 becomes an area that can be viewed in stereo.
- the pseudo parallax information may be given as a variation value that varies depending on the distance (the disparity given becomes smaller as the distance increases), or may be given as a uniform fixed value that is independent of the distance.
- when a fixed value is used, if a parallax value corresponding to an object in front of the subject is given, an uncomfortable image results when displayed three-dimensionally, so the parallax value is desirably as small as possible.
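A distance-dependent pseudo parallax as described above might be sketched as an inverse-distance law clipped to a small maximum; the functional form and constants are illustrative assumptions, since the text only requires that the value shrink with distance and stay small:

```python
def pseudo_disparity(depth, k=4.0, max_disp=3.0):
    # disparity decreases as the estimated distance grows; the clip
    # keeps the value small so nearby objects are not popped forward
    return min(max_disp, k / depth)

print(pseudo_disparity(2.0))   # 2.0
print(pseudo_disparity(16.0))  # 0.25
```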
- When the first image is set as the main image, the second image taken by the sub camera SC, shown in part (d) of FIG. 33, is fitted into the first image.
- At that time, the image in the region R23 is shifted by the amount of parallax, and the region other than the region R23 is also shifted by the same amount of parallax.
- In part (d) of FIG. 33 the shift is rightward as viewed in the figure, whereas in part (c) of FIG. 36 it is leftward; as a result, a region NR in which no valid data exists appears at the right end of the figure.
- The third method creates pseudo three-dimensional data using the acquired distance information as auxiliary data.
- FIG. 37 is a diagram conceptually showing the third method. Parts (a) and (b) of FIG. 37 show a first image obtained by the main camera MC and a second image obtained by the sub camera SC, respectively.
- The second image, which has the wider angle of view, is used as the main image, and distance information is acquired from the first image. A new second image, obtained from pseudo three-dimensional data created using this distance information as auxiliary data, is shown schematically in part (d) of FIG. 37. In this way, an image that can be viewed in stereo can be generated.
- Japanese Patent Laid-Open No. 2005-151534, mentioned above, discloses a technique in which a depth model indicating depth values is prepared in advance and depth estimation data is created using the model. By instead creating pseudo three-dimensional data with the acquired distance information as auxiliary data, no depth model needs to be prepared in advance.
- The fourth method uses, as the left and right images for stereo viewing, images whose baseline length has been changed according to the zoom magnification.
- FIG. 38 is a diagram showing a captured scene when the stereo camera VC1 is used in the horizontal position. It is a schematic view of the subject PS and the background BG as seen from above: the vertical axis indicates the distance to the subject PS and the background BG, and the horizontal axis indicates the horizontal position with the optical axis of the main camera MC as the origin, together with the horizontal angles of view of the main camera MC and the sub camera SC.
- Considering that zoom shooting is equivalent to shooting with the position of the stereo camera VC1 changed, FIG. 38 schematically shows examples in which the position of the stereo camera VC1 relative to the subject PS is changed in various ways.
- Such zooming is needed when generating a stereoscopic image; however, depending on the zoom magnification, the subject may fall outside the angle of view. In that case, the generated image size may be increased.
- FIG. 39 shows images obtained from the captured scene shown in FIG. 38.
- Part (a) of FIG. 39 shows the main camera image taken by the main camera MC, and part (b) of FIG. 39 shows the image obtained by the sub camera SC when the main camera image of part (a) was acquired.
- Parts (c) and (d) of FIG. 39 show images obtained by changing the baseline length for these images: part (c) shows the image obtained by the main camera MC when the baseline length is not changed, and part (d) shows the first image obtained by the main camera MC when the baseline length is increased.
- As shown, the subject PS may fall outside the angle of view when the baseline length is increased. In that case, an image of the portion protruding in part (d) of FIG. 39 is created, and processing to increase the image size is performed. The protruding portion can be created based on the image obtained by the main camera MC when the baseline length is not changed.
- The image shown in part (c) of FIG. 39 and the image shown in part (e) of FIG. 39 are then used as the pair. Because the image of part (e) of FIG. 39 is wider, the image sizes of the left and right images differ. Therefore, when an image such as that of part (e) of FIG. 39 is created, some image content is added to the image shown in part (c) of FIG. 39 to increase its size.
- As the method of adding image content, a region NR in which no valid data exists may be added, or the edge of the image may be copied like a mirror.
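The two padding options just described can be sketched as follows (the function name and the zero fill used for the no-data region are illustrative assumptions):

```python
import numpy as np

def pad_right(img, pad, mode="mirror"):
    """Grow the image on the right so the left and right images match
    in size. mode="nodata" appends a region NR with no valid data
    (filled with zeros here); mode="mirror" copies the right edge of
    the image like a mirror.
    """
    h = img.shape[0]
    if mode == "nodata":
        fill = np.zeros((h, pad) + img.shape[2:], dtype=img.dtype)
    else:
        fill = img[:, -1:-pad - 1:-1]  # last `pad` columns, reversed
    return np.concatenate([img, fill], axis=1)
```

Either variant yields left and right images of equal width, which is what the stereo pair requires.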
- By using the first to fourth methods described above, three-dimensional display can be performed without a sense of incongruity.
- The first to fourth methods described above may be used alone or in combination.
- In particular, by combining the first method with the fourth method, a three-dimensional display that does not look strange can be performed even when the image is zoomed.
- The stereo camera VC1 may also be used in the horizontal position, and a stereo image may be generated using the first and second images thus obtained.
- Parts (b) and (c) of FIG. 40 show the first image obtained by the main camera MC and the second image obtained by the sub camera SC, respectively, and part (d) of FIG. 40 shows a new high-resolution second image obtained from both images.
- A new second image is created by increasing the resolution of the second image, and three-dimensional display can be performed by using the first image and the new second image as the stereo pair.
- The stereo camera VC1 may have a sensor that detects the vertical and horizontal placement states; in the horizontal placement state, the circuit connections may be changed so that the system generates the stereo pair without using distance information.
- Where the video obtained by the main camera MC and the video obtained by the sub camera SC cannot be matched because of this, the missing portions can be estimated using a pseudo-stereoscopic image technique, or may be interpolated from these images.
Abstract
Description
First, an overview of the present invention will be described with reference to FIGS. 1 to 6.
FIG. 7 is a block diagram showing the configuration of the stereo camera VC1. As shown in FIG. 7, the main camera MC and the sub camera SC are connected to a shooting information acquisition unit 1, and shooting information is supplied to the shooting information acquisition unit 1 together with the image data obtained by each camera. One of the images obtained by the main camera MC and the sub camera SC is then supplied, as the main image, to a main image acquisition unit 2 via the shooting information acquisition unit 1. The image data obtained by the main camera MC and the sub camera SC are also supplied to a distance information acquisition unit 3 via the shooting information acquisition unit 1, where distance information is acquired. Known camera information is supplied to the distance information acquisition unit 3 from a camera information storage unit 6 for use in acquiring the distance information.
The shooting information acquisition unit 1 acquires the shooting information in effect when images such as those shown in FIG. 6 are captured. The shooting information acquired here consists of parameters that may vary at shooting time, such as the zoom magnification, the focal length, and the angle of view. However, not all of the zoom magnification, focal length, and angle of view are required: given any one of them, the others can be calculated.
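The relationship that lets one of these parameters determine the others can be sketched as follows; the 36 mm sensor width is an illustrative assumption, not a value from the patent:

```python
import math

def angle_of_view_deg(focal_mm, sensor_width_mm=36.0):
    """Horizontal angle of view for a given focal length:
    theta = 2 * atan(sensor_width / (2 * f))."""
    return math.degrees(2.0 * math.atan(sensor_width_mm / (2.0 * focal_mm)))

def zoom_magnification(focal_mm, wide_end_mm):
    """Zoom magnification relative to the wide end of the lens."""
    return focal_mm / wide_end_mm
```

Given the sensor geometry, knowing any one of zoom magnification, focal length, or angle of view fixes the other two, as the text states.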
Next, the acquisition of distance information in the distance information acquisition unit 3 will be described. FIG. 8 is a flowchart showing the processing procedure from image acquisition to distance information acquisition.
Next, the process of setting the corresponding point search region will be described with reference to FIGS. 9 to 18.
The first method determines the region by template matching. Resolution conversion is applied to one of the first image obtained by the main camera MC and the second image obtained by the sub camera SC to create a plurality of template images; pattern matching is then performed against the other image, and the region with the highest degree of matching is determined as the corresponding point search region.
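A minimal sketch of this first method, using zero-mean normalized cross-correlation as the matching score; the score function, the nearest-neighbour resizing, and the exhaustive sliding search are simplifying assumptions, not details from the patent:

```python
import numpy as np

def match_score(patch, template):
    """Zero-mean normalized cross-correlation between equal-size arrays."""
    a = patch - patch.mean()
    b = template - template.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    if denom == 0.0:
        return 0.0
    return float((a * b).sum() / denom)

def find_search_region(template, image, scales=(0.5, 1.0)):
    """Resize the template to several scales (emulating the resolution
    conversion), slide each over the other image, and return the
    (x, y, w, h) region with the highest matching score."""
    best_score, best_region = -2.0, None
    H, W = image.shape
    for s in scales:
        th = max(1, int(template.shape[0] * s))
        tw = max(1, int(template.shape[1] * s))
        rows = np.linspace(0, template.shape[0] - 1, th).astype(int)
        cols = np.linspace(0, template.shape[1] - 1, tw).astype(int)
        t = template[np.ix_(rows, cols)].astype(float)
        for y in range(H - th + 1):
            for x in range(W - tw + 1):
                sc = match_score(image[y:y + th, x:x + tw].astype(float), t)
                if sc > best_score:
                    best_score, best_region = sc, (x, y, tw, th)
    return best_region
```

A production implementation would use a pyramid and an optimized matcher, but the structure — multiple scaled templates, pick the best-matching region — is the same.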
The second method determines the region by object recognition applied to the first image obtained by the main camera MC and the second image obtained by the sub camera SC. For example, pattern recognition is used to determine object candidate regions in the first and second images, the largest object region among the candidates is identified, and the corresponding point search region is determined based on the size of that object region. The largest object region can be identified by computing, in each image, the total number of pixels of each object region and comparing the object regions.
The third method determines the region by the template matching of the first method, using the object region identified by the object recognition described in the second method as the template.
The fourth method transforms the first image obtained by the main camera MC and the second image obtained by the sub camera SC so that the other image coincides with the optical axis center of one image; after the transformation, one image is converted to an image size that fits the image sensor of the camera that captured the other image, and the two are superimposed to determine the corresponding point search region.
The fifth method determines the corresponding point search region by restricting the region of the first image obtained by the main camera MC and the second image obtained by the sub camera SC according to the ratio of the vertical angles of view.
The sixth method determines the corresponding point search region using epipolar lines. That is, when a feature point in three-dimensional space is captured with two cameras, the point, the lens centers of the two cameras, and the projections of the feature point onto the two image planes obtained by the respective cameras all lie on a single plane. This plane is called the epipolar plane, the lines of intersection between the epipolar plane and the respective images are called epipolar lines, and the point in each image where the epipolar lines intersect is called the epipole.
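The epipolar constraint described above can be expressed compactly: given the fundamental matrix F of the camera pair (assumed known, e.g. from calibration — the patent itself does not specify how it is obtained), the corresponding point of an image point x lies on the line F·x in the other image, so the search becomes one-dimensional:

```python
import numpy as np

def epipolar_line(F, point):
    """Map a point (u, v) in the first image to its epipolar line
    (a, b, c), satisfying a*u' + b*v' + c = 0, in the second image.
    The corresponding point search can be limited to this 1-D line."""
    l = F @ np.array([point[0], point[1], 1.0])
    return l / np.hypot(l[0], l[1])  # scale so (a, b) has unit length
```

For a rectified pair the epipolar lines are simply the horizontal scan lines, which is why rectification is a common preprocessing step before matching.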
In the first to sixth methods described above, which of the first image obtained by the main camera MC and the second image obtained by the sub camera SC is used as the main image is decided as appropriate; the main image may be either the first image or the second image.
Next, the corresponding point search process will be described with reference to FIGS. 19 to 23.
The first method sets the reference position on the base image at a sub-pixel position, finer than the pixel size, and finds the corresponding point on the reference image for that position.
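One common way to obtain such sub-pixel precision is to fit a parabola through the matching costs around the best integer position; this particular fitting method is an illustrative choice, not one specified by the patent:

```python
def subpixel_offset(c_prev, c_best, c_next):
    """Fit a parabola through the matching costs at the best integer
    position and its two neighbours; return the sub-pixel offset
    (between -0.5 and 0.5) of the true minimum."""
    denom = c_prev - 2.0 * c_best + c_next
    if denom == 0.0:
        return 0.0  # degenerate (flat) cost curve
    return 0.5 * (c_prev - c_next) / denom
```

Adding the returned offset to the integer match position yields a disparity finer than the pixel grid, which in turn yields finer distance estimates.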
The second method compares the numbers of pixels within the corresponding point search regions of the first image obtained by the main camera MC and the second image obtained by the sub camera SC, and uses the image with the larger number of pixels as the base image.
The third method sets a coarse default interval for the corresponding point search and, when information indicating that zoom shooting was performed is acquired as camera information, changes the sampling interval of the corresponding points so that the search interval matches the degree of zoom. This is because, when the lens magnification increases due to zooming and the corresponding point search region becomes smaller, the corresponding point search must be performed at smaller intervals.
The fourth method compares the numbers of pixels within the corresponding point search regions of the first image obtained by the main camera MC and the second image obtained by the sub camera SC, and sets the sampling interval to match the image with the smaller number of pixels.
As described earlier, when a variable magnification lens such as a foveal lens, a fisheye lens, or an anamorphic lens is used as the lens of the sub camera SC, the sizes of the corresponding point search regions may differ drastically between the first image obtained by the main camera MC and the second image obtained by the sub camera SC.
Parallax information can be obtained from the relationship between the point of interest and the corresponding point found by the corresponding point search described above, and distance information can be acquired from that parallax information.
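For a parallel stereo setup, this disparity-to-distance conversion follows the standard triangulation relation Z = f·B/d. The sketch below assumes that arrangement; the parameter names are illustrative, with f and B coming from the known camera information:

```python
def distance_from_disparity(disparity_px, focal_px, baseline_m):
    """Triangulate the distance to a point from the disparity between
    the point of interest and its matched corresponding point:
    Z = f * B / d (parallel stereo geometry assumed)."""
    if disparity_px <= 0.0:
        raise ValueError("disparity must be positive")
    return focal_px * baseline_m / disparity_px
```

Note the inverse relation: larger disparities correspond to nearer points, which is also why the pseudo parallax given to distant regions is kept small.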
Next, a method by which the left/right image generation unit 4 obtains a stereo image, based on the distance information obtained by the distance information acquisition unit 3 and the main image obtained by the main image acquisition unit 2, will be described.
The first method generates, from the second image obtained by the sub camera SC, a new second image of image quality as high as that of the first image, based on the distance information.
The first method described above composes the stereo image only from the stereoscopically viewable region, but a method may also be adopted in which the second image is used as the main image and the stereo image is generated only from the region covered by the first image.
The second method uses, as the main image, whichever of the first image obtained by the main camera MC and the second image obtained by the sub camera SC has the wider angle of view, and fits only the stereoscopically viewable region into the main image.
The second method described above fits only the stereoscopically viewable region into the wider-angle main image, but a method may also be adopted in which only the stereoscopically viewable region is composed as a stereo image while, for the other regions, the image is generated so as to display pseudo three-dimensional data.
Likewise, instead of fitting only the region where distance information was actually acquired into the main image as the stereoscopically viewable region, a configuration may be adopted in which pseudo parallax information is given to the regions other than the region where distance information was actually acquired.
The third method creates pseudo three-dimensional data using the acquired distance information as auxiliary data.
The fourth method uses, as the left and right images for stereo viewing, images in which the baseline length has been changed according to the zoom magnification.
The embodiments of the present invention described above presuppose the use of distance information acquired from the first or second image, but the stereo image may also be generated without using distance information.
In the stereo camera described above, if the photographer accidentally covers the sub camera SC with a hand, this may go unnoticed because the video is monitored through the main camera MC alone. For such cases, the camera may have a function that compares the video obtained by the main camera MC with the video obtained by the sub camera SC, detects the similarity between the two images, and issues a warning when there is no similarity at all.
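One simple way such a similarity check could be realized is to compare coarse intensity histograms of the two camera images; the histogram correlation measure and the threshold below are illustrative assumptions, not details from the patent:

```python
import numpy as np

def images_similar(img_a, img_b, bins=16, threshold=0.2):
    """Compare coarse intensity histograms of the two camera images.
    If the correlation falls below `threshold`, the sub camera may be
    blocked (e.g. by a finger) and a warning should be raised."""
    ha, _ = np.histogram(img_a, bins=bins, range=(0, 256), density=True)
    hb, _ = np.histogram(img_b, bins=bins, range=(0, 256), density=True)
    ha, hb = ha - ha.mean(), hb - hb.mean()
    denom = np.sqrt((ha * ha).sum() * (hb * hb).sum())
    if denom == 0:
        return True  # both histograms uniform; nothing to compare
    return float((ha * hb).sum() / denom) >= threshold
```

A histogram comparison is deliberately insensitive to the viewpoint and magnification differences between the two cameras, so it flags only gross mismatches such as a fully occluded lens.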
2 Main image acquisition unit
3 Distance information acquisition unit
4 Left/right image generation unit
MC Main camera
SC Sub camera
Claims (23)
- 1. A stereo camera comprising: a first imaging unit that captures a first image; a second imaging unit that has camera parameters different from those of the first imaging unit and captures a second image; a distance information acquisition unit that associates the pixels of the first and second images with each other to acquire distance information including parallax information; and a left/right image generation unit that generates a stereo image based on one of the first and second images and the distance information.
- 2. The stereo camera according to claim 1, wherein the distance information acquisition unit determines, based on shooting information at the time of capturing the first and second images, a corresponding point search region in which a corresponding point search is performed.
- 3. The stereo camera according to claim 2, wherein the distance information acquisition unit performs template matching between the first and second images, using one as a template against the other, and determines the region with the highest degree of matching as the corresponding point search region.
- 4. The stereo camera according to claim 3, wherein, when the shooting magnifications of the first and second images differ, the distance information acquisition unit creates the template by performing resolution conversion on the image used as the template.
- 5. The stereo camera according to claim 2, wherein the distance information acquisition unit extracts object candidate regions from the first and second images by object recognition, compares the obtained object candidate regions with each other, and determines the region with the highest degree of matching as the corresponding point search region.
- 6. The stereo camera according to claim 2, wherein the distance information acquisition unit sets, on one of the first and second images, the optical axis center of the other image, aligns the optical axis center thus set with the optical axis center of the other image, converts the one image to an image size that fits the image sensor with which the other image was captured, and superimposes the two images to determine the corresponding point search region.
- 7. The stereo camera according to claim 2, wherein the distance information acquisition unit determines the corresponding point search region for the first and second images by the ratio of the vertical angles of view about the optical axis center.
- 8. The stereo camera according to claim 2, wherein the distance information acquisition unit determines, for the first and second images, a one-dimensional region along an epipolar line as the corresponding point search region.
- 9. The stereo camera according to claim 2, wherein, for each of the corresponding point search regions of the first and second images, the distance information acquisition unit performs the corresponding point search with the reference position in the region on the base side of the search set to a sub-pixel position finer than the pixel size.
- 10. The stereo camera according to claim 2, wherein the distance information acquisition unit performs the corresponding point search using, as the base image, the one of the first and second images whose corresponding point search region has the larger number of pixels.
- 11. The stereo camera according to claim 2, wherein, when zoom shooting is performed at the time of capturing the first and second images, the distance information acquisition unit changes the sampling interval of the corresponding points according to the degree of zoom.
- 12. The stereo camera according to claim 11, wherein the distance information acquisition unit makes the sampling interval of the corresponding points smaller as the zoom magnification increases.
- 13. The stereo camera according to claim 2, wherein the distance information acquisition unit changes the sampling interval of the corresponding points to match the smaller of the numbers of pixels of the corresponding point search regions of the first and second images.
- 14. The stereo camera according to claim 2, wherein the distance information acquisition unit performs the corresponding point search with the aspect ratio of the window used for the search set so that, when the window is applied to the object plane, the aspect ratio on the object plane is isotropic.
- 15. The stereo camera according to claim 2, wherein the first imaging unit has a higher resolution than the second imaging unit, and the left/right image generation unit uses the first image as the main image, shifts the image of the corresponding point search region in the second image according to the parallax information to generate a new second image, and uses the first image and the new second image as the left and right images.
- 16. The stereo camera according to claim 15, wherein, when the number of pixels in the corresponding point search region of the second image is smaller than the number of pixels in the corresponding point search region of the first image, the left/right image generation unit supplements pixel information from the first image.
- 17. The stereo camera according to claim 2, wherein the first imaging unit has a higher resolution than the second imaging unit, and, when the shooting magnification of the second image is higher than that of the first image, the left/right image generation unit generates a new second image by fitting the second image into the first image and uses the first image and the new second image as the left and right images.
- 18. The stereo camera according to claim 2, wherein the first imaging unit has a higher resolution than the second imaging unit, and, when the shooting magnification of the first image is higher than that of the second image, the left/right image generation unit generates a new first image by fitting the first image into the second image and uses the new first image and the second image as the left and right images.
- 19. The stereo camera according to claim 2, wherein the first imaging unit has a zoom function, and, when the first image is a zoomed image, the left/right image generation unit generates a new first image by generating an image whose baseline length is changed according to the zoom magnification.
- 20. The stereo camera according to claim 19, wherein, when generating the new first image, the left/right image generation unit changes the image size so that the subject remains within the image even when the baseline length is changed.
- 21. The stereo camera according to claim 1, wherein the lens of the second imaging unit is a foveal lens.
- 22. The stereo camera according to claim 1, wherein the lens of the second imaging unit is an anamorphic lens.
- 23. The stereo camera according to claim 1, wherein the first imaging unit has a higher resolution than the second imaging unit; the stereo camera further comprises a sensor that senses that the stereo camera is placed horizontally, with the first and second imaging units arranged parallel to the horizontal plane, and the operation of the distance information acquisition unit is stopped when the horizontal placement is sensed; and the left/right image generation unit creates a new second image by giving the information of the first image to the second image and uses the first image and the new second image as the left and right images.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
US13/574,952 US9109891B2 (en) | 2010-02-02 | 2011-01-12 | Stereo camera |
EP11739600.2A EP2533541A4 (en) | 2010-02-02 | 2011-01-12 | STEREO CAMERA |
JP2011552721A JP5472328B2 (ja) | 2010-02-02 | 2011-01-12 | ステレオカメラ |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010021152 | 2010-02-02 | ||
JP2010-021152 | 2010-02-02 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011096251A1 true WO2011096251A1 (ja) | 2011-08-11 |
Family
ID=44355261
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/050319 WO2011096251A1 (ja) | 2010-02-02 | 2011-01-12 | ステレオカメラ |
Country Status (4)
Country | Link |
---|---|
US (1) | US9109891B2 (ja) |
EP (1) | EP2533541A4 (ja) |
JP (1) | JP5472328B2 (ja) |
WO (1) | WO2011096251A1 (ja) |
Families Citing this family (30)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US9171221B2 (en) | 2010-07-18 | 2015-10-27 | Spatial Cam Llc | Camera to track an object |
US10896327B1 (en) | 2013-03-15 | 2021-01-19 | Spatial Cam Llc | Device with a camera for locating hidden object |
US9736368B2 (en) | 2013-03-15 | 2017-08-15 | Spatial Cam Llc | Camera in a headframe for object tracking |
US10354407B2 (en) | 2013-03-15 | 2019-07-16 | Spatial Cam Llc | Camera for locating hidden objects |
US11119396B1 (en) | 2008-05-19 | 2021-09-14 | Spatial Cam Llc | Camera system with a plurality of image sensors |
US10585344B1 (en) | 2008-05-19 | 2020-03-10 | Spatial Cam Llc | Camera system with a plurality of image sensors |
US8619148B1 (en) | 2012-01-04 | 2013-12-31 | Audience, Inc. | Image correction after combining images from multiple cameras |
WO2013108554A1 (ja) * | 2012-01-17 | 2013-07-25 | コニカミノルタ株式会社 | 画像処理装置、画像処理方法および画像処理プログラム |
JP5773944B2 (ja) * | 2012-05-22 | 2015-09-02 | 株式会社ソニー・コンピュータエンタテインメント | 情報処理装置および情報処理方法 |
DE102012014994B4 (de) * | 2012-07-28 | 2024-02-22 | Volkswagen Aktiengesellschaft | Bildverarbeitungsverfahren für eine digitale Stereokameraanordnung |
US9142019B2 (en) | 2013-02-28 | 2015-09-22 | Google Technology Holdings LLC | System for 2D/3D spatial feature processing |
US20150145950A1 (en) * | 2013-03-27 | 2015-05-28 | Bae Systems Information And Electronic Systems Integration Inc. | Multi field-of-view multi sensor electro-optical fusion-zoom camera |
US9646384B2 (en) | 2013-09-11 | 2017-05-09 | Google Technology Holdings LLC | 3D feature descriptors with camera pose information |
TWI508526B (zh) * | 2013-10-01 | 2015-11-11 | Wistron Corp | 產生視角平移影像之方法及其可攜式電子設備 |
JP6249825B2 (ja) * | 2014-03-05 | 2017-12-20 | キヤノン株式会社 | 撮像装置、その制御方法、および制御プログラム |
KR102221036B1 (ko) * | 2014-09-15 | 2021-02-26 | 엘지전자 주식회사 | 이동단말기 및 그 제어방법 |
GB2541101A (en) * | 2015-06-23 | 2017-02-08 | Bosch Gmbh Robert | Method and camera system for determining the distance of objects in relation to a vehicle |
JP6648916B2 (ja) * | 2015-07-27 | 2020-02-14 | キヤノン株式会社 | 撮像装置 |
EP3323237A4 (en) | 2015-08-26 | 2019-07-31 | Zhejiang Dahua Technology Co., Ltd | METHODS AND SYSTEMS FOR MONITORING TRAFFIC |
JP2017069926A (ja) * | 2015-10-02 | 2017-04-06 | ソニー株式会社 | 画像処理装置、および画像処理方法、並びにプログラム |
KR102482067B1 (ko) * | 2015-11-27 | 2022-12-28 | 삼성전자주식회사 | 전자 장치 및 그의 동작 방법 |
WO2018147329A1 (ja) * | 2017-02-10 | 2018-08-16 | パナソニック インテレクチュアル プロパティ コーポレーション オブ アメリカ | 自由視点映像生成方法及び自由視点映像生成システム |
KR20190013224A (ko) * | 2017-08-01 | 2019-02-11 | 엘지전자 주식회사 | 이동 단말기 |
EP3486606A1 (de) * | 2017-11-20 | 2019-05-22 | Leica Geosystems AG | Stereokamera und stereophotogrammetrisches verfahren |
US10721419B2 (en) * | 2017-11-30 | 2020-07-21 | International Business Machines Corporation | Ortho-selfie distortion correction using multiple image sensors to synthesize a virtual image |
US10789725B2 (en) * | 2018-04-22 | 2020-09-29 | Cnoga Medical Ltd. | BMI, body and other object measurements from camera view display |
EP3825649A4 (en) * | 2018-07-18 | 2022-04-20 | Mitsumi Electric Co., Ltd. | DISTANCE CAMERA |
US10452959B1 (en) * | 2018-07-20 | 2019-10-22 | Synapse Tehnology Corporation | Multi-perspective detection of objects |
JP7150508B2 (ja) * | 2018-07-24 | 2022-10-11 | 株式会社東芝 | 鉄道車両用撮像システム |
US11010605B2 (en) | 2019-07-30 | 2021-05-18 | Rapiscan Laboratories, Inc. | Multi-model detection of objects |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH10134187A (ja) * | 1996-10-31 | 1998-05-22 | Nec Corp | 三次元構造推定装置 |
JP2000102040A (ja) * | 1998-09-28 | 2000-04-07 | Olympus Optical Co Ltd | 電子ステレオカメラ |
JP2001346226A (ja) * | 2000-06-02 | 2001-12-14 | Canon Inc | 画像処理装置、立体写真プリントシステム、画像処理方法、立体写真プリント方法、及び処理プログラムを記録した媒体 |
JP2006093859A (ja) * | 2004-09-21 | 2006-04-06 | Olympus Corp | 2眼撮像系を搭載したカメラ及びステレオ撮影可能なカメラ |
JP2008092007A (ja) * | 2006-09-29 | 2008-04-17 | Fujifilm Corp | 撮影装置 |
Family Cites Families (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1418766A3 (en) | 1998-08-28 | 2010-03-24 | Imax Corporation | Method and apparatus for processing images |
JP2002071309A (ja) * | 2000-08-24 | 2002-03-08 | Asahi Optical Co Ltd | 3次元画像検出装置 |
US6915008B2 (en) * | 2001-03-08 | 2005-07-05 | Point Grey Research Inc. | Method and apparatus for multi-nodal, three-dimensional imaging |
JP2004297540A (ja) | 2003-03-27 | 2004-10-21 | Sharp Corp | 立体映像記録再生装置 |
JP2005210217A (ja) | 2004-01-20 | 2005-08-04 | Olympus Corp | ステレオカメラ |
US20080002878A1 (en) * | 2006-06-28 | 2008-01-03 | Somasundaram Meiyappan | Method For Fast Stereo Matching Of Images |
JP4668863B2 (ja) * | 2006-08-01 | 2011-04-13 | 株式会社日立製作所 | 撮像装置 |
JP4958233B2 (ja) | 2007-11-13 | 2012-06-20 | 学校法人東京電機大学 | 多眼視画像作成システム及び多眼視画像作成方法 |
-
2011
- 2011-01-12 EP EP11739600.2A patent/EP2533541A4/en not_active Ceased
- 2011-01-12 WO PCT/JP2011/050319 patent/WO2011096251A1/ja active Application Filing
- 2011-01-12 JP JP2011552721A patent/JP5472328B2/ja not_active Expired - Fee Related
- 2011-01-12 US US13/574,952 patent/US9109891B2/en not_active Expired - Fee Related
Non-Patent Citations (1)
Title |
---|
See also references of EP2533541A4 * |
Cited By (23)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2012156775A (ja) * | 2011-01-26 | 2012-08-16 | Toshiba Corp | カメラモジュール |
EP2757789A4 (en) * | 2011-09-16 | 2016-01-20 | Konica Minolta Inc | IMAGE PROCESSING SYSTEM, IMAGE PROCESSING METHOD AND PICTURE PROCESSING PROGRAM |
JP5414947B2 (ja) * | 2011-12-27 | 2014-02-12 | パナソニック株式会社 | ステレオ撮影装置 |
US9204128B2 (en) | 2011-12-27 | 2015-12-01 | Panasonic Intellectual Property Management Co., Ltd. | Stereoscopic shooting device |
US9142010B2 (en) * | 2012-01-04 | 2015-09-22 | Audience, Inc. | Image enhancement based on combining images from multiple cameras |
US20160065862A1 (en) * | 2012-01-04 | 2016-03-03 | Audience, Inc. | Image Enhancement Based on Combining Images from a Single Camera |
JPWO2014141654A1 (ja) * | 2013-03-13 | 2017-02-16 | パナソニックIpマネジメント株式会社 | 測距装置、撮像装置および測距方法 |
WO2014141654A1 (ja) * | 2013-03-13 | 2014-09-18 | パナソニック株式会社 | 測距装置、撮像装置および測距方法 |
EP2911392A1 (en) | 2014-02-25 | 2015-08-26 | Ricoh Company, Ltd. | Parallax calculation system, information processing apparatus, information processing method, and program |
WO2016047510A1 (ja) * | 2014-09-26 | 2016-03-31 | 株式会社 明電舎 | 線条計測装置及びその方法 |
JP2016065838A (ja) * | 2014-09-26 | 2016-04-28 | 株式会社明電舎 | 線条計測装置及びその方法 |
JP2016192105A (ja) * | 2015-03-31 | 2016-11-10 | 公益財団法人鉄道総合技術研究所 | ステレオ画像処理方法およびその装置 |
JP2017028611A (ja) * | 2015-07-27 | 2017-02-02 | キヤノン株式会社 | 撮像装置 |
JP2017208606A (ja) * | 2016-05-16 | 2017-11-24 | キヤノン株式会社 | 画像処理装置、撮像装置、画像処理方法および画像処理プログラム |
US11032533B2 (en) | 2016-05-16 | 2021-06-08 | Canon Kabushiki Kaisha | Image processing apparatus, image capturing apparatus, image processing method, and storage medium |
WO2020090358A1 (ja) * | 2018-11-01 | 2020-05-07 | ミツミ電機株式会社 | 測距カメラ |
JP2020071160A (ja) * | 2018-11-01 | 2020-05-07 | ミツミ電機株式会社 | 測距カメラ |
US11385053B2 (en) | 2018-11-01 | 2022-07-12 | Mitsumi Electric Co., Ltd. | Distance measuring camera |
JP7132501B2 (ja) | 2018-11-01 | 2022-09-07 | ミツミ電機株式会社 | 測距カメラ |
WO2020170861A1 (ja) * | 2019-02-21 | 2020-08-27 | ソニーセミコンダクタソリューションズ株式会社 | イベント信号検出センサ及び制御方法 |
US11741193B2 (en) | 2020-12-02 | 2023-08-29 | Yamaha Hatsudoki Kabushiki Kaisha | Distance recognition system for use in marine vessel, control method thereof, and marine vessel |
WO2023276229A1 (ja) * | 2021-06-30 | 2023-01-05 | 日立Astemo株式会社 | 距離測定装置 |
WO2024004190A1 (ja) * | 2022-06-30 | 2024-01-04 | 富士通株式会社 | 3次元位置算出方法、装置、及びプログラム |
Also Published As
Publication number | Publication date |
---|---|
JP5472328B2 (ja) | 2014-04-16 |
EP2533541A4 (en) | 2013-10-16 |
US9109891B2 (en) | 2015-08-18 |
US20120293633A1 (en) | 2012-11-22 |
EP2533541A1 (en) | 2012-12-12 |
JPWO2011096251A1 (ja) | 2013-06-10 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5472328B2 (ja) | ステレオカメラ | |
US10116867B2 (en) | Method and apparatus for displaying a light field based image on a user's device, and corresponding computer program product | |
KR102013978B1 (ko) | 이미지들의 융합을 위한 방법 및 장치 | |
CN101673395B (zh) | 图像拼接方法及装置 | |
JP4942221B2 (ja) | 高解像度仮想焦点面画像生成方法 | |
US9210405B2 (en) | System and method for real time 2D to 3D conversion of video in a digital camera | |
EP1836859B1 (en) | Automatic conversion from monoscopic video to stereoscopic video | |
US20160165206A1 (en) | Digital refocusing method | |
US20130113898A1 (en) | Image capturing apparatus | |
CN102111629A (zh) | 图像处理装置、图像捕获装置、图像处理方法和程序 | |
US20160050372A1 (en) | Systems and methods for depth enhanced and content aware video stabilization | |
JP2011166264A (ja) | 画像処理装置、撮像装置、および画像処理方法、並びにプログラム | |
JP5522018B2 (ja) | 画像処理装置、画像処理方法及び画像処理プログラム | |
CN103379267A (zh) | 三维空间图像的获取系统及方法 | |
CN105191300B (zh) | 图像处理方法以及图像处理装置 | |
JP2013247543A (ja) | 撮像装置、表示装置、および画像処理方法、並びにプログラム | |
JP2013074473A (ja) | パノラマ撮像装置 | |
JP5925109B2 (ja) | 画像処理装置、その制御方法、および制御プログラム | |
CN104463958A (zh) | 基于视差图融合的三维超分辨率方法 | |
JP5088973B2 (ja) | 立体撮像装置およびその撮像方法 | |
JP6648916B2 (ja) | 撮像装置 | |
JP2013175821A (ja) | 画像処理装置、画像処理方法およびプログラム | |
JP5689693B2 (ja) | 描画処理装置 | |
JP6292785B2 (ja) | 画像処理装置、画像処理方法およびプログラム | |
JP2013085018A (ja) | 撮像装置 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11739600 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011552721 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13574952 Country of ref document: US |
|
REEP | Request for entry into the european phase |
Ref document number: 2011739600 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011739600 Country of ref document: EP |
|
NENP | Non-entry into the national phase |
Ref country code: DE |