WO2012056686A1 - Three-dimensional image interpolation device, three-dimensional imaging device, and three-dimensional image interpolation method - Google Patents
Three-dimensional image interpolation device, three-dimensional imaging device, and three-dimensional image interpolation method
- Publication number
- WO2012056686A1 (PCT/JP2011/005956)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- interpolation
- distance
- motion vector
- dimensional
- Prior art date
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T3/00—Geometric image transformations in the plane of the image
- G06T3/40—Scaling of whole images or parts thereof, e.g. expanding or contracting
- G06T3/4007—Scaling of whole images or parts thereof, e.g. expanding or contracting based on interpolation, e.g. bilinear interpolation
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/50—Depth or shape recovery
- G06T7/55—Depth or shape recovery from multiple images
- G06T7/571—Depth or shape recovery from multiple images from focus
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/128—Adjusting depth or disparity
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/271—Image signal generators wherein the generated image signals comprise depth maps or disparity maps
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10141—Special mode during image acquisition
- G06T2207/10148—Varying focus
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2213/00—Details of stereoscopic systems
- H04N2213/003—Aspects relating to the "2D+depth" image format
Definitions
- the present invention relates to a three-dimensional image interpolation apparatus for performing frame interpolation of a three-dimensional moving image, a three-dimensional imaging apparatus, and a three-dimensional image interpolation method.
- In recent years, imaging devices such as CCD image sensors (charge-coupled device image sensors) and CMOS image sensors (complementary metal-oxide-semiconductor image sensors) have advanced significantly in performance and functionality.
- Thin display devices such as liquid crystal displays and plasma displays enable high-resolution, high-contrast image display without taking up space.
- the trend toward high quality images is spreading from two-dimensional images to three-dimensional images.
- three-dimensional display devices have been developed which display high-quality three-dimensional images using polarized glasses or glasses having a high-speed shutter.
- Development of three-dimensional imaging devices that acquire high-quality three-dimensional images, or three-dimensional images to be displayed on a three-dimensional display device, is also in progress.
- As a simple method of acquiring a three-dimensional image and displaying it on a three-dimensional display device, it is conceivable to capture images or video with an imaging device provided with two optical systems (lens and imaging element) at different positions. The images captured by each optical system are input to the three-dimensional display device as a left-eye image and a right-eye image.
- the three-dimensional display device switches and displays the captured left-eye image and the right-eye image at high speed, so that the user wearing glasses can perceive the image stereoscopically.
- Alternatively, depth information may be calculated from a plurality of images captured by a single camera while changing geometric or optical conditions of the scene, such as how light strikes it, or conditions of the optical system of the imaging device (such as the aperture size).
- As a method relating to the former, there is the multi-baseline stereo method described in Non-Patent Document 1, which obtains the depth of each pixel by simultaneously using images acquired from many cameras. This multi-baseline stereo method is known to estimate scene depth with higher accuracy than general binocular stereo.
- the ambiguity of the corresponding point search can be reduced by using three or more cameras, so the error of the parallax estimation is reduced.
- Using the estimated depth and the texture of the scene obtained from the imaging device, images can be generated at virtual camera positions (a camera position for the left eye and a camera position for the right eye) determined as new viewpoint positions. This makes it possible to obtain images at viewpoint positions different from those at the time of shooting.
- An image of a new viewpoint position can be generated by (Equation 2).
- Each symbol is the same as in (Equation 1).
- Let xc be the x coordinate of the camera for which the depth has been obtained.
- Let xl and xr be the x coordinates of the cameras at the newly generated viewpoint positions; that is, xl and xr are the x coordinates of the left-eye and right-eye cameras (virtual cameras), respectively.
- Let tx be the distance between the virtual cameras (the baseline length).
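- The body of (Equation 2) is not reproduced in this text. Assuming a standard pinhole model, a plausible form re-projects a pixel u of the camera at xc into the virtual cameras at xl and xr; here the focal length f and the per-pixel depth d(u, v) are assumed symbols, not ones confirmed by the surrounding text:

```latex
% Hypothetical reconstruction of (Equation 2) under a pinhole model:
u_l = u + \frac{f \,(x_c - x_l)}{d(u,v)}, \qquad
u_r = u + \frac{f \,(x_c - x_r)}{d(u,v)}, \qquad
x_r - x_l = t_x
```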
- As a method of changing the conditions related to the scene, there is the method described in Non-Patent Document 3.
- In this method, the three-dimensional position of the subject can be obtained from the relationship between the pixel values of the subject and the position of the illumination.
- In addition, as a method of changing the conditions of the optical system, there is the Depth From Defocus method described in Non-Patent Document 4.
- In this method, the distance from the camera to the subject (depth) is calculated using the amount of change in blur of each pixel across a plurality of images captured with different camera focal lengths, together with the focal length of the camera and the aperture size (aperture diameter).
- various methods for acquiring scene depth information have been studied for a long time.
- the Depth From Defocus method is characterized in that the imaging device can be made compact and lightweight, and no other device such as a lighting device is required.
- With the Depth From Defocus method, it is possible to obtain scene depth information with a single-lens, compact system.
- However, in the Depth From Defocus method, it is necessary to change the focal length of the camera and capture two or more images. That is, at the time of shooting, the lens (or the imaging element) must be driven back and forth in order to change the focal length. Therefore, the time required for one shot depends largely on the driving time and on the time it takes for the vibration of the lens or imaging element to subside after driving.
- As a result, the Depth From Defocus method has the problem that only a few images can be captured per second. Therefore, when a moving image is captured while depth information is calculated by the Depth From Defocus method, the frame rate of the moving image is low.
- As a method of generating a high-frame-rate moving image from a low-frame-rate moving image, there is a method of improving temporal resolution by generating, in the time direction, an image that interpolates between two images. This method is used, for example, to increase temporal resolution so that motion on a display appears smooth.
- For a three-dimensional moving image, there is likewise a way to increase the temporal resolution by interpolating the image for each viewpoint.
- In Patent Document 1, a motion model of the photographed subject is defined, and interpolation of coordinate information and interpolation of motion information are performed. With this method, not only two-dimensional coordinate information but also three-dimensional motion information can be interpolated. However, in a general scene, motion is complicated and difficult to model, so there is a problem that this method is difficult to apply.
- The present invention solves the above-described conventional problems, and has an object to provide a three-dimensional image interpolation device, a three-dimensional imaging device, and a three-dimensional image interpolation method capable of performing frame interpolation of a three-dimensional moving image with high accuracy.
- In order to achieve the above object, a three-dimensional image interpolation device according to one aspect of the present invention is a three-dimensional image interpolation device that performs frame interpolation of a three-dimensional moving image, and includes: a distance image interpolation unit that generates at least one interpolation distance image that interpolates between a first distance image and a second distance image respectively indicating the depths of a first image and a second image included in the three-dimensional moving image; an image interpolation unit that generates at least one interpolation image that interpolates between the first image and the second image; and an interpolated parallax image generation unit that generates, based on the interpolation image, at least one set of interpolated parallax images having a parallax according to the depth indicated by the interpolation distance image.
- According to this configuration, the interpolated parallax image is generated after the interpolation of the two-dimensional image and the interpolation of the distance image are performed separately. Therefore, interpolation errors in the depth direction can be suppressed more than in the case where interpolated parallax images are generated by separately interpolating the left-eye image and the right-eye image, and frame interpolation of a three-dimensional moving image can be performed with high accuracy.
- Furthermore, since the left-eye interpolation image and the right-eye interpolation image are generated from the same interpolation distance image and interpolation image, there is also the effect that a user viewing the frame-interpolated three-dimensional moving image is less likely to feel discomfort caused by the interpolation.
- Preferably, the three-dimensional image interpolation device further includes: a distance motion vector calculation unit that calculates, as a distance motion vector, a motion vector from the first distance image and the second distance image; an image motion vector calculation unit that calculates, as an image motion vector, a motion vector from the first image and the second image; a vector similarity calculation unit that calculates a vector similarity, which is a value indicating the degree of similarity between the image motion vector and the distance motion vector; and an interpolation image number determination unit that determines an upper limit number of interpolations such that the number increases as the calculated vector similarity increases, wherein the interpolated parallax image generation unit generates interpolated parallax images whose number is equal to or less than the upper limit number.
- the upper limit number of interpolations can be determined according to the similarity between the distance motion vector and the image motion vector. If the similarity between the distance motion vector and the image motion vector is low, there is a high possibility that the distance motion vector or the image motion vector has not been correctly calculated. Therefore, in such a case, by reducing the upper limit number of interpolations, it is possible to suppress degradation of the image quality of the three-dimensional moving image due to the interpolated parallax image.
- Preferably, the distance motion vector calculation unit calculates the distance motion vector for each block of a first size, the image motion vector calculation unit calculates the image motion vector for each block of the first size, and the vector similarity calculation unit generates, for each block of a second size larger than the first size, a histogram of at least one of the direction and the intensity of the distance motion vector and a histogram of at least one of the direction and the intensity of the image motion vector, and calculates the vector similarity based on at least one of the similarity between the direction histogram of the distance motion vector and the direction histogram of the image motion vector and the similarity between the intensity histogram of the distance motion vector and the intensity histogram of the image motion vector.
- the vector similarity can be calculated based on the histogram of at least one of the direction and the intensity of the motion vector.
- Thereby, the correlation between the vector similarity and the possibility that a motion vector has not been correctly calculated can be increased, and the upper limit number of interpolations can be determined appropriately.
- In addition, preferably, the interpolation image number determination unit determines, as the interpolation number, a number input by the user that is equal to or less than the upper limit number, and the interpolated parallax image generation unit generates the determined number of interpolated parallax images.
- Preferably, the three-dimensional image interpolation device further includes a distance image acquisition unit that acquires the first distance image based on the correlation of blur among a plurality of captured images with mutually different focal lengths included in a first captured image group, and acquires the second distance image based on the correlation of blur among a plurality of captured images with mutually different focal lengths included in a second captured image group temporally subsequent to the first captured image group.
- a plurality of photographed images having different focal lengths can be used as an input, which can contribute to downsizing of the imaging device.
- Preferably, the three-dimensional image interpolation device further includes a texture image acquisition unit that acquires a first texture image as the first image by performing restoration processing on one captured image included in the first captured image group, using blur information indicating a feature of the blur of that captured image, and acquires a second texture image as the second image by performing restoration processing on one captured image included in the second captured image group, using blur information indicating a feature of the blur of that captured image.
- the three-dimensional image interpolation device may be configured as an integrated circuit.
- a three-dimensional imaging device includes an imaging unit and the three-dimensional image interpolation device.
- Note that the present invention can be realized not only as such a three-dimensional image interpolation device, but also as a three-dimensional image interpolation method whose steps are the operations of the characteristic components included in the three-dimensional image interpolation device.
- Furthermore, the present invention can also be realized as a program that causes a computer to execute the steps included in the three-dimensional image interpolation method. It goes without saying that such a program can be distributed via a non-transitory recording medium such as a compact disc read-only memory (CD-ROM) or via a transmission medium such as the Internet.
- frame interpolation of a three-dimensional moving image can be performed with high accuracy.
- FIG. 1 is a diagram showing an overall configuration of a three-dimensional imaging apparatus according to an embodiment of the present invention.
- FIG. 2 is a block diagram showing the configuration of a three-dimensional image interpolation unit according to the embodiment of the present invention.
- FIG. 3 is a flowchart showing the processing operation of the three-dimensional image interpolation unit in the embodiment of the present invention.
- FIG. 4 is a flowchart showing the processing operation of the distance image acquisition unit in the embodiment of the present invention.
- FIG. 5 is a diagram for explaining an example of a motion vector calculation method according to the embodiment of the present invention.
- FIG. 6 is a view showing the relationship between the blurred image, the omnifocal image, and the PSF.
- FIG. 7 is a diagram showing how to determine the size of the blur kernel in the embodiment of the present invention.
- FIG. 8 is a flowchart showing the processing operation of the vector similarity calculation unit in the embodiment of the present invention.
- FIG. 9 is a diagram showing an example of an interpolation number input method according to an embodiment of the present invention.
- FIG. 10 is a diagram for explaining a method of generating an interpolated distance image and an interpolated texture image according to an embodiment of the present invention.
- FIG. 11 is a diagram for explaining a parallax image generation method according to an embodiment of the present invention.
- FIG. 12 is a block diagram showing a functional configuration of a three-dimensional image interpolation device according to an aspect of the present invention.
- FIG. 13 is a flowchart showing the processing operation of the three-dimensional image interpolation device according to one aspect of the present invention.
- image means a signal or information representing the luminance or color of a scene in a two-dimensional manner.
- range image means a signal or information representing the distance (depth) of the scene from the camera in two dimensions.
- parallax image means a plurality of images (for example, an image for the right eye and an image for the left eye) corresponding to a plurality of different viewpoint positions.
- FIG. 1 is a block diagram showing an overall configuration of a three-dimensional imaging device 10 according to an embodiment of the present invention.
- the three-dimensional imaging device 10 of the present embodiment is a digital electronic camera, and includes an imaging unit 100, a signal processing unit 200, and a display unit 300.
- the imaging unit 100, the signal processing unit 200, and the display unit 300 will be described in detail below.
- the imaging unit 100 captures an image of a scene.
- a scene means all of the things shown in the image taken by the imaging unit 100, and includes the background as well as the subject.
- the imaging unit 100 includes an imaging element 101, an optical lens 103, a filter 104, a control unit 105, and an element driving unit 106.
- the imaging device 101 is a solid-state imaging device such as a CCD image sensor or a CMOS image sensor, and is manufactured by a known semiconductor manufacturing technology.
- the imaging device 101 includes a plurality of light sensing cells arranged in a matrix on the imaging surface.
- the optical lens 103 forms an image on the imaging surface of the imaging element 101.
- the imaging unit 100 includes one optical lens 103. However, a plurality of optical lenses may be included.
- the filter 104 is an infrared cut filter that transmits visible light and cuts near-infrared light (IR). Note that the imaging unit 100 does not necessarily have to include the filter 104.
- the control unit 105 generates a basic signal for driving the imaging device 101. Further, the control unit 105 receives an output signal from the imaging element 101 and sends it to the signal processing unit 200.
- the element drive unit 106 drives the imaging element 101 based on the basic signal generated by the control unit 105.
- the control unit 105 and the element drive unit 106 are configured of, for example, a large scale integration (LSI) such as a CCD driver.
- the signal processing unit 200 generates an image signal based on the signal output from the imaging unit 100. As shown in FIG. 1, the signal processing unit 200 includes a memory 201, a three-dimensional image interpolation unit 202, and an interface unit 203.
- the three-dimensional image interpolation unit 202 performs frame interpolation of a three-dimensional moving image.
- the three-dimensional image interpolation unit 202 can be suitably realized by a combination of hardware such as a known digital signal processor (DSP) and software for executing image processing including image signal generation processing.
- the details of the three-dimensional image interpolation unit 202 will be described later with reference to the drawings.
- the memory 201 is configured by, for example, a dynamic random access memory (DRAM) or the like.
- the memory 201 records a signal obtained from the imaging unit 100 and also temporarily records image data generated by the three-dimensional image interpolation unit 202 or its compressed image data. These image data are sent to a recording medium (not shown) or the display unit 300 via the interface unit 203.
- the display unit 300 displays a photographing condition or a photographed image. Further, the display unit 300 is a touch panel of a capacitive type or a resistive film type, and also functions as an input unit that receives an input from a user. The information input from the user is reflected in the control of the signal processing unit 200 and the imaging unit 100 through the interface unit 203.
- Although the three-dimensional imaging device 10 of the present embodiment may further include known components such as an electronic shutter, a viewfinder, a power supply (battery), and a flash, their description is omitted because it is not particularly necessary for understanding the present invention.
- FIG. 2 is a block diagram showing a configuration of the three-dimensional image interpolation unit 202 in the embodiment of the present invention.
- As shown in FIG. 2, the three-dimensional image interpolation unit 202 includes a distance image acquisition unit 400, a distance motion vector calculation unit 401, an image motion vector calculation unit 402, a vector similarity calculation unit 403, an interpolation image number determination unit 404, a distance image interpolation unit 405, an image interpolation unit 406, and an interpolated parallax image generation unit 407.
- the distance image acquisition unit 400 acquires a first distance image and a second distance image representing the depths of the first image and the second image.
- the first image and the second image are images of the same viewpoint included in the three-dimensional moving image and are images to be subjected to frame interpolation.
- the distance image acquisition unit 400 acquires the first distance image based on the correlation of blur among a plurality of photographed images having different focal distances, which are included in the first photographed image group. Further, the distance image acquisition unit 400 acquires a second distance image based on the correlation of blur among a plurality of photographed images having different focal lengths, which are included in the second photographed image group.
- Each of the first captured image group and the second captured image group includes a plurality of captured images captured by the imaging unit 100 while changing the focal length. Further, the second captured image group is an image group temporally subsequent to the first captured image group.
- the texture image acquisition unit 408 acquires a first texture image as the first image by performing restoration processing on one captured image included in the first captured image group, using blur information indicating a feature of the blur of that captured image. In addition, the texture image acquisition unit 408 acquires a second texture image as the second image by performing restoration processing on one captured image included in the second captured image group, using blur information indicating a feature of the blur of that captured image.
- a texture image is an image obtained by performing restoration processing on a captured image using blur information indicating a feature of the blur of that image. That is, the texture image is an image from which the blur contained in the captured image has been removed, so that all of its pixels are in focus.
- Note that the first texture image and the second texture image do not necessarily have to be used as the first image and the second image. That is, the first image and the second image may be images containing blur.
- the three-dimensional image interpolation unit 202 may not include the texture image acquisition unit 408.
- the distance motion vector calculation unit 401 calculates a motion vector from the first distance image and the second distance image.
- the motion vector calculated from the first distance image and the second distance image in this manner is referred to as a distance motion vector.
- the image motion vector calculation unit 402 calculates a motion vector from the first image and the second image.
- the motion vector calculated from the first image and the second image in this manner is referred to as an image motion vector.
- the vector similarity calculation unit 403 calculates vector similarity, which is a value indicating the degree of similarity between the distance motion vector and the image motion vector. Details of the method of calculating the vector similarity will be described later.
- the interpolation image number determination unit 404 determines the upper limit number of interpolations so that the number increases as the calculated similarity degree increases.
- the distance image interpolation unit 405 generates at least one interpolation distance image that interpolates between the first distance image and the second distance image. Specifically, the distance image interpolation unit 405 generates interpolation distance images by a number equal to or less than the upper limit number of interpolations determined by the interpolation image number determination unit 404.
- the image interpolation unit 406 generates at least one interpolation image that interpolates between the first image and the second image.
- the image interpolation unit 406 generates at least one interpolated texture image that interpolates between the first texture image and the second texture image.
- the image interpolation unit 406 generates interpolation images by the number equal to or less than the upper limit number of interpolations determined by the interpolation image number determination unit 404.
- the interpolation parallax image generation unit 407 generates, based on the interpolation image, at least one set of interpolation parallax images having parallax according to the depth indicated by the interpolation distance image. In the present embodiment, the interpolation parallax image generation unit 407 generates interpolation parallax images having a number equal to or less than the upper limit number of interpolations determined by the interpolation image number determination unit 404.
- the three-dimensional image interpolation unit 202 performs frame interpolation of a three-dimensional moving image by generating the interpolated parallax image in this manner.
- the three-dimensional moving image subjected to frame interpolation in this manner is output to, for example, a stereoscopic display device (not shown).
- the stereoscopic display device displays a three-dimensional moving image by, for example, a glasses-type stereoscopic display method.
- the glasses-type three-dimensional display method is a method of displaying an image for the left eye and an image for the right eye having parallax to a user wearing glasses (for example, liquid crystal shutter glasses or polarized glasses).
- the stereoscopic display device does not necessarily have to display parallax images by the glasses-type stereoscopic display method, and may display parallax images by the naked-eye-type stereoscopic display method.
- the autostereoscopic display method is a stereoscopic display method without glasses (for example, a parallax barrier method or a lenticular lens method).
- FIG. 3 is a flowchart showing the processing operation of the three-dimensional image interpolation unit 202 in the embodiment of the present invention.
- Here, the case where the first image and the second image are the first texture image and the second texture image will be described.
- the distance image acquisition unit 400 acquires a first distance image and a second distance image (S102).
- the distance motion vector calculation unit 401 calculates a motion vector (distance motion vector) from the first distance image and the second distance image (S104).
- the texture image acquisition unit 408 acquires the first texture image and the second texture image (S105).
- the image motion vector calculation unit 402 calculates a motion vector (image motion vector) from the first texture image and the second texture image (S106).
- the vector similarity calculation unit 403 calculates the similarity between the distance motion vector and the image motion vector (S108).
- the interpolation image number determination unit 404 determines the upper limit number of interpolations so that the number increases as the calculated similarity degree increases (S110).
- the distance image interpolation unit 405 generates interpolation distance images having a number equal to or less than the upper limit number of interpolations to interpolate between the first distance image and the second distance image (S112).
- the image interpolation unit 406 generates interpolation texture images having a number equal to or less than the upper limit number of interpolations to interpolate between the first texture image and the second texture image (S114).
- the interpolation parallax image generation unit 407 generates an interpolation parallax image having a parallax according to the depth indicated by the corresponding interpolation distance image based on the interpolation texture image (S116).
- an interpolated parallax image is generated, and frame interpolation of a three-dimensional moving image is performed.
- the processing in steps S102 to S116 is repeated while changing the images to be interpolated (the first texture image and the second texture image).
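- Before the individual steps are described in detail below, the following illustrative sketch shows how one pair of interpolated parallax images could be produced in step S116 from an interpolated texture image and an interpolated distance image. The function name and the pinhole depth-to-disparity mapping f * (tx / 2) / depth are assumptions for illustration (echoing the viewpoint-shift relation described in the background), and occlusion handling and hole filling are omitted:

```python
import numpy as np

def make_parallax_pair(texture, depth, f, tx):
    """Generate a (left, right) image pair from one grayscale texture image.

    Each pixel is shifted horizontally by a disparity proportional to
    f * (tx / 2) / depth (an illustrative pinhole model, not the patent's
    exact formula). Holes and occlusions are left unhandled.
    """
    h, w = texture.shape
    left = np.zeros_like(texture)
    right = np.zeros_like(texture)
    for y in range(h):
        for x in range(w):
            d = int(round(f * (tx / 2) / max(depth[y, x], 1e-6)))
            if 0 <= x + d < w:
                left[y, x + d] = texture[y, x]   # shift right for the left eye
            if 0 <= x - d < w:
                right[y, x - d] = texture[y, x]  # shift left for the right eye
    return left, right
```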
- the distance image acquisition unit 400 acquires a distance image indicating the distance of the scene from the camera, based on the plurality of captured images obtained from the imaging unit 100.
- a method of measuring the distance for each pixel by the Depth From Defocus method described in Patent Document 2 will be described.
- the distance image acquisition unit 400 may acquire the distance image by another method (for example, a stereo method using a plurality of cameras, a photometric stereo, a TOF method using an active sensor, or the like).
- the imaging unit 100 captures a plurality of images with different blurs as one image group by changing settings of a lens and an aperture.
- the imaging unit 100 obtains a plurality of image groups by repeating imaging of the image group.
- one image group among a plurality of image groups obtained in this manner is referred to as a first image group
- a temporally next image group of the first image group is referred to as a second image group.
- the distance image acquisition unit 400 acquires one distance image from one image group.
- the distance image acquisition unit 400 calculates, for each pixel, the correlation amount of blur between a plurality of photographed images included in the first image group.
- the distance image acquiring unit 400 acquires a distance image by referring to a reference table in which the relationship between the amount of correlation of the blur and the subject distance is previously determined.
- FIG. 4 is a flowchart showing an example of the processing operation of the distance image acquisition unit 400 in the embodiment of the present invention. Specifically, FIG. 4 shows a distance measurement method by the Depth From Defocus method.
- the distance image acquiring unit 400 acquires, from the imaging unit 100, two photographed images in which the same scene is photographed and which are two photographed images having mutually different focal lengths (S202). It is assumed that the two photographed images are included in the first image group.
- the focal length can be changed by moving the position of the lens or the imaging device.
- the distance image acquisition unit 400 sets, as a DFD kernel, an area including the distance measurement target pixel and a group of pixels in its vicinity (S204).
- This DFD kernel is the target of ranging processing.
- the size and shape of the DFD kernel are not particularly limited. For example, a 10 × 10 rectangular area centered on the distance measurement target pixel can be set as the DFD kernel.
- the distance image acquisition unit 400 extracts an area set as a DFD kernel from two photographed images captured with different focal distances, and calculates a blur correlation amount for each pixel of the DFD kernel (S206).
- the distance image acquiring unit 400 weights the amount of blur correlation obtained for each pixel of the DFD kernel using a weighting factor predetermined for the DFD kernel (S208).
- the weighting factor is, for example, a factor that increases in value toward the center of the DFD kernel and decreases in value toward the end.
- an existing weight distribution such as a Gaussian distribution may be used as a weight coefficient. This weighting process is characterized in that it is robust to the influence of noise.
- the sum of weighted blur correlation amounts is treated as the blur correlation amount of the DFD kernel.
- the distance image acquisition unit 400 obtains the distance from the blur correlation amount using a look-up table indicating the relationship between the distance and the blur correlation amount (S210).
- the blur correlation amount has a linear relationship with the reciprocal of the subject distance (see non-patent document 5 for look-up table calculation processing). If the corresponding blur correlation amount is not included in the look-up table, the distance image acquisition unit 400 may obtain the subject distance by interpolation. In addition, it is preferable to change the look-up table as the optical system changes. Therefore, the distance image acquisition unit 400 may prepare a plurality of look-up tables according to the size of the aperture and the focal length. Since the setting information of these optical systems is known at the time of imaging, it is possible to obtain in advance a lookup table to be used.
- the distance image acquisition unit 400 selects a distance measurement target pixel for which the subject distance is to be measured, and sets pixel values in the vicinity M ⁇ M rectangular area of the distance measurement target pixel in the images G1 and G2 as DFD kernels.
- the blur correlation amount G(u, v) for each pixel at an arbitrary pixel position (u, v) in the DFD kernel is expressed by (Equation 3).
- ⁇ represents the second derivative (Laplacian) of the pixel value.
- That is, the blur correlation amount for each pixel is calculated by dividing the difference between the pixel values of a given pixel in the two differently blurred images by the average of the second derivatives at that pixel in the two images.
- the blur correlation amount indicates the degree of correlation of blur in pixel units in the image.
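- From this description, a plausible reconstruction of (Equation 3) follows, where G1 and G2 are the two differently focused images and Δ is the Laplacian; this is a reconstruction from the surrounding text, not a verbatim copy of the patent's equation:

```latex
G(u,v) = \frac{G_1(u,v) - G_2(u,v)}
              {\bigl\{ \Delta G_1(u,v) + \Delta G_2(u,v) \bigr\} / 2}
```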
- Through the above processing, the distance image acquisition unit 400 acquires, for each group of captured images, a distance image representing the distance from the camera to the subject. That is, the distance image acquisition unit 400 acquires the first distance image based on the correlation of blur among the plurality of captured images with different focal lengths included in the first captured image group. Furthermore, the distance image acquisition unit 400 acquires the second distance image based on the correlation of blur among the plurality of captured images with different focal lengths included in the second captured image group, which temporally follows the first captured image group.
- the distance image acquisition unit 400 does not necessarily have to perform the above-described processing to acquire a distance image.
- the distance image acquisition unit 400 may simply acquire the distance image generated by the imaging unit 100 having a distance sensor.
- the distance motion vector calculation unit 401 calculates a motion vector from the first distance image and the second distance image.
- the distance motion vector calculation unit 401 first obtains corresponding points for each pixel of the first distance image and the second distance image. Then, the distance motion vector calculation unit 401 calculates a vector connecting corresponding points as a motion vector.
- the motion vector represents the movement amount and movement direction of each pixel between images. The motion vector will be described with reference to FIG.
- FIG. 5 shows the distance image (first distance image) at time t and the distance image (second distance image) at time t + 1.
- the pixel A and the pixel B are obtained as corresponding points by searching for a pixel corresponding to the pixel A at time t from the image at time t + 1.
- Specifically, the distance motion vector calculation unit 401 calculates a correlation value between the region corresponding to the pixel A and the region corresponding to each pixel included in the search region.
- the correlation value is calculated using, for example, SAD (Sum of Absolute Differences) or SSD (Sum of Squared Differences).
- the search region is indicated by the dotted frame in the distance image at time t + 1 in FIG. 5, for example.
- The size of the search region may be set larger or smaller, for example according to the expected magnitude of motion between the two images.
- (Equation 4) for calculating the correlation value using SAD and SSD is shown below.
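- The equation body is missing from this text; the standard SAD and SSD forms consistent with the symbol definitions below are:

```latex
% Reconstruction of (Equation 4); the patent's equation image is not
% reproduced here. The sums run over the N x M comparison block.
\mathrm{corsad} = \sum_{u=0}^{N-1} \sum_{v=0}^{M-1}
  \bigl| I_1(i_1 + u,\, j_1 + v) - I_2(i_2 + u,\, j_2 + v) \bigr|
\qquad
\mathrm{corssd} = \sum_{u=0}^{N-1} \sum_{v=0}^{M-1}
  \bigl( I_1(i_1 + u,\, j_1 + v) - I_2(i_2 + u,\, j_2 + v) \bigr)^2
```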
- I1 (u, v) represents the pixel value of the pixel (u, v) in the image I1 at time t.
- I2 (u, v) represents the pixel value of the pixel (u, v) in the image I2 at time t + 1.
- the distance motion vector calculation unit 401 uses (Equation 4) to search for a region similar to the region of N × M pixels based at the pixel (i1, j1) of the image I1. That is, a correlation value is calculated between the area of N × M pixels based at pixel (i1, j1) of the image I1 and an area of N × M pixels based at pixel (i2, j2) of the image I2.
- corsad is a correlation value determined by SAD, and corssd is a correlation value determined by SSD; either may be used as the correlation value.
- For both corsad and corssd, the higher the correlation, the smaller the value.
- the distance motion vector calculation unit 401 calculates the correlation value while changing the pixel (i2, j2) in the search area.
- the distance motion vector calculation unit 401 determines the pixel (i2, j2) for which the minimum correlation value is calculated among the correlation values calculated in this manner as the pixel corresponding to the pixel A.
- Note that the method of calculating the correlation value using SAD and SSD has been described here on the assumption that the variation in illumination or contrast between the two images is small.
- When the variation in illumination or contrast between the two images is large, it is preferable to calculate the correlation value using, for example, the normalized cross-correlation method. This makes it possible to search for corresponding points more robustly.
- the distance motion vector calculation unit 401 can obtain a motion vector at each pixel of two distance images by performing the above processing for all the pixels. Note that noise removal processing such as median filtering may be performed after motion vectors are determined.
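- As an illustration of the search described above, a minimal SAD-based sketch follows; the function name, block size, and search radius are assumptions for illustration, not values from the patent:

```python
import numpy as np

def find_motion_vector(img1, img2, i1, j1, block=8, radius=16):
    """Return the motion vector of the block at (i1, j1) in img1.

    Minimal SAD block matching: smaller SAD means higher correlation.
    The caller must keep the reference block inside img1.
    """
    h, w = img2.shape
    ref = img1[j1:j1 + block, i1:i1 + block].astype(np.float64)
    best, best_pos = np.inf, (i1, j1)
    for j2 in range(max(0, j1 - radius), min(h - block, j1 + radius) + 1):
        for i2 in range(max(0, i1 - radius), min(w - block, i1 + radius) + 1):
            cand = img2[j2:j2 + block, i2:i2 + block].astype(np.float64)
            sad = np.abs(ref - cand).sum()
            if sad < best:
                best, best_pos = sad, (i2, j2)
    # vector connecting the pixel at time t to its corresponding point at t+1
    return (best_pos[0] - i1, best_pos[1] - j1)
```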
- the motion vector does not necessarily have to be calculated for each pixel.
- the distance motion vector calculation unit 401 may calculate a distance motion vector for each block of the first size obtained by dividing an image. In this case, the load for calculating the motion vector can be reduced as compared to the case where the motion vector is calculated for each pixel.
- the texture image acquisition unit 408 first calculates the first texture image using the first image group and the first distance image. Furthermore, the texture image acquisition unit 408 calculates a second texture image using the second image group and the second distance image.
- Specifically, the texture image acquisition unit 408 acquires the first texture image by performing restoration processing on one captured image included in the first captured image group, using blur information indicating the blur characteristics of that image. Furthermore, the texture image acquisition unit 408 acquires the second texture image by performing restoration processing on one captured image included in the second captured image group, using blur information indicating the blur characteristics of that image.
- the texture image shown in the present embodiment is an image obtained by removing blur included in a photographed image using a distance image obtained by the Depth From Defocus method. That is, the texture image is an image in which all pixels are in focus (all-in-focus image).
- the texture image acquisition unit 408 calculates blur information (blur kernel) indicating the magnitude of blur of each pixel using the distance image and the lens formula.
- the texture image acquisition unit 408 generates a texture image (all-in-focus image) in which all pixels are in focus by performing a deconvolution operation (restoration processing) on each pixel of the captured image using the blur kernel.
- FIG. 6 is an example in which (Equation 5) is expressed as an image.
- a blurred image i(x, y) is obtained by convolving the omnifocal image s(x, y) with a circular blur function f(x, y) (defined in detail later).
- This blur function is also called a blur kernel.
- the diameter of the circle of the blur function is called kernel size.
- When the image consists of M × N pixels, the above (Equation 6) can be expressed by the following (Equation 7).
- the Fourier transform of the convolution of two functions is the product of the Fourier transforms of the individual functions. Therefore, if the Fourier transforms of i(x, y), s(x, y), and f(x, y) are denoted I(u, v), S(u, v), and F(u, v), respectively, the following (Equation 8) is derived.
- (Equation 9) shows that the function obtained by dividing the Fourier transform I(u, v) of the image i(x, y) obtained by camera photography by the Fourier transform F(u, v) of the blur function (PSF) f(x, y) corresponds to the Fourier transform S(u, v) of the omnifocal image s(x, y).
- the omnifocal image s (x, y) can be obtained from the photographed image i (x, y).
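- As an illustration only, a minimal frequency-domain sketch of this restoration follows; the Wiener-style regularization constant eps is an added practical detail not discussed in the text (plain division by F, as in (Equation 9), is unstable where the PSF response is near zero):

```python
import numpy as np

def deconvolve(blurred, kernel, eps=1e-3):
    """Approximate the all-in-focus image via S = I / F in the frequency domain.

    `kernel` is the blur PSF, padded and centered to the image size.
    `eps` regularizes frequencies where the PSF response is near zero
    (a practical addition, not part of the patent text).
    """
    I = np.fft.fft2(blurred)
    F = np.fft.fft2(np.fft.ifftshift(kernel))
    S = I * np.conj(F) / (np.abs(F) ** 2 + eps)  # regularized division
    return np.real(np.fft.ifft2(S))
```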
- FIG. 7 shows a schematic view of the lens.
- Let B be the size of the blur kernel when photographing a subject whose distance from the camera is d.
- Let C be the distance to the imaging plane.
- the diameter (aperture diameter) A of the aperture and the focal length f are known from the setting conditions of the camera.
- Since the triangle determined by the aperture diameter A and the focal length f is similar to the triangle determined by the blur kernel size B and the difference between the distance C to the imaging plane and the focal length f, (Equation 10) is obtained.
- Equation 12 is obtained from the lens formula.
- the texture image acquisition unit 408 can obtain the size B of the blur kernel by this (Equation 13). Once the size B of the blur kernel is determined, the blur function f (x, y) is obtained.
- the blur kernel is defined by the pillbox function.
- the pillbox function can be defined by (Equation 14).
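- The body of (Equation 14) is not reproduced in this text; the standard pillbox of diameter B, normalized to integrate to one, is a plausible form:

```latex
% Plausible reconstruction of (Equation 14): a pillbox of diameter B.
f(x,y) =
\begin{cases}
  \dfrac{4}{\pi B^2} & \text{if } \sqrt{x^2 + y^2} \le \dfrac{B}{2} \\[4pt]
  0                  & \text{otherwise}
\end{cases}
```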
- In this manner, the texture image acquisition unit 408 obtains the blur kernel, and hence the blur function, for each pixel. Then, the texture image acquisition unit 408 generates a texture image by performing a deconvolution operation on the captured image using the blur function, in accordance with (Equation 9).
- the texture image acquisition unit 408 calculates a texture image from each of the first captured image group captured at time t and the second captured image group captured at time t + 1, thereby acquiring the first texture image and the second texture image.
- the image motion vector calculation unit 402 calculates a motion vector (image motion vector) from the first texture image and the second texture image.
- the details of the process of calculating the motion vector from the first texture image and the second texture image are the same as those of the distance motion vector calculation process, and their description is therefore omitted.
- the vector similarity calculation unit 403 calculates the vector similarity between the distance motion vector calculated by the distance motion vector calculation unit 401 and the image motion vector calculated by the image motion vector calculation unit 402.
- the fact that the two motion vectors are not similar means that the subject moves differently in the distance image and the texture image. However, in the case of the same subject, it is considered that the subject performs similar motion in the distance image and the texture image.
- In such a case, there is a high possibility that an interpolated parallax image generated from the interpolation distance image and interpolation texture image based on these two motion vectors will not correctly express the depth of the scene. As a result, even if a three-dimensional moving image frame-interpolated with such interpolated parallax images is displayed on the three-dimensional display device, the user cannot correctly perceive the sense of depth of the scene.
- In the worst case, a scene with a sense of depth that cannot occur in reality is displayed on the three-dimensional display device.
- For example, in the three-dimensional moving image, an object that had been moving slowly may suddenly jump violently to the front or back.
- When the expected movement of a subject and the movement actually perceived in the three-dimensional moving image differ significantly in this way, the user is more likely to experience 3D sickness.
- Therefore, in the present embodiment, the similarity between the motion vector of the distance image and the motion vector of the texture image is used.
- Although the distance image and the texture image represent different information (distance and texture) as images, the motion of image regions caused by the motion of objects in the scene tends to be similar between the two.
- Accordingly, the reliability of the two motion vectors can be defined by their similarity. That is, when the motion vector of the distance image and the motion vector of the texture image are not similar, there is a high possibility that at least one of them has not been correctly calculated, and hence that the motion vectors cannot be used to correctly generate an interpolated texture image or an interpolated distance image. In such a case, therefore, the number of generated interpolation images is limited so that the three-dimensional moving image is displayed on the three-dimensional display device at a low frame rate. This makes it possible to suppress 3D sickness caused by sudden changes in the depth of the scene.
- FIG. 8 is a flowchart showing the processing operation of the vector similarity calculation unit 403 in the embodiment of the present invention.
- the vector similarity calculation unit 403 divides the distance image and the texture image into a plurality of blocks (for example, N ⁇ M rectangular regions: N and M are integers of 1 or more) (S302).
- the size of these blocks is larger than the size of the blocks for which the motion vectors were calculated. That is, when the motion vectors are calculated for each block of the first size, the vector similarity calculation unit 403 divides the images into blocks of a second size larger than the first size.
- the vector similarity calculation unit 403 creates a direction histogram and an intensity histogram for each block (S304). The vector similarity calculation unit 403 calculates the similarity for each block using these histograms (S306). Finally, the vector similarity calculation unit 403 calculates the average value of the similarity obtained for each block (S308).
- the motion vector is a vector on a two-dimensional space. Therefore, the direction dir and the intensity pow of the motion vector can be calculated by (Equation 15).
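- The body of (Equation 15) is not reproduced in this text; writing the motion vector as (xvec, yvec), as below, it plausibly reads:

```latex
% Plausible reconstruction of (Equation 15):
\mathrm{dir} = \tan^{-1}\!\left( \frac{y_{vec}}{x_{vec}} \right), \qquad
\mathrm{pow} = \sqrt{x_{vec}^2 + y_{vec}^2}
```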
- the range of the direction dir of the motion vector obtained by (Equation 15) is 0 to 359 degrees. Therefore, for each block, the vector similarity calculation unit 403 calculates the direction dir of the motion vector of each pixel in the block using (Equation 15). Then, the vector similarity calculation unit 403 generates the direction histogram of the motion vectors of each block by counting the frequency of the calculated directions dir for each angle from 0 to 359 degrees.
- the vector similarity calculation unit 403 applies (Equation 16) to the motion vectors of all pixels in the block.
- the motion vector is represented as (xvec, yvec).
- the xvec and yvec values are used to calculate the direction of the selected motion vector.
- direction_hist is an array having 360 storage areas.
- the initial value of all elements in this array is zero.
- the function f shown in (Equation 16) is a function that converts a value in radians into degrees. In the function f, values after the decimal point are truncated (or rounded). The values from 0 to 359 indicating the direction obtained by this function f are used as indices of direction_hist, and the element of the array corresponding to each index is incremented by one. Thereby, a direction histogram of the motion vectors in the block is obtained.
- the maximum value of the strength pow of the motion vector obtained by (Expression 15) is the maximum value of the length of the motion vector. That is, the maximum value of the strength pow of the motion vector matches the maximum value of the search range of the corresponding points of the image at time t and the image at time t + 1. Therefore, the maximum value of the intensity pow of the motion vector coincides with the maximum value of the distance between the pixel (i1, j1) of the image at time t and the pixel (i2, j2) of the image at time t + 1 shown in (Expression 4) .
- the search range may be determined according to the scene to be captured, or may be determined for each imaging device. Also, the search range may be set when the user shoots. Assuming that the maximum value of the search range is powmax, the possible range of the strength of the motion vector is 0 to powmax.
- the vector similarity calculation unit 403 generates an intensity histogram of motion vectors by applying (Equation 17) to the motion vectors of all the pixels in the block.
- power_hist is an array having powmax + 1 storage areas. The initial value of all elements in this array is zero.
- the strength of the selected motion vector is calculated by (Equation 15).
- the function g shown in (Equation 17) is a function that truncates (or rounds) the value after the decimal point of the calculated motion vector strength.
- the value of 0 to powmax indicating the strength obtained by this function g is used as an argument of power_hist, and the value of the element of the array corresponding to that argument is incremented by one. Thereby, an intensity histogram of motion vectors in the block is obtained.
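- A combined sketch of the histogram construction of (Equation 16) and (Equation 17), assuming the direction is computed in radians and truncated to an integer degree bin as described above:

```python
import numpy as np

def build_histograms(xvec, yvec, powmax):
    """Direction and intensity histograms of the motion vectors in one block.

    xvec, yvec: arrays of motion-vector components, one per pixel in the block.
    Mirrors the description of (Equation 16) and (Equation 17): truncate each
    value to an integer bin and increment that element of the histogram array.
    """
    direction_hist = np.zeros(360, dtype=int)     # bins for 0..359 degrees
    power_hist = np.zeros(powmax + 1, dtype=int)  # bins for 0..powmax
    for xv, yv in zip(xvec.ravel(), yvec.ravel()):
        deg = int(np.degrees(np.arctan2(yv, xv)) % 360)  # function f: radians -> 0..359
        pw = int(min(np.hypot(xv, yv), powmax))          # function g: truncated intensity
        direction_hist[deg] += 1
        power_hist[pw] += 1
    return direction_hist, power_hist
```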
- d_direction_hist and d_power_hist be the direction histogram and the intensity histogram of the distance image, respectively.
- the direction histogram and the intensity histogram of the texture image are t_direction_hist and t_power_hist, respectively.
- the number of pixels in the block (the number of motion vectors) is N × M.
- the vector similarity calculation unit 403 calculates the histogram correlation value of the direction histograms and the histogram correlation value of the intensity histograms according to (Equation 18).
- dircor is a correlation value of the direction histogram
- powcor is a correlation value of the intensity histogram
- the function min is a function that returns the smaller of its two arguments. The more similar the shapes of the histograms, the closer the correlation values (dircor and powcor) are to one; the more the shapes of the histograms differ, the closer the correlation values are to zero.
- the vector similarity calculation unit 403 calculates the histogram correlation value by the above method for each block. Then, the vector similarity calculation unit 403 determines the average value of the correlation values calculated for the blocks as the similarity. Since the histogram correlation value ranges from 0 to 1, the similarity, which is the average of these values, also ranges from 0 to 1. The similarity therefore indicates the degree to which the motion vectors of the distance image and the motion vectors of the texture image are similar.
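Interpreting (Equation 18) as a histogram intersection normalized by the number of motion vectors, which is consistent with correlation values lying between 0 and 1, a minimal sketch might look as follows. How dircor and powcor are combined into a single per-block value is not spelled out in the text, so averaging them here is an assumption.

```python
def histogram_correlation(hist_a, hist_b, num_vectors):
    """Normalized histogram intersection (assumed reading of Equation 18)."""
    return sum(min(a, b) for a, b in zip(hist_a, hist_b)) / num_vectors

def block_similarity(d_dir, t_dir, d_pow, t_pow, n, m):
    """Per-block similarity from direction and intensity histograms.

    The vector similarity is then the average of these per-block values.
    """
    dircor = histogram_correlation(d_dir, t_dir, n * m)  # direction histogram correlation
    powcor = histogram_correlation(d_pow, t_pow, n * m)  # intensity histogram correlation
    return (dircor + powcor) / 2.0  # combining rule is an assumption
```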
- the vector similarity calculation unit 403 generates, for each block, the direction histogram and the intensity histogram of the distance motion vectors. Furthermore, the vector similarity calculation unit 403 generates, for each block, the direction histogram and the intensity histogram of the image motion vectors. Then, the vector similarity calculation unit 403 calculates the vector similarity based on the similarity between the direction histograms of the distance motion vector and the image motion vector, and the similarity between their intensity histograms.
- the vector similarity calculation unit 403 does not necessarily have to calculate the vector similarity based on the similarity of both the direction histogram and the intensity histogram. That is, the vector similarity calculation unit 403 may calculate the vector similarity based on the similarity of one of the direction histogram and the intensity histogram. In this case, the other of the direction and intensity histograms need not be generated.
- the vector similarity calculation unit 403 does not have to calculate the vector similarity using a histogram.
- the vector similarity calculation unit 403 may calculate the vector similarity by comparing the direction and the intensity of the average vector.
- the interpolated image number determination unit 404 determines the upper limit number of interpolations based on the vector similarity.
- the vector similarity is regarded as the accuracy of the motion vector
- the upper limit number of interpolations is determined so as to reduce the number of generated interpolated parallax images when the vector similarity is low.
- the interpolated image number determination unit 404 determines the upper limit number Num of interpolations corresponding to the vector similarity using (Equation 19).
- F is a predetermined fixed value
- Sim is a vector similarity. For example, when F is 30, if the vector similarity Sim is 0.5, the upper limit number of interpolated parallax images that can be interpolated between time t and time t + 1 is determined to be 15.
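As a tiny sketch of (Equation 19), under the assumption that Num is simply F × Sim with the fractional part dropped (the exact rounding rule is not stated in the text):

```python
def upper_limit_interpolations(f_value, sim):
    """Upper limit number of interpolations (assumed form of Equation 19)."""
    return int(f_value * sim)  # Num = F * Sim, fractional part dropped (assumption)

assert upper_limit_interpolations(30, 0.5) == 15  # matches the example above
```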
- the interpolated image number determination unit 404 may determine, as the number of interpolations, a number equal to or less than the upper limit number of interpolations that is input by the user. For example, when the upper limit number is 15, the user may input a number in the range of 0 to 15 as the interpolation number.
- a slide bar for receiving an input of a number in the range of 0 to 15 is displayed on the touch panel (display unit 300).
- the user moves the slide bar displayed on the touch panel by touch operation to input a number equal to or less than the upper limit number.
- the user can set the number of interpolations while looking at the display unit 300 on the back of the camera.
- the user can adjust the number of interpolations while checking a three-dimensional moving image frame-interpolated by an interpolated parallax image generated by an interpolated parallax image generation process described later.
- the user can intuitively input a number of interpolations that yields a three-dimensional moving image causing less sickness. That is, it is possible to prevent the frame interpolation from making the user feel uncomfortable.
- the input of the interpolation number may be accepted by an input device other than the illustrated touch panel.
- the interpolated image number determination unit 404 does not necessarily determine the number input by the user as the interpolation number.
- the interpolated image number determination unit 404 may determine the upper limit number itself as the interpolation number.
- although Non-Patent Document 6 does not show direct experimental results on the sickness caused by three-dimensional moving images, it shows experimental results on the sickness caused by two-dimensional moving images.
- it is stated there that camera parameters and the like should be set accurately so that there is no shift in size, rotation, color, and the like between the left and right images when shooting with a camera.
- the number of interpolated parallax images is normally set to a small value, and the number of interpolations is adjusted via a user interface for specifying a value as shown in FIG.
- the distance image interpolation unit 405 and the image interpolation unit 406 use the motion vectors to generate a number of interpolation distance images and interpolation texture images equal to or less than the upper limit number of interpolations determined by the interpolated image number determination unit 404.
- the motion vector of the pixel (u, v) of the image I1 at time t is (vx, vy).
- the pixel of the image I2 corresponding to the pixel (u, v) of the image I1 is the pixel (u + vx, v + vy).
- FIG. 9 is a diagram for explaining the interpolation method of the distance image and the texture image in the embodiment of the present invention.
- an interpolated distance image and an interpolated texture image that interpolate between the distance image and the texture image at time t and the distance image and the texture image at time t + 1 are generated.
- let the pixels constituting the first interpolation distance image be first interpolation pixels, and the pixels constituting the second interpolation distance image be second interpolation pixels.
- the first and second interpolation pixels are internally dividing points between the pixel (u, v) of the first distance image and the pixel (u + vx, v + vy) of the second distance image. Therefore, the first interpolation pixel is (u + vx / 3, v + vy / 3), and the second interpolation pixel is (u + vx * 2 / 3, v + vy * 2 / 3).
- the pixel value of the pixel (u, v) of the first distance image is represented as Depth(u, v), and the pixel value of the pixel (u, v) of the second distance image is represented as Depth'(u, v).
- the pixel value of the first interpolation pixel (u + vx / 3, v + vy / 3) is Depth(u, v) * 2 / 3 + Depth'(u + vx, v + vy) / 3.
- the pixel value of the second interpolation pixel is Depth(u, v) / 3 + Depth'(u + vx, v + vy) * 2 / 3.
- An interpolated distance image is generated by the linear interpolation as described above.
- an interpolation texture image is generated by the same linear interpolation.
- in general, let the coordinates of the pixel at time t be (u, v), the motion vector be (vx, vy), and the number of interpolations be Num. Also, let j be an integer of 1 or more and Num or less. The coordinates of the pixel of the j-th interpolation image are calculated by (Equation 20).
- the equation for calculating the pixel value of the j-th interpolation image is shown in (Equation 21).
- I (u, v) is the pixel value of the pixel (u, v) at time t
- I ′ (u, v) is the pixel value of the pixel (u, v) at time t + 1.
- the j-th interpolation image can be generated by the equations defined above.
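The following sketch reconstructs (Equation 20) and (Equation 21) from the Num = 2 example above, with internal division at the ratio j / (Num + 1). The exact equations are not reproduced in this text, so the form below is an assumption consistent with that example.

```python
def interpolate_pixel(u, v, vx, vy, num, j, value_t, value_t1):
    """Coordinates and value of one pixel of the j-th interpolation image.

    value_t  corresponds to I(u, v), the pixel value at time t;
    value_t1 corresponds to I'(u + vx, v + vy), the pixel value at time t + 1.
    """
    ratio = j / (num + 1)                    # internal division ratio along the motion vector
    uj, vj = u + vx * ratio, v + vy * ratio  # assumed form of Equation 20
    value = (1 - ratio) * value_t + ratio * value_t1  # assumed form of Equation 21
    return (uj, vj), value

# With num = 2 and j = 1 this yields (u + vx / 3, v + vy / 3) and
# Depth(u, v) * 2 / 3 + Depth'(u + vx, v + vy) / 3, matching the example above.
```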
- the interpolation parallax image generation unit 407 generates an interpolation parallax image (a parallax image in this case indicates two images for the left eye and the right eye) from the interpolation distance image and the interpolation texture image.
- FIG. 11 is a diagram for explaining a method of generating an interpolated parallax image according to the embodiment of the present invention. Specifically, FIG. 11 shows the relationship between the distance to the subject and the coordinates on the image when viewed from the viewpoints of the interpolation distance image and the interpolation texture image and the left eye image to be generated. The meanings of the symbols in FIG. 11 are as follows.
- if the pixel of the left-eye interpolation image corresponding to the pixel (u, v) of the interpolation texture image is known, the left-eye interpolation image can be created by copying the pixel value of the pixel (u, v) to that corresponding pixel.
- the focal length f and the distances Z and Z' from the camera to the subjects are known.
- the distance d is known because it can be arbitrarily set in advance when generating parallax images.
- Equation (22) is obtained.
- from this, Equation (23) is obtained.
- the pixel (u, v) of the interpolation texture image corresponds to the pixel (u − X1, v) of the left-eye interpolation image. Therefore, the left-eye interpolation image is generated by copying the pixel value of the pixel (u, v) of the interpolation texture image to the pixel (u − X1, v) of the left-eye interpolation image.
- for a subject at the distance Z', the pixel value of the pixel (u, v) of the interpolation texture image may likewise be copied to the pixel (u − X2, v) of the left-eye interpolation image.
- the interpolation parallax image generation unit 407 can generate the interpolation image for the left eye by performing the above-described processing on all the pixels included in the interpolation distance image.
- the right-eye interpolation image is generated by copying pixel values to positions shifted in the direction opposite to that of the left-eye interpolation image.
- the pixel of the right-eye interpolation image corresponding to the pixel (u − X1, v) of the left-eye interpolation image is the pixel (u + X1, v).
- the interpolation parallax image generation unit 407 can generate the left eye interpolation image and the right eye interpolation image.
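A minimal sketch of the copy-based parallax generation, assuming the pinhole relation shift = f · d / Z for the horizontal displacement (a standard disparity formula consistent with FIG. 11; Equations 22 and 23 themselves are not reproduced here) and ignoring occlusion and hole filling:

```python
def generate_parallax_pair(texture, depth, f, d):
    """Left-eye and right-eye interpolation images from an interpolation
    texture image and an interpolation distance image (sketch only).

    texture and depth are lists of rows indexed as [v][u]; depth holds
    the per-pixel distance Z to the subject (assumed nonzero).
    """
    height, width = len(texture), len(texture[0])
    left = [[0] * width for _ in range(height)]
    right = [[0] * width for _ in range(height)]
    for v in range(height):
        for u in range(width):
            shift = round(f * d / depth[v][u])  # assumed form of Equations 22-23
            if 0 <= u - shift < width:
                left[v][u - shift] = texture[v][u]   # copy to (u - X1, v) for the left eye
            if 0 <= u + shift < width:
                right[v][u + shift] = texture[v][u]  # opposite shift for the right eye
    return left, right
```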
- the interpolation parallax image generation unit 407 may generate a parallax image in addition to the interpolation parallax image.
- as described above, when the three-dimensional imaging apparatus according to the present embodiment performs frame interpolation of a three-dimensional moving image, it separately performs interpolation of the two-dimensional image and interpolation of the distance image, and then generates an interpolated parallax image from the results. Therefore, compared with the case where an interpolated parallax image is generated by separately interpolating the left-eye image and the right-eye image, interpolation errors in the depth direction can be suppressed, and frame interpolation of a three-dimensional moving image can be performed with high accuracy.
- furthermore, since the left-eye interpolation image and the right-eye interpolation image are generated using the same interpolation distance image and interpolation image, there is also the effect that a user who views the frame-interpolated three-dimensional moving image is less likely to feel discomfort due to the interpolation.
- the upper limit number of interpolations can be determined according to the similarity between the distance motion vector and the image motion vector. If the similarity between the distance motion vector and the image motion vector is low, there is a high possibility that the distance motion vector or the image motion vector has not been correctly calculated. Therefore, in such a case, by reducing the upper limit number of interpolations, it is possible to suppress degradation of the image quality of the three-dimensional moving image due to the interpolated parallax image.
- the vector similarity can be calculated based on the histogram of at least one of the direction and the intensity of the motion vector.
- a plurality of photographed images having different focal lengths can be used as an input, which can contribute to downsizing of the imaging apparatus.
- the present invention is not limited to these embodiments. As long as the gist of the present invention is not deviated, modes in which various modifications that those skilled in the art can think of are applied to the present embodiment are also included in the scope of the present invention.
- the three-dimensional image interpolation unit executes various processes with a plurality of photographed images having different focal distances as inputs, but a plurality of photographed images having different focal distances does not necessarily have to be input.
- a three-dimensional moving image including an image for the left eye and an image for the right eye may be input.
- the distance image acquisition unit may acquire the distance image based on the parallax between the left eye image and the right eye image.
- the three-dimensional image interpolation unit is included in the three-dimensional imaging device, but may be realized as a three-dimensional image interpolation device independently from the three-dimensional imaging device.
- An example of such a three-dimensional image interpolation apparatus will be described using FIG. 12 and FIG.
- FIG. 12 is a block diagram showing a functional configuration of the three-dimensional image interpolation apparatus 500 according to an aspect of the present invention.
- FIG. 13 is a flowchart showing the processing operation of the three-dimensional image interpolation apparatus 500 according to an aspect of the present invention.
- the three-dimensional image interpolation apparatus 500 includes a distance image interpolation unit 501, an image interpolation unit 502, and an interpolation parallax image generation unit 503.
- the distance image interpolation unit 501 generates at least one interpolation distance image that interpolates between the first distance image and the second distance image (S402).
- the image interpolation unit 502 generates at least one interpolation image to interpolate between the first image and the second image (S404).
- the interpolation parallax image generation unit 503 generates an interpolation parallax image having parallax according to the depth indicated by the interpolation distance image based on the interpolation image (S406).
- the three-dimensional image interpolation device 500 performs frame interpolation of a three-dimensional moving image.
- the above three-dimensional image interpolation apparatus is, specifically, a computer system including a microprocessor, a ROM, a RAM, a hard disk unit, a display unit, a keyboard, a mouse, and the like.
- a computer program is stored in the ROM or the hard disk unit.
- the three-dimensional image interpolation device achieves its function by the microprocessor operating according to the computer program.
- the computer program is configured by combining a plurality of instruction codes indicating instructions to the computer in order to achieve a predetermined function.
- some or all of the components constituting the above three-dimensional image interpolation device may be configured as a single system LSI. The system LSI is a super-multifunctional LSI manufactured by integrating a plurality of components on one chip, and is specifically a computer system including a microprocessor, a ROM, a RAM, and the like. A computer program is stored in the RAM. The system LSI achieves its functions as the microprocessor operates in accordance with the computer program.
- Some or all of the components constituting the above three-dimensional image interpolation device may be composed of an IC card or a single module which can be detached from the three-dimensional image interpolation device.
- the IC card or the module is a computer system including a microprocessor, a ROM, a RAM, and the like.
- the IC card or the module may include the super multifunctional LSI described above.
- the IC card or the module achieves its function by the microprocessor operating according to the computer program. This IC card or this module may be tamper resistant.
- the present invention may be the method shown above. Further, the present invention may be a computer program that realizes these methods by a computer, or may be a digital signal composed of the computer program.
- the present invention may also be embodied as a computer-readable non-transitory recording medium on which the computer program or the digital signal is recorded, such as a flexible disk, a hard disk, a CD-ROM, an MO, a DVD, a DVD-ROM, a DVD-RAM, a BD (Blu-ray Disc (registered trademark)), or a semiconductor memory. Further, the present invention may be the digital signal recorded on these recording media.
- the computer program or the digital signal may be transmitted via a telecommunication line, a wireless or wired communication line, a network represented by the Internet, data broadcasting, and the like.
- the present invention may be a computer system comprising a microprocessor and a memory, wherein the memory stores the computer program, and the microprocessor operates according to the computer program.
- the three-dimensional image interpolation device and the three-dimensional imaging device according to the present invention can perform frame interpolation of a three-dimensional moving image with high accuracy, and can be used as a digital video camera, a display device, computer software, or the like.
- Reference Signs List
10 three-dimensional imaging apparatus
100 imaging unit
101 imaging device
103 optical lens
104 filter
105 control unit
106 element driver
200 signal processing unit
201 memory
202 three-dimensional image interpolation unit
203 interface unit
300 display unit
400 distance image acquisition unit
401 distance motion vector calculation unit
402 image motion vector calculation unit
403 vector similarity calculation unit
404 interpolated image number determination unit
405, 501 distance image interpolation unit
406, 502 image interpolation unit
407, 503 interpolated parallax image generation unit
408 texture image acquisition unit
500 three-dimensional image interpolation device
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Testing, Inspecting, Measuring Of Stereoscopic Televisions And Televisions (AREA)
- Image Processing (AREA)
Abstract
Description
First, the details of the distance image acquisition process in step S102 will be described.
Next, the details of the distance motion vector calculation process in step S104 will be described.
Next, the details of the texture image acquisition process in step S105 will be described.
Next, the image motion vector calculation process in step S106 will be described.
Next, the details of the vector similarity calculation process in step S108 will be described.
Next, the interpolated image number determination process in step S110 will be described.
Next, the interpolation distance image generation process in step S112 and the interpolation texture image generation process in step S114 will be described in detail.
Finally, the details of the interpolation parallax image generation process in step S116 will be described.
B: left parallax position
C, D: subjects
E: optical axis of the left parallax position
G, I: positions at which subjects C and D are captured by the left-eye camera
f: focal length at the distance measurement position
d: distance between A and B
Z, Z': distances to C and D
X1, X2: coordinates on the captured image
Note that cases such as the following are also included in the present invention.
Claims (10)
- A three-dimensional image interpolation device that performs frame interpolation of a three-dimensional moving image, the device comprising:
a distance image interpolation unit that generates at least one interpolation distance image interpolating between a first distance image and a second distance image respectively representing the depths of a first image and a second image included in the three-dimensional moving image;
an image interpolation unit that generates at least one interpolation image interpolating between the first image and the second image; and
an interpolation parallax image generation unit that generates, based on the interpolation image, at least one set of interpolation parallax images having parallax according to the depth indicated by the interpolation distance image.
- The three-dimensional image interpolation device according to claim 1, further comprising:
a distance motion vector calculation unit that calculates a motion vector from the first distance image and the second distance image as a distance motion vector;
an image motion vector calculation unit that calculates a motion vector from the first image and the second image as an image motion vector;
a vector similarity calculation unit that calculates a vector similarity, which is a value indicating the degree of similarity between the image motion vector and the distance motion vector; and
an interpolated image number determination unit that determines an upper limit number of interpolations such that the number becomes larger as the calculated vector similarity becomes larger,
wherein the interpolation parallax image generation unit generates a number of the interpolation parallax images equal to or less than the determined upper limit number.
- The three-dimensional image interpolation device according to claim 2,
wherein the distance motion vector calculation unit calculates the distance motion vector for each block of a first size,
the image motion vector calculation unit calculates the image motion vector for each block of the first size, and
the vector similarity calculation unit
generates, for each block of a second size larger than the first size, a histogram of at least one of the direction and the intensity of the distance motion vector,
generates, for each block of the second size, a histogram of at least one of the direction and the intensity of the image motion vector, and
calculates the vector similarity based on at least one of the similarity between the direction histograms of the distance motion vector and the image motion vector and the similarity between the intensity histograms of the distance motion vector and the image motion vector.
- The three-dimensional image interpolation device according to claim 2 or 3,
wherein the interpolated image number determination unit determines, as an interpolation number, a number that is input by a user and is equal to or less than the upper limit number, and
the interpolation parallax image generation unit generates the determined interpolation number of the interpolation parallax images.
- The three-dimensional image interpolation device according to any one of claims 1 to 4, further comprising:
a distance image acquisition unit that acquires the first distance image based on the correlation of blur between a plurality of photographed images with mutually different focal distances included in a first photographed image group, and acquires the second distance image based on the correlation of blur between a plurality of photographed images with mutually different focal distances included in a second photographed image group temporally later than the first photographed image group.
- The three-dimensional image interpolation device according to claim 5, further comprising:
a texture image acquisition unit that acquires a first texture image as the first image by performing restoration processing on one photographed image included in the first photographed image group using blur information indicating blur characteristics of the one photographed image, and acquires a second texture image as the second image by performing restoration processing on one photographed image included in the second photographed image group using blur information indicating blur characteristics of the one photographed image.
- The three-dimensional image interpolation device according to any one of claims 1 to 6, wherein the three-dimensional image interpolation device is configured as an integrated circuit.
- A three-dimensional imaging device comprising:
an imaging unit; and
the three-dimensional image interpolation device according to any one of claims 1 to 7.
- A three-dimensional image interpolation method for performing frame interpolation of a three-dimensional moving image, the method comprising:
a distance image interpolation step of generating at least one interpolation distance image interpolating between a first distance image and a second distance image respectively representing the depths of a first image and a second image included in the three-dimensional moving image;
an image interpolation step of generating at least one interpolation image interpolating between the first image and the second image; and
an interpolation parallax image generation step of generating, based on the interpolation image, at least one set of interpolation parallax images having parallax according to the depth indicated by the interpolation distance image.
- A program for causing a computer to execute the three-dimensional image interpolation method according to claim 9.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201180005126.2A CN102687515B (zh) | 2010-10-27 | 2011-10-26 | 三维图像插补装置、三维摄像装置以及三维图像插补方法 |
EP11835831.6A EP2635034B1 (en) | 2010-10-27 | 2011-10-26 | 3d image interpolation device, 3d imaging device, and 3d image interpolation method |
JP2012517976A JP5887267B2 (ja) | 2010-10-27 | 2011-10-26 | 3次元画像補間装置、3次元撮像装置および3次元画像補間方法 |
US13/519,158 US9270970B2 (en) | 2010-10-27 | 2011-10-26 | Device apparatus and method for 3D image interpolation based on a degree of similarity between a motion vector and a range motion vector |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-240461 | 2010-10-27 | ||
JP2010240461 | 2010-10-27 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2012056686A1 true WO2012056686A1 (ja) | 2012-05-03 |
Family
ID=45993432
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/005956 WO2012056686A1 (ja) | 2010-10-27 | 2011-10-26 | 3次元画像補間装置、3次元撮像装置および3次元画像補間方法 |
Country Status (5)
Country | Link |
---|---|
US (1) | US9270970B2 (ja) |
EP (1) | EP2635034B1 (ja) |
JP (1) | JP5887267B2 (ja) |
CN (1) | CN102687515B (ja) |
WO (1) | WO2012056686A1 (ja) |
Cited By (6)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014036362A (ja) * | 2012-08-09 | 2014-02-24 | Canon Inc | 撮像装置、その制御方法、および制御プログラム |
JP2014075726A (ja) * | 2012-10-05 | 2014-04-24 | Dainippon Printing Co Ltd | 奥行き制作支援装置、奥行き制作支援方法、およびプログラム |
JP2014192794A (ja) * | 2013-03-28 | 2014-10-06 | Dainippon Printing Co Ltd | 奥行き制作支援装置、奥行き制作方法、及びプログラム |
JP2015170307A (ja) * | 2014-03-10 | 2015-09-28 | サクサ株式会社 | 画像処理装置 |
JP2020061114A (ja) * | 2018-10-09 | 2020-04-16 | 財團法人工業技術研究院Industrial Technology Research Institute | 奥行き推定装置、奥行き推定装置を使用する自動運転車両、及び自動運転車両に使用する奥行き推定方法 |
WO2024057902A1 (ja) * | 2022-09-12 | 2024-03-21 | ソニーグループ株式会社 | 情報処理装置および方法、並びにプログラム |
Families Citing this family (24)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
KR101055411B1 (ko) * | 2010-03-12 | 2011-08-09 | 이상원 | 입체 영상 생성 방법 및 그 장치 |
KR101852811B1 (ko) * | 2011-01-05 | 2018-04-27 | 엘지전자 주식회사 | 영상표시 장치 및 그 제어방법 |
DE112011105927T5 (de) * | 2011-12-07 | 2014-09-11 | Intel Corporation | Grafik-Renderingverfahren für autostereoskopisches dreidimensionales Display |
US9171393B2 (en) * | 2011-12-07 | 2015-10-27 | Microsoft Technology Licensing, Llc | Three-dimensional texture reprojection |
TWI483612B (zh) * | 2011-12-22 | 2015-05-01 | Nat Univ Chung Cheng | Converting the video plane is a perspective view of the video system |
JP6071257B2 (ja) * | 2012-06-07 | 2017-02-01 | キヤノン株式会社 | 画像処理装置及びその制御方法、並びにプログラム |
JP6007682B2 (ja) * | 2012-08-31 | 2016-10-12 | 富士通株式会社 | 画像処理装置、画像処理方法及びプログラム |
KR101433242B1 (ko) * | 2012-11-16 | 2014-08-25 | 경북대학교 산학협력단 | 정복 시술 로봇 및 그의 구동 제어 방법 |
JP5786847B2 (ja) * | 2012-12-19 | 2015-09-30 | カシオ計算機株式会社 | 撮像装置、撮像方法及びプログラム |
US9036040B1 (en) | 2012-12-20 | 2015-05-19 | United Services Automobile Association (Usaa) | Vehicle identification number capture |
JP6214233B2 (ja) | 2013-06-21 | 2017-10-18 | キヤノン株式会社 | 情報処理装置、情報処理システム、情報処理方法およびプログラム。 |
KR102103984B1 (ko) | 2013-07-15 | 2020-04-23 | 삼성전자주식회사 | 깊이 영상 처리 방법 및 장치 |
US9769498B2 (en) | 2014-03-28 | 2017-09-19 | University-Industry Cooperation Group Of Kyung Hee University | Method and apparatus for encoding of video using depth information |
TWI549478B (zh) * | 2014-09-04 | 2016-09-11 | 宏碁股份有限公司 | 產生三維影像的方法及其電子裝置 |
US9781405B2 (en) | 2014-12-23 | 2017-10-03 | Mems Drive, Inc. | Three dimensional imaging with a single camera |
CN105208366A (zh) * | 2015-09-16 | 2015-12-30 | 云南师范大学 | 一种用于近视患者立体视觉增强的方法 |
US10484629B2 (en) * | 2015-10-16 | 2019-11-19 | Capso Vision Inc | Single image sensor for capturing mixed structured-light images and regular images |
JP6464281B2 (ja) | 2015-11-06 | 2019-02-06 | 富士フイルム株式会社 | 情報処理装置、情報処理方法、及びプログラム |
JP6534457B2 (ja) * | 2016-02-04 | 2019-06-26 | 富士フイルム株式会社 | 情報処理装置、情報処理方法、及びプログラム |
US10277889B2 (en) * | 2016-12-27 | 2019-04-30 | Qualcomm Incorporated | Method and system for depth estimation based upon object magnification |
TWI622022B (zh) * | 2017-07-13 | 2018-04-21 | 鴻海精密工業股份有限公司 | 深度計算方法及其裝置 |
US10502791B1 (en) * | 2018-09-04 | 2019-12-10 | Lg Chem, Ltd. | System for determining an accurate ohmic resistance value associated with a battery cell |
CN111862183B (zh) * | 2020-07-02 | 2024-08-02 | Oppo广东移动通信有限公司 | 深度图像处理方法和系统、电子设备及存储介质 |
CN112258635B (zh) * | 2020-10-26 | 2023-07-21 | 北京石油化工学院 | 基于改进双目匹配sad算法的三维重建方法及装置 |
Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07262382A (ja) | 1994-03-17 | 1995-10-13 | Fujitsu Ltd | 画像処理システムにおける物体認識及び画像補間装置 |
JP2004229093A (ja) * | 2003-01-24 | 2004-08-12 | Nippon Telegr & Teleph Corp <Ntt> | 立体画像生成方法及び立体画像生成装置、ならびに立体画像生成プログラム及び記録媒体 |
JP2009244490A (ja) * | 2008-03-31 | 2009-10-22 | Casio Comput Co Ltd | カメラ、カメラ制御プログラム及びカメラ制御方法 |
JP2010011066A (ja) * | 2008-06-26 | 2010-01-14 | Sony Corp | 画像圧縮装置及び画像圧縮方法 |
JP2010016743A (ja) | 2008-07-07 | 2010-01-21 | Olympus Corp | 測距装置、測距方法、測距プログラム又は撮像装置 |
JP2010081357A (ja) * | 2008-09-26 | 2010-04-08 | Olympus Corp | 画像処理装置、画像処理方法、画像処理プログラム及び撮像装置 |
JP2010171672A (ja) * | 2009-01-22 | 2010-08-05 | Hitachi Ltd | フレームレート変換装置、映像表示装置、フレームレート変換方法 |
Family Cites Families (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP4670918B2 (ja) | 2008-08-26 | 2011-04-13 | ソニー株式会社 | フレーム補間装置及びフレーム補間方法 |
-
2011
- 2011-10-26 US US13/519,158 patent/US9270970B2/en not_active Expired - Fee Related
- 2011-10-26 JP JP2012517976A patent/JP5887267B2/ja not_active Expired - Fee Related
- 2011-10-26 EP EP11835831.6A patent/EP2635034B1/en not_active Ceased
- 2011-10-26 CN CN201180005126.2A patent/CN102687515B/zh not_active Expired - Fee Related
- 2011-10-26 WO PCT/JP2011/005956 patent/WO2012056686A1/ja active Application Filing
Patent Citations (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH07262382A (ja) | 1994-03-17 | 1995-10-13 | Fujitsu Ltd | 画像処理システムにおける物体認識及び画像補間装置 |
JP2004229093A (ja) * | 2003-01-24 | 2004-08-12 | Nippon Telegr & Teleph Corp <Ntt> | 立体画像生成方法及び立体画像生成装置、ならびに立体画像生成プログラム及び記録媒体 |
JP2009244490A (ja) * | 2008-03-31 | 2009-10-22 | Casio Comput Co Ltd | カメラ、カメラ制御プログラム及びカメラ制御方法 |
JP2010011066A (ja) * | 2008-06-26 | 2010-01-14 | Sony Corp | 画像圧縮装置及び画像圧縮方法 |
JP2010016743A (ja) | 2008-07-07 | 2010-01-21 | Olympus Corp | 測距装置、測距方法、測距プログラム又は撮像装置 |
JP2010081357A (ja) * | 2008-09-26 | 2010-04-08 | Olympus Corp | 画像処理装置、画像処理方法、画像処理プログラム及び撮像装置 |
JP2010171672A (ja) * | 2009-01-22 | 2010-08-05 | Hitachi Ltd | フレームレート変換装置、映像表示装置、フレームレート変換方法 |
Non-Patent Citations (7)
Title |
---|
"3DC Safety Guidelines", 3D CONSORTIUM, 20 April 2010 (2010-04-20) |
A. P. PENTLAND: "A new sense for depth of field", IEEE TRANSACTION ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 2, no. 4, 1987, pages 523 - 531 |
L. ZHANG; W. J. TAM: "Stereoscopic Image Generation Based on Depth Images for 3D TV", IEEE TRANS. ON BROADCASTING, vol. 51, no. 2, June 2005 (2005-06-01) |
M. OKUTOMI; T. KANADE: "A Multiple-baseline Stereo", IEEE TRANS. PATTERN ANALYSIS AND MACHINE INTELLIGENCE, vol. 15, no. 4, 1993, pages 353 - 363 |
M. SUBBARAO; G. SURYA: "Depth from Defocus: A Spatial Domain Approach", INTERNATIONAL JOURNAL OF COMPUTER VISION, vol. 13, no. 3, 1994, pages 271 - 294 |
R. J. WOODHAM: "Photometric method for determining surface orientation from multiple images", OPTICAL ENGINEERINGS, vol. 19, no. I, 1980, pages 139 - 144 |
See also references of EP2635034A4 |
Cited By (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014036362A (ja) * | 2012-08-09 | 2014-02-24 | Canon Inc | 撮像装置、その制御方法、および制御プログラム |
JP2014075726A (ja) * | 2012-10-05 | 2014-04-24 | Dainippon Printing Co Ltd | 奥行き制作支援装置、奥行き制作支援方法、およびプログラム |
JP2014192794A (ja) * | 2013-03-28 | 2014-10-06 | Dainippon Printing Co Ltd | 奥行き制作支援装置、奥行き制作方法、及びプログラム |
JP2015170307A (ja) * | 2014-03-10 | 2015-09-28 | サクサ株式会社 | 画像処理装置 |
JP2020061114A (ja) * | 2018-10-09 | 2020-04-16 | 財團法人工業技術研究院Industrial Technology Research Institute | 奥行き推定装置、奥行き推定装置を使用する自動運転車両、及び自動運転車両に使用する奥行き推定方法 |
US10699430B2 (en) | 2018-10-09 | 2020-06-30 | Industrial Technology Research Institute | Depth estimation apparatus, autonomous vehicle using the same, and depth estimation method thereof |
WO2024057902A1 (ja) * | 2022-09-12 | 2024-03-21 | ソニーグループ株式会社 | 情報処理装置および方法、並びにプログラム |
Also Published As
Publication number | Publication date |
---|---|
EP2635034A1 (en) | 2013-09-04 |
US20120293627A1 (en) | 2012-11-22 |
JP5887267B2 (ja) | 2016-03-16 |
CN102687515B (zh) | 2015-07-15 |
EP2635034A4 (en) | 2014-04-23 |
JPWO2012056686A1 (ja) | 2014-03-20 |
US9270970B2 (en) | 2016-02-23 |
CN102687515A (zh) | 2012-09-19 |
EP2635034B1 (en) | 2014-09-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
JP5887267B2 (ja) | 3次元画像補間装置、3次元撮像装置および3次元画像補間方法 | |
JP5942195B2 (ja) | 3次元画像処理装置、3次元撮像装置および3次元画像処理方法 | |
CN107430782B (zh) | 用于利用深度信息的全视差压缩光场合成的方法 | |
JP5156837B2 (ja) | 領域ベースのフィルタリングを使用する奥行マップ抽出のためのシステムおよび方法 | |
Terzić et al. | Methods for reducing visual discomfort in stereoscopic 3D: A review | |
US9076267B2 (en) | Image coding device, integrated circuit thereof, and image coding method | |
US9049423B2 (en) | Zero disparity plane for feedback-based three-dimensional video | |
US20130335535A1 (en) | Digital 3d camera using periodic illumination | |
JP2011166264A (ja) | 画像処理装置、撮像装置、および画像処理方法、並びにプログラム | |
JP5755571B2 (ja) | 仮想視点画像生成装置、仮想視点画像生成方法、制御プログラム、記録媒体、および立体表示装置 | |
CN106170086B (zh) | 绘制三维图像的方法及其装置、系统 | |
TW201225635A (en) | Image processing device and method, and stereoscopic image display device | |
JP2016225811A (ja) | 画像処理装置、画像処理方法およびプログラム | |
Jung | A modified model of the just noticeable depth difference and its application to depth sensation enhancement | |
KR20110025083A (ko) | 입체 영상 시스템에서 입체 영상 디스플레이 장치 및 방법 | |
JP5741353B2 (ja) | 画像処理システム、画像処理方法および画像処理プログラム | |
JP2015005200A (ja) | 情報処理装置、情報処理システム、情報処理方法、プログラムおよび記憶媒体 | |
JP2013242378A (ja) | 撮像装置、表示方法、およびプログラム | |
Fatima et al. | Quality assessment of 3D synthesized images based on structural and textural distortion |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180005126.2 Country of ref document: CN |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012517976 Country of ref document: JP |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11835831 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2011835831 Country of ref document: EP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13519158 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |