JP2003209858A - Stereoscopic image generating method and recording medium - Google Patents

Stereoscopic image generating method and recording medium

Info

Publication number
JP2003209858A
Application number
JP2002008484A
Authority
JP (Japan)
Prior art keywords
image, parallax, value, images, viewpoint
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Other languages
Japanese (ja)
Inventor
Hidetoshi Tsubaki (秀敏 椿)
Original Assignee
Canon Inc (キヤノン株式会社)
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Application filed by Canon Inc
Priority to JP2002008484A
Abstract

(57) [Summary] [Problem] To provide a method of generating an easy-to-view stereoscopic image. [Solution] In generating the virtual viewpoint images that form a stereoscopic image, the depth amount or parallax value is adjusted, thereby adjusting the parallax between the generated virtual viewpoint images. For an input parallax image capturing a complex scene, a specific region in the image is selected using various segmentation techniques or input from the user through a GUI, and only the parallax value or depth amount of that specific region is adjusted, so that the parallax values and depth amounts of pixels in unintended regions are not changed.

Description

Description: BACKGROUND OF THE INVENTION

1. Field of the Invention
The present invention relates to an image processing method for generating a stereoscopic image of a scene or subject, to a method for generating such a stereoscopic image, and to a three-dimensional photo print system using the image processing method.

2. Description of the Related Art
Conventional methods of forming a three-dimensional image include multi-view stereoscopic images such as integral photography (IP), lenticular-sheet three-dimensional images, and holographic stereograms (Takanori Okoshi, 'Three-Dimensional Image Engineering', Sangyo Tosho, 1972). Creating such a multi-view stereoscopic image requires images from multiple viewpoints, that is, many image pairs with parallax between them. Each parallax image pair is observed independently by the observer's left and right eyes, which enables stereoscopic viewing. The relationship between the amount of parallax between the images observed by the left and right eyes and the amounts of protrusion from and sinking behind the display surface is shown in FIG. 3. In FIG. 3, s10 is the observer's right eye, s11 is the left eye, s12 is the baseline length K, s13 is the viewing distance D from the print surface to the viewpoint, s14 is the display surface, s15 is the protrusion amount Z_f, and s16 is the sinking amount Z_b. The circle (○) marks an object imaged with no parallax, and the triangle (△) marks an object imaged with a certain amount of parallax between the parallax images; R denotes the image selectively incident on the right eye s10, and L the image incident on the left eye s11.
When viewed with both eyes, the point at which the left and right images fuse into one determines the perceived depth. Let D be the observation distance from the print surface to the viewpoint, K the baseline length between the eyes, and x_f the parallax amount on the print surface; then, from the geometry of FIG. 3, the protrusion amount Z_f (s15) is

  Z_f = D · x_f / (K + x_f)   (1)

Conversely, when the image for the left eye (L) and the image for the right eye (R) are displaced in the opposite (uncrossed) direction, the fusion point lies behind the print surface; with parallax amount x_b on the print surface, the sinking amount Z_b (s16) is

  Z_b = D · x_b / (K − x_b)   (2)

In other words, the physical pop-out and sinking amounts of a 3D print due to binocular parallax are determined simply by the displacement (parallax) of the subject between the parallax images on the print surface. The parallax (displacement) between a parallax image pair on the print surface and the parallax value (pixel shift amount) between the pair in the image data can easily be converted into each other from geometric relationships involving the display size (magnification) of the parallax images on the print, the data size (number of pixels), the observation distance, the baseline length, and so on.
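Equations (1) and (2) and the pixel-to-print conversion can be sketched numerically as follows; the function and parameter names are ours, not the patent's, and consistent length units are assumed:

```python
def protrusion(parallax_x_f, baseline_k, viewing_dist_d):
    """Pop-out distance Z_f in front of the print for crossed parallax x_f
    on the print surface (equation (1)). Units must be consistent."""
    return viewing_dist_d * parallax_x_f / (baseline_k + parallax_x_f)

def sinking(parallax_x_b, baseline_k, viewing_dist_d):
    """Sink-in distance Z_b behind the print for uncrossed parallax x_b
    (equation (2)); valid while x_b < K."""
    return viewing_dist_d * parallax_x_b / (baseline_k - parallax_x_b)

def pixels_to_print_parallax(shift_px, print_width_mm, data_width_px):
    """Convert a pixel shift in the image data to millimetres of parallax
    on the print surface via the display magnification."""
    return shift_px * print_width_mm / data_width_px

# Example: 65 mm eye baseline, 400 mm viewing distance, 2 mm crossed parallax.
z_f = protrusion(2.0, 65.0, 400.0)   # about 11.9 mm in front of the print
```

Note how the protrusion saturates as x_f grows, while the sinking amount diverges as x_b approaches the baseline K, which is why the sinking side of the parallax budget is usually the tighter constraint.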
Conventionally, multi-lens cameras or ordinary cameras on special tripods have been used to obtain multi-viewpoint images. In methods using such equipment, the stereoscopic effect and the amounts of protrusion and sinking were fixed by the positions of the camera group and the subject at the moment of shooting. The design of the stereoscopic image at shooting time was therefore important.
Guidelines for designing a stereoscopic image at shooting time are: adjust the baseline length of the cameras so that the maximum protrusion amount and maximum sinking amount fall within a predetermined maximum parallax range, and adjust the convergence angle of the cameras so that the gaze point (the zero-parallax position) is set on the main subject.

The maximum parallax range is, first of all, the range that can be fused by the human visual system; outside it, the blurring of the image caused by crosstalk is felt. It is therefore an allowable parallax range determined by the tolerance within which observers do not perceive the quality of the stereoscopic image as degraded. Empirical values for the fusing range are suggested in a paper on 3D-HDTV ('Design of 3D Video', Noriaki Kumada, Broadcasting Technology, 1992.11). However, for lenticular prints, in which the viewpoint position cannot be constrained because of the lens shape accuracy and similar factors, the effects of crosstalk sometimes make it necessary to limit the parallax amount (the protrusion and sinking amounts) empirically to a narrower range. Here, the maximum and minimum values of this range are defined as the maximum parallax range and the minimum parallax range.

The reason for setting the gaze point of the cameras on the main subject is that a mismatch between the focal adjustment position of the eye's lens and the depth of the displayed image induces eye strain and makes the image hard to see; by minimizing the parallax amount (protrusion and sinking amounts) of the main subject in the image, a stereoscopic image that causes little fatigue and is easy to view can, empirically, be generated.
In recent years, with the development of computer technology, approaches have been proposed that generate images at virtual viewpoints based on a small number of real images. Methods for generating a virtual viewpoint image from real images fall into two classes: Model Based Rendering methods, which generate an explicit three-dimensional geometric model of the subject, and Image Based Rendering methods, which do not. In a Model Based Rendering method, once a three-dimensional geometric model of the scene has been acquired, computer graphics techniques allow the subject image to be rendered from any viewpoint. However, obtaining a complete three-dimensional geometric model of the real world is very difficult, so it is hard to generate highly realistic virtual viewpoint images. Image Based Rendering methods, on the other hand, can generate highly realistic virtual viewpoint images from captured two-dimensional images. View morphing (Steven M. Seitz, "View Morphing", SIGGRAPH, pp. 21-31, 1996), one such method, transforms images containing the same object so that the lines of sight are perpendicular to the baseline between the cameras and parallel to each other; then, without explicitly generating three-dimensional geometric models of the objects in the scene, it uses pixel correspondence information, or depth information calculated from it, to generate an image at an arbitrary viewpoint on the straight line connecting the camera viewpoints, with smooth, distortion-free changes as if a model existed. Since realistic multi-viewpoint images can be obtained in this way, there is the advantage that high-quality stereoscopic images composed of multi-viewpoint images can be obtained.
Methods that use these computer technologies to generate images at virtual viewpoints from a small number of real images have the advantage that the stereoscopic effect between the multi-viewpoint images can easily be adjusted even after the input parallax images have been captured. For example, a Model Based Rendering method obtains a three-dimensional geometric model of the subject scene, so computer graphics techniques allow the subject image to be observed easily from any viewpoint; moreover, because the model can easily be edited, it is easy to customize the scene, for example by moving individual objects in the depth direction (changing their amounts of protrusion from and sinking behind the display surface). In reality, however, a complete three-dimensional geometric model is very difficult to obtain, making it hard to generate realistic virtual viewpoint images; and without a three-dimensional geometric model for each object in the scene, adjusting the amounts of protrusion and sinking of individual objects with respect to the display surface is difficult.
On the other hand, among recent techniques for generating virtual viewpoint images, the method using View Morphing makes it easy to observe the photographed scene from any viewpoint on the straight line connecting the camera viewpoints; but since it does not generate three-dimensional geometric models of the objects, and thus has no model for each object in the scene, it was difficult to customize the scene or to adjust the amounts of protrusion and sinking of objects with respect to the display surface. Consequently, it was difficult with this method to satisfy the stereoscopic image design guidelines, namely to take effective viewpoints that keep the maximum protrusion amount and maximum sinking amount within a predetermined maximum parallax range while simultaneously setting the parallax of the main subject position in the image to zero, and so to create an easy-to-view stereoscopic image while producing the strongest possible stereoscopic effect. That is, while the View Morphing method has the advantage of acquiring realistic multi-view images without acquiring three-dimensional geometric models of the objects in the scene, it was difficult to adjust a stereoscopic image composed of those multi-view images to a state that is easy to view and free of fatigue.

[Problems to be Solved by the Invention]
An object of the present invention is to provide a stereoscopic image generation method that adjusts the amounts of protrusion and sinking of a stereoscopic image and thereby easily realizes a 3D display that gives a strong stereoscopic effect at display time while causing little eye fatigue and little discomfort even when the observer deviates from the optimal viewpoint position. A further object is to provide a stereoscopic image generation method that, when adjusting the amounts of protrusion and sinking for a parallax image of a complex scene, easily realizes such a display without distorting image regions that are not intended to be changed.

[Means for Solving the Problems]
In generating the virtual viewpoint images used to form a stereoscopic image, the depth amount or the parallax value is adjusted, thereby adjusting the parallax between the generated virtual viewpoint images. For an input parallax image capturing a complex scene, a specific region in the image is selected using various segmentation methods or input from the user through a GUI, and only the parallax value or depth amount of that specific region is adjusted, so that the parallax values and depth amounts of pixels in unintended regions are not changed.

DETAILED DESCRIPTION OF THE PREFERRED EMBODIMENTS

(First Embodiment)
A virtual viewpoint image is generated at a virtual viewpoint on (or off) the straight line connecting the viewpoints of an input parallax image pair taken from two viewpoints. A multi-viewpoint image used to create a stereoscopic image is then acquired, composed either of an input image set as the reference image among the input parallax images together with the virtual viewpoint images, or of virtual viewpoint images only, and the stereoscopic image is synthesized from it. In this stereoscopic image creation method, before the virtual viewpoint images are generated, the values of the depth image or parallax value distribution image, which is obtained from the input parallax images themselves or from separate distance image acquisition means and serves as an index of the parallax in virtual viewpoint image generation, are adjusted; in this way the amount of parallax between the presented multi-viewpoint images is adjusted, and an easy-to-observe stereoscopic image is generated by the image generation method using the present invention.

The virtual viewpoint image generation method and stereoscopic image generation method of this embodiment are based on the image processing device, 3D photo print system, image processing method, stereoscopic print method, and processing program described in an earlier Japanese patent application. FIG. 1 shows the algorithm of the stereoscopic image synthesis method of the first embodiment using the present invention. In the figure, s1 is the image data acquisition unit, s2 the corresponding point extraction unit, s3 the parallax value distribution image creation unit, s4 the image generation parameter acquisition unit, s5 the depth sense adjustment unit, s6 the virtual viewpoint image sequence generation unit, s7 the three-dimensional stripe image generation unit, and s8 the printing unit. The present invention can run as an image processing program on a general-purpose PC or a dedicated image processing workstation, or can be embedded in hardware that includes image processing functions. The operation of this configuration is described below.
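The s1-s8 flow of FIG. 1 can be sketched as a pipeline skeleton. Every function here is a trivial illustrative stand-in (names, signatures, and return values are ours, not the patent's); the real processing of each stage is described in the sections that follow:

```python
# Each stage is a trivial stand-in so the pipeline runs end to end.
def acquire_and_rectify(left, right):            # s1: capture + keystone correction
    return left, right

def extract_corresponding_points(left, right):   # s2: template matching
    return []

def build_disparity_distribution(matches, left): # s3: dense disparity map
    return [[0.0 for _ in row] for row in left]

def acquire_generation_parameters():             # s4: viewpoint/observation params
    return {"n_views": 4, "d_max": 5.0, "d_min": -5.0}

def adjust_depth_sense(disparity, params):       # s5: offset + nonlinear remap
    return disparity, 0.5

def generate_virtual_views(left, right, disparity, r, n):  # s6: forward mapping
    return [left] * n

def compose_stripe_image(seq):                   # s7: interleave the views
    return seq

def create_stereoscopic_print(left, right):
    left, right = acquire_and_rectify(left, right)
    matches = extract_corresponding_points(left, right)
    disparity = build_disparity_distribution(matches, left)
    params = acquire_generation_parameters()
    disparity, r = adjust_depth_sense(disparity, params)
    seq = generate_virtual_views(left, right, disparity, r, params["n_views"])
    return compose_stripe_image(seq)             # s8 would print the result

views = create_stereoscopic_print([[1, 2]], [[1, 2]])
```

The point of the sketch is only the data flow: parallax data is adjusted in s5 before any virtual view is synthesized in s6, which is the core of the method.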
In the data acquisition unit s1, two captured image data are taken in. Shooting is done with two digital cameras attached almost in parallel along the horizontal direction of the imaging surface. Alternatively, the two digital cameras may focus on the main subject and shoot with convergence; two cameras may be selected arbitrarily from a camera array consisting of multiple digital cameras; a compound-eye camera that has two or more optical and imaging systems in one housing and can shoot from multiple viewpoints simultaneously may be used; or a stereo adapter, an optical system of prisms or mirrors that splits the optical path and projects images from two viewpoints onto the single imaging plane of one camera, may be mounted to capture the input parallax image.

When there is convergence between the input images, distortion correction processing is performed in the data acquisition unit s1. The distortion correction process corrects the keystone (trapezoidal) distortion generated when shooting stereo images with convergence; this processing converts the pair into a parallel-viewpoint image pair within a plane perpendicular to the shooting optical axes. Using the shooting parameters of each camera, or the fundamental matrix obtained from the captured images themselves by general camera calibration techniques, keystone distortion correction is performed around the image center of each of the left and right image data so that the virtual shooting planes after correction are parallel. This is necessary because, when there is a predetermined convergence angle between the left and right optical paths, each image inevitably has trapezoidal distortion. The trapezoidal distortion correction process is a well-known geometric transformation using a three-dimensional rotation matrix. In the present embodiment, image data are processed as two-dimensional arrays of digital values whose pixel values represent luminance levels.
In the corresponding point extraction unit s2, the point-to-point (pixel-to-pixel) correspondences of the same subject parts between the input parallax images are extracted. Feature-based and area-based methods are the two well-known representative classes of methods for extracting the correspondence of each pixel between images. In this embodiment, the corresponding point extraction means is described using one area-based method, a simple template matching method using the sum of differences.

First, of the two image data, the image whose viewpoint position corresponds to the left side is defined as the left image, and the image whose viewpoint position corresponds to the right side as the right image. Template matching is performed with the left image as the reference image, and corresponding points are found based on the left image. A partial area of a specified size centered on a specified point in the left image is cut out as the template, and template matching is performed at predetermined pixel intervals. Corresponding points could be obtained for all points in the image, but if the image size is large, the processing time becomes huge.

Next, a predetermined corresponding point search area is set in the right image, and for each pixel of interest in the search area, the correlation value between the template and an image area of the same size centered on that pixel is calculated. Because an almost parallel stereo image pair has been obtained by the distortion correction in step s1, the search area is set as a predetermined range in the horizontal direction in the right image, centered on the point of interest in the left image. In addition, when prior information about depth, such as the distance range of the subject, is given in advance, it is desirable to limit the horizontal search range as much as possible to prevent wrong correspondences.

The result of the correlation operation is obtained as a one-dimensional correlation value distribution of the same size as the predetermined corresponding point search area set in the right image. Analyzing this distribution determines the corresponding point position: the position where the sum of differences is smallest. However, when the sum of differences at the corresponding point position is larger than a predetermined value, when the difference between it and the second smallest sum of differences is smaller than a predetermined value, or when the change in the sum of differences near the corresponding point position is smaller than a predetermined value, the corresponding point extraction is considered unreliable and the point is treated as uncorresponded.

Further, for each corresponding point whose correspondence has been determined, a template is cut out around the corresponding point in the right image, a corresponding point search area is similarly set in the left image, and a horizontal search is performed. Whether the result of this template matching lies near the position of the original point in the left image is judged; if it does not, the correspondence is likely wrong, and uncorrespondence information is attached to that corresponding point result. The above calculation is performed at predetermined intervals over the pixels of the left image. By these processes, stereo corresponding points are determined by template matching based on the left image. FIG. 4 shows a diagram of corresponding point extraction based on the left image: the left image is the reference image, the dot in the figure is the point of interest, the area surrounded by the square around it is the template, and the corresponding point is searched for in the right image.
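The area-based matching above can be sketched as follows: a minimal sum-of-absolute-differences template matcher with the reliability tests mentioned in the text. The function name and the concrete threshold values are our assumptions:

```python
def match_point(left, right, x, y, half=2, search=10):
    """SAD template matching for the left-image point (x, y): search along
    the horizontal direction of the rectified right image and return
    (disparity, sad), or None when the match is judged unreliable."""
    h, w = len(left), len(left[0])
    if y - half < 0 or y + half >= h or x - half < 0 or x + half >= w:
        return None
    best, second = None, None
    for d in range(-search, search + 1):
        if x + d - half < 0 or x + d + half >= w:
            continue                      # candidate window leaves the image
        sad = 0
        for dy in range(-half, half + 1):
            for dx in range(-half, half + 1):
                sad += abs(left[y + dy][x + dx] - right[y + dy][x + d + dx])
        if best is None or sad < best[1]:
            best, second = (d, sad), best
        elif second is None or sad < second[1]:
            second = (d, sad)
    # Reliability tests from the text (threshold values are ours): reject a
    # match whose SAD is too large or too close to the second-best candidate.
    if best is None or best[1] > 50 or (second is not None and second[1] - best[1] < 2):
        return None
    return best
```

The left-right consistency check described in the text would call the same routine with the roles of the images swapped and confirm the two disparities agree.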
In the parallax value distribution image generation unit s3, a parallax value distribution image or a depth image is generated. From the difference of the coordinate values of the corresponding points determined in s2, the parallax value for each target pixel is obtained. Subsequently, from the parallax values obtained so far, parallax is found for the points (pixels) for which no corresponding point was originally determined and for the points judged uncorresponded in the corresponding point extraction process. Interpolation using the previously calculated parallax information in the neighborhood yields a dense parallax value distribution image. In this embodiment, the interpolation obtains the parallax of such points by equation (3), a weighted average over the corresponding points whose parallax values were established in the extraction process, with a distance parameter as the weight.

[Equation 3]

Here n is an arbitrary parameter that affects the interpolation result. If the value of n is too small, the obtained parallax is close to the average over the entire image, and the parallax distribution becomes uniform. If the value of n is too large, the result is affected significantly by the nearest corresponding points. Considering calculation time as well, about n = 1 is desirable. By the above processing, a dense parallax distribution image for the left image is obtained. Further, if a dense parallax distribution image for the right image is needed in the later step s6, w_i and d_i are replaced as in equation (4), and a dense parallax distribution image for the right image is obtained.

[Equation 4]

As the interpolation method, a weighted average with a distance parameter as the weight was used here, but the pixel values of the RGB channels may also be used as parameters in addition to distance, increasing the weights of pixels whose spatial positions and pixel values are both similar. Alternatively, bilinear interpolation, spline interpolation, or the like may be used. Also, if the camera parameters are obtained, for example by calibration methods, the parallax value distribution image between the parallax images can easily be converted into a depth image using well-known geometric relationships. The parallax value distribution image may be replaced by the depth image for the processing from s4 onward; in that case the processing from s4 onward can be performed in the same way, reading the parallax value distribution image as a depth image, the parallax values as depth values, and the reference and threshold values as reference and threshold values for depth.
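The distance-weighted interpolation described by equation (3) can be sketched as below; the function name and the tuple format of the matched points are ours, and the weight is taken as the inverse distance raised to the power n (n = 1 by default, as the text recommends):

```python
import math

def interpolate_disparity(x, y, known, n=1.0):
    """Fill in the disparity at (x, y) from sparsely matched points by a
    distance-weighted average, in the spirit of equation (3).
    `known` is a list of (xi, yi, di) tuples with established disparities;
    the weight of each is 1 / distance**n."""
    num, den = 0.0, 0.0
    for xi, yi, di in known:
        dist = math.hypot(x - xi, y - yi)
        if dist == 0.0:
            return di                 # the point already has a disparity
        w = 1.0 / dist ** n
        num += w * di
        den += w
    return num / den
```

As the text notes, small n pulls every point toward the global average (a flat disparity map), while large n makes each point follow only its nearest neighbors.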
In the image generation parameter acquisition unit s4, the number of images in the virtual image sequence generated by the virtual image sequence generation unit s6 and the parameters related to the viewpoint positions or observation position are acquired. In this embodiment, this requires the viewpoint or observation position, information on how many images in the sequence are skipped when a pair is viewed as a parallax image pair, and the maximum protrusion amount and maximum sinking amount.

FIG. 8 shows the relationship between the observation position and the number of skipped images: K is the baseline length, D the observation distance, f the focal length of the lenticular lens, and Δp the print pitch of one virtual viewpoint image; the number of skipped images is determined from these. Also shown is an example of selecting a parallax image pair from the image sequence in the case of skipping one image. When the observation position is close to the print, adjacent sequence images are observed as a parallax image pair; farther away, images are observed skipping images within the virtual image sequence. The number of skipped images can be determined geometrically from the observation distance, the lenticular focal length, the print pitch per image, and the baseline length.

The viewpoint positions of the images generated by the virtual viewpoint image sequence generation unit s6, described later, are determined based on the observation position so that the parallax generated between any parallax image pair that can be observed falls within the predetermined parallax amount limited by the maximum pop-out parallax value and the maximum sunk parallax value. If the parallax is too small, the stereoscopic effect when observing the stereoscopic image is impaired; conversely, if the parallax amount becomes too large, the observed stereoscopic image becomes unclear due to crosstalk. Therefore, in this parameter acquisition unit, the maximum pop-out parallax value and maximum sunk parallax value between the viewpoints of the virtual viewpoint images are calculated from the maximum parallax range, that is, the maximum protrusion and sinking amounts, and the geometric relationship, and are set as reference values between viewpoints.

Depending on the number of skipped images, the combination of parallax image pairs selected from the virtual image sequence changes; accordingly, the maximum possible parallax value D_max and minimum possible parallax value D_min between adjacent viewpoints in the virtual image sequence applied in s5, calculated from the maximum pop-out parallax value D'_max and maximum sunk parallax value D'_min, change in the relationship [Mathematical formula-see original document]. Each virtual viewpoint position is determined so that the viewpoints are arranged at equal intervals based on either the left image or right image viewpoint position. This is because, if the viewpoint positions of the image sequence are lined up at equal intervals, natural image changes and stable stereoscopic viewing are obtained, whereas once the stereoscopic effect is adjusted, both input left and right parallax images can no longer be included in the image sequence.
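The between-viewpoint reference parallax values can be sketched by inverting equations (1) and (2). Dividing the observed limits by (number of skipped images + 1) is our reading of the stated D'_max / D_max relation, and all names are ours:

```python
def print_parallax_from_depth(z, baseline_k, viewing_dist_d, protruding=True):
    """Invert equations (1)/(2): the print-surface parallax that produces a
    pop-out (protruding=True) or sink-in (protruding=False) of depth z."""
    if protruding:
        return baseline_k * z / (viewing_dist_d - z)
    return baseline_k * z / (viewing_dist_d + z)

def adjacent_view_limits(z_f_max, z_b_max, baseline_k, viewing_dist_d, skipped):
    """Reference parallax values between adjacent virtual viewpoints.  A pair
    observed with `skipped` images in between accumulates (skipped + 1) times
    the adjacent-view parallax, so the observed limits are scaled down by
    that factor -- this scaling is our assumption, not the patent's formula.
    Sign convention: negative parallax means sinking behind the print."""
    d_max = print_parallax_from_depth(z_f_max, baseline_k, viewing_dist_d, True)
    d_min = -print_parallax_from_depth(z_b_max, baseline_k, viewing_dist_d, False)
    return d_max / (skipped + 1), d_min / (skipped + 1)
```

These per-adjacent-viewpoint limits are what the depth sense adjustment in s5 treats as D_max and D_min.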
In the depth sense adjustment unit s5, the parallax values of the parallax value distribution image are adjusted so that the stereoscopic image becomes easy to view, and along with this, the viewpoint positions are set using the parameters related to the viewpoint or observation position set in s4. The design conditions are to take effective viewpoints such that the maximum protrusion amount and maximum sinking amount fall within the predetermined maximum parallax range, and to set the parallax of the main subject position in the image to 0; the depth value or parallax value distribution is adjusted so as to satisfy them. If satisfying the first condition is the main purpose, it is desirable to use the maximum parallax range most effectively by taking a viewpoint r_x at which the maximum disparity d_max and minimum disparity d_min observed between the left and right images between viewpoints satisfy d_max ≈ D_max and d_min ≈ D_min.

From this, an offset value sh is obtained and offset addition is performed over the entire parallax value distribution image to adjust the range of the parallax values, so that the maximum depth range defined by the maximum and minimum possible parallax values is used effectively; the viewpoint position is then determined so that the parallax amount of the main subject is, in principle, as close to 0 as possible. First, linear adjustments such as offset addition are made to the parallax values in the parallax value distribution image, and then the viewpoint position is set.

Let the maximum parallax value in the parallax value distribution image be d_max and the minimum parallax value d_min, and let, for example, the left image of the input parallax images be the reference image (start point image) of the virtual viewpoint image sequence generated in s6. Let r be the ratio of each image to the parallax between the input left and right stereo image pair, that is, the viewpoint position parameter: r = 0 at the viewpoint of the left input image and r = 1 at the viewpoint of the right input image. Substituting the maximum and minimum parallax values for a given viewpoint position r determines the maximum and minimum parallax values in the x direction between the left input image, which is the reference image, and the adjacent virtual image. [Mathematical formula-see original document]

The viewpoint position r must be set within the range that does not exceed the maximum sinking amount and maximum protrusion amount; to use the parallax range effectively, the aforementioned offset must be added to the parallax values of the entire parallax distribution image. The offset sh and the viewpoint position r adjacent to the reference image are calculated from the following equations. [Mathematical formula-see original document] When satisfying the second condition is the main purpose, with d_0 the representative parallax or depth value of the part including the main subject, the offset sh is obtained accordingly, and the viewpoint position parameter of the virtual viewpoint image adjacent to the reference image is determined by the smaller of the candidate values of r.

However, it is difficult for the offset-addition processing to satisfy both conditions simultaneously while effectively using the maximum parallax value range. Therefore, by adjusting the parallax values of the parallax value distribution image non-linearly, the parallax values are changed to a configuration for a multi-view stereoscopic image that satisfies the stereoscopic image design conditions and makes effective use of the parallax range. FIG. 5 shows how the depth values of the image are adjusted non-linearly.
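Since equations (5)-(8) are not reproduced in this text, the following is our reconstruction of the linear adjustment from its stated goals: the offset sh shifts the representative parallax of the main subject to zero, and the viewpoint-position ratio r is the smaller of the two candidates, so that the scaled range stays within [D_min, D_max]:

```python
def offset_and_ratio(d_max, d_min, d_subject, D_max, D_min):
    """Reconstruction (not the patent's exact equations (5)-(8)) of the
    linear adjustment in s5.  d_max/d_min: input disparity range between the
    left and right images; d_subject: representative parallax of the main
    subject; D_max/D_min: allowed pop-out/sink parallax between adjacent
    virtual viewpoints (D_min is negative).  Returns (sh, r): the offset
    that zeroes the main subject's parallax, and the largest viewpoint
    ratio that keeps r * (d + sh) inside [D_min, D_max] for all d."""
    sh = -d_subject
    candidates = []
    if d_max + sh > 0:
        candidates.append(D_max / (d_max + sh))
    if d_min + sh < 0:
        candidates.append(D_min / (d_min + sh))
    return sh, min(candidates)
```

Taking the smaller candidate r mirrors the text's statement that the viewpoint position parameter "is determined by the smaller r".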
FIG. 5 illustrates the processing on an image for which offset addition over the entire parallax value distribution image has been performed with priority given to the first guideline. The horizontal axis is the parallax value distribution before correction, and the vertical axis is the corrected parallax value. The black circle is the representative parallax value of the main subject. The dotted line at the center indicates the position of parallax 0 in the corrected parallax distribution; the upper dotted line is the position of the maximum possible parallax in the corrected parallax distribution, and the lower dotted line is the position of the minimum possible parallax. In this figure, the representative parallax value of the main subject is negative. Therefore, by pulling the black circle upward on the control curve to the dotted-line position in the figure, the corrected representative value of the main subject can be brought to 0, and the above conditions can be satisfied. Because the shape of the control curve fits smoothly and the parallax values change smoothly, distortion in the virtual viewpoint image is reduced.

FIG. 6 likewise shows the processing on an image for which offset addition over the entire parallax value distribution image has been performed. As in FIG. 5, the horizontal axis is the distribution of parallax or depth values before correction, and the vertical axis is the corrected parallax value. The black circle represents the typical parallax value of the main subject; the upper dotted line indicates the position of the maximum possible parallax in the corrected parallax distribution, and the lower dotted line the position of the minimum possible parallax. The black dot at the upper right indicates the maximum parallax value in the parallax value image, and the black circle at the lower left indicates the minimum parallax value in the parallax value image. After performing this adjustment, whether the maximum and minimum parallax values in the parallax distribution image have changed is checked, and if so, the ratio r of the viewpoint position should be calculated again using equation (6).
Of parallax value distribution images or depth images in the order of dynamic adjustment
Adjustment, but only linear adjustment, only nonlinear adjustment,
Or non-linear adjustment, then linear adjustment
Processing may be performed. In addition to satisfying the above policy, FIG.
Performing the operation shown in
Obviously, you can also get other effects that emphasize
In the virtual viewpoint image sequence generation unit of s6, virtual viewpoint images are generated at each viewpoint position using the input parallax images, the corrected parallax value distribution image obtained by the depth sense adjustment unit of s5, and, for each viewpoint position, the ratio r of that position to the parallax of the input left and right stereo image pair. First, a new viewpoint image is generated from the left image by forward mapping using the pixels of the left image. That is, the position (x_N, y_N) in the new viewpoint image to which each pixel of the left image is mapped is obtained by the following equation (2) from the parallax d at the pixel position (x, y), the ratio r representing the viewpoint position, the offset sh added to the parallax values of the entire image in the corrected parallax value distribution image, and the parallax value change amount Δd added by the nonlinear adjustment of the depth sense adjustment unit of s5. (Equation 11) Similarly, a new viewpoint image is generated from the right image by forward mapping using the pixels of the right image. That is, the position (x_N, y_N) in the new viewpoint image to which each pixel of the right image is mapped is obtained, in the same manner as for the left image, by the following equation (2′) from the parallax d at the pixel position (x, y) of the right image, the ratio r representing the viewpoint position, the offset sh, and the parallax value change amount Δd. (Equation 12) The pixel value at the pixel position (x, y) of the left image is then mapped to (x_N, y_N) in the new viewpoint image.
When, because of occlusion caused by the viewpoint change, pixels at different positions in the input parallax image are mapped to the same pixel and the parallax values of those pixels differ, only the pixel value of the pixel with the larger parallax value is used. If the parallax values are almost equal, the average of those pixel values is used. Similarly, the pixel value at the pixel position (x, y) of the right image is mapped to (x_N, y_N) in the new viewpoint image. As with the left image, when pixels at different positions in the input parallax image are mapped to the same pixel because of occlusion caused by the viewpoint change and the parallax values of those pixels differ, only the pixel value of the pixel with the larger parallax value is used; if the parallax values are almost equal, the average of those pixel values is used. This process is repeated for all pixels of the left and right images. The new viewpoint image from the left image and the new viewpoint image from the right image are then synthesized. For pixels to which a pixel value is assigned only from the left image, the pixel value obtained from the left image is used; for pixels to which a pixel value is assigned only from the right image, the pixel value obtained from the right image is used.
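As a concrete sketch of this forward mapping and occlusion rule: we read the shift of equations (2)/(2′) as x_N = x + r·(d + sh + Δd), which is our interpretation since the equations themselves are reproduced only as figures; the function name, parameters, and the eps tolerance for "almost equal" parallaxes are illustrative.

```python
def forward_map(image, disparity, r, sh, delta_d, eps=0.5):
    """Forward-map pixels of one input image to a virtual viewpoint at
    ratio r.  When two source pixels land on the same target position,
    the one with the larger parallax (nearer object) wins; parallaxes
    within eps of each other are averaged, per the text."""
    h, w = len(image), len(image[0])
    out = [[None] * w for _ in range(h)]       # None marks holes
    best_d = [[None] * w for _ in range(h)]    # parallax of current winner
    for y in range(h):
        for x in range(w):
            d = disparity[y][x] + sh + delta_d[y][x]
            xn = int(round(x + r * d))
            if not (0 <= xn < w):
                continue
            prev = best_d[y][xn]
            if prev is None or d > prev + eps:
                out[y][xn] = image[y][x]       # nearer pixel replaces
                best_d[y][xn] = d
            elif abs(d - prev) <= eps:
                out[y][xn] = (out[y][xn] + image[y][x]) / 2.0
    return out, best_d
```

Running this once with the left image and once with the right image (with the sign of the shift reflecting r = 0 at the left viewpoint and r = 1 at the right) yields the two per-view images that are synthesized next.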
For pixels to which pixel values are assigned from both the left and right images, when the viewpoint position ratio is near r = 0 or r < 0, the pixel value assigned from the left image is adopted, and when r = 1 or r > 1, the pixel value assigned from the right image is adopted. This is because, when the new viewpoint image is very close to a viewpoint position of the input parallax images, or at an extrapolated viewpoint position, only pixel values assigned from the nearer input parallax image are adopted. When the ratio representing the viewpoint position satisfies 0 < r < 1, a weighted average according to the viewpoint position is computed between the pixels assigned from the left and right images by the following equation (3) to obtain the pixel value of the composite image. (Equation 13) Here n is a predetermined parameter; if the value of n is too large, the resulting pixel value is strongly affected by the pixel value of the nearer viewpoint. Considering the computation time, n = 1 or so is desirable. Fill-in processing is then performed for pixels to which no pixel value was assigned from either the left or right image. This filling process searches for valid pixels within a predetermined distance of the pixel to be filled and assigns a weighted average of their values, with distance as the weighting parameter. If no valid pixel exists, the search range is expanded and the search repeated. With the above processing, a new viewpoint image in which all pixels are valid is generated. This process is repeated for the number of viewpoints to obtain the multi-viewpoint image sequence.
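The blending and fill-in steps above can be sketched as follows. This is a simplified reading of equation (3), which is reproduced only as a figure: we assume viewpoint weights (1−r)^n and r^n, the hole search is one-dimensional here, and all names are illustrative.

```python
def blend_views(left_mapped, right_mapped, r, n=1):
    """Combine the two forward-mapped images for 0 < r < 1.  Where both
    views supply a pixel, a weighted average biased toward the nearer
    viewpoint is used; where only one supplies it, that value is used."""
    w_l, w_r = (1.0 - r) ** n, r ** n
    out = []
    for l_row, r_row in zip(left_mapped, right_mapped):
        row = []
        for lv, rv in zip(l_row, r_row):
            if lv is not None and rv is not None:
                row.append((w_l * lv + w_r * rv) / (w_l + w_r))
            elif lv is not None:
                row.append(lv)
            else:
                row.append(rv)  # may still be None -> hole
        out.append(row)
    return out

def fill_holes_1d(row):
    """Fill remaining holes from the nearest valid pixels on the same
    line, distance-weighted, widening the search radius until valid
    pixels are found (assumes the line has at least one valid pixel)."""
    filled = list(row)
    for x, v in enumerate(row):
        if v is not None:
            continue
        radius = 1
        while True:
            cands = [(abs(dx), row[x + dx])
                     for dx in range(-radius, radius + 1)
                     if dx != 0 and 0 <= x + dx < len(row)
                     and row[x + dx] is not None]
            if cands:
                wsum = sum(1.0 / d for d, _ in cands)
                filled[x] = sum(val / d for d, val in cands) / wsum
                break
            radius += 1
    return filled
```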
In the three-dimensional stripe image synthesizing unit of s7, a three-dimensional stripe image is synthesized from the multi-viewpoint image sequence. At this time, the images are synthesized so that, in accordance with the viewpoint arrangement of each image in the multi-viewpoint image sequence, the pixels at the same coordinates of the images are arranged side by side. In the composition, the image of each viewpoint is decomposed into strips, one for each vertical line, and the strips are combined in the reverse order of the viewpoint positions, one viewpoint's worth at a time. The arrangement order of the viewpoint positions is reversed because, when observing through a lenticular plate, the image is observed left-right reversed within each pitch. In the printing unit of s8, the three-dimensional stripe image is printed. Before printing, the processing results of s1 to s7 may be displayed on a display device for confirmation. When a lenticular plate is superimposed on the image printed by the above processes s1 to s8, a good stereoscopic image can be observed.
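The strip interleaving of s7 can be sketched as below; the function name is ours, and the images are represented as nested lists of pixel values for brevity.

```python
def synthesize_stripe_image(views):
    """Interleave a multi-viewpoint image sequence into a 3D stripe
    image: each image is cut into one-pixel-wide vertical strips, and
    the strips for the same source column are laid side by side in
    reverse viewpoint order (the lenticular sheet mirrors them back
    within each pitch)."""
    n = len(views)
    h, w = len(views[0]), len(views[0][0])
    out = [[None] * (w * n) for _ in range(h)]
    for y in range(h):
        for x in range(w):
            # reversed(): viewpoint order is flipped, per the text
            for k, view in enumerate(reversed(views)):
                out[y][x * n + k] = view[y][x]
    return out
```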
In this embodiment, the processing method has been described only for the case where the camera arrangement is in the horizontal direction and the input parallax image pair has parallax only in the horizontal direction. However, for an input parallax image pair taken with a vertical arrangement, a virtual viewpoint image sequence can easily be generated by applying the processing method with the arrangement changed from horizontal to vertical. The processing can therefore easily be extended to the creation of IP prints and the like having both horizontal and vertical parallax. Furthermore, besides multi-view three-dimensional prints using the lenticular sheet shown in this embodiment, the image generation method of this embodiment can also be used for various multi-view display methods that require multiple images taken from similar viewpoints, such as multi-view stereoscopic displays using lenticular or IP techniques, holograms, and stereograms. In such cases, the generated multi-viewpoint image sequence is supplied as input images to the multi-view stereoscopic display method and synthesized as a multi-view stereoscopic image by the synthesis method suited to that display method. Even when displaying and outputting in this way, the parallax between the multi-view images can be adjusted, the amounts of protrusion and sinking of objects in the image can be adjusted, and the elements of the present invention can be used to generate a stereoscopic image with improved visibility. An advantage of this method is that it does not require the large number of cameras conventionally needed to capture multi-viewpoint images. By using the elements of the present invention, in particular the adjustment of the parallax between the multi-view images shown in s5, the amounts of protrusion and sinking of objects in the stereoscopic image can be adjusted, and a three-dimensional image that is easy to view can be created.
(Second Embodiment) When the technique of the first embodiment is applied to input parallax images of complex scenes, the depth adjustment processing may change the parallax value or depth of pixels in unintended areas and distort the corresponding image regions. This embodiment prevents that problem. To this end, region division is performed using information from the input stereo images, the depth image, the parallax value distribution image, or a plurality of these images; a specific region is selected using the resulting region information; and only the depth amount or parallax value of the pixels corresponding to that region is adjusted. The depth adjustment is thus performed without changing the depth amount or parallax value of pixels in unintended regions, and this is a method of generating a stereoscopic image by adjusting the parallax between the generated virtual viewpoint images without distorting unintended image regions. FIG. 2 shows the algorithm of this embodiment. In the figure, s201 is the image data acquisition unit, s202 the corresponding point extraction unit, s203 the parallax distribution image creation unit, s204 the image generation parameter acquisition unit, s205 the region division unit, s206 the depth sense adjustment unit, s207 the virtual viewpoint image sequence generation unit, s208 the three-dimensional stripe image generation unit, and s209 the printing unit.
In this embodiment, the image data acquisition unit of s201, the corresponding point extraction unit of s202, the parallax distribution image creation unit of s203, the image generation parameter acquisition unit of s204, the virtual viewpoint image sequence generation unit of s207, the three-dimensional stripe image generation unit of s208, and the printing unit of s209 operate in the same way as the image data acquisition unit of s1, the corresponding point extraction unit of s2, the parallax distribution image creation unit of s3, the image generation parameter acquisition unit of s4, the virtual viewpoint image sequence generation unit of s6, the three-dimensional stripe image generation unit of s7, and the printing unit of s8 of the first embodiment. Their description is therefore omitted in this embodiment.
In the region division unit of s205, the input stereo images, the depth image, the parallax value distribution image, or combined information from a plurality of these images is used to divide the image into regions and select a specific region. The region division processing can use known methods, such as clustering methods like the k-means method, processing methods using multi-stage thresholds, or region growing methods (Image Analysis Handbook, Mikio Takagi and Hirohisa Shimoda, University of Tokyo Press (1991)). For application to the present invention, there is no difference between the methods as long as the method makes effective use of the above image information and does not erode the boundaries in the image that correspond to the individual objects in the scene; an arbitrary region division method may be selected. In the description of this embodiment, the results obtained by one of the simplest examples of region division methods are used. FIG. 10 shows the input parallax images, and FIG. 11 shows the label image generated as a result of applying region division to this input.
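Of the known methods named above, the k-means clustering approach can be sketched in miniature on the parallax values alone; this toy version (names and initialization ours) produces a label image like FIG. 11, whereas a practical divider would also exploit color and edge information so as not to erode object boundaries.

```python
def kmeans_labels(disparity, k=2, iters=20):
    """Tiny 1-D k-means over disparity values: cluster the values into
    k groups and return a per-pixel label image."""
    values = [d for row in disparity for d in row]
    lo, hi = min(values), max(values)
    # Spread the initial centers evenly over the disparity range.
    centers = [lo + (hi - lo) * (i + 0.5) / k for i in range(k)]
    for _ in range(iters):
        sums = [0.0] * k
        counts = [0] * k
        for v in values:
            c = min(range(k), key=lambda i: abs(v - centers[i]))
            sums[c] += v
            counts[c] += 1
        centers = [sums[i] / counts[i] if counts[i] else centers[i]
                   for i in range(k)]
    return [[min(range(k), key=lambda i: abs(d - centers[i]))
             for d in row] for row in disparity]
```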
In the depth sense adjustment unit of s206, linear adjustment such as offset addition is first performed on the entire parallax value distribution image in the same manner as in the first embodiment. Let the maximum parallax value in the parallax value distribution image be d_max and the minimum parallax value be d_min, and let the left image of the input parallax images be the reference image (starting point image) of the virtual viewpoint image sequence generated in s7. Let r be the viewpoint position parameter, that is, the ratio of each image position to the parallax between the left and right stereo image pair: r = 0 when taking the viewpoint of the left image of the input parallax images, and r = 1 when taking the viewpoint of the right image. Then, by substituting the maximum and minimum parallax values into the following equation, the parallax in the x direction generated between the left input image serving as the reference image and the virtual viewpoint image when the viewpoint position r is taken can be obtained. (Equation 14) To stay within the maximum sinking amount and maximum protrusion amount, the viewpoint position r must be set within a certain range, and to use the parallax range effectively, the aforementioned offset must be added to the parallax values of the entire parallax distribution image. The offset sh and the viewpoint position r adjacent to the reference image are calculated by the following equations. (Equation 15) When the main purpose is to satisfy this condition, letting d_0 be the representative parallax or depth value of the portion containing the main subject, the offset sh is obtained from d_0. The viewpoint position parameter of the virtual viewpoint image adjacent to the reference image is determined by the smaller of these values of r. However, processing by offset addition alone cannot simultaneously satisfy these conditions and make effective use of the parallax value range.
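The interaction between the offset and the adjacent viewpoint ratio can be sketched as follows. Equations (14) and (15) are reproduced only as figures in the original, so this encodes just one consistent reading — sh = −d_0 places the main subject at the screen surface, and r is capped so that r·(d + sh) never exceeds the protrusion and sinking limits — and all names are ours.

```python
def choose_offset_and_ratio(d_max, d_min, d_rep, d_limit_far, d_limit_near):
    """Offset so the main subject's representative parallax d_rep maps
    to zero, then take the largest viewpoint ratio r that keeps
    r*(d + sh) within the sinking limit (d_limit_far < 0) and the
    protrusion limit (d_limit_near > 0); the smaller candidate wins."""
    sh = -d_rep
    r_near = d_limit_near / (d_max + sh) if d_max + sh > 0 else float("inf")
    r_far = d_limit_far / (d_min + sh) if d_min + sh < 0 else float("inf")
    return sh, min(r_near, r_far, 1.0)
```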
Therefore, in the first embodiment, for a general image, the parallax values of the parallax value distribution image were converted nonlinearly so as to simultaneously satisfy the stereoscopic image design conditions and change the image to a depth value configuration for multi-view stereoscopic images that makes effective use of the parallax value range. However, if the nonlinear adjustment processing of the depth sense adjustment unit of s5 of the first embodiment is applied to the parallax value distribution image obtained between stereo images with the object arrangement shown in FIG. 10, the parallax values of pixels in unintended regions are changed, resulting in distortion of those regions. This is because the stereo image pair of FIG. 10 has the parallax value distribution shown in the graph of FIG. 12. In FIG. 12, A, B, C, and D are the region labels arbitrarily attached, out of the region division information obtained by the region division unit of s205, to the regions excluding the background region, and correspond to A, B, C, and D in FIG. 10. The vertical axis of the graph of FIG. 12, like the vertical axis of the graph of FIG. 5, shows the distribution of parallax values in the parallax distribution image. The rectangles labeled A, B, C, and D are the distribution ranges of the parallax values in the respective regions of the parallax image, and the black circle in each rectangle is a representative value arbitrarily selected within the divided region. Suppose the object A in FIG. 10 is the main subject. As can be seen from the graph of FIG. 12, and as in the adjustment of FIG. 5, the representative parallax value of the main subject is negative, so the parallax values of this representative point and its vicinity would be brought to 0 as was done in the adjustment of FIG. 5 of the first embodiment. However, in this stereo image, the object with label D also has pixels with parallax values near the representative point of the main subject A, so when the depth sense adjustment unit of s5 of the first embodiment performs the nonlinear adjustment processing, the parallax values of pixels in a region for which the effect of the parallax adjustment is not intended are also changed. It is thus obvious that, depending on the properties of the input parallax images, attempting the nonlinear adjustment processing of the depth sense adjustment unit of s5 of the first embodiment can change the parallax values of unintended pixels, and thereby distort unintended image regions in the virtual viewpoint images and hence in the stereoscopic image.
Therefore, in the second embodiment of the present invention, in contrast to the first embodiment, when the nonlinear adjustment of the depth amount or parallax value is performed, region division is carried out, a specific region is selected, and the depth amount or parallax value is adjusted only for the pixels corresponding to that specific region. The depth adjustment is thus performed without changing the depth amount or parallax value of pixels in unintended regions, and an easy-to-view stereoscopic image is generated by adjusting the parallax between the generated virtual viewpoint images without distorting unintended image areas.
In the nonlinear adjustment processing of the depth sense adjustment unit of s206 in the second embodiment, the desired adjustment of the depth amount or parallax value of the pixels corresponding to a specific region is performed based on the region information generated in s205. In this embodiment, as an example, a method of performing nonlinear parallax adjustment with the parallax values of the parallax distribution image arranged in the graph of FIG. 12 is shown.
As noted above, A, B, C, and D in FIG. 12 are the region labels arbitrarily attached, out of the region division information obtained by the region division unit of s205, to the regions excluding the background region; the vertical axis is the same as in the graph of FIG. 5, the rectangles show the distribution ranges of the parallax values in the respective regions of the parallax image, and the black circle in each rectangle is a representative value arbitrarily selected within the divided region.
In (1) of FIG. 12, the rectangle of label B touches the dotted line of d_max and the rectangle of label D touches the dotted line of d_min, and no object has a parallax value beyond these dotted lines. This means that the linear parallax adjustment alone does not yet satisfy the stereoscopic image design policy. Therefore, as in FIG. 5, the processing of bringing the representative point in the main subject close to 0 is performed; by using the information from the region division of the second embodiment and adjusting only the depth amount or parallax value of the corresponding pixels, the stereoscopic image design policy can be satisfied without distorting unintended image regions. The parallax value distribution range of each object, shown as a rectangle, is preserved, as shown in FIG. 12. If accuracy is to be maintained, the parallax values are converted to depth values before the processing is performed.
In the processing from s207 onward, the same processing as in the first embodiment is performed. By adjusting the depth amount or parallax value of the pixels corresponding to the specific region, the depth adjustment is performed without changing the depth amount or parallax value of pixels in unintended regions, and an easy-to-view stereoscopic image can be created by adjusting the parallax between the virtual viewpoint images generated in this way.
An advantage of this method is that it does not require the large number of cameras conventionally needed to capture multi-viewpoint images. In particular, by using the elements of the present invention shown in s205 and s206 to select a specific region and adjust the depth amount or parallax value of the pixels corresponding to that region, the depth adjustment is performed without changing the depth amount or parallax value of pixels in unintended regions; this is a method that can generate easy-to-view stereoscopic images by adjusting the parallax between the generated virtual viewpoint images without distorting unintended image regions.
(Third Embodiment) This embodiment is a method in which, in the method for generating an easy-to-view stereoscopic image described in the second embodiment, the region division is performed interactively by the user instead of by automatic segmentation. FIG. 2 shows the algorithm of this embodiment as well as that of the second embodiment; that is, this embodiment broadly follows the same processing algorithm as the second embodiment. In the figure, s201 is the image data acquisition unit, s202 the corresponding point extraction unit, s203 the parallax distribution image creation unit, s204 the image generation parameter acquisition unit, s205 the region division unit, s206 the depth sense adjustment unit, s207 the virtual viewpoint image sequence generation unit, s208 the three-dimensional stripe image generation unit, and s209 the printing unit. In this embodiment, the image data acquisition unit of s201, the corresponding point extraction unit of s202, the parallax distribution image creation unit of s203, the image generation parameter acquisition unit of s204, the depth sense adjustment unit of s206, the virtual viewpoint image sequence generation unit of s207, the three-dimensional stripe image generation unit of s208, and the printing unit of s209 perform the same operations as the corresponding units of s201, s202, s203, s204, s206, s207, s208, and s209 of the second embodiment. Their description is therefore omitted in this embodiment.
In the region division unit of s205, a means is provided by which the user interactively inputs highly accurate information for segmentation, and region division information is generated from it immediately. For example, Japanese Patent Application Laid-Open No. 200-339483 discloses a high-performance real-time and interactive method of generating information for segmentation: for an image displayed on the display, an edge trace between the starting point (or previous fixed point) determined with a pointing device and the point currently indicated by the mouse constantly follows the movement of the mouse at high speed and is displayed on the display in real time. This eliminates the need for repeated corrective work and enables edge tracing with high operability and high precision. From the edge trace result, the region division information to be used in the subsequent processing is generated immediately. As a result, there is no need to set the region division parameters that depend on the various region division methods implemented in the region division unit of s205 of the second embodiment — parameters that are necessary to obtain a good division result for each object contained in the input image, and whose subtle setting errors can greatly alter the division result.
In the processing from s206 onward, the same processing as in the second embodiment is performed. By adjusting the depth amount or parallax value of the pixels corresponding to the specific region, the depth adjustment is performed without changing the depth amount or parallax value of pixels in unintended regions, and an easy-to-view stereoscopic image can be created by adjusting the parallax between the virtual viewpoint images generated in this way.
An advantage of this method is that it does not require the large number of cameras conventionally needed to capture multi-viewpoint images. In particular, by using the elements of the present invention shown in s205 and s206, appropriate segmentation becomes possible through user input without the need to set complicated segmentation parameters; a specific region is selected from the divided regions, the depth amount or parallax value of the pixels corresponding to that region is adjusted, and the depth adjustment is performed without changing the depth amount or parallax value of pixels in unintended regions. This is a method that can generate easy-to-view stereoscopic images by adjusting the parallax between the generated virtual viewpoint images without distorting unintended image regions.
[Effect of the Invention] By adjusting the amounts of protrusion and sinking of the stereoscopic image, the invention provides a stereoscopic image generation method that easily realizes a stereoscopic display which exhibits a strong three-dimensional effect when displayed while remaining less tiring to the eyes and less uncomfortable even when the observer deviates from the optimum viewpoint position. It further provides a stereoscopic image generation method that, by adjusting the amounts of protrusion and sinking of the stereoscopic image, easily realizes such a display even for complex parallax images, without distorting unintended image regions.

BRIEF DESCRIPTION OF THE DRAWINGS FIG. 1 is a diagram showing the algorithm of the first embodiment of the present invention. FIG. 2 is a diagram showing the algorithm of the second and third embodiments of the present invention. FIG. 3 is a diagram showing the relationship between the amounts of protrusion and sinking and the amount of parallax. FIG. 4 is a diagram showing template matching. FIG. 5 is a diagram showing depth sense adjustment. FIG. 6 is a diagram showing depth sense adjustment. FIG. 7 is a diagram showing depth sense adjustment. FIG. 8 is a diagram showing the relationship between the observation position and image skip. FIG. 9 is a diagram showing the relationship of image skips in an image sequence. FIG. 10 is a diagram showing a stereo image pair. FIG. 11 is a diagram showing a label image. FIG. 12 is a diagram showing the result of region division performed on a stereo image.

Claims (1)

1. A stereoscopic image generation method using virtual viewpoint image generation, wherein the parallax between generated virtual viewpoint images is adjusted by adjusting a depth amount or a parallax value.
2. The stereoscopic image generation method according to claim 1, wherein the adjustment of the depth amount or the parallax value is performed on a selected specific region by performing region division using information of an input stereo image, a depth image, a parallax value distribution image, or a plurality of these images.
3. The stereoscopic image generation method according to claim 1, wherein the adjustment of the depth amount or the parallax value is performed on a specific region selected by performing region division of the input image through a user's interactive operation.
4. A recording medium on which the stereoscopic image generation method according to claim 1 is recorded as a program.
5. A recording medium on which the stereoscopic image generation method according to claim 2 is recorded as a program.
6. A recording medium on which the stereoscopic image generation method according to claim 3 is recorded as a program.
JP2002008484A 2002-01-17 2002-01-17 Stereoscopic image generating method and recording medium Granted JP2003209858A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2002008484A JP2003209858A (en) 2002-01-17 2002-01-17 Stereoscopic image generating method and recording medium

Publications (1)

Publication Number Publication Date
JP2003209858A true JP2003209858A (en) 2003-07-25

Family

ID=27646729

Family Applications (1)

Application Number Title Priority Date Filing Date
JP2002008484A Granted JP2003209858A (en) 2002-01-17 2002-01-17 Stereoscopic image generating method and recording medium

Country Status (1)

Country Link
JP (1) JP2003209858A (en)

Cited By (85)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2006178900A (en) * 2004-12-24 2006-07-06 Hitachi Displays Ltd Stereoscopic image generating device
JP2006203668A (en) * 2005-01-21 2006-08-03 Konica Minolta Photo Imaging Inc Image creation system and image creation method
JP2006229725A (en) * 2005-02-18 2006-08-31 Konica Minolta Photo Imaging Inc Image generation system and image generating method
JP2007110360A (en) * 2005-10-13 2007-04-26 Ntt Comware Corp Stereoscopic image processing apparatus and program
JP2008141666A (en) * 2006-12-05 2008-06-19 Fujifilm Corp Stereoscopic image creating device, stereoscopic image output device, and stereoscopic image creating method
US7443392B2 (en) 2004-10-15 2008-10-28 Canon Kabushiki Kaisha Image processing program for 3D display, image processing apparatus, and 3D display system
WO2009020277A1 (en) * 2007-08-06 2009-02-12 Samsung Electronics Co., Ltd. Method and apparatus for reproducing stereoscopic image using depth control
JP2009129420A (en) * 2007-11-28 2009-06-11 Fujifilm Corp Image processing device, method, and program
US7567648B2 (en) 2004-06-14 2009-07-28 Canon Kabushiki Kaisha System of generating stereoscopic image and control method thereof
JP2009186938A (en) * 2008-02-08 2009-08-20 Toppan Printing Co Ltd Printed matter for stereoscopic vision
Cited By (127)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7567648B2 (en) 2004-06-14 2009-07-28 Canon Kabushiki Kaisha System of generating stereoscopic image and control method thereof
US7443392B2 (en) 2004-10-15 2008-10-28 Canon Kabushiki Kaisha Image processing program for 3D display, image processing apparatus, and 3D display system
JP2006178900A (en) * 2004-12-24 2006-07-06 Hitachi Displays Ltd Stereoscopic image generating device
JP2006203668A (en) * 2005-01-21 2006-08-03 Konica Minolta Photo Imaging Inc Image creation system and image creation method
JP2006229725A (en) * 2005-02-18 2006-08-31 Konica Minolta Photo Imaging Inc Image generation system and image generating method
JP2007110360A (en) * 2005-10-13 2007-04-26 Ntt Comware Corp Stereoscopic image processing apparatus and program
US9509980B2 (en) 2006-08-01 2016-11-29 Qualcomm Incorporated Real-time capturing and generating viewpoint images and videos with a monoscopic low power mobile device
US8970680B2 (en) 2006-08-01 2015-03-03 Qualcomm Incorporated Real-time capturing and generating stereo images and videos with a monoscopic low power mobile device
JP2008141666A (en) * 2006-12-05 2008-06-19 Fujifilm Corp Stereoscopic image creating device, stereoscopic image output device, and stereoscopic image creating method
US8111875B2 (en) 2007-02-20 2012-02-07 Fujifilm Corporation Method of and apparatus for taking solid image and computer program for causing computer to execute the method
WO2009020277A1 (en) * 2007-08-06 2009-02-12 Samsung Electronics Co., Ltd. Method and apparatus for reproducing stereoscopic image using depth control
JP2009129420A (en) * 2007-11-28 2009-06-11 Fujifilm Corp Image processing device, method, and program
JP2009186938A (en) * 2008-02-08 2009-08-20 Toppan Printing Co Ltd Printed matter for stereoscopic vision
JP2012505586A (en) * 2008-10-10 2012-03-01 コーニンクレッカ フィリップス エレクトロニクス エヌ ヴィ Method for processing disparity information contained in a signal
JP2010206774A (en) * 2009-02-05 2010-09-16 Fujifilm Corp Three-dimensional image output device and method
JP4737573B2 (en) * 2009-02-05 2011-08-03 富士フイルム株式会社 3D image output apparatus and method
US8120606B2 (en) 2009-02-05 2012-02-21 Fujifilm Corporation Three-dimensional image output device and three-dimensional image output method
JP2011211717A (en) * 2009-02-05 2011-10-20 Fujifilm Corp Three-dimensional image output device and method
CN102308590B (en) * 2009-02-05 2014-04-16 富士胶片株式会社 Three-dimensional image output device and three-dimensional image output method
JP2010226500A (en) * 2009-03-24 2010-10-07 Toshiba Corp Device and method for displaying stereoscopic image
EP3091443A3 (en) * 2009-06-16 2017-03-08 Microsoft Technology Licensing, LLC Viewer-centric user interface for stereoscopic cinema
CN102474636A (en) * 2009-07-29 2012-05-23 伊斯曼柯达公司 Adjusting perspective and disparity in stereoscopic image pairs
JP2013500536A (en) * 2009-07-29 2013-01-07 イーストマン コダック カンパニー Perspective and parallax adjustment in stereoscopic image pairs
JP2009290905A (en) * 2009-09-10 2009-12-10 Panasonic Corp Displaying apparatus and method
CN102598051A (en) * 2009-10-30 2012-07-18 富士胶片株式会社 Image processing device and image processing method
WO2011052389A1 (en) * 2009-10-30 2011-05-05 富士フイルム株式会社 Image processing device and image processing method
US8472704B2 (en) 2009-10-30 2013-06-25 Fujifilm Corporation Image processing apparatus and image processing method
JP5068391B2 (en) * 2009-10-30 2012-11-07 富士フイルム株式会社 Image processing device
JPWO2011086636A1 (en) * 2010-01-13 2013-05-16 パナソニック株式会社 Stereo image pickup device, stereo image pickup method, stereo image display device, and program
JP2013517657A (en) * 2010-01-14 2013-05-16 ヒューマンアイズ テクノロジーズ リミテッド Method and system for adjusting the depth value of an object in a three-dimensional display
US9438759B2 (en) 2010-01-14 2016-09-06 Humaneyes Technologies Ltd. Method and system for adjusting depth values of objects in a three dimensional (3D) display
CN102143371A (en) * 2010-01-28 2011-08-03 株式会社东芝 Image processing apparatus, 3D display apparatus and image processing method
JP2011176822A (en) * 2010-01-28 2011-09-08 Toshiba Corp Image processing apparatus, 3d display apparatus, and image processing method
JP2011176823A (en) * 2010-01-28 2011-09-08 Toshiba Corp Image processing apparatus, 3d display apparatus, and image processing method
JP2011160347A (en) * 2010-02-03 2011-08-18 Sony Corp Recording device and method, image processing device and method, and program
WO2011148921A1 (en) * 2010-05-26 2011-12-01 シャープ株式会社 Image processor, image display apparatus, and imaging device
JP2011250059A (en) * 2010-05-26 2011-12-08 Sharp Corp Image processing device, image display device and image pickup device
US8831338B2 (en) 2010-05-26 2014-09-09 Sharp Kabushiki Kaisha Image processor, image display apparatus, and image taking apparatus
CN102939764A (en) * 2010-05-26 2013-02-20 夏普株式会社 Image processor, image display apparatus, and imaging device
WO2011155330A1 (en) * 2010-06-07 2011-12-15 ソニー株式会社 Three-dimensional image display system, disparity conversion device, disparity conversion method, and program
JP2011259045A (en) * 2010-06-07 2011-12-22 Sony Corp Stereoscopic image display device, parallax conversion device, parallax conversion method and program
US8605994B2 (en) 2010-06-07 2013-12-10 Sony Corporation Stereoscopic image display system, disparity conversion device, disparity conversion method and program
JP2013538474A (en) * 2010-06-14 2013-10-10 Qualcomm Incorporated Calculation of parallax for 3D images
CN102948157A (en) * 2010-06-22 2013-02-27 富士胶片株式会社 Stereoscopic image display device, stereoscopic image display method, stereoscopic image display program, and recording medium
CN102948157B (en) * 2010-06-22 2016-05-11 富士胶片株式会社 3D image display device and 3D image display method
JP2013534772A (en) * 2010-06-24 2013-09-05 コリア エレクトロニクス テクノロジ インスティチュート How to configure stereoscopic video files
WO2012001958A1 (en) * 2010-06-30 2012-01-05 富士フイルム株式会社 Image processing device, method and program
KR101742993B1 (en) 2010-07-07 2017-06-02 엘지전자 주식회사 A digital broadcast receiver and a method for processing a 3-dimensional effect in the digital broadcast receiver
JP2012054912A (en) * 2010-08-02 2012-03-15 Sharp Corp Video processing apparatus, display device and video processing method
WO2012018001A1 (en) * 2010-08-02 2012-02-09 シャープ株式会社 Video image processing device, display device and video image processing method
WO2012020558A1 (en) * 2010-08-10 2012-02-16 株式会社ニコン Image processing device, image processing method, display device, display method and program
US9488841B2 (en) 2010-08-10 2016-11-08 Nikon Corporation Image processing apparatus, image processing method, display apparatus, display method, and computer readable recording medium
US10462455B2 (en) 2010-08-10 2019-10-29 Nikon Corporation Display apparatus, display method, and computer readable recording medium
US9042709B2 (en) 2010-08-31 2015-05-26 Panasonic Intellectual Property Management Co., Ltd. Image capture device, player, and image processing method
JP5204350B2 (en) * 2010-08-31 2013-06-05 パナソニック株式会社 Imaging apparatus, playback apparatus, and image processing method
WO2012029301A1 (en) * 2010-08-31 2012-03-08 パナソニック株式会社 Image capturing apparatus, playback apparatus, and image processing method
US8970675B2 (en) 2010-08-31 2015-03-03 Panasonic Intellectual Property Management Co., Ltd. Image capture device, player, system, and image processing method
WO2012036176A1 (en) * 2010-09-14 2012-03-22 Sharp Kabushiki Kaisha Reducing viewing discomfort
WO2012036056A1 (en) * 2010-09-15 2012-03-22 シャープ株式会社 Stereoscopic image display device, stereoscopic image display method, program for executing stereoscopic image display method on computer, and recording medium on which said program is recorded
JP2012065111A (en) * 2010-09-15 2012-03-29 Sharp Corp Stereoscopic image display device, stereoscopic image display method, program for allowing computer to perform stereoscopic image display method, and recording medium with program recorded therein
US8761492B2 (en) 2010-09-30 2014-06-24 Kabushiki Kaisha Toshiba Depth correction apparatus and method
JP2012078942A (en) * 2010-09-30 2012-04-19 Toshiba Corp Depth correction device and method
US9035939B2 (en) 2010-10-04 2015-05-19 Qualcomm Incorporated 3D video control system to adjust 3D video rendering based on user preferences
JP2013540402A (en) * 2010-10-04 2013-10-31 Qualcomm Incorporated 3D video control system for adjusting 3D video rendering based on user preferences
WO2012046369A1 (en) * 2010-10-07 2012-04-12 パナソニック株式会社 Image capturing device, disparity adjustment method, semiconductor integrated circuit, and digital camera
JP4972716B2 (en) * 2010-10-14 2012-07-11 パナソニック株式会社 Stereo image display device
WO2012049848A1 (en) * 2010-10-14 2012-04-19 パナソニック株式会社 Stereo image display device
JP2012095039A (en) * 2010-10-26 2012-05-17 Fujifilm Corp Three-dimensional image display device, three-dimensional image display method and program
US9024940B2 (en) 2010-10-26 2015-05-05 Fujifilm Corporation Three-dimensional image display device and three-dimensional image display method and program
JP5942195B2 (en) * 2010-10-27 2016-06-29 パナソニックIpマネジメント株式会社 3D image processing apparatus, 3D imaging apparatus, and 3D image processing method
WO2012056685A1 (en) * 2010-10-27 2012-05-03 パナソニック株式会社 3d image processing device, 3d imaging device, and 3d image processing method
CN103181173B (en) * 2010-10-27 2016-04-27 松下知识产权经营株式会社 3-dimensional image processing apparatus, three-dimensional image pickup device and three dimensional image processing method
US9338426B2 (en) 2010-10-27 2016-05-10 Panasonic Intellectual Property Management Co., Ltd. Three-dimensional image processing apparatus, three-dimensional imaging apparatus, and three-dimensional image processing method
CN103181173A (en) * 2010-10-27 2013-06-26 松下电器产业株式会社 3D image processing device, 3d imaging device, and 3d image processing method
JP2012099956A (en) * 2010-10-29 2012-05-24 Toshiba Corp Video reproduction device and video reproduction method
WO2012060406A1 (en) * 2010-11-05 2012-05-10 シャープ株式会社 Device for creating stereoscopic image data, device for playing back stereoscopic image data, and file management method
JP2012100211A (en) * 2010-11-05 2012-05-24 Sharp Corp Stereoscopic image data creating apparatus, stereoscopic image data playback apparatus, and file managing method
JP2012105153A (en) * 2010-11-11 2012-05-31 Sharp Corp Stereoscopic image display device, stereoscopic image display method, program for allowing computer to execute stereoscopic image display method, and recording medium for storing program
WO2012063595A1 (en) * 2010-11-11 2012-05-18 シャープ株式会社 Stereo image display device, stereo image display method, program for executing stereo image display method on computer, and recording medium with same program recorded thereon
JP2012103980A (en) * 2010-11-11 2012-05-31 Sony Corp Image processing device, image processing method, and program
US9049423B2 (en) 2010-12-01 2015-06-02 Qualcomm Incorporated Zero disparity plane for feedback-based three-dimensional video
JP2014508430A (en) * 2010-12-01 2014-04-03 Qualcomm Incorporated Zero parallax plane for feedback-based 3D video
JP5644862B2 (en) * 2010-12-10 2014-12-24 富士通株式会社 Stereoscopic moving image generating apparatus, stereoscopic moving image generating method, stereoscopic moving image generating program
JP2012130008A (en) * 2010-12-16 2012-07-05 Korea Electronics Telecommun Three-dimensional video display device and method
JP2012142922A (en) * 2010-12-17 2012-07-26 Canon Inc Imaging device, display device, computer program, and stereoscopic image display system
CN102685535A (en) * 2010-12-23 2012-09-19 特克特朗尼克公司 Displays for easy visualizing of 3d disparity data
JP2012134955A (en) * 2010-12-23 2012-07-12 Tektronix Inc Displays for test and measurement instruments
EP2469874A3 (en) * 2010-12-23 2012-09-05 Tektronix, Inc. Displays for easy visualizing of 3D disparity data
US8750601B2 (en) 2011-01-17 2014-06-10 Panasonic Corporation Three-dimensional image processing device, and three-dimensional image processing method
WO2012098608A1 (en) * 2011-01-17 2012-07-26 パナソニック株式会社 Three-dimensional image processing device, three-dimensional image processing method, and program
JP6011862B2 (en) * 2011-01-27 2016-10-19 パナソニックIpマネジメント株式会社 3D image capturing apparatus and 3D image capturing method
JP5492311B2 (en) * 2011-02-08 2014-05-14 富士フイルム株式会社 Viewpoint image generation apparatus, viewpoint image generation method, and stereoscopic image printing apparatus
JP2012169911A (en) * 2011-02-15 2012-09-06 Nintendo Co Ltd Display control program, display controller, display control system, and display control method
JP2012204852A (en) * 2011-03-23 2012-10-22 Sony Corp Image processing apparatus and method, and program
JP2012205148A (en) * 2011-03-25 2012-10-22 Kyocera Corp Electronic apparatus
US9462259B2 (en) 2011-03-25 2016-10-04 Kyocera Corporation Electronic device
JP5640143B2 (en) * 2011-03-31 2014-12-10 富士フイルム株式会社 Imaging apparatus and imaging method
US8885026B2 (en) 2011-03-31 2014-11-11 Fujifilm Corporation Imaging device and imaging method
JP2012231405A (en) * 2011-04-27 2012-11-22 Toshiba Corp Depth-adjustable three-dimensional video display device
JP2013009324A (en) * 2011-05-23 2013-01-10 Panasonic Corp Image display device
JP2011211739A (en) * 2011-06-01 2011-10-20 Fujifilm Corp Stereoscopic vision image preparation device, stereoscopic vision image output device and stereoscopic vision image preparation method
JP2012257022A (en) * 2011-06-08 2012-12-27 Sony Corp Image processing apparatus, method, and program
WO2012169217A1 (en) * 2011-06-10 2012-12-13 シャープ株式会社 Image generation device, image display device, television reception device, image generation method, and computer program
JP2013004989A (en) * 2011-06-10 2013-01-07 Sharp Corp Video generating device, video display device, television image receiving device, video generating method and computer program
US9578303B2 (en) 2011-06-13 2017-02-21 Toshiba Medical Systems Corporation Image processing system, image processing apparatus, and image processing method for displaying a scale on a stereoscopic display device
JP2013005052A (en) * 2011-06-13 2013-01-07 Toshiba Corp Image processing system, apparatus, method and program
JP2013003586A (en) * 2011-06-14 2013-01-07 Samsung Electronics Co Ltd Display system with image conversion mechanism and method of operation thereof
KR101870764B1 (en) * 2011-06-14 2018-06-25 삼성전자주식회사 Display apparatus using image conversion mechanism and method of operation thereof
WO2013014710A1 (en) * 2011-07-27 2013-01-31 パナソニック株式会社 Stereoscopic image adjustment device
JP2014534657A (en) * 2011-09-14 2014-12-18 サムスン エレクトロニクス カンパニー リミテッド Video processing apparatus and video processing method thereof
JP2013157951A (en) * 2012-01-31 2013-08-15 Jvc Kenwood Corp Image processing device, image processing method, and image processing program
JP2013162330A (en) * 2012-02-06 2013-08-19 Sony Corp Image processing device and method, program, and recording medium
US9177382B2 (en) 2012-04-04 2015-11-03 Seiko Epson Corporation Image processing apparatus for forming synthetic image and image processing method for forming synthetic image
JP2013219422A (en) * 2012-04-04 2013-10-24 Seiko Epson Corp Image processing device and image processing method
JP2013219421A (en) * 2012-04-04 2013-10-24 Seiko Epson Corp Image processing device and image processing method
WO2014017201A1 (en) * 2012-07-26 2014-01-30 ソニー株式会社 Image processing device, image processing method, and image display device
US9361668B2 (en) 2012-08-20 2016-06-07 Denso Corporation Method and apparatus for generating disparity map
WO2014038476A1 (en) * 2012-09-06 2014-03-13 シャープ株式会社 Stereoscopic image processing device, stereoscopic image processing method, and program
JP2014086851A (en) * 2012-10-23 2014-05-12 Toppan Printing Co Ltd Parallax control device and parallax control program
JP5851625B2 (en) * 2013-03-29 2016-02-03 株式会社東芝 Stereoscopic video processing apparatus, stereoscopic video processing method, and stereoscopic video processing program
CN103369340A (en) * 2013-07-05 2013-10-23 邓伟廷 Stereoscopic display method of multi-dimensional LED (Light Emitting Diode) display screen and multi-dimensional LED display screen
CN103369340B (en) * 2013-07-05 2015-09-02 邓伟廷 Multi-dimensional LED display screen stereo display method and multi-dimensional LED display screen
JP2015213299A (en) * 2014-04-15 2015-11-26 キヤノン株式会社 Image processing system and image processing method
JP2017525198A (en) * 2014-06-10 2017-08-31 ビッタニメイト インコーポレイテッド Stereoscopic depth adjustment and focus adjustment
JP2019500762A (en) * 2015-12-07 2019-01-10 グーグル エルエルシー System and method for multiscopic noise reduction and high dynamic range
JP2016149772A (en) * 2016-03-01 2016-08-18 京セラ株式会社 Electronic apparatus
WO2019058811A1 (en) * 2017-09-19 2019-03-28 ソニー株式会社 Display device and display control method

Similar Documents

Publication Publication Date Title
Fehn et al. Interactive 3-D TV: concepts and key technologies
Zhang et al. 3D-TV content creation: automatic 2D-to-3D video conversion
KR101625830B1 (en) Method and device for generating a depth map
US8400496B2 (en) Optimal depth mapping
Zhang et al. Stereoscopic image generation based on depth images for 3D TV
US6160909A (en) Depth control for stereoscopic images
US7054478B2 (en) Image conversion and encoding techniques
JP4440067B2 (en) Image processing program for stereoscopic display, image processing apparatus, and stereoscopic display system
CN102075694B (en) Stereoscopic editing for video production, post-production and display adaptation
US7321374B2 (en) Method and device for the generation of 3-D images
US5751927A (en) Method and apparatus for producing three dimensional displays on a two dimensional surface
Fehn A 3D-TV approach using depth-image-based rendering (DIBR)
JPWO2004071102A1 (en) Stereoscopic video providing method and stereoscopic video display device
US20070165942A1 (en) Method for rectifying stereoscopic display systems
EP1437898A1 (en) Video filtering for stereo images
US7295699B2 (en) Image processing system, program, information storage medium, and image processing method
JP4740135B2 (en) System and method for drawing 3D image on screen of 3D image display
KR101761751B1 (en) Hmd calibration with direct geometric modeling
EP2357838A2 (en) Method and apparatus for processing three-dimensional images
US6263100B1 (en) Image processing method and apparatus for generating an image from the viewpoint of an observer on the basis of images obtained from a plurality of viewpoints
JP2006325165A (en) Device, program and method for generating telop
US9445072B2 (en) Synthesizing views based on image domain warping
US20040066555A1 (en) Method and apparatus for generating stereoscopic images
JP4652727B2 (en) Stereoscopic image generation system and control method thereof
JP4125252B2 (en) Image generation apparatus, image generation method, and image generation program