WO2011125461A1 - Image generation apparatus and method, and printer - Google Patents
Image generation apparatus and method, and printer
- Publication number
- WO2011125461A1 (PCT/JP2011/056561; JP2011056561W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- viewpoint
- image data
- virtual
- viewpoint image
- Prior art date
Classifications
-
- G—PHYSICS
- G03—PHOTOGRAPHY; CINEMATOGRAPHY; ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ELECTROGRAPHY; HOLOGRAPHY
- G03B—APPARATUS OR ARRANGEMENTS FOR TAKING PHOTOGRAPHS OR FOR PROJECTING OR VIEWING THEM; APPARATUS OR ARRANGEMENTS EMPLOYING ANALOGOUS TECHNIQUES USING WAVES OTHER THAN OPTICAL WAVES; ACCESSORIES THEREFOR
- G03B35/00—Stereoscopic photography
- G03B35/08—Stereoscopic photography by simultaneous recording
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/111—Transformation of image signals corresponding to virtual viewpoints, e.g. spatial image interpolation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/10—Processing, recording or transmission of stereoscopic or multi-view image signals
- H04N13/106—Processing image signals
- H04N13/122—Improving the 3D impression of stereoscopic images by modifying image signal contents, e.g. by filtering or adding monoscopic depth cues
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/20—Image signal generators
- H04N13/204—Image signal generators using stereoscopic image cameras
- H04N13/239—Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N13/30—Image reproducers
- H04N13/302—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays
- H04N13/305—Image reproducers for viewing without the aid of special glasses, i.e. using autostereoscopic displays using lenticular lenses, e.g. arrangements of cylindrical lenses
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N13/00—Stereoscopic video systems; Multi-view video systems; Details thereof
- H04N2013/0074—Stereoscopic image analysis
- H04N2013/0081—Depth or disparity estimation from stereoscopic image signals
Definitions
- the present invention relates to an image generation apparatus and method for generating a virtual viewpoint image when a subject is viewed from a virtual viewpoint, based on viewpoint images obtained by photographing the subject from two viewpoints, and a printer including the image generation apparatus.
- A technique is known in which a stereoscopic image can be observed using a lenticular sheet in which a large number of lenticular lenses are arranged in the left-right direction. On the back side of the lenticular sheet, linear images obtained by dividing an L viewpoint image and an R viewpoint image, taken from two left and right viewpoints, into lines are alternately arranged so that two adjacent linear images are positioned under each lenticular lens. A stereoscopic image is observed when the left eye and the right eye view the L viewpoint image and the R viewpoint image, which have parallax, through the lenticular lenses.
- Patent Document 1 discloses a printing system that, based on L and R viewpoint images obtained by a compound-eye camera, generates by electronic interpolation virtual viewpoint images of the subject as viewed from a plurality of virtual viewpoints set apart from the L and R viewpoints, and records linear images on a lenticular sheet based on the original L and R viewpoint images and the new virtual viewpoint images.
- The present invention has been made to solve the above-described problem, and an object thereof is to provide an image generating apparatus and method, and a printer, capable of obtaining a good virtual viewpoint image even when one of the L and R viewpoint captured images has an abnormality.
- The image generation apparatus of the present invention generates virtual viewpoint images of a subject as viewed from a predetermined number of virtual viewpoints different from the photographing viewpoints, based on a first viewpoint image and a second viewpoint image having parallax that are obtained by photographing the subject from different viewpoints. The apparatus comprises: a detection unit that detects whether or not the first viewpoint image and the second viewpoint image are abnormal; a parallax map generation unit that, when the detection result indicates that one of the first and second viewpoint images is an abnormal image having an abnormality, extracts corresponding points on the abnormal image for each pixel of the other, normal image and generates a parallax map indicating the depth distribution of the subject from the extraction result; and a virtual viewpoint image generation unit that generates the virtual viewpoint images based on the parallax map and the normal image.
- Preferably, the apparatus includes an image output unit that outputs the normal image and the virtual viewpoint images to a predetermined output destination.
- Preferably, a virtual viewpoint setting unit is provided that sets more than the predetermined number of virtual viewpoints between the viewpoints of the abnormal image and the normal image, and the virtual viewpoint image generation unit generates the virtual viewpoint images by selecting the predetermined number of virtual viewpoints, from among those set by the virtual viewpoint setting unit, in order from the side closer to the viewpoint of the normal image.
- The virtual viewpoints are preferably set at equiangular intervals around the subject. It is also preferable to provide an area detection unit that detects the area of the region in which the abnormality occurs in the abnormal image, with the virtual viewpoint setting unit increasing the number of virtual viewpoints to be set as that area increases.
- The abnormality preferably includes at least one of flare and an image of an obstacle that shields at least part of the photographing lens of the imaging unit.
- The printer of the present invention includes the above image generating apparatus and recording means that, when one of the first viewpoint image and the second viewpoint image is the abnormal image, records a stereoscopically viewable image on a recording medium based on the normal image and the virtual viewpoint images.
- Preferably, warning display means is provided for displaying a warning when both the first viewpoint image and the second viewpoint image are abnormal.
- The image generation method of the present invention generates virtual viewpoint images of a subject as viewed from a predetermined number of virtual viewpoints different from the photographing viewpoints, based on a first viewpoint image and a second viewpoint image having parallax that are obtained by photographing the subject from different viewpoints. The method comprises: a detection step of detecting whether or not the first viewpoint image and the second viewpoint image are abnormal; a parallax map generation step of, when the detection result indicates that one of the first and second viewpoint images is an abnormal image having an abnormality, extracting corresponding points on the abnormal image for each pixel of the other, normal image and generating a parallax map indicating the depth distribution of the subject from the extraction result; and a virtual viewpoint image generation step of generating the virtual viewpoint images based on the parallax map and the normal image.
- In the image generating apparatus and method and the printer of the present invention, corresponding points on the abnormal image are extracted for each pixel of the other, normal image, a parallax map is generated from the extraction result, and virtual viewpoint images are generated based on the parallax map and the normal image. A good virtual viewpoint image can therefore be obtained even when one of the first viewpoint image and the second viewpoint image is abnormal.
- the stereoscopic image printing system 10 includes a compound eye camera 11 and a printer 12.
- the compound-eye camera 11 includes a pair of imaging units 14L and 14R.
- By photographing a subject from two different left and right viewpoints, L viewpoint image data I(L) and R viewpoint image data I(R) are obtained. The L and R viewpoint image data I(L) and I(R) are recorded on the memory card 16 as a single image file 15.
- Reference numeral 14a denotes a photographing lens of the imaging units 14L and 14R.
- Based on the L and R viewpoint image data I(L) and I(R) recorded on the memory card 16, the printer 12 records a plurality of viewpoint images for observing a stereoscopic image on the back surface of a lenticular sheet 17 (hereinafter simply referred to as the sheet 17).
- the sheet 17 has a large number of semi-cylindrical lenticular lenses (hereinafter simply referred to as lenses) 18 arranged on the surface side, and the back surface thereof is flat. On this back surface, an image area 19 is virtually divided for each lens 18, and one image area 19 corresponds to one lens 18.
- Each image area 19 is partitioned in the arrangement direction of the lenses 18 according to the number of viewpoint images. For example, when images of six viewpoints are recorded, each image area 19 is divided into six minute areas 19a to 19f, in which linear images obtained by dividing the six viewpoint images into linear shapes are respectively recorded. The minute areas 19a to 19f have a one-to-one correspondence with the images of the six viewpoints.
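As an illustration of this arrangement, the division of the viewpoint images into linear (column) images and their interleaving into the minute areas behind each lens can be sketched as follows. This is a hypothetical Python sketch, not part of the patent; the array shapes and the function name are assumptions.

```python
import numpy as np

def interleave_viewpoints(views):
    """Interleave n viewpoint images column-wise for a lenticular sheet.

    views: list of n images of shape (H, W, 3). Column x of viewpoint k
    is placed into minute area k of image area x (one image area per lens),
    so each group of n adjacent output columns holds one column from each
    viewpoint image.
    """
    n = len(views)
    h, w, c = views[0].shape
    out = np.empty((h, w * n, c), dtype=views[0].dtype)
    for k, v in enumerate(views):
        out[:, k::n, :] = v  # minute area k within every image area
    return out

# Example: six dummy viewpoint images, each filled with its viewpoint index
views = [np.full((4, 5, 3), k, dtype=np.uint8) for k in range(6)]
sheet = interleave_viewpoints(views)
```

In this sketch the first six output columns form one image area: one column from each of the six viewpoint images, in viewpoint order.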
- The CPU 21 of the printer 12 comprehensively controls each unit of the printer 12 by sequentially executing various programs read from the memory 23, based on control signals from the operation unit 22.
- the RAM area of the memory 23 functions as a work memory for the CPU 21 to execute processing and a temporary storage destination for various data.
- An operation unit 22, a memory 23, a sheet conveyance mechanism 26, an image recording unit 27, an image input I/F 28, an image processing circuit (image generation device) 29, a monitor 30, and the like are connected to the CPU 21 via a bus 25.
- the operation unit 22 is used for power ON / OFF operation of the printer 12 and image recording start operation.
- the sheet conveyance mechanism 26 conveys the sheet 17 in the sub-scanning direction parallel to the arrangement direction of the lenses 18.
- the image recording unit 27 records a linear image extending in the main scanning direction on the back surface of the sheet 17.
- the image recording unit 27 records a linear image line by line each time the sheet 17 is conveyed line by line in the sub-scanning direction. Thereby, linear images can be recorded side by side in the sub-scanning direction.
- the memory card 16 is set in the image input I / F 28.
- the image input I / F 28 reads the image file 15 from the memory card 16 and sends it to the image processing circuit 29.
- The image processing circuit 29 generates virtual viewpoint image data for a plurality of virtual viewpoints different from the L and R viewpoints, based on the L and R viewpoint image data I(L) and I(R) of the image file 15. After generating the virtual viewpoint image data, the image processing circuit 29 outputs to the image recording unit 27 n-viewpoint parallax image data that includes the virtual viewpoint image data and at least one of the L and R viewpoint image data I(L) and I(R).
- the parallax image data refers to a collection of individual viewpoint image data when the subject is viewed from different viewpoints.
- the monitor 30 displays a selection screen for selecting a menu for image recording processing, a setting screen for performing various settings, a warning message when a trouble occurs, and the like.
- The image processing circuit 29 includes an image reading circuit (image acquisition unit) 31, a shooting failure detection circuit (detection unit) 32, a parallax map generation circuit 33, a virtual viewpoint image generation circuit 34, and an image output circuit 35.
- the image reading circuit 31 reads the image file 15 designated by the operation unit 22 from the memory card 16 via the image input I / F 28 and stores it.
- the shooting failure detection circuit 32 analyzes the image file 15 in the image reading circuit 31 and detects whether or not a shooting failure has occurred in the L and R viewpoint image data I (L) and I (R).
- Examples of shooting failures include physical failures such as finger engagement and optical failures such as flare.
- Finger engagement means that a photographer's finger (an obstacle) shields at least part of the photographing lens 14a, so that a finger image appears in the captured image (see FIG. 10B).
- The presence or absence of finger engagement can be detected, for example, by storing in advance a plurality of image patterns captured when finger engagement occurs and determining the degree of similarity between these image patterns and the L and R viewpoint image data I(L) and I(R).
- The presence or absence of flare can be detected, for example, by comparing the L and R viewpoint image data I(L) and I(R) and determining whether the difference in luminance between corresponding parts of the two is greater than a predetermined threshold value.
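The flare check described above might be sketched as follows. This is an illustrative Python sketch: the block size, the threshold value, and the use of block-mean luminance are assumptions, since the text only specifies comparing a luminance difference against a threshold.

```python
import numpy as np

def detect_flare(img_l, img_r, block=32, threshold=60):
    """Return "L" or "R" when the mean luminance of some block in one
    viewpoint image exceeds the mean luminance of the same block in the
    other image by more than `threshold`; return None otherwise.
    Block size and threshold are illustrative assumptions."""
    h, w = img_l.shape
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            ml = float(img_l[y:y + block, x:x + block].mean())
            mr = float(img_r[y:y + block, x:x + block].mean())
            if ml - mr > threshold:
                return "L"  # flare suspected in the L viewpoint image
            if mr - ml > threshold:
                return "R"  # flare suspected in the R viewpoint image
    return None

# Example: a bright washed-out patch present in the L image only
img_r = np.zeros((64, 64), dtype=np.uint8)
img_l = img_r.copy()
img_l[:32, :32] = 200
```

A real implementation would also have to distinguish flare from legitimate parallax-induced differences, which this per-block comparison does not attempt.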
- The parallax map generation circuit 33 generates a parallax map indicating the depth distribution of the subject based on the L and R viewpoint image data I(L) and I(R) in the image reading circuit 31, and outputs it to the virtual viewpoint image generation circuit 34.
- the parallax map generation circuit 33 generates at least one of a parallax map 38L based on the L viewpoint image data I (L) and a parallax map 38R based on the R viewpoint image data I (R).
- For each pixel 39 of the L viewpoint image data I(L), a corresponding pixel (hereinafter referred to as a corresponding point) 40 on the R viewpoint image data I(R) is extracted.
- representative examples of the pixel 39 and the corresponding point 40 are shown.
- As the method for extracting the corresponding points 40, various methods are available, such as the template matching method described in Patent Document 1, and any of them may be used.
- Next, the horizontal positional shift amount of each corresponding point 40 of the R viewpoint image data I(R) with respect to the corresponding pixel 39 of the L viewpoint image data I(L) is obtained.
- This gives the parallax for each pixel 39 of the L viewpoint image data I(L), which is used as the parallax map 38L shown in FIG. 4C.
- In FIG. 4C, a higher dot density indicates a position closer to the compound-eye camera 11: portions with high dot density represent a main subject, such as a person, close to the compound-eye camera 11, and portions with low dot density represent the background far from it.
- Similarly, the parallax map 38R shown in FIG. 4D is generated by obtaining the positional shift amount of the corresponding point 40 on the L viewpoint image data I(L) for each pixel 39 of the R viewpoint image data I(R).
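The corresponding-point search and parallax map generation described above can be sketched with simple block-based template matching. This is an illustrative Python sketch; the block size, the horizontal search range, and the sum-of-absolute-differences criterion are assumptions, as the text permits any corresponding-point extraction method.

```python
import numpy as np

def parallax_map(ref, other, block=8, max_disp=16):
    """For each block of the reference image `ref`, search horizontally in
    `other` for the best-matching block (minimum sum of absolute
    differences) and record the horizontal shift as the parallax value."""
    h, w = ref.shape
    disp = np.zeros((h // block, w // block), dtype=np.int32)
    for by in range(h // block):
        for bx in range(w // block):
            y, x = by * block, bx * block
            tpl = ref[y:y + block, x:x + block].astype(np.int32)
            best, best_d = None, 0
            for d in range(0, max_disp + 1):
                if x + d + block > w:
                    break  # candidate block would run off the image
                cand = other[y:y + block, x + d:x + d + block].astype(np.int32)
                sad = int(np.abs(tpl - cand).sum())
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d
    return disp

# Example: a synthetic pair in which every corresponding point is shifted
# 3 pixels to the right
rng = np.random.default_rng(0)
base = rng.integers(0, 256, (16, 40), dtype=np.uint8)
shifted = np.roll(base, 3, axis=1)
disp = parallax_map(base, shifted, block=8, max_disp=8)
```

In the example, all blocks whose search window covers the true shift recover a parallax of 3; the rightmost block column, where the search range is clipped, does not.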
- the virtual viewpoint image generation circuit 34 generates virtual viewpoint image data when the subject is viewed from a virtual viewpoint different from the left and right viewpoints.
- the virtual viewpoint image generation circuit 34 includes a virtual viewpoint setting unit 43 and a virtual viewpoint image generation unit 44.
- the virtual viewpoint setting unit 43 sets a plurality of virtual viewpoints between the L and R viewpoints.
- the virtual viewpoint setting unit 43 selectively executes one of the following normal viewpoint setting process and special viewpoint setting process.
- In the normal viewpoint setting process, when n = 6 viewpoint images are to be recorded, (n − 2) = 4 virtual viewpoints V(1) to V(4) are set. The virtual viewpoints V(1) to V(4) are set at equiangular intervals so that the convergence angle θ formed by the viewpoint directions of the L viewpoint V(L) and the R viewpoint V(R) is divided into five equal parts.
- In the special viewpoint setting process, when n = 6, ten virtual viewpoints V(1) to V(10) are set at equiangular intervals so that the convergence angle θ is divided into eleven equal parts.
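The two viewpoint setting processes can be sketched as below. This is an illustrative Python sketch; the text gives only the n = 6 cases (4 and 10 viewpoints), so the general formula used here for the special count at other n is an assumption.

```python
def set_virtual_viewpoints(theta, n=6, special=False):
    """Return angular positions of the virtual viewpoints between the
    L viewpoint (angle 0) and the R viewpoint (angle theta).

    Normal process: n - 2 viewpoints dividing theta into n - 1 equal parts.
    Special process: 2 * (n - 2) + 2 viewpoints (10 for n = 6, dividing
    theta into 11 equal parts, matching the example in the text).
    """
    m = (2 * (n - 2) + 2) if special else (n - 2)
    step = theta / (m + 1)  # equiangular interval
    return [step * (k + 1) for k in range(m)]

normal = set_virtual_viewpoints(10.0)                 # V(1)..V(4), theta/5 apart
special = set_virtual_viewpoints(10.0, special=True)  # V(1)..V(10), theta/11 apart
```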
- The virtual viewpoint image generation unit 44 generates virtual viewpoint image data corresponding to the virtual viewpoints set by the virtual viewpoint setting unit 43 and sends it to the image output circuit 35.
- the virtual viewpoint image generation unit 44 executes the normal image generation process when the normal viewpoint setting process is executed, and executes the special image generation process when the special viewpoint setting process is executed.
- In the normal image generation process, the L normal image generation process and the R normal image generation process are executed in order.
- In the L normal image generation process, virtual viewpoint image data (hereinafter referred to as L virtual viewpoint image data) is generated, based on the L viewpoint image data I(L) and the parallax map 38L, for the virtual viewpoints located on the L viewpoint V(L) side of the center between the L and R viewpoints V(L) and V(R) (see FIG. 8).
- In the R normal image generation process, virtual viewpoint image data (hereinafter referred to as R virtual viewpoint image data) is generated, based on the R viewpoint image data I(R) and the parallax map 38R, for the virtual viewpoints located on the R viewpoint V(R) side of the center between the L and R viewpoints V(L) and V(R).
- In the special image generation process, either the L special image generation process or the R special image generation process is selectively executed.
- In the L special image generation process, (n − 1) = 5 virtual viewpoints are selected in order of closeness to the L viewpoint V(L), and L virtual viewpoint image data corresponding to each of them is generated (see FIG. 12).
- In the R special image generation process, R virtual viewpoint image data corresponding to the (n − 1) virtual viewpoints selected in order of closeness to the R viewpoint V(R) is generated.
- In either case, (n − 1) items of L virtual viewpoint image data or (n − 1) items of R virtual viewpoint image data are generated.
- the image output circuit 35 outputs n-viewpoint parallax image data to the image recording unit 27 when virtual viewpoint image data is input from the virtual viewpoint image generation unit 44.
- the image output circuit 35 outputs the L and R viewpoint image data I (L) and I (R) in the image reading circuit 31 to the image recording unit 27 when there is no input of virtual viewpoint image data.
- The n-viewpoint parallax image data is one of the following: normal parallax image data, L parallax image data, or R parallax image data, depending on the number and type of virtual viewpoint image data input from the virtual viewpoint image generation unit 44.
- The normal parallax image data consists of the (n − 2) items of L and R virtual viewpoint image data from the virtual viewpoint image generation unit 44 and the L and R viewpoint image data I(L) and I(R) from the image reading circuit 31.
- The L parallax image data consists of the (n − 1) items of L virtual viewpoint image data from the virtual viewpoint image generation unit 44 and the L viewpoint image data I(L) from the image reading circuit 31.
- The R parallax image data consists of the (n − 1) items of R virtual viewpoint image data from the virtual viewpoint image generation unit 44 and the R viewpoint image data I(R) from the image reading circuit 31.
- The CPU 21 selectively executes one of the following: normal parallax image data output processing, L parallax image data output processing, R parallax image data output processing, and L and R viewpoint image data output processing.
- the normal parallax image data output process is executed when both the L and R viewpoint image data I (L) and I (R) are normally photographed.
- The L parallax image data output processing is executed when a shooting failure has occurred in the R viewpoint image data I(R), and the R parallax image data output processing is executed when a shooting failure has occurred in the L viewpoint image data I(L).
- the L and R viewpoint image data output processing is executed when a shooting failure has occurred in both the L and R viewpoint image data I (L) and I (R).
- In this case, the CPU 21 causes the monitor 30 to display a warning message indicating that a shooting failure has occurred in both the L and R viewpoint image data I(L) and I(R).
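The dispatch among the four output processes, driven by the per-image detection result, can be sketched as follows (an illustrative Python sketch; the returned process names are assumptions):

```python
def select_output_process(failure_in_l, failure_in_r):
    """Choose the output process from the shooting-failure detection
    result for each viewpoint image."""
    if not failure_in_l and not failure_in_r:
        return "normal_parallax_output"   # both images normal
    if failure_in_r and not failure_in_l:
        return "L_parallax_output"        # L normal: L image + L virtual viewpoints
    if failure_in_l and not failure_in_r:
        return "R_parallax_output"        # R normal: R image + R virtual viewpoints
    return "LR_output_with_warning"       # both abnormal: warn, record the two views
```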
- the memory card 16 removed from the compound eye camera 11 is set in the image input I / F 28.
- the CPU 21 issues a read command for the image file 15 to the image reading circuit 31.
- the image reading circuit 31 reads the designated image file 15 from the memory card 16 via the image input I / F 28 and temporarily stores it.
- the CPU 21 issues a shooting failure detection command to the shooting failure detection circuit 32.
- The photographing failure detection circuit 32 analyzes the L and R viewpoint image data I(L) and I(R) in the image reading circuit 31, detects the presence or absence of a shooting failure in each of the image data I(L) and I(R), and sends the detection result to the CPU 21.
- The CPU 21 executes the normal parallax image data output processing when no shooting failure has occurred in either of the L and R viewpoint image data I(L) and I(R). The CPU 21 executes the L parallax image data output processing when a shooting failure has occurred in the R viewpoint image data I(R), and the R parallax image data output processing when a shooting failure has occurred in the L viewpoint image data I(L). The CPU 21 executes the L and R viewpoint image data output processing when a shooting failure has occurred in both the L and R viewpoint image data I(L) and I(R).
- the CPU 21 issues a normal viewpoint setting command to the virtual viewpoint setting unit 43.
- the virtual viewpoint setting unit 43 starts the normal viewpoint setting process shown in FIG. 5A.
- the virtual viewpoint setting unit 43 obtains a parallax value corresponding to the subject position closest to the compound eye camera 11 and a parallax value corresponding to the farthest subject position based on the parallax map 38L.
- The virtual viewpoints V(1) to V(4) are set so that the closest subject is observed a predetermined distance in front of the recording surface of the sheet 17 and the farthest subject is observed a predetermined distance behind it. Note that various known methods may be used for setting the virtual viewpoints.
- the CPU 21 issues an execution command for L normal image generation processing to the virtual viewpoint image generation unit 44.
- The virtual viewpoint image generation unit 44 generates L virtual viewpoint image data I L (1) and I L (2) corresponding to the virtual viewpoints V(1) and V(2), based on the parallax map 38L and the L viewpoint image data I(L).
- A method for generating virtual viewpoint image data based on a parallax map and viewpoint image data is a well-known technique, and its description is omitted here (see, for example, JP 2001-346226 A and JP 2003-346188 A).
- After the L virtual viewpoint image data I L (1) and I L (2) are generated, the CPU 21 issues a command to the parallax map generation circuit 33 to generate the parallax map 38R. Thereby, the corresponding point 40 on the L viewpoint image data I(L) is extracted for each pixel 39 of the R viewpoint image data I(R), and the parallax map 38R is generated from this extraction result.
- the CPU 21 issues a normal viewpoint setting command to the virtual viewpoint setting unit 43 and issues an R normal image generation processing execution command to the virtual viewpoint image generation unit 44.
- Thereby, R virtual viewpoint image data I R (3) and I R (4) corresponding to the virtual viewpoints V(3) and V(4) are generated.
- the CPU 21 issues a normal parallax image data output command to the image output circuit 35.
- the image output circuit 35 reads the L and R viewpoint image data I (L) and I (R) from the image reading circuit 31.
- The image output circuit 35 outputs, to the image recording unit 27, six-viewpoint normal parallax image data composed of the virtual viewpoint image data I L (1), I L (2), I R (3), I R (4) and the L and R viewpoint image data I(L) and I(R).
- the normal parallax image data output process is thus completed.
- The corresponding point 40a in the R viewpoint image data I(R) corresponding to the pixel 39a in the L viewpoint image data I(L) may be hidden by the finger image 46. Even in such a case, a parallax may be obtained between the pixel 39a and a corresponding point 40b, taking as the corresponding point 40b a nearby pixel that is not hidden by the finger image 46. Although this parallax value is incorrect, being the result of an erroneous correspondence, it is still a value indicating some depth of the subject. For this reason, as shown in FIG. 11C, even in the abnormal region 47 of the parallax map 38L, obtained by the corresponding point search based on the normally captured L viewpoint image data I(L), the parallax values may indicate some depth of the subject.
- On the other hand, for each pixel of the finger image 46 in the R viewpoint image data I(R), there is no corresponding point on the L viewpoint image data I(L), so the parallax value is unknown. For this reason, as shown in FIG. 11D, in the parallax map 38R obtained by the corresponding point search based on the R viewpoint image data I(R), the parallax values of the abnormal region 49 become the default value (usually 0 or 255). Accordingly, the depth distribution of the subject is obtained more accurately in the parallax map 38L than in the parallax map 38R.
- the parallax map 38L is generated when a shooting failure has occurred in the R viewpoint image data I (R).
- the parallax map 38L is output to the virtual viewpoint image generation circuit 34.
- the CPU 21 issues a special viewpoint setting command to the virtual viewpoint setting unit 43.
- the virtual viewpoint setting unit 43 that has received the special viewpoint setting command executes a special viewpoint setting process to set virtual viewpoints V (1) to V (10).
- the CPU 21 issues an execution command for L special image generation processing to the virtual viewpoint image generation unit 44.
- Upon receiving the command from the CPU 21, the virtual viewpoint image generation unit 44 generates L virtual viewpoint image data I L (1) to I L (5) corresponding to the virtual viewpoints V(1) to V(5), based on the parallax map 38L and the L viewpoint image data I(L). Since the virtual viewpoint image data is generated from the normal L viewpoint image data I(L), the finger image 46 is not included in it.
- When the abnormal region 47 shown in FIG. 11C has occurred in the parallax map 38L, the closer a virtual viewpoint is to the R viewpoint V(R), the higher the probability that an abnormality occurs in the L virtual viewpoint image data corresponding to that viewpoint, and the greater the degree of the abnormality.
- Such an abnormality is, for example, a part of the background being displayed in front of the main subject located at the center of the image.
- Since the virtual viewpoint image generation unit 44 generates the L virtual viewpoint image data I L (1) to I L (5) corresponding to the five virtual viewpoints V(1) to V(5) closest to the L viewpoint V(L), the probability that such abnormalities occur in these image data is low, and even if one occurs, its degree is small.
- The L virtual viewpoint image data I L (1) to I L (5) are input to the image output circuit 35.
- the CPU 21 issues an output command for L parallax image data to the image output circuit 35.
- The image output circuit 35 outputs, to the image recording unit 27, six-viewpoint L parallax image data composed of the L virtual viewpoint image data I L (1) to I L (5) and the L viewpoint image data I(L) read from the image reading circuit 31. The L parallax image data output process is thus completed.
- As shown in FIG. 13, the flow of the R parallax image data output process is basically the same as that of the L parallax image data output process, except that the parallax map 38R is generated instead. Based on the parallax map 38R and the R viewpoint image data I(R), R virtual viewpoint image data I R (6) to I R (10) corresponding to the five virtual viewpoints V(6) to V(10) closest to the R viewpoint V(R) are generated. Then, six-viewpoint R parallax image data composed of the R virtual viewpoint image data I R (6) to I R (10) and the R viewpoint image data I(R) is output to the image recording unit 27. The R parallax image data output process is thus completed.
- the CPU 21 issues an output command of L and R viewpoint image data to the image output circuit 35.
- the image output circuit 35 reads out the L and R viewpoint image data I (L) and I (R) from the image reading circuit 31 and outputs them to the image recording unit 27.
- the CPU 21 stops the image recording process.
- When any of the normal parallax image data, the L parallax image data, and the R parallax image data is input to the image recording unit 27, the CPU 21 issues a six-viewpoint image recording command to the image recording unit 27. In response to this command, the image recording unit 27 records linear images, obtained by dividing the six viewpoint images into linear shapes, on the back surface of the sheet 17.
- When recording is performed based on the L or R parallax image data, the stereoscopic effect of the stereoscopic image is lower than when recording is performed based on the normal parallax image data, but since no shooting failure such as the finger image 46 or flare appears, a good stereoscopic image can be observed.
- the CPU 21 issues a two-viewpoint image recording command to the image recording unit 27.
- the image recording unit 27 records a linear image obtained by dividing the L and R viewpoint image data I (L) and I (R) into linear shapes on the back surface of the sheet 17.
- the above-described processing is repeatedly executed.
- the present invention can also be applied to the case of recording parallax image data of three or more viewpoints on the sheet 17.
- the printer 52 according to the second embodiment of the present invention will be described with reference to FIG.
- in the first embodiment, the number of virtual viewpoints set during the special viewpoint setting process is predetermined.
- in the printer 52, by contrast, the number of virtual viewpoints to be set is determined according to the area of the region in the L and R viewpoint image data where a shooting failure such as the finger image 46 has occurred (hereinafter simply referred to as the shooting failure region).
- the printer 52 has basically the same configuration as the printer 12 of the first embodiment. However, the shooting failure detection circuit 32 is provided with an area detection unit 53 and the CPU 21 functions as a virtual viewpoint setting control unit 54.
- the memory 23 stores a setting number determination table 55.
- the setting number determination table 55 stores the area S of the shooting failure region in association with the number of virtual viewpoints to be set. In the setting number determination table 55, the two are associated so that the number of virtual viewpoints set increases each time the area S increases by a certain amount.
- the area detection unit 53 operates when the shooting failure detection circuit 32 detects the occurrence of a shooting failure, calculates the area S of the shooting failure region, and outputs the result to the CPU 21.
- the area S is obtained, for example, by specifying a shooting failure area in the image data and counting the number of pixels in this area. Note that the shooting failure area can be specified, for example, by comparing normally captured image data and image data in which a shooting failure has occurred using various matching methods.
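Under the assumption that a normally captured reference frame is available for comparison, as the matching-based identification above suggests, the area S could be computed as a thresholded per-pixel difference followed by a pixel count. The function name and threshold value below are hypothetical:

```python
import numpy as np

def failure_region_area(normal, failed, threshold=30):
    """Hypothetical sketch: mark pixels that differ strongly between a
    normally captured image and one in which a shooting failure occurred,
    then count them to obtain the area S in pixels."""
    diff = np.abs(normal.astype(int) - failed.astype(int))
    region = diff > threshold          # boolean mask of the failure region
    return int(region.sum())           # area S = number of masked pixels
```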
- the virtual viewpoint setting control unit 54 operates during the special viewpoint setting process and determines the number of virtual viewpoint settings.
- the virtual viewpoint setting control unit 54 refers to the setting number determination table 55 in the memory 23 based on the value of the area S input from the area detection unit 53, determines the number of virtual viewpoints to be set, and sends the result to the virtual viewpoint setting unit 43. Accordingly, the number of virtual viewpoints set during the special viewpoint setting process can be increased or decreased according to the area S of the shooting failure region.
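A minimal sketch of the setting number determination table 55 and its lookup: the patent only states that the number of virtual viewpoints increases stepwise with the area S, so the thresholds and viewpoint counts below are invented for illustration.

```python
# Hypothetical setting number determination table: each entry maps an
# upper bound on the area S (in pixels) to a number of virtual viewpoints.
SETTING_NUMBER_TABLE = [
    (1_000, 5),    # S <= 1,000   -> 5 virtual viewpoints
    (5_000, 7),    # S <= 5,000   -> 7 virtual viewpoints
    (20_000, 9),   # S <= 20,000  -> 9 virtual viewpoints
]

def setting_number(area_s, table=SETTING_NUMBER_TABLE, default=11):
    """Return the number of virtual viewpoints to set for a shooting
    failure area S, increasing stepwise as S grows."""
    for upper_bound, count in table:
        if area_s <= upper_bound:
            return count
    return default  # very large failure regions get the most viewpoints
```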
- as the area S of the shooting failure region increases, the number of virtual viewpoints set can be increased, which brings the position of the virtual viewpoint of each virtual viewpoint image data closer to the L/R viewpoint on the side where no shooting failure has occurred. For example, in the L parallax image data output process, the positions of the virtual viewpoints V (1) to V (5) approach the L viewpoint V (L), which reduces the influence of the shooting failure on the virtual viewpoint image data.
- a stereoscopic image printing system 58 according to the third embodiment of the present invention will be described with reference to FIG.
- virtual viewpoint image data is generated based on the parallax map generated by the parallax map generation circuit 33 during the L and R parallax image data output processing.
- in the stereoscopic image printing system 58, when a shooting failure has occurred in one of the L and R viewpoint image data I (L) and I (R), the virtual viewpoint image data is generated using a parallax map stored in advance.
- the stereoscopic image printing system 58 includes a compound eye camera 59 and a printer 60.
- the compound-eye camera 59 is basically the same as the compound-eye camera 11 of the first embodiment, but has a person photographing mode, a landscape photographing mode, and a normal photographing mode as photographing modes.
- the portrait shooting mode is a mode in which shooting is performed under shooting conditions suitable for shooting a person such as setting the focus to a foreground.
- the landscape shooting mode is a mode in which shooting is performed under shooting conditions suitable for landscape shooting, for example, the focus is set to a distant view.
- the normal shooting mode is a mode that can cover a wide range of shooting conditions corresponding to shooting of a person or landscape.
- the printer 60 basically has the same configuration as the printer 12 of the first embodiment.
- the image processing circuit 64 of the printer 60 includes a parallax map storage unit 65, a parallax map output circuit 66, and a virtual viewpoint image generation circuit 67.
- the parallax map storage unit 65 stores a normal shooting parallax map 71, a person shooting parallax map 72, and a landscape shooting parallax map 73.
- the normal shooting parallax map 71 is a parallax map that assumes L and R viewpoint image data in which the main subject H is located at the center of the image, subjects in the lower part of the image are located in front of the main subject H, and subjects in the upper part of the image are located behind the main subject H.
- the normal shooting parallax map 71 is divided into four regions: a region A (0) where the parallax value is set to zero, a region A (−10) where the parallax value is set to −10, a region A (+10) where the parallax value is set to +10, and a region A (+20) where the parallax value is set to +20.
- with the region A (0) as a reference, a region with a smaller parallax value is located closer to the viewer, and a region with a larger parallax value is located farther away.
- Area A (0) has a substantially trapezoidal shape and is set at the center of the map. This is because the portion most watched by the viewer is the main subject H, and if this portion has parallax, the visual fatigue of the viewer increases.
- the other area A ( ⁇ 10), area A (+10), and area A (+20) are set in the lower map area, the middle map area, and the upper map area, respectively, excluding the center of the map.
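The four-region layout described above might be constructed as below. This sketch simplifies the substantially trapezoidal region A (0) to a rectangle, and the band proportions are assumed, since the patent does not give exact dimensions:

```python
import numpy as np

def normal_shooting_parallax_map(h, w):
    """Hypothetical layout of the normal shooting parallax map 71:
    zero parallax for the gazed-at centre (main subject H), -10 for the
    lower (near) band, +10 for the middle band, +20 for the upper (far)
    band."""
    m = np.full((h, w), 10, dtype=int)                 # region A(+10): middle
    m[: h // 3, :] = 20                                # region A(+20): upper part
    m[2 * h // 3 :, :] = -10                           # region A(-10): lower part
    m[h // 3 : 2 * h // 3, w // 4 : 3 * w // 4] = 0    # region A(0): map centre
    return m
```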
- the person-photographing parallax map 72 is a parallax map assuming L and R viewpoint image data obtained by person photographing.
- a substantially rectangular area A (0) is set at the lower center of the map and the center of the map.
- the area A ( ⁇ 10) is set in the lower part of the map so as to surround the lower end of the area A (0).
- the area A (+10) is set so as to surround the area A (0) in a range other than the area A ( ⁇ 10).
- the region A (+20) is set so as to surround the region A (+10).
- the landscape shooting parallax map 73 is a parallax map assuming L and R viewpoint image data obtained by landscape shooting.
- a band-like region A ( ⁇ 10), region A (0), region A (+10), and region A (+20) are set in order from the lower map level to the upper map level.
- the parallax map output circuit 66 selects, from the parallax maps 71 to 73 in the parallax map storage unit 65, the parallax map best suited to the shooting scene of the image file 15 (hereinafter referred to as the optimal parallax map), and outputs it to the virtual viewpoint image generation circuit 67.
- the parallax map output circuit 66 includes a shooting scene determination unit 75.
- the shooting scene determination unit 75 refers to the incidental information 62 of the image file 15 and checks the type of shooting mode used when the image file 15 was obtained, thereby determining whether the shooting scene category of the image file 15 is person shooting, landscape shooting, or normal shooting.
- the virtual viewpoint image generation circuit 67 basically generates virtual viewpoint image data in the same manner as the virtual viewpoint image generation circuit 34 of the first embodiment. However, when a shooting failure has occurred in one of the L and R viewpoint image data I (L) and I (R), the virtual viewpoint setting unit 77 executes a special viewpoint setting process (hereinafter referred to as the special viewpoint setting process X) different from that of the first embodiment.
- the virtual viewpoint image generation unit 78 executes a special image generation process (hereinafter referred to as a special image generation process X) different from that of the first embodiment.
- in the special image generation process X, either the L special image generation process X or the R special image generation process X is selectively executed, depending on which of the L and R viewpoint image data I (L) and I (R) has the shooting failure.
- the CPU 21 issues an optimal parallax map output command to the parallax map output circuit 66.
- the parallax map output circuit 66 operates the shooting scene determination unit 75.
- the shooting scene determination unit 75 checks the type of shooting mode recorded in the incidental information 62 of the image file 15 in the image reading circuit 31, thereby determining which of person shooting, landscape shooting, and normal shooting the shooting scene category is.
- the parallax map output circuit 66 selects an optimal parallax map corresponding to the determination result of the shooting scene determination unit 75 from the parallax map storage unit 65, and outputs this optimal parallax map to the virtual viewpoint image generation circuit 67.
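The mode-to-map selection performed by the shooting scene determination unit 75 and the parallax map output circuit 66 amounts to a simple lookup. The dictionary keys and the fallback to the normal map below are assumptions for illustration:

```python
# Hypothetical mapping from the shooting mode recorded in the incidental
# information 62 to the stored parallax map (71-73); names are illustrative.
PARALLAX_MAPS = {
    "normal": "normal shooting parallax map 71",
    "person": "person shooting parallax map 72",
    "landscape": "landscape shooting parallax map 73",
}

def select_optimal_parallax_map(incidental_info):
    """Return the parallax map best suited to the shooting scene,
    falling back to the normal map for unknown or missing modes."""
    mode = incidental_info.get("shooting_mode", "normal")
    return PARALLAX_MAPS.get(mode, PARALLAX_MAPS["normal"])
```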
- the CPU 21 executes an L parallax image data output process when a shooting failure has occurred in the R viewpoint image data I (R).
- the virtual viewpoint setting unit 43 that has received the special viewpoint setting command executes a special viewpoint setting process X to set virtual viewpoints V (1) to V (5).
- the CPU 21 issues an execution command for the L special image generation process X to the virtual viewpoint image generation unit 78.
- in response to the command from the CPU 21, the virtual viewpoint image generation unit 78 generates the L virtual viewpoint image data I L (1) to I L (5) corresponding to the virtual viewpoints V (1) to V (5), based on the optimal parallax map and the L viewpoint image data I (L). Unlike the first embodiment, the parallax map 38L does not need to be generated, so the load on the image processing circuit 64 is reduced and the processing time is shorter than in the first embodiment. In addition, since a pre-stored parallax map is used, reasonably good virtual viewpoint image data is obtained even when the shooting failure region in one of the L and R viewpoint image data I (L) and I (R) is large.
- the L virtual viewpoint image data I L (1) to I L (5) are input to the image output circuit 35.
- the CPU 21 issues an output command for L parallax image data to the image output circuit 35.
- six viewpoint L parallax image data including the L virtual viewpoint image data I L (1) to I L (5) and the L viewpoint image data I (L) is output to the image recording unit 27.
- the L parallax image data output process is thus completed.
- R parallax image data output processing As shown in FIG. 23, the CPU 21 executes the R parallax image data output process when a shooting failure has occurred in the L viewpoint image data I (L). The flow of this process is basically the same as that of the L parallax image data output process described above, except that the R virtual viewpoint image data I R (1) to I R (5) corresponding to the virtual viewpoints V (1) to V (5) are generated based on the optimal parallax map and the R viewpoint image data I (R). Six-viewpoint R parallax image data consisting of the R virtual viewpoint image data I R (1) to I R (5) and the R viewpoint image data I (R) is then output to the image recording unit 27, completing the R parallax image data output process.
- since the processing after the output of the L and R parallax image data is the same as in the first embodiment, its description is omitted. Also in the third embodiment, the virtual viewpoint image data is generated based on image data in which no shooting failure has occurred, so a good stereoscopic image can be observed.
- the present invention can also be applied to the case of recording parallax image data of three or more viewpoints on the sheet 17.
- the normal/person/landscape shooting parallax maps 71 to 73 have been given as examples of the parallax maps stored in the parallax map storage unit 65, but parallax maps corresponding to various other shooting scenes may also be stored. Furthermore, in the third embodiment the shooting scene is determined based on the incidental information 62 of the image file 15; alternatively, the shooting scene may be determined based on the result of detecting, by well-known face detection processing, the presence or absence and size of faces in the L and R viewpoint image data I (L) and I (R).
- in the third embodiment, five virtual viewpoints are set during the six-viewpoint L and R parallax image data output processing; however, as in the first embodiment, ten virtual viewpoints may be set and five individual virtual viewpoint image data may be generated from them. In this case, the virtual viewpoint image data may be generated in the same manner as in the first embodiment, except that the optimal parallax map is used instead of the parallax maps 38L and 38R.
- a stereoscopic image printing system 80 according to the fourth embodiment of the present invention will be described with reference to FIG.
- in each of the above embodiments, the virtual viewpoint image data is generated by the printer, whereas in the stereoscopic image printing system 80 it is generated by the compound-eye camera.
- the stereoscopic image printing system 80 includes a compound eye camera 81 and a printer 82.
- the compound eye camera 81 includes a pair of imaging units 14L and 14R.
- the imaging units 14L and 14R include an image sensor and the like (not shown) in addition to the photographing lens 14a.
- the CPU 85 performs overall control of each unit of the compound eye camera 81 by sequentially executing various programs read from the memory 87 based on the control signal from the operation unit 86.
- An operation unit 86, a memory 87, a signal processing circuit 89, a display driver 90 and a monitor 91, an image processing circuit 92, a recording control circuit 93, and the like are connected to the CPU 85 via a bus 88.
- the operation unit 86 includes, for example, a power switch, a mode changeover switch for switching an operation mode (for example, a shooting mode, a reproduction mode, etc.) of the compound-eye camera 81, a shutter button, and the like.
- An AFE (analog front end) 95 performs noise removal processing, image signal amplification processing, and digitization processing on the analog image signals output from the imaging units 14L and 14R, respectively, to generate L and R image signals.
- the L and R image signals are output to the signal processing circuit 89.
- the signal processing circuit 89 subjects the L and R image signals input from the AFE 95 to various image processing such as gradation conversion, white balance correction processing, ⁇ correction processing, YC conversion processing, and the like, and L and R viewpoint image data I (L) and I (R) are generated.
- the signal processing circuit 89 stores the L and R viewpoint image data I (L) and I (R) in the memory 87.
- each time the L and R viewpoint image data I (L) and I (R) are stored in the memory 87, the display driver 90 reads them from the memory 87, generates an image display signal at a fixed timing, and outputs it to the monitor 91. As a result, a through image is displayed on the monitor 91.
- the image processing circuit 92 operates when the shutter button of the operation unit 86 is pressed. Since the image processing circuit 92 has basically the same configuration as the image processing circuit 29 of the first embodiment, refer to FIG. 3 for its configuration. Note that the shooting failure detection circuit 32 of the image processing circuit 92 detects shooting failures by the same detection method as in the first embodiment. Alternatively, detection sensors 84L and 84R that detect contact with a finger or the like may be provided around the photographing lens 14a, and the occurrence of a shooting failure may be detected based on their detection results, although this increases the manufacturing cost of the compound-eye camera 81.
- the image reading circuit 31 of the fourth embodiment reads the L and R viewpoint image data I (L) and I (R) stored in the memory 87. Further, the image output circuit 35 of the fourth embodiment stores six viewpoint parallax image data or L and R viewpoint image data in the memory 87.
- the recording control circuit 93 reads the parallax image data or the L and R viewpoint image data stored in the memory 87 when the shutter button of the operation unit 86 is fully pressed, and stores an image file 97 in which these are combined into one. Formed and recorded on the memory card 16.
- the printer 82 has the same configuration as that of the printer 12 of the first embodiment except that the printer 82 does not include the image processing circuit 29.
- the printer 82 records an image on the sheet 17 based on the parallax image data or the L and R viewpoint image data read from the memory card 16.
- in the fourth embodiment, the generation of parallax image data performed by the printer 12 of the first embodiment is performed by the compound-eye camera 81; the generation of parallax image data performed by the printer 60 of the third embodiment may likewise be performed by the compound-eye camera 81.
- in each of the above embodiments, the virtual viewpoint image data is generated based on the L and R viewpoint image data obtained by a two-lens compound-eye camera; however, the present invention can also be applied to the case where virtual viewpoint image data is generated using any two of the viewpoint images of three or more viewpoints obtained by a compound-eye camera with three or more lenses.
- in each of the above embodiments, the virtual viewpoint is set between the L and R viewpoints, but the virtual viewpoint may also be set to the left of the L viewpoint or to the right of the R viewpoint.
- in the above embodiments, a printer or compound-eye camera that generates virtual viewpoint image data has been described as an example; however, the present invention can be applied to various other devices that generate virtual viewpoint image data, such as a stereoscopic image display device that performs stereoscopic display based on parallax images, or a display device that displays parallax images in a predetermined order.
Abstract
Description
As shown in FIG. 7, when execution of the normal parallax image data output process is decided, the division number of the convergence angle α (hereinafter referred to as the viewpoint division number) K is set to 5, and the number of virtual viewpoints to be set is set to 4. The CPU 21 then issues a generation command for the parallax map 38L to the parallax map generation circuit 33. In response to this command, the parallax map generation circuit 33 extracts the corresponding points 40 on the R viewpoint image data I (R) that correspond to the respective pixels 39 of the L viewpoint image data I (L), and generates the parallax map 38L based on the extraction result. The parallax map 38L is output to the virtual viewpoint image generation circuit 34.
As shown in FIG. 9, when execution of the L parallax image data output process is decided, the viewpoint division number K is set to 11, and the number of virtual viewpoints to be set is set to 10. The CPU 21 issues a map generation command for the parallax map 38L to the parallax map generation circuit 33, whereupon the parallax map generation circuit 33 generates the parallax map 38L.
As shown in FIG. 14, when the CPU 21 decides to execute the L and R viewpoint image data output process, it causes the monitor 30 to display a warning indicating that a shooting failure has occurred in both of the L and R viewpoint image data I (L) and I (R). Furthermore, the CPU 21 temporarily stops the image recording process and causes the monitor 30 to display a message asking whether or not to continue the image recording process.
1. Normal parallax image data output process
(1) Viewpoint division number: K = n − 1
(2) Number of virtual viewpoints set: n
(3) L virtual viewpoint image data: IL(1), IL(2), ... IL((K+1)/2 − 1)
(4) R virtual viewpoint image data: IR((K+1)/2), IR((K+1)/2 + 1), ... IR(K − 1)
2. L parallax image data output process
(1) Viewpoint division number: K = 2n − 1
(2) Number of virtual viewpoints set: n
(3) L virtual viewpoint image data: IL(1), IL(2), ... IL((K+1)/2 − 1)
3. R parallax image data output process
(1) Viewpoint division number: K = 2n − 1
(2) Number of virtual viewpoints set: n
(3) R virtual viewpoint image data: IR((K+1)/2), IR((K+1)/2 + 1), ... IR(K − 1)
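The index ranges in the summary above can be computed directly from the viewpoint division number K. The helper below is a sketch of those formulas; for example, K = 5 yields IL(1), IL(2) and IR(3), IR(4), matching the normal parallax image data output process of the first embodiment:

```python
def virtual_viewpoint_indices(k):
    """Given the viewpoint division number K, return the index ranges of
    the L and R virtual viewpoint images per the summary above:
    IL(1)..IL((K+1)/2 - 1) and IR((K+1)/2)..IR(K - 1)."""
    mid = (k + 1) // 2
    l_indices = list(range(1, mid))   # L virtual viewpoint image indices
    r_indices = list(range(mid, k))   # R virtual viewpoint image indices
    return l_indices, r_indices
```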
As shown in FIG. 21, when execution of the L parallax image data output process is decided, the viewpoint division number K is set to 6 and the number of virtual viewpoints to be set is set to 5. The CPU 21 then issues a special viewpoint setting command to the virtual viewpoint setting unit 43.
11, 59, 81 Compound-eye camera
52, 60, 82 Printer
29, 64, 92 Image processing circuit
32 Shooting failure detection circuit
33 Parallax map generation circuit
34, 67 Virtual viewpoint image generation circuit
35 Image output circuit
53 Area detection unit
54 Virtual viewpoint setting control unit
65 Parallax map storage unit
66 Parallax map output circuit
Claims (10)
- An image generation device that generates, based on a first viewpoint image and a second viewpoint image having parallax obtained by photographing a subject from different viewpoints, virtual viewpoint images of the subject as viewed from a predetermined number of virtual viewpoints different from those viewpoints, the image generation device comprising:
detection means for detecting whether or not there is an abnormality in the first viewpoint image and the second viewpoint image;
parallax map generation means for, when one of the first viewpoint image and the second viewpoint image is an abnormal image having an abnormality based on a detection result of the detection means, extracting corresponding points on the abnormal image respectively corresponding to pixels of the other, normal image having no abnormality, and generating a parallax map representing a depth distribution of the subject based on the extraction result; and
virtual viewpoint image generation means for generating the virtual viewpoint images based on the parallax map and the normal image. - The image generation device according to claim 1, further comprising image output means for outputting the normal image and the virtual viewpoint images to a predetermined output destination.
- The image generation device according to claim 1 or 2, further comprising virtual viewpoint setting means for setting, between the viewpoints of the abnormal image and the normal image, more virtual viewpoints than the predetermined number,
wherein the virtual viewpoint image generation means selects the predetermined number of virtual viewpoints from the virtual viewpoints set by the virtual viewpoint setting means, in order from the side closer to the viewpoint of the normal image, and generates the virtual viewpoint images. - The image generation device according to claim 3, wherein the virtual viewpoints are set at equal angular intervals centered on the subject.
- The image generation device according to claim 3 or 4, further comprising area detection means for detecting the area of the region in the abnormal image where the abnormality has occurred,
wherein the virtual viewpoint setting means increases the number of virtual viewpoints set as the area increases. - The image generation device according to any one of claims 1 to 5, further comprising image acquisition means for acquiring the first viewpoint image and the second viewpoint image from a photographing device having a plurality of imaging units that photograph the subject from different viewpoints.
- The image generation device according to claim 6, wherein the abnormality includes at least one of flare and an image of an obstacle that blocked at least part of a photographing lens of the imaging unit.
- A printer comprising:
the image generation device according to any one of claims 1 to 7; and
recording means for recording, when one of the first viewpoint image and the second viewpoint image is the abnormal image, an image capable of being viewed stereoscopically on a recording medium based on the normal image and the virtual viewpoint images. - The printer according to claim 8, further comprising warning display means for displaying a warning when both the first viewpoint image and the second viewpoint image have an abnormality.
- An image generation method for generating, based on a first viewpoint image and a second viewpoint image having parallax obtained by photographing a subject from different viewpoints, virtual viewpoint images of the subject as viewed from a predetermined number of virtual viewpoints different from those viewpoints, the method comprising:
a detection step of detecting whether or not there is an abnormality in the first viewpoint image and the second viewpoint image;
a parallax map generation step of, when one of the first viewpoint image and the second viewpoint image is an abnormal image having an abnormality based on a detection result of the detection step, extracting corresponding points on the abnormal image respectively corresponding to pixels of the other, normal image having no abnormality, and generating a parallax map representing a depth distribution of the subject based on the extraction result; and
a virtual viewpoint image generation step of generating the virtual viewpoint images based on the parallax map and the normal image.
Priority Applications (3)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN2011800178085A CN102835118A (zh) | 2010-04-06 | 2011-03-18 | 图像生成装置、方法及打印机 |
US13/634,539 US20130003128A1 (en) | 2010-04-06 | 2011-03-18 | Image generation device, method, and printer |
JP2012509388A JPWO2011125461A1 (ja) | 2010-04-06 | 2011-03-18 | 画像生成装置及び方法並びにプリンタ |
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2010-087751 | 2010-04-06 | ||
JP2010087751 | 2010-04-06 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2011125461A1 true WO2011125461A1 (ja) | 2011-10-13 |
Family
ID=44762417
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2011/056561 WO2011125461A1 (ja) | 2010-04-06 | 2011-03-18 | 画像生成装置及び方法並びにプリンタ |
Country Status (4)
Country | Link |
---|---|
US (1) | US20130003128A1 (ja) |
JP (1) | JPWO2011125461A1 (ja) |
CN (1) | CN102835118A (ja) |
WO (1) | WO2011125461A1 (ja) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014006243A (ja) * | 2012-05-28 | 2014-01-16 | Ricoh Co Ltd | 異常診断装置、異常診断方法、撮像装置、移動体制御システム及び移動体 |
JP2015005891A (ja) * | 2013-06-21 | 2015-01-08 | キヤノン株式会社 | 撮像装置およびその制御方法 |
Families Citing this family (8)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2461238B1 (en) | 2010-12-02 | 2017-06-28 | LG Electronics Inc. | Image display apparatus including an input device |
US9363504B2 (en) * | 2011-06-23 | 2016-06-07 | Lg Electronics Inc. | Apparatus and method for displaying 3-dimensional image |
KR20130036593A (ko) * | 2011-10-04 | 2013-04-12 | 삼성디스플레이 주식회사 | 이미지 중첩 현상을 방지할 수 있는 3d 디스플레이 장치 |
US10477184B2 (en) * | 2012-04-04 | 2019-11-12 | Lifetouch Inc. | Photography system with depth and position detection |
US10681280B2 (en) * | 2017-04-06 | 2020-06-09 | Lenovo (Singapore) Pte. Ltd. | Camera component location indicator on display |
JP2019103067A (ja) * | 2017-12-06 | 2019-06-24 | キヤノン株式会社 | 情報処理装置、記憶装置、画像処理装置、画像処理システム、制御方法、及びプログラム |
US11800056B2 (en) | 2021-02-11 | 2023-10-24 | Logitech Europe S.A. | Smart webcam system |
US11800048B2 (en) | 2021-02-24 | 2023-10-24 | Logitech Europe S.A. | Image generating system with background replacement or modification capabilities |
Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001346226A (ja) * | 2000-06-02 | 2001-12-14 | Canon Inc | 画像処理装置、立体写真プリントシステム、画像処理方法、立体写真プリント方法、及び処理プログラムを記録した媒体 |
JP2007053621A (ja) * | 2005-08-18 | 2007-03-01 | Mitsubishi Electric Corp | 画像生成装置 |
JP2009048033A (ja) * | 2007-08-22 | 2009-03-05 | Panasonic Corp | 立体画像撮像装置 |
Family Cites Families (15)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
DE69231667D1 (de) * | 1991-04-22 | 2001-03-08 | Canon Kk | Druckersteuerprogramm-Übertragungsverfahren und Drucker, geeignet zum Empfangen eines Steuerprogrammes |
KR100603601B1 (ko) * | 2004-11-08 | 2006-07-24 | 한국전자통신연구원 | 다시점 콘텐츠 생성 장치 및 그 방법 |
JP4953789B2 (ja) * | 2006-12-07 | 2012-06-13 | キヤノン株式会社 | 画像処理装置、記録装置、画像処理方法、プログラム、および記憶媒体 |
WO2009023156A2 (en) * | 2007-08-15 | 2009-02-19 | Thomson Licensing | Method and apparatus for error concealment in multi-view coded video |
KR100971730B1 (ko) * | 2010-04-15 | 2010-07-21 | (주)에이직뱅크 | 평행축 입체카메라 |
KR101291071B1 (ko) * | 2010-06-08 | 2013-08-01 | 주식회사 에스칩스 | 입체 영상 오류 개선 방법 및 장치 |
JP2012023546A (ja) * | 2010-07-14 | 2012-02-02 | Jvc Kenwood Corp | 制御装置、立体映像撮像装置、および制御方法 |
JP5489897B2 (ja) * | 2010-07-22 | 2014-05-14 | パナソニック株式会社 | ステレオ測距装置及びステレオ測距方法 |
KR101752690B1 (ko) * | 2010-12-15 | 2017-07-03 | 한국전자통신연구원 | 변이 맵 보정 장치 및 방법 |
US20120162386A1 (en) * | 2010-12-22 | 2012-06-28 | Electronics And Telecommunications Research Institute | Apparatus and method for correcting error in stereoscopic image |
KR20130008746A (ko) * | 2011-07-13 | 2013-01-23 | 삼성전자주식회사 | 3d 영상변환장치, 그 구현방법 및 그 저장매체 |
US9076267B2 (en) * | 2011-07-19 | 2015-07-07 | Panasonic Intellectual Property Corporation Of America | Image coding device, integrated circuit thereof, and image coding method |
US9191646B2 (en) * | 2011-08-29 | 2015-11-17 | Nokia Technologies Oy | Apparatus, a method and a computer program for video coding and decoding |
JP5572647B2 (ja) * | 2012-02-17 | 2014-08-13 | 任天堂株式会社 | 表示制御プログラム、表示制御装置、表示制御システム、および表示制御方法 |
JP5948976B2 (ja) * | 2012-03-06 | 2016-07-06 | 富士ゼロックス株式会社 | 画像形成装置および情報処理装置 |
-
2011
- 2011-03-18 JP JP2012509388A patent/JPWO2011125461A1/ja not_active Withdrawn
- 2011-03-18 US US13/634,539 patent/US20130003128A1/en not_active Abandoned
- 2011-03-18 WO PCT/JP2011/056561 patent/WO2011125461A1/ja active Application Filing
- 2011-03-18 CN CN2011800178085A patent/CN102835118A/zh active Pending
Patent Citations (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001346226A (ja) * | 2000-06-02 | 2001-12-14 | Canon Inc | 画像処理装置、立体写真プリントシステム、画像処理方法、立体写真プリント方法、及び処理プログラムを記録した媒体 |
JP2007053621A (ja) * | 2005-08-18 | 2007-03-01 | Mitsubishi Electric Corp | 画像生成装置 |
JP2009048033A (ja) * | 2007-08-22 | 2009-03-05 | Panasonic Corp | 立体画像撮像装置 |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2014006243A (ja) * | 2012-05-28 | 2014-01-16 | Ricoh Co Ltd | 異常診断装置、異常診断方法、撮像装置、移動体制御システム及び移動体 |
JP2015005891A (ja) * | 2013-06-21 | 2015-01-08 | キヤノン株式会社 | 撮像装置およびその制御方法 |
Also Published As
Publication number | Publication date |
---|---|
JPWO2011125461A1 (ja) | 2013-07-08 |
CN102835118A (zh) | 2012-12-19 |
US20130003128A1 (en) | 2013-01-03 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
WO2011125461A1 (ja) | 画像生成装置及び方法並びにプリンタ | |
JP5982751B2 (ja) | 画像処理装置、および画像処理方法、並びにプログラム | |
CN103493484B (zh) | 成像装置和成像方法 | |
JP5299214B2 (ja) | 画像処理装置、および画像処理方法、並びにプログラム | |
JP4657313B2 (ja) | 立体画像表示装置および方法並びにプログラム | |
JP5284306B2 (ja) | 立体撮像装置、ゴースト像処理装置およびゴースト像処理方法 | |
US7920176B2 (en) | Image generating apparatus and image regenerating apparatus | |
US20130113888A1 (en) | Device, method and program for determining obstacle within imaging range during imaging for stereoscopic display | |
US8130259B2 (en) | Three-dimensional display device and method as well as program | |
US20120263372A1 (en) | Method And Apparatus For Processing 3D Image | |
JP5295426B2 (ja) | 複眼撮像装置、その視差調整方法及びプログラム | |
US20100315517A1 (en) | Image recording device and image recording method | |
US20130162764A1 (en) | Image processing apparatus, image processing method, and non-transitory computer-readable medium | |
JP5526233B2 (ja) | 立体視用画像撮影装置およびその制御方法 | |
JP5978573B2 (ja) | 映像信号処理装置および映像信号処理方法 | |
JP2009244502A (ja) | 画像処理装置、画像表示装置、撮像装置及び画像処理方法 | |
US20130069864A1 (en) | Display apparatus, display method, and program | |
US8648953B2 (en) | Image display apparatus and method, as well as program | |
EP2717247A2 (en) | Image processing apparatus and method for performing image rendering based on orientation of display | |
JPWO2013065543A1 (ja) | 視差調節装置及び方法、撮影装置、再生表示装置 | |
WO2011061973A1 (ja) | 立体映像表示装置および動きベクトル導出方法 | |
US20110193937A1 (en) | Image processing apparatus and method, and image producing apparatus, method and program | |
JP2006013851A (ja) | 撮像表示装置および撮像表示方法 | |
CN104041026A (zh) | 图像输出装置、方法以及程序及其记录介质 | |
CN104054333A (zh) | 图像处理装置、方法以及程序及其记录介质 |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
WWE | Wipo information: entry into national phase |
Ref document number: 201180017808.5 Country of ref document: CN |
|
121 | Ep: the epo has been informed by wipo that ep was designated in this application |
Ref document number: 11765363 Country of ref document: EP Kind code of ref document: A1 |
|
WWE | Wipo information: entry into national phase |
Ref document number: 2012509388 Country of ref document: JP |
|
WWE | Wipo information: entry into national phase |
Ref document number: 13634539 Country of ref document: US |
|
NENP | Non-entry into the national phase |
Ref country code: DE |
|
122 | Ep: pct application non-entry in european phase |
Ref document number: 11765363 Country of ref document: EP Kind code of ref document: A1 |