WO2019167571A1 - Image processing device and image processing method - Google Patents

Image processing device and image processing method

Info

Publication number
WO2019167571A1
Authority
WIPO (PCT)
Prior art keywords
color image
image
color
parallax
information
Application number
PCT/JP2019/004143
Other languages
French (fr)
Japanese (ja)
Inventor
一輝 大橋
泰史 佐藤
建行 藤井
松原 義明
Original Assignee
Sony Semiconductor Solutions Corporation
Application filed by Sony Semiconductor Solutions Corporation
Publication of WO2019167571A1

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T5/00 Image enhancement or restoration
    • G06T5/50 Image enhancement or restoration using two or more images, e.g. averaging or subtraction
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/10 Cameras or camera modules comprising electronic image sensors; Control thereof, for generating image signals from different wavelengths
    • H04N23/13 Cameras or camera modules comprising electronic image sensors; Control thereof, for generating image signals from different wavelengths, with multiple sensors

Definitions

  • The present technology relates to an image processing apparatus and an image processing method, and more specifically to an image processing apparatus and an image processing method that synthesize two images into one image.
  • Some smartphones are provided with two cameras on their back.
  • One image can be acquired using the two sets of image data captured by the two cameras.
  • For example, an image processing apparatus has been proposed that includes a first imaging unit that captures a first image, a second imaging unit that captures a second image, a parallax determination unit that determines the presence or absence of parallax between the first image and the second image, and a synthesis unit that synthesizes each pixel of the first image and the second image according to the determination result of the parallax determination unit.
  • A dual camera has a configuration in which, for example, two imaging units are arranged side by side on one surface, such as the back of a smartphone. Since the viewpoints of the two imaging units differ, there is parallax between the two images they capture, and when the two images are combined, this parallax can degrade image quality. It is therefore desirable to combine the two images more appropriately and obtain a higher-quality image.
  • The present technology aims to provide a new image processing technology.
  • An image processing apparatus having a specific configuration can solve the problem caused by parallax and acquire a high-quality image.
  • That is, the present technology provides an image processing apparatus including: a first color image pickup device in which the color filter covering the light receiving surface includes neither a white region nor a filterless region; a second color image pickup device in which the color filter covering the light receiving surface includes a white region or a filterless region; a parallax information acquisition unit that acquires information on the parallax between the first color image captured by the first color image pickup device and the second color image captured by the second color image pickup device; a synthesis information generation unit that generates information on how to combine the two images based on the parallax information acquired by the parallax information acquisition unit; and an image synthesis unit that synthesizes the first color image with the second color image based on the synthesis information generated by the synthesis information generation unit.
  • The color filter included in the first color image pickup device may be a Bayer array color filter.
  • The color filter included in the second color image pickup device may include a white region.
  • The color filter included in the second color image pickup device may include a WRGB array region or a WRG array region.
  • The parallax information acquisition unit may acquire the parallax for each pixel or each pixel block between the first color image and the second color image.
  • The image processing apparatus may further include a parallax compensation unit that performs parallax compensation on the first color image based on the parallax information.
  • The synthesis information generation unit may include an out-of-search-range region detection unit that detects, in a disparity map created based on the parallax information, regions having a disparity value larger than a predetermined search range.
  • The synthesis information generation unit may include an occlusion region detection unit that detects an occlusion region by comparing the first color image and the second color image.
  • The synthesis information generation unit may include a color difference/luminance determination unit that calculates, for each predetermined region in the first color image or the second color image, the value obtained by dividing the color difference variance by the luminance variance, and detects regions in which that value is equal to or greater than a predetermined value.
  • In other words, the synthesis information generation unit includes at least one component selected from the out-of-search-range region detection unit, the occlusion region detection unit, and the color difference/luminance determination unit described above.
  • For a region detected by any of these components, synthesis information indicating that the first color image should not be combined with the second color image can be generated.
  • The synthesis information generation unit can generate, for each region constituting the second color image, synthesis information indicating whether the first color image should or should not be combined.
  • The synthesis information generation unit may also assign, for each region constituting the second color image, a synthesis rate at which the first color image is combined.
  • The present technology also provides an image processing apparatus including: a first color image imaging device; a second color image imaging device having higher sensitivity than the first color image imaging device; a parallax information acquisition unit that acquires information on the parallax between the first color image captured by the first color image imaging device and the second color image captured by the second color image imaging device; a synthesis information generation unit that generates information on how to combine the two images based on the parallax information acquired by the parallax information acquisition unit; and an image synthesis unit that synthesizes the first color image with the second color image based on the synthesis information generated by the synthesis information generation unit.
  • Furthermore, the present technology provides an image processing method including: a first image acquisition step of acquiring a first color image with a first color image pickup device in which the color filter covering the light receiving surface includes neither a white region nor a filterless region; a second image acquisition step of acquiring a second color image with a second color image pickup device in which the color filter covering the light receiving surface includes a white region or a filterless region; a parallax information acquisition step of acquiring information on the parallax between the first color image and the second color image; a synthesis information generation step of generating information on how to combine the two images based on the parallax information acquired in the parallax information acquisition step; and an image synthesis step of synthesizing the first color image with the second color image based on the synthesis information generated in the synthesis information generation step.
  • The present technology also provides an image processing method including: a first image acquisition step of acquiring a first color image with a first color image imaging device; a second image acquisition step of acquiring a second color image with a second color image imaging device having higher sensitivity than the first color image imaging device; a parallax information acquisition step of acquiring information on the parallax between the first color image and the second color image; a synthesis information generation step of generating information on how to combine the two images based on the parallax information acquired in the parallax information acquisition step; and an image synthesis step of synthesizing the first color image with the second color image based on the synthesis information generated in the synthesis information generation step.
  • With the present technology, a high-quality image can be acquired in which the problem caused by the parallax between the two images acquired by the two imaging units is resolved.
  • The effect produced by the present technology is not necessarily limited to the effect described here, and may be any of the effects described in this specification.
  • FIG. 1 is a diagram showing a structural example of a block diagram of the synthesis…
  • A first color image pickup element in which the color filter covering the light receiving surface includes neither a white region nor a filterless region, and a second color image pickup element in which the color filter covering the light receiving surface includes a white region or a filterless region, may be included.
  • That is, the image processing device according to the present technology can include two different color image pickup devices.
  • The image processing apparatus according to the present technology acquires information on the parallax between the first color image captured by the first color image imaging device and the second color image captured by the second color image imaging device, generates information on the synthesis method based on that parallax information, and then synthesizes the two images based on the generated synthesis information.
  • In the present technology, the color filter covering the light receiving surface of the first color image pickup device includes neither a white region nor a filterless region, whereas the color filter covering the light receiving surface of the second color image pickup device includes a white region or a filterless region. The sensitivity of the second color image pickup device is therefore higher than that of the first color image pickup device. Furthermore, the image synthesis unit combines the first color image, captured by the first color image imaging device, with the second color image, captured by the higher-sensitivity second color image imaging device. That is, the first color image is combined with the second color image using as reference the second color image, which has higher image quality than the first color image.
  • The combined part therefore has high image quality. In the present technology, even if there is a region of the reference second color image into which the first color image is not combined, the combined image as a whole retains high image quality. This effect is further described below by comparing the present technique with other conceivable methods of combining two images, with reference to FIG. 1.
  • In the method of FIG. 1A, a black-and-white captured image and a color image a2 are combined. For a region where synthesis cannot be performed, such as the person part, the low-sensitivity color image a2 must be selected and output, so viewpoint movement occurs for the person part only in the synthesized image.
  • In the present technology, by contrast, a color image is used as the reference image for image composition. For regions where image synthesis is not performed, the reference color image is adopted, so the viewpoint-movement problem described above does not occur.
  • Also, since the reference image is a color image, a region where image composition is not performed appears in color, not in black and white.
  • As another approach to combining two images, it is conceivable to combine two color images having the same sensitivity, without using a black-and-white captured image.
  • In this image composition, when a region judged to suffer image-quality degradation from combining the two images exists in the reference color image, composition may simply not be performed for that region.
  • For such a region, either the reference color image or the other color image is adopted as-is.
  • As a result, image quality in the uncombined region is degraded compared with other regions where composition is performed, for example as shown in FIG. 1B.
  • Suppose instead that the reference color image is captured by an image sensor with higher sensitivity than the sensor that captured the other color image, as described above.
  • Then the higher-sensitivity color image is adopted for regions where no synthesis is performed.
  • Accordingly, the degree of image-quality degradation in uncombined regions is smaller than when color images of the same sensitivity are combined.
  • In FIG. 1C, two color images are combined as in FIG. 1B, but a color image c1, captured with higher sensitivity than the color image b2, is used for synthesis in place of the color image b1. That is, the color image b2 and the higher-sensitivity color image c1 are combined with the color image c1 as the reference.
  • In this case, the problem of viewpoint movement does not occur.
  • Furthermore, since the reference color image c1 is captured by an image sensor with higher sensitivity than the color image b1, the degree of image-quality degradation in regions where no synthesis is performed is smaller than in the case of FIG. 1B.
  • In the present technology, the parallax information acquisition unit acquires information on the parallax between the first color image and the second color image, the synthesis information generation unit uses that parallax information to generate information on the synthesis method, and the image synthesis unit synthesizes the first color image with the second color image based on that information.
  • The image used as the reference for combination is thus the higher-quality second color image, and the higher-quality second color image is adopted for regions where image combination is not performed. For this reason, the degree of image-quality degradation in regions where image synthesis is not performed is low.
  • The image processing apparatus according to the present technology includes at least two color image pickup elements.
  • The color filter covering the light receiving surface of one pickup element includes neither a white region nor a filterless region, while the color filter covering the light receiving surface of the other pickup element includes a white region or a filterless region.
  • Herein the former is called the first color image pickup device and the latter the second color image pickup device. Owing to this difference in color filters, the second color image pickup device is more sensitive than the first color image pickup device.
  • The first color image imaging element and the second color image imaging element are preferably arranged side by side on one surface of the image processing apparatus.
  • The first color image pickup device and the second color image pickup device may be arranged so as to be able to capture images in the same direction.
  • For example, when the image processing apparatus according to the present technology is the smartphone 10, the first color image pickup device 11-1 and the second color image pickup device 11-2 are arranged side by side on the surface opposite the display surface 12 of the smartphone 10 so that they can capture images in the same direction.
  • The first color image pickup device and the second color image pickup device may be provided not only on the opposite surface but also on the display surface.
  • The image processing apparatus acquires the parallax information using the parallax information acquisition unit. Based on that information, the synthesis information generation unit generates information on how to combine the two images, and the image synthesis unit then synthesizes the two images based on that information.
  • The image processing apparatus according to the present technology may be a smartphone as described above, for example, or it may be an information processing apparatus other than a smartphone.
  • For example, it may be a mobile phone other than a smartphone, a video processing device (particularly a portable video processing device), a game device (particularly a portable game device), a notebook PC (Personal Computer), or a tablet PC. That is, any information processing apparatus that is provided with the first and second color image pickup elements and configured to be able to synthesize images according to the present technology is included among image processing apparatuses according to the present technology.
  • FIG. 3 is a block diagram of an example of an image processing apparatus according to the present technology.
  • The image processing apparatus 300 includes a first color image imaging element 301-1, a second color image imaging element 301-2, a preprocessing unit 302-1, a preprocessing unit 302-2, a parallax information acquisition unit 303, a parallax compensation unit 304, a synthesis information generation unit 305, an image synthesis unit 306, and a post-processing unit 307.
  • The color filter covering the light receiving surface of the first color image pickup element 301-1 includes neither a white region nor a filterless region.
  • The first color image pickup element 301-1 can be composed of, for example, red (R) pixels, blue (B) pixels, and green (G) pixels. That is, its color filter includes a red (R) filter, a blue (B) filter, and a green (G) filter.
  • The first color image pickup device 301-1 can be composed of only R pixels, B pixels, and G pixels.
  • In that case, the color filter covering its light receiving surface includes only a red (R) filter, a blue (B) filter, and a green (G) filter.
  • The number of G pixels in the first color image pickup element 301-1 is preferably larger than the number of R pixels and the number of B pixels; the numbers of R pixels and B pixels may be equal.
  • A Bayer-array pixel arrangement can be employed for the first color image pickup element 301-1. That is, in the present technology, the color filter covering the light receiving surface of the first color image pickup element 301-1 can be a Bayer array color filter. An example of the Bayer array is shown in FIG.
  • The first color image pickup device 301-1 is not limited to a Bayer array, and may be, for example, an image pickup device having a three-layer structure as shown in FIG.
  • For components other than the color filter of the first color image pickup device 301-1, components used in image pickup devices known in this technical field, for example in known CMOS image sensors or CCD image sensors, may be employed.
  • The color filter covering the light receiving surface of the second color image pickup element 301-2 includes a white region or a filterless region, and preferably includes a white region. That is, the second color image pickup element 301-2 includes white (W) pixels or filterless (C) pixels, and preferably includes W pixels.
  • A W pixel is also called a transparent pixel.
  • The total number of pixels corresponding to the white region and the filterless region may occupy, for example, 1/8 or more, preferably 1/4 or more, or 1/2 or more of all pixels.
  • For example, an image sensor in which W pixels, R pixels, G pixels, and B pixels are provided in a 1:1:1:1 ratio can be used as the second color image pickup element 301-2.
  • That is, a WRGB sensor can be used as the second color image pickup element 301-2.
  • In this case, the color filter included in the second color image pickup element includes a WRGB array region.
  • Alternatively, an image sensor in which W pixels, R pixels, and G pixels are provided in a 2:1:1 ratio can be used as the second color image pickup element 301-2. That is, a WRG sensor can be used as the second color image pickup element 301-2.
  • In this case, the color filter included in the second color image pickup device includes a WRG array region.
  • As shown in FIG. 5B, an image pickup device in which yellow (Y) pixels, C pixels, magenta (M) pixels, and G pixels are provided in a 1:1:1:1 ratio can also be used as the second color image pickup element 301-2. That is, a YCMG sensor can be used as the second color image pickup element 301-2.
  • In this case, the color filter included in the second color image pickup device includes a YCMG array region.
  • As shown in FIG. 5C, an image sensor in which W pixels, Y pixels, and cyan (Cy) pixels are provided in a 2:1:1 ratio can also be used as the second color image pickup element 301-2. That is, a WYCy sensor can be used as the second color image pickup element 301-2.
  • In this case, the color filter included in the second color image pickup device includes a WYCy array region.
  • Furthermore, an image sensor composed of W pixels, R pixels, and B pixels, in which the W pixels are incorporated in a ClearVid array, can be used as the second color image pickup element 301-2. That is, a W-ClearVid sensor can be used as the second color image pickup element 301-2.
  • In the ClearVid array, the pixel arrangement direction is rotated 45 degrees with respect to the normal arrangement direction.
  • Preferably, the color filter included in the second color image pickup device includes a WRGB array region or a WRG array region. That is, the second color image pickup element is preferably a WRGB sensor or a WRG sensor.
  • For components other than the color filter of the second color image pickup element 301-2, components used in image pickup devices known in this technical field, for example in known CMOS image sensors or CCD image sensors, may likewise be employed.
  • The first color image captured by the first color image imaging element 301-1 and the second color image captured by the second color image imaging element 301-2 are preferably captured at the same time. By combining two images captured at the same time, a more appropriate composite image can be obtained.
  • The preprocessing units 302-1 and 302-2 can perform the preprocessing performed in an ordinary imaging apparatus. Examples of such preprocessing include gain adjustment, white balance correction, noise reduction, demosaic processing, scaling, and module-shift correction processing. The preprocessing units 302-1 and 302-2 can perform one or more of these processes.
  • The parallax information acquisition unit 303 acquires information on the parallax between the first color image captured by the first color image imaging element 301-1 and the second color image captured by the second color image imaging element 301-2.
  • The parallax information can include the parallax of each region in the image, for example the parallax for each pixel and/or for each pixel block.
  • Here, a pixel block refers to a block composed of a plurality of pixels.
  • The parallax information can be acquired by corresponding-point detection processing such as block matching, for example; a minimal sketch follows.
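  • As an illustrative aid (not part of the patent text), the following is a minimal sketch of corresponding-point detection by block matching in Python with NumPy; the function name, block size, and search range are assumptions chosen for illustration.

      import numpy as np

      def block_matching_disparity(ref, other, block=8, max_disp=64):
          # Estimate a per-block horizontal disparity of `other` relative to
          # `ref` (the reference second color image), by minimising the sum
          # of absolute differences (SAD) over a bounded search range.
          h, w = ref.shape
          disp = np.zeros((h // block, w // block), dtype=np.int32)
          for by in range(h // block):
              for bx in range(w // block):
                  y, x = by * block, bx * block
                  patch = ref[y:y + block, x:x + block].astype(np.float32)
                  best, best_d = np.inf, 0
                  for d in range(max_disp + 1):
                      if x + d + block > w:
                          break
                      cand = other[y:y + block, x + d:x + d + block].astype(np.float32)
                      sad = np.abs(patch - cand).sum()
                      if sad < best:
                          best, best_d = sad, d
                  disp[by, bx] = best_d
          return disp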
  • The image processing apparatus according to the present technology performs synthesis using as reference the second color image captured by the higher-sensitivity second color image imaging element 301-2.
  • Accordingly, the parallax information acquisition unit 303 acquires information on the parallax of the first color image relative to the reference second color image. The parallax information acquisition unit 303 may also generate a parallax map.
  • A parallax map is a map showing the distribution of parallax between two color images.
  • The parallax map can be used, for example, by the synthesis information generation unit 305 described below, in particular by the out-of-search-range region detection unit 601 and/or the occlusion region detection unit 602 included in the synthesis information generation unit 305.
  • The parallax information acquisition unit 303 can acquire, for example, a parallax map indicating the parallax distribution of the first color image relative to the second color image serving as the reference for synthesis.
  • In addition to that parallax map, the parallax information acquisition unit 303 may also acquire a parallax map indicating the parallax distribution of the second color image relative to the first color image.
  • The occlusion region detection unit 602 described below can detect an occlusion region by using the two parallax maps based on the images from the two viewpoints.
  • The parallax information acquisition unit 303 outputs the parallax information to the parallax compensation unit 304 and/or the synthesis information generation unit 305.
  • The parallax compensation unit 304 can perform parallax compensation of the first color image based on the parallax information acquired by the parallax information acquisition unit 303.
  • Specifically, the parallax compensation unit 304 can move the positions of pixels or pixel blocks in the image data of the first color image based on the parallax information.
  • The movement may be performed, for example, so that the parallax between the first color image and the second color image is eliminated.
  • That is, the parallax compensation unit 304 can move each pixel or pixel block of the first color image to the position of the corresponding pixel or pixel block in the second color image.
  • The parallax compensation unit 304 thereby generates a parallax-compensated first color image. Through the parallax compensation, each region of the first color image is moved to the corresponding region of the second color image, which makes appropriate image composition possible.
  • The parallax compensation unit 304 outputs the parallax-compensated first color image to the image synthesis unit 306; a sketch of such compensation follows.
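  • A minimal sketch of block-wise parallax compensation, assuming the per-block disparity map from the previous sketch (illustrative, not the patent's implementation):

      import numpy as np

      def parallax_compensate(first_img, disp, block=8):
          # Shift each block of the first color image by its disparity so
          # that it lands on the corresponding position in the reference
          # second color image; unfilled pixels remain zero and can later
          # be treated as regions not to be synthesized.
          h, w = first_img.shape[:2]
          out = np.zeros_like(first_img)
          for by in range(disp.shape[0]):
              for bx in range(disp.shape[1]):
                  y, x = by * block, bx * block
                  d = int(disp[by, bx])
                  if x + d + block <= w:
                      out[y:y + block, x:x + block] = \
                          first_img[y:y + block, x + d:x + d + block]
          return out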
  • The synthesis information generation unit 305 generates information on how to combine the first color image and the second color image, more preferably the parallax-compensated first color image and the second color image. For example, the synthesis information generation unit 305 can generate, for each region constituting the second color image, synthesis information indicating whether the first color image (particularly the parallax-compensated first color image) should or should not be combined. The synthesis information can be generated for each pixel or each pixel block; that is, as the result of its determination, the synthesis information generation unit 305 may generate should-combine or should-not-combine information for each pixel or each pixel block. The synthesis information generation unit 305 outputs the synthesis information to the image synthesis unit 306.
  • The synthesis information generation unit 305 may include an out-of-search-range region detection unit 601, an occlusion region detection unit 602, and a color difference/luminance determination unit 603.
  • The synthesis information generation unit 305 may include any one, two, or all three of these units.
  • The synthesis information generation unit 305 may also include a comprehensive determination unit 604, as illustrated in FIG.
  • The components that can be included in the synthesis information generation unit 305 are described below.
  • The out-of-search-range region detection unit 601 can detect, in a parallax map created based on the parallax information, regions having a parallax value larger than a predetermined search range.
  • Such a region may be a pixel or a pixel block whose parallax value is larger than the predetermined search range.
  • The out-of-search-range region detection unit 601 may detect such regions in the parallax map generated by the parallax information acquisition unit 303.
  • The parallax information used by the out-of-search-range region detection unit 601 is not limited to the parallax map generated by the parallax information acquisition unit 303.
  • For example, the out-of-search-range region detection unit 601 may detect pixels or pixel blocks having a parallax value larger than the predetermined search range using information (particularly parallax information) obtained by a distance measuring sensor.
  • An example of such a distance measuring sensor is a ToF (Time of Flight) sensor.
  • The out-of-search-range region detection unit 601 can output the regions having a parallax value larger than the predetermined search range to, for example, the comprehensive determination unit 604. A minimal sketch follows.
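  • Detection of out-of-search-range regions can be as simple as thresholding the parallax map; a hedged sketch (the threshold value is an assumption for illustration):

      import numpy as np

      def detect_out_of_search_range(disp_map, search_range=48):
          # Boolean mask of regions whose parallax exceeds the predetermined
          # search range; such regions are candidates for no synthesis.
          return np.abs(disp_map) > search_range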
  • The occlusion region detection unit 602 can detect an occlusion region by comparing the first color image and the second color image.
  • An occlusion region is a region that is imaged by one of the two image sensors, the first color image sensor and the second color image sensor, but is not imaged by the other.
  • The occlusion region may be detected by using bidirectional parallax maps: the left image is moved to the position of the right image, then moved back toward the left image, and regions that do not return to the same position are detected. Alternatively, adjacent parallax values in a single-viewpoint parallax map can be compared, and regions where the difference exceeds a predetermined range can be detected as occlusion regions; this allows occlusion regions to be detected more easily.
  • The occlusion region detection unit 602 can output the detected occlusion regions to, for example, the comprehensive determination unit 604.
  • The detection of the occlusion region may be performed, for example, using two parallax maps: a first parallax map of the second color image relative to the first color image, and a second parallax map of the first color image relative to the second color image.
  • In that case, the occlusion region detection unit 602 moves the first color image based on the first parallax map, then moves the moved first color image based on the second parallax map, and detects as occlusion regions the pixels or pixel blocks that do not return to their positions in the original first color image.
  • FIG. 8 shows an example in which an occlusion region is detected using a left image L captured by a left image sensor and a right image R captured by a right image sensor.
  • The left image L is moved based on the parallax map of the right image R relative to the left image L, and the moved left image L is then moved back toward the left image; for non-occluded regions, the position returns to the position before the movement.
  • The occlusion region detection unit 602 can thus detect occlusion regions based on the two parallax maps, as in the sketch below.
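  • A minimal sketch of the bidirectional (forward-backward) consistency check described above, using two per-pixel parallax maps; the sign conventions and tolerance are assumptions for illustration:

      import numpy as np

      def detect_occlusion(disp_ab, disp_ba, tol=1):
          # disp_ab[y, x]: horizontal offset taking pixel x of image A to
          # its match in image B; disp_ba is the reverse map (B -> A).
          # A pixel that does not come back to (roughly) its own position
          # after the forward and backward moves is marked as occluded.
          h, w = disp_ab.shape
          occluded = np.zeros((h, w), dtype=bool)
          for y in range(h):
              for x in range(w):
                  x_in_b = x + int(disp_ab[y, x])
                  if 0 <= x_in_b < w:
                      x_back = x_in_b + int(disp_ba[y, x_in_b])
                      occluded[y, x] = abs(x_back - x) > tol
                  else:
                      occluded[y, x] = True  # mapped outside the frame
          return occluded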
  • The color difference/luminance determination unit 603 calculates, for each predetermined region in the first color image or the second color image, the value obtained by dividing the color difference variance by the luminance variance, and can detect regions in which that value is equal to or greater than a predetermined value.
  • The color difference/luminance determination unit 603 can thereby detect regions where there is no luminance difference but there is a color difference. For example, consider a region 901 in which a red region Ar and a blue region Ab are adjacent, as shown in FIG. 9. The red region Ar and the blue region Ab have the same luminance signal but different color-difference signals.
  • In such a region, the parallax value between the red region Ar and the blue region Ab cannot be obtained accurately, and as a result the color edge may not be accurately identified. The boundary region 902 between the red region Ar and the blue region Ab may therefore degrade image quality when the first color image and the second color image are combined.
  • The value of (color difference variance)/(luminance variance) for the region 901 is larger than that for a region consisting only of the red region Ar. The color difference/luminance determination unit 603 can therefore obtain this value for each region, detect regions where it is larger than a predetermined value, and output the detected regions to the comprehensive determination unit, as sketched below.
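  • A minimal sketch of the color difference/luminance determination on YCbCr planes (the block size and threshold are assumptions for illustration):

      import numpy as np

      def detect_color_edge_regions(y_plane, cb, cr, block=8, thresh=4.0):
          # Flag blocks whose chrominance variance is large relative to
          # their luminance variance (e.g. a red/blue boundary of equal
          # luminance), where parallax estimation tends to be unreliable.
          h, w = y_plane.shape
          mask = np.zeros((h // block, w // block), dtype=bool)
          eps = 1e-6  # avoid division by zero in perfectly flat blocks
          for by in range(h // block):
              for bx in range(w // block):
                  sl = np.s_[by * block:(by + 1) * block,
                             bx * block:(bx + 1) * block]
                  chroma_var = cb[sl].var() + cr[sl].var()
                  mask[by, bx] = chroma_var / (y_plane[sl].var() + eps) >= thresh
          return mask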
  • The comprehensive determination unit 604 can determine whether to synthesize each region in the image based on information output from the components that detect regions that should or should not be image-synthesized. Based on the result of that determination, the comprehensive determination unit 604 can generate synthesis information on how to combine the first color image and the second color image.
  • The components in question are those that may be included in the synthesis information generation unit 305: any one, two, or all three of the out-of-search-range region detection unit 601, the occlusion region detection unit 602, and the color difference/luminance determination unit 603 described above.
  • The comprehensive determination unit 604 can determine that a region detected by at least one of these components is a region where the first color image should not be combined with the second color image, and for that region it can generate synthesis information indicating that the first color image is not to be combined with the second color image.
  • When the synthesis information generation unit 305 includes all of the out-of-search-range region detection unit 601, the occlusion region detection unit 602, and the color difference/luminance determination unit 603, it can generate, for a region detected by any of these components, synthesis information indicating that the first color image should not be combined with the second color image.
  • When the proportion of regions in the second color image for which the first color image should not be combined exceeds a predetermined value, the synthesis information generation unit 305 may generate synthesis information indicating that the first color image should not be combined for the entire second color image.
  • When such synthesis information is generated, the first color image is not combined with the second color image at all, and the second color image itself is output.
  • This suppresses the degree of image-quality deterioration. For example, when the first color image is completely different from the second color image due to an abnormality in the first color image pickup device, generating such synthesis information avoids image-quality deterioration.
  • The synthesis information generation unit 305 may also determine a synthesis rate α (0 ≤ α ≤ 1) for each region in the second color image.
  • The maximum proportion may be set appropriately by a person skilled in the art, or according to factors such as the types of the two images to be combined.
  • The maximum proportion may be, for example, 30% to 70%, in particular 40% to 60%.
  • The synthesis information generation unit 305 can output the synthesis rate α determined for each region to the image synthesis unit 306 as synthesis information.
  • The image synthesis unit 306 can synthesize the first color image with the second color image, using the second color image as reference, based on the synthesis information generated by the synthesis information generation unit 305. For example, among the regions constituting the second color image, the first color image is combined into regions where it should be combined, while for regions where it should not be combined the second color image itself is adopted without synthesis.
  • When the synthesis information includes the synthesis rate α, the image synthesis unit 306 can perform image synthesis for each region in the image according to that rate, for example as sketched below.
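  • A minimal sketch of per-region blending with a synthesis rate α (here a per-block array `alpha`; α = 0 keeps the reference second color image as-is):

      import numpy as np

      def synthesize(second_img, first_compensated, alpha, block=8):
          # Blend the parallax-compensated first color image into the
          # reference second color image with a per-block rate in [0, 1].
          out = second_img.astype(np.float32).copy()
          for by in range(alpha.shape[0]):
              for bx in range(alpha.shape[1]):
                  a = float(alpha[by, bx])
                  sl = np.s_[by * block:(by + 1) * block,
                             bx * block:(bx + 1) * block]
                  out[sl] = (1.0 - a) * second_img[sl] + a * first_compensated[sl]
          return out.astype(second_img.dtype)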
  • The post-processing unit 307 can perform processing that reduces differences between the first color image imaging device and the second color image imaging device.
  • The post-processing applied to regions where the first color image was combined with the second color image may differ from the post-processing applied to regions where no combination was performed.
  • For example, when there is a color difference between the first color image and the second color image due to the spectral difference between the two color image pickup devices, the post-processing unit 307 can perform matrix conversion processing and/or multi-axis color conversion processing for each region to compensate for the color difference.
  • The post-processing unit 307 can also perform noise reduction (NR) with a different NR strength for each region in order to compensate for the SN difference.
  • Furthermore, the post-processing unit 307 can perform post-processing according to the synthesis rate for each region; a sketch follows.
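  • As one hedged illustration of region-dependent post-processing (SciPy's Gaussian filter stands in for a real NR stage; the strengths are assumptions):

      import numpy as np
      from scipy.ndimage import gaussian_filter

      def postprocess(image, alpha, block=8, nr_plain=1.5, nr_synth=0.5):
          # Apply stronger noise reduction to regions that were not
          # synthesized (they carry only the second image's noise) and
          # weaker noise reduction to synthesized regions.
          out = image.astype(np.float32).copy()
          for by in range(alpha.shape[0]):
              for bx in range(alpha.shape[1]):
                  sl = np.s_[by * block:(by + 1) * block,
                             bx * block:(bx + 1) * block]
                  sigma = nr_plain if alpha[by, bx] == 0 else nr_synth
                  for c in range(out.shape[2]):
                      out[sl + (c,)] = gaussian_filter(out[sl + (c,)], sigma)
          return out.astype(image.dtype)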
  • The image processing device according to the present technology may also include a first color image imaging device and a second color image imaging device having higher sensitivity than the first color image imaging device. That is, it may include two color image pickup elements of different sensitivities.
  • This image processing device likewise acquires parallax information between the two color images captured by the two color image pickup elements, generates information on the synthesis method based on that parallax information, and then synthesizes the two images based on the generated information.
  • The difference in sensitivity between the two color image pickup devices can be introduced, for example, by selecting the type of color filter that covers the light receiving surface of each pickup device.
  • The color filter of each image sensor may be, for example, as described in "1. First embodiment (image processing apparatus)" above, and the color filters described there may be employed as specific examples in this embodiment.
  • For example, the color filter covering the light receiving surface of the first color image pickup device includes neither a white region nor a filterless region, while the color filter covering the light receiving surface of the second color image pickup device includes a white region or a filterless region, preferably a white region.
  • Regarding each component constituting the image processing device, for example the image pickup elements, the preprocessing units, the parallax information acquisition unit, the parallax compensation unit, the synthesis information generation unit, the image synthesis unit, and the post-processing unit, and their specific processing, the description given in "1. First embodiment (image processing apparatus)" applies to the present embodiment, and is therefore omitted here.
  • An image processing method according to the present technology includes a first image acquisition step of acquiring a first color image with a first color image pickup device in which the color filter covering the light receiving surface includes neither a white region nor a filterless region, and a second image acquisition step of acquiring a second color image with a second color image pickup element in which the color filter covering the light receiving surface includes a white region or a filterless region.
  • That is, the image processing method according to the present technology includes color image capturing steps using two different color image pickup elements.
  • The image processing method according to the present technology further includes a parallax information acquisition step of acquiring information on the parallax between the first color image and the second color image, a synthesis information generation step of generating information on how to combine the two images based on the parallax information acquired in the parallax information acquisition step, and an image synthesis step of synthesizing the first color image with the second color image based on the synthesis information generated in the synthesis information generation step.
  • As described above, the image processing method according to the present technology acquires parallax information between the two color images captured in the two image capturing steps, generates information on how to combine them based on that parallax information, and then synthesizes the two images based on the generated information.
  • The image processing method according to the present technology may be performed using, for example, the image processing apparatus described in "1. First embodiment (image processing apparatus)" above.
  • FIG. 10 is a diagram illustrating an example of a flow of an image processing method according to the present technology.
  • In step S101, the image processing apparatus 300 starts image processing according to the present technology.
  • In step S102, the image processing apparatus 300 performs imaging with the first color image imaging element 301-1 and the second color image imaging element 301-2, preferably simultaneously. As a result, a first color image (more particularly, first color image data) and a second color image (more particularly, second color image data) are acquired.
  • In step S103, the image processing apparatus 300 may perform preprocessing on each of the first color image and the second color image as necessary, more specifically in the preprocessing units 302-1 and 302-2. The preprocessing may be one or more of the preprocessing examples described for the preprocessing units 302-1 and 302-2 in "1. First embodiment (image processing apparatus)" above. The preprocessing performed on the first color image and on the second color image may be different or the same.
  • In step S104, the image processing apparatus 300 acquires information on the parallax between the first color image captured by the first color image imaging element 301-1 and the second color image captured by the second color image imaging element 301-2.
  • The parallax information can be acquired by the parallax information acquisition unit 303.
  • The parallax information can include the parallax of each region in the image, for example the parallax for each pixel and/or for each pixel block.
  • The parallax information may include a parallax map, for example.
  • The parallax information acquisition unit 303 can acquire, for example, a parallax map indicating the parallax distribution of the first color image relative to the second color image serving as the reference for synthesis.
  • In addition, the parallax information acquisition unit 303 may acquire a parallax map indicating the parallax distribution of the second color image relative to the first color image.
  • In step S105, the image processing apparatus 300 can perform parallax compensation of the first color image based on the parallax information acquired in step S104.
  • The parallax compensation may be performed, for example, by the parallax compensation unit 304 described in "1. First embodiment (image processing apparatus)" above.
  • In step S105, the positions of pixels or pixel blocks in the image data of the first color image can be moved based on the parallax information, for example so that the parallax between the first color image and the second color image is eliminated.
  • That is, each region (particularly each pixel and/or pixel block) of the first color image can be moved to the position of the corresponding region in the second color image.
  • A parallax-compensated first color image is thereby generated.
  • Through the parallax compensation, each region of the first color image is moved to the corresponding region of the second color image, which makes appropriate image composition possible.
  • In step S106, the image processing apparatus 300 generates information on how to synthesize the first color image and the second color image, more preferably the parallax-compensated first color image and the second color image, for example as described for the synthesis information generation unit 305 in "1. First embodiment (image processing apparatus)" above. More specific processing in step S106 is described separately below with reference to FIG. 11.
  • In step S107, the image processing apparatus 300 synthesizes the parallax-compensated first color image with the reference second color image based on the synthesis information generated in step S106.
  • The synthesis can be performed, for example, by the image synthesis unit 306 described in "1. First embodiment (image processing apparatus)" above.
  • More specific processing in step S107 is described separately below with reference to FIG. 15.
  • In step S108, the image processing apparatus 300 performs post-processing on the image obtained as a result of the synthesis in step S107.
  • The post-processing may be performed, for example, by the post-processing unit 307 described in "1. First embodiment (image processing apparatus)" above.
  • More specific processing in step S108 is described separately below with reference to FIG. 16.
  • In step S109, the image processing apparatus 300 ends the image processing according to the present technology. The overall flow is sketched below.
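  • Tying the sketches above together, a hypothetical end-to-end pipeline corresponding to steps S102 to S108 might look as follows; all helper functions are the illustrative sketches from earlier in this description, not the patent's implementation.

      import numpy as np

      def process(first_raw, second_raw):
          # S103: preprocessing (placeholder: identity)
          first, second = first_raw, second_raw
          # S104: parallax information via luminance block matching
          to_luma = lambda img: img.mean(axis=2)
          disp = block_matching_disparity(to_luma(second), to_luma(first))
          # S105: parallax compensation of the first color image
          first_pc = parallax_compensate(first, disp)
          # S106: synthesis information -- alpha = 0 where a detector fires
          bad = detect_out_of_search_range(disp)
          alpha = np.where(bad, 0.0, 1.0)
          # S107: synthesis with the second color image as reference
          fused = synthesize(second, first_pc, alpha)
          # S108: region-dependent post-processing
          return postprocess(fused, alpha)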
  • FIG. 11 shows an example of the flow of the synthesis information generation process.
  • In step S201, the synthesis information generation unit 305 starts the synthesis information generation process.
  • In step S202, the synthesis information generation unit 305 detects regions outside the search range.
  • This detection can be performed in particular by the out-of-search-range region detection unit 601.
  • In step S202, for example, the parallax information acquired in step S104 can be used. More specific processing of the out-of-search-range detection by the out-of-search-range region detection unit 601 is described separately below with reference to FIG. 12.
  • In step S203, the synthesis information generation unit 305 detects occlusion regions. This detection can be performed in particular by the occlusion region detection unit 602, using, for example, the parallax information acquired in step S104. More specific processing of the occlusion region detection by the occlusion region detection unit 602 is described separately below with reference to FIG. 13.
  • In step S204, the synthesis information generation unit 305 calculates, for each predetermined region in the first color image or the second color image, the value obtained by dividing the color difference variance by the luminance variance, and detects regions where that value is equal to or greater than a predetermined value.
  • This detection can be performed in particular by the color difference/luminance determination unit 603. More specific processing of the color difference/luminance determination by the color difference/luminance determination unit 603 is described separately below with reference to FIG. 14.
  • In step S205, the synthesis information generation unit 305 determines, based on the processing results of steps S202 to S204, whether each region in the image should be synthesized. For example, each region detected in at least one of steps S202 to S204 is determined not to be image-synthesized.
  • The synthesis information generation unit 305 outputs the determination results as information on how to combine the two images.
  • In step S206, the synthesis information generation unit 305 ends the synthesis information generation process.
  • FIG. 12 shows an example of the flow of the out-of-search-range detection process.
  • In step S301 of FIG. 12, the out-of-search-range region detection unit 601 starts the out-of-search-range detection process.
  • In step S302, the out-of-search-range region detection unit 601 determines a region of interest in, for example, the parallax map generated by the parallax information acquisition unit 303.
  • In step S303, the out-of-search-range region detection unit 601 acquires the parallax value of the region of interest.
  • In step S304, the out-of-search-range region detection unit 601 determines whether the acquired parallax value is equal to or greater than a predetermined threshold. If so, it advances the process to step S305; if not, it advances the process to step S306.
  • In step S305, the out-of-search-range region detection unit 601 outputs the region whose parallax value is equal to or greater than the predetermined threshold to the comprehensive determination unit 604 as an out-of-search-range region.
  • In step S306, the out-of-search-range region detection unit 601 determines whether the parallax map contains any region on which the out-of-search-range detection has not yet been performed. If so, it returns the process to step S302 and performs the detection on that region; if not, it advances the process to step S307.
  • In step S307, the out-of-search-range region detection unit 601 ends the out-of-search-range detection.
  • FIG. 13 shows an example of the flow of the occlusion region detection process.
  • In step S401 of FIG. 13, the occlusion region detection unit 602 starts the occlusion region detection process.
  • In step S402, the occlusion region detection unit 602 moves each region of the first color image based on the first parallax map, i.e., the parallax map of the second color image relative to the first color image.
  • In step S403, the occlusion region detection unit 602 moves each region of the first color image moved in step S402 again, based on the second parallax map, i.e., the parallax map of the first color image relative to the second color image.
  • In step S404, the occlusion region detection unit 602 detects regions that do not return to their positions in the original first color image as occlusion regions.
  • The occlusion region detection unit 602 outputs the detected occlusion regions to the comprehensive determination unit 604.
  • In step S405, the occlusion region detection unit 602 ends the occlusion region detection process.
  • FIG. 14 shows an example of the flow of the color difference/luminance determination process.
  • In step S501 of FIG. 14, the color difference/luminance determination unit 603 starts the color difference/luminance determination process.
  • In step S502, the color difference/luminance determination unit 603 determines a region of interest in, for example, the second color image.
  • In step S503, the color difference/luminance determination unit 603 calculates the value of (color difference variance)/(luminance variance) for the region of interest.
  • In step S504, the color difference/luminance determination unit 603 determines whether the calculated value is equal to or greater than a predetermined threshold. If so, it advances the process to step S505; if not, it advances the process to step S506.
  • In step S505, the color difference/luminance determination unit 603 outputs the region whose calculated value is equal to or greater than the predetermined threshold to the comprehensive determination unit 604 as a region in which image quality may be degraded.
  • In step S506, the color difference/luminance determination unit 603 determines whether the second color image contains any region on which the processing of steps S502 to S505 has not yet been performed. If so, it returns the process to step S502 and performs the determination on that region; if not, it advances the process to step S507.
  • In step S507, the color difference/luminance determination unit 603 ends the color difference/luminance determination process.
  • FIG. 15 is an example of the flow of image composition processing.
  • step S601 in FIG. 15 the image composition unit 306 starts composition processing of the first color image and the second color image that have been subjected to parallax compensation.
  • step S602 the image composition unit 306 determines a region of interest (for example, a pixel of interest or a pixel block of interest) in the second color image, for example.
  • a region of interest for example, a pixel of interest or a pixel block of interest
  • In step S603, the image composition unit 306 refers to the synthesis information generated by the synthesis information generation unit 305.
  • In step S604, as a result of referring to the synthesis information, the image composition unit 306 determines whether the parallax-compensated first color image is to be combined with the second color image in the region of interest. If so, the process advances to step S605; if not, the process proceeds to step S606.
  • In step S605, the image composition unit 306 combines the parallax-compensated first color image with the second color image in the region of interest, and outputs the image obtained by the combination for that region.
  • In step S606, the image composition unit 306 outputs the second color image for the region of interest.
  • In step S607, the image composition unit 306 determines whether the second color image contains any region on which the processing of steps S602 to S606 has not yet been performed. If such a region exists, the process returns to step S602 and the image composition process is performed on that region; if no such region exists, the process advances to step S608.
  • In step S608, the image composition unit 306 ends the image composition process.
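A minimal sketch of the per-region combination of steps S604 to S606 follows, assuming the synthesis information takes the form of a per-pixel synthesis rate as in configuration [13]; the weighted average is an illustrative stand-in for whatever combining operation the image composition unit 306 actually applies.

```python
import numpy as np

def compose(second_img, first_img_compensated, synthesis_rate):
    """Combine the parallax-compensated first color image into the second
    color image (steps S602-S607), given per-region synthesis information.

    synthesis_rate: float array in [0, 1], one entry per pixel of the
    second color image; 0 means "output the second color image as is"
    (step S606) and 1 means "fully combine the first color image"
    (step S605).
    """
    rate = synthesis_rate[..., None]  # broadcast over the color channels
    return (1.0 - rate) * second_img + rate * first_img_compensated
```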
  • FIG. 16 is an example of the post-processing flow.
  • In step S701 in FIG. 16, the post-processing unit 307 starts post-processing of the composite image obtained as a result of the combination by the image composition unit 306.
  • In step S702, the post-processing unit 307 determines a region of interest (for example, a pixel of interest or a pixel block of interest) in the composite image.
  • In step S703, the post-processing unit 307 refers to the synthesis information generated by the synthesis information generation unit 305.
  • In step S704, as a result of referring to the synthesis information, the post-processing unit 307 determines whether the parallax-compensated first color image has been combined with the second color image in the region of interest. If so, the process advances to step S705; if not, the process proceeds to step S706.
  • In step S705, the post-processing unit 307 performs signal processing suitable for a combined image on the region of interest.
  • In step S706, the post-processing unit 307 performs signal processing suitable for an uncombined image on the region of interest.
  • In step S707, the post-processing unit 307 determines whether the composite image contains any region that has not yet been post-processed. If such a region exists, the process returns to step S702 and post-processing is performed on that region; if no such region exists, the process advances to step S708.
  • In step S708, the post-processing unit 307 ends the post-processing.
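The region-dependent post-processing of steps S705 and S706 might look like the sketch below. The patent does not specify the signal processing, so the choice of Gaussian smoothing, and the idea that uncombined (lower-sensitivity, noisier) regions receive stronger filtering, are assumptions for illustration only.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def post_process(composite, combined_mask, sigma_combined=0.5, sigma_plain=1.5):
    """Apply different signal processing per region (steps S702-S707).

    combined_mask: True where the first color image was combined
    (step S705), False where the second color image was output as is
    (step S706).
    """
    # Lighter filtering for regions that gained quality through combination.
    soft = gaussian_filter(composite, sigma=(sigma_combined, sigma_combined, 0))
    # Stronger filtering for regions left as the second color image alone.
    strong = gaussian_filter(composite, sigma=(sigma_plain, sigma_plain, 0))
    return np.where(combined_mask[..., None], soft, strong)
```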
  • In the second embodiment, two color images are acquired using a first color image pickup device and a second color image pickup device having higher sensitivity than the first color image pickup device. That is, two color images are acquired using two color image pickup elements of different sensitivities.
  • The corresponding image processing method according to the present technology includes: a parallax information acquisition step of acquiring information about the parallax between a first color image captured by the first color image pickup device and a second color image captured by the second color image pickup device; a synthesis information generation step of generating information on how to combine the two images based on the parallax information acquired in the parallax information acquisition step; and an image synthesis step of combining the first color image with the second color image based on the synthesis information generated in the synthesis information generation step.
  • FIG. 18 is a diagram illustrating an example of a schematic configuration of an image processing device according to the present technology.
  • An image processing apparatus 1001 shown in FIG. 18 includes a CPU (Central Processing Unit) 1002 and a RAM 1003.
  • the CPU 1002 and the RAM 1003 are connected to each other via a bus 1005 and are also connected to other components of the image processing apparatus 1001 via the bus 1005.
  • the image processing apparatus 1001 includes an imaging apparatus 1011.
  • the imaging device 1011 may be connected to other components.
  • The imaging apparatus 1011 includes the first color image pickup element and the second color image pickup element described in "1. First embodiment (image processing apparatus)" or "2. Second embodiment (image processing apparatus)".
  • the image processing method according to the present technology may be performed by the imaging device 1011.
  • The CPU 1002 performs control of and calculations for the image processing apparatus 1001.
  • An arbitrary processor can be used as the CPU 1002, and examples thereof include a Xeon (registered trademark) series, a Core (trademark) series, and an Atom (trademark) series processor.
  • Image processing according to the present technology may be realized by the CPU 1002, for example.
  • the RAM 1003 includes, for example, a cache memory and a main memory, and can temporarily store a program used by the CPU 1002, for example, a program for causing the apparatus to execute an image processing method according to the present technology.
  • The image processing apparatus 1001 may include a disk 1004, a communication apparatus 1006, an output apparatus 1007, an input apparatus 1008, and a drive 1009. Any of these components can be connected to the bus 1005.
  • The disk 1004 can store an operating system (for example, WINDOWS (registered trademark), UNIX (registered trademark), or LINUX (registered trademark)), a program for executing an image processing method according to the present technology, and various data (for example, image data).
  • the communication device 1006 connects the image processing device 1001 to the network 1010 by wire or wireless.
  • Using the communication device 1006, the image processing apparatus 1001 can acquire various data (for example, image data) via the network 1010. The acquired data can be stored in the disk 1004, for example.
  • the type of the communication device 1006 may be appropriately selected by those skilled in the art.
  • the output device 1007 can output an image obtained by image processing according to the present technology.
  • the output device 1007 may be a display, for example.
  • the input device 1008 is a device for the user to operate the image processing device 1001.
  • the display may function as the input apparatus 1008.
  • the drive 1009 can read information recorded on the recording medium and output the information to the RAM 1003.
  • the recording medium is, for example, an SD memory card or a flash memory, but is not limited thereto.
  • The present technology can also have the following configurations.
  • [1] An image processing apparatus including: a first color image pickup device in which the color filter covering the light receiving surface includes neither a white region nor a filter-less region; a second color image pickup device in which the color filter covering the light receiving surface includes a white region or a filter-less region; a parallax information acquisition unit that acquires information about the parallax between a first color image captured by the first color image pickup device and a second color image captured by the second color image pickup device; a synthesis information generation unit that generates information on how to combine the two images based on the parallax information acquired by the parallax information acquisition unit; and an image synthesis unit that combines the first color image with the second color image based on the synthesis information generated by the synthesis information generation unit.
  • [2] The image processing apparatus according to [1], wherein the color filter included in the first color image pickup device is a Bayer array color filter.
  • [3] The image processing apparatus according to [1] or [2], wherein the color filter included in the second color image pickup device includes a white region.
  • [4] The image processing apparatus according to any one of [1] to [3], wherein the color filter included in the second color image pickup device includes a region of a WRGB array or a WRG array.
  • [5] The image processing apparatus according to any one of [1] to [4], wherein the parallax information acquisition unit acquires the parallax for each pixel or each pixel block between the first color image and the second color image.
  • [6] The image processing apparatus according to any one of [1] to [5], further including a parallax compensation unit that performs parallax compensation on the first color image based on the parallax information.
  • [7] The image processing apparatus according to any one of [1] to [6], wherein the synthesis information generation unit includes a search-range-outside region detection unit that detects a region having a parallax value larger than a predetermined search range in a parallax map created based on the parallax information.
  • [8] The image processing apparatus according to any one of [1] to [7], wherein the synthesis information generation unit includes an occlusion area detection unit that detects an occlusion area by comparing the first color image and the second color image.
  • [9] The image processing apparatus according to any one of [1] to [8], wherein the synthesis information generation unit includes a color difference/luminance determination unit that calculates, for each predetermined region in the first color image or the second color image, a value obtained by dividing the color difference variance by the luminance variance, and detects a region in which the divided value is equal to or greater than a predetermined value.
  • [10] The image processing apparatus according to any one of [1] to [9], wherein the synthesis information generation unit includes at least one component selected from: a search-range-outside region detection unit that detects a pixel or a pixel block having a parallax value larger than a predetermined search range in a parallax map created based on the parallax information; an occlusion area detection unit that detects an occlusion area by comparing the first color image and the second color image; and a color difference/luminance determination unit that calculates, for each predetermined region in the first color image or the second color image, a value obtained by dividing the color difference variance by the luminance variance and detects a region in which the divided value is equal to or greater than a predetermined value; and wherein, for a region detected by any of these components, the synthesis information generation unit generates synthesis information indicating that the first color image should not be combined with the second color image.
  • [11] The image processing apparatus according to any one of [1] to [10], wherein the synthesis information generation unit generates, for each region constituting the second color image, synthesis information indicating that the first color image should or should not be combined.
  • [12] The image processing apparatus according to any one of [1] to [11], wherein, when the proportion of regions of the second color image for which the first color image should not be combined exceeds a predetermined value, the synthesis information generation unit generates synthesis information indicating that the first color image should not be combined for any region of the second color image.
  • [13] The image processing apparatus according to any one of [1] to [12], wherein the synthesis information generation unit assigns, for each region constituting the second color image, a synthesis rate at which the first color image is combined.
  • [14] An image processing apparatus including: a first color image pickup device; a second color image pickup device having higher sensitivity than the first color image pickup device; a parallax information acquisition unit that acquires information about the parallax between a first color image captured by the first color image pickup device and a second color image captured by the second color image pickup device; a synthesis information generation unit that generates information on how to combine the two images based on the parallax information acquired by the parallax information acquisition unit; and an image synthesis unit that combines the first color image with the second color image based on the synthesis information generated by the synthesis information generation unit.
  • [15] An image processing method including: a first image acquisition step of acquiring a first color image by a first color image pickup device in which the color filter covering the light receiving surface includes neither a white region nor a filter-less region; a second image acquisition step of acquiring a second color image by a second color image pickup device in which the color filter covering the light receiving surface includes a white region or a filter-less region; a parallax information acquisition step of acquiring information about the parallax between the first color image and the second color image; a synthesis information generation step of generating information on how to combine the two images based on the parallax information acquired in the parallax information acquisition step; and an image synthesis step of combining the first color image with the second color image based on the synthesis information generated in the synthesis information generation step.
  • [16] An image processing method including: a first image acquisition step of acquiring a first color image by a first color image pickup device; a second image acquisition step of acquiring a second color image by a second color image pickup device having higher sensitivity than the first color image pickup device; a parallax information acquisition step of acquiring information about the parallax between the first color image and the second color image; a synthesis information generation step of generating information on how to combine the two images based on the parallax information acquired in the parallax information acquisition step; and an image synthesis step of combining the first color image with the second color image based on the synthesis information generated in the synthesis information generation step.
  • 300 Image processing device; 301-1 First color image imaging element; 301-2 Second color image imaging element; 302-1 Preprocessing unit; 302-2 Preprocessing unit; 303 Parallax information acquisition unit; 304 Parallax compensation unit; 305 Synthesis information generation unit; 306 Image composition unit; 307 Post-processing unit


Abstract

The purpose of the present invention is to acquire a high-quality image in which a problem resulting from a disparity between two images acquired by two image capturing units is solved. Provided is an image processing device equipped with: a first color image capturing element in which a color filter covering a light receiving surface includes neither a white region nor an unfiltered region; a second color image capturing element in which the color filter covering the light receiving surface includes the white region or the unfiltered region; a disparity information acquisition unit which acquires information relating to a disparity between a first color image captured by the first color image capturing element and a second color image captured by the second color image capturing element; a composition information generation unit which, on the basis of the information relating to the disparity and acquired by the disparity information acquisition unit, generates information relating to a way of compositing both the images; and an image composition unit which, on the basis of the composition information generated by the composition information generation unit, composites the first color image with the second color image.

Description

Image processing apparatus and image processing method

The present technology relates to an image processing apparatus and an image processing method. More specifically, the present technology relates to an image processing apparatus and an image processing method for combining two images to obtain one image.

In recent years, smartphones equipped with dual cameras have attracted attention. Such a smartphone is provided with two cameras, for example on its back. One image can be acquired using the two sets of image data captured by the two cameras.

For example, Patent Document 1 below describes an image processing apparatus including: a first imaging unit that captures a first image; a second imaging unit that captures a second image; a parallax determination unit that determines the presence or absence of parallax between the first image and the second image; and a synthesis unit that synthesizes each pixel of the first image and the second image according to the determination result of the parallax determination unit. Since this image processing apparatus can appropriately combine two images according to the parallax, appropriate image quality improvement can be realized by capturing the same range with a plurality of imaging units and overlapping the results.

JP 2017-69926 A

A dual camera has a configuration in which two imaging units are arranged side by side on one surface, for example the back surface of a smartphone. Since the viewpoints of the two imaging units differ, there is parallax between the two images they capture. Therefore, when the two images are combined, image quality degradation caused by the parallax can occur. It is thus desirable to combine the two images more appropriately to obtain a higher-quality image.

The present technology aims to provide a new image processing technology.

The present inventors have found that an image processing apparatus having a specific configuration can solve the problem caused by the parallax and acquire a high-quality image.
That is, the present technology provides an image processing apparatus including: a first color image pickup device in which the color filter covering the light receiving surface includes neither a white region nor a filter-less region; a second color image pickup device in which the color filter covering the light receiving surface includes a white region or a filter-less region; a parallax information acquisition unit that acquires information about the parallax between a first color image captured by the first color image pickup device and a second color image captured by the second color image pickup device; a synthesis information generation unit that generates information on how to combine the two images based on the parallax information acquired by the parallax information acquisition unit; and an image synthesis unit that combines the first color image with the second color image based on the synthesis information generated by the synthesis information generation unit.

According to one embodiment of the present technology, the color filter included in the first color image pickup device may be a Bayer array color filter.

According to one embodiment of the present technology, the color filter included in the second color image pickup device may include a white region.

According to one embodiment of the present technology, the color filter included in the second color image pickup device may include a region of a WRGB array or a WRG array.

According to one embodiment of the present technology, the parallax information acquisition unit may acquire the parallax for each pixel or each pixel block between the first color image and the second color image.

According to one embodiment of the present technology, the image processing apparatus may further include a parallax compensation unit that performs parallax compensation on the first color image based on the parallax information.

According to one embodiment of the present technology, the synthesis information generation unit may include a search-range-outside region detection unit that detects a region having a parallax value larger than a predetermined search range in a parallax map created based on the parallax information.

According to one embodiment of the present technology, the synthesis information generation unit may include an occlusion area detection unit that detects an occlusion area by comparing the first color image and the second color image.

According to one embodiment of the present technology, the synthesis information generation unit may include a color difference/luminance determination unit that calculates, for each predetermined region in the first color image or the second color image, a value obtained by dividing the color difference variance by the luminance variance, and detects a region in which the divided value is equal to or greater than a predetermined value.

According to one embodiment of the present technology, the synthesis information generation unit may include at least one component selected from: a search-range-outside region detection unit that detects a pixel or a pixel block having a parallax value larger than a predetermined search range in a parallax map created based on the parallax information; an occlusion area detection unit that detects an occlusion area by comparing the first color image and the second color image; and a color difference/luminance determination unit that calculates, for each predetermined region in the first color image or the second color image, a value obtained by dividing the color difference variance by the luminance variance and detects a region in which the divided value is equal to or greater than a predetermined value. For a region detected by any of these components, the synthesis information generation unit may generate synthesis information indicating that the first color image should not be combined with the second color image.

According to one embodiment of the present technology, the synthesis information generation unit may generate, for each region constituting the second color image, synthesis information indicating that the first color image should or should not be combined.

According to one embodiment of the present technology, when the proportion of regions of the second color image for which the first color image should not be combined exceeds a predetermined value, the synthesis information generation unit may generate synthesis information indicating that the first color image should not be combined for any region of the second color image.

According to one embodiment of the present technology, the synthesis information generation unit may assign, for each region constituting the second color image, a synthesis rate at which the first color image is combined.
The present technology also provides an image processing apparatus including: a first color image pickup device; a second color image pickup device having higher sensitivity than the first color image pickup device; a parallax information acquisition unit that acquires information about the parallax between a first color image captured by the first color image pickup device and a second color image captured by the second color image pickup device; a synthesis information generation unit that generates information on how to combine the two images based on the parallax information acquired by the parallax information acquisition unit; and an image synthesis unit that combines the first color image with the second color image based on the synthesis information generated by the synthesis information generation unit.

The present technology also provides an image processing method including: a first image acquisition step of acquiring a first color image by a first color image pickup device in which the color filter covering the light receiving surface includes neither a white region nor a filter-less region; a second image acquisition step of acquiring a second color image by a second color image pickup device in which the color filter covering the light receiving surface includes a white region or a filter-less region; a parallax information acquisition step of acquiring information about the parallax between the first color image and the second color image; a synthesis information generation step of generating information on how to combine the two images based on the parallax information acquired in the parallax information acquisition step; and an image synthesis step of combining the first color image with the second color image based on the synthesis information generated in the synthesis information generation step.

The present technology also provides an image processing method including: a first image acquisition step of acquiring a first color image by a first color image pickup device; a second image acquisition step of acquiring a second color image by a second color image pickup device having higher sensitivity than the first color image pickup device; a parallax information acquisition step of acquiring information about the parallax between the first color image and the second color image; a synthesis information generation step of generating information on how to combine the two images based on the parallax information acquired in the parallax information acquisition step; and an image synthesis step of combining the first color image with the second color image based on the synthesis information generated in the synthesis information generation step.

According to the present technology, it is possible to acquire a high-quality image in which the problem caused by the parallax between two images acquired by two imaging units is solved. Note that the effects achieved by the present technology are not necessarily limited to those described here, and may be any of the effects described in this specification.
FIG. 1 shows examples of an image obtained by image processing according to the present technology and images obtained by other image composition methods.
FIG. 2 shows a configuration example of the two color image pickup elements provided in an image processing apparatus according to the present technology.
FIG. 3 is an example of a block diagram of an image processing apparatus according to the present technology.
FIG. 4 shows examples of the color filters covering the light receiving surfaces of the first and second color image pickup elements provided in an image processing apparatus according to the present technology.
FIG. 5 shows examples of the color filter covering the light receiving surface of the second color image pickup element.
FIG. 6 shows a configuration example of a block diagram of the synthesis information generation unit included in an image processing apparatus according to the present technology.
FIG. 7 is a diagram for explaining an occlusion area.
FIG. 8 is a diagram for explaining an example of an occlusion area detection method.
FIG. 9 shows an example of a region where image quality degradation may occur.
FIG. 10 shows an example of the flow of an image processing method according to the present technology.
FIG. 11 shows an example of the flow of the synthesis information generation process.
FIG. 12 shows an example of the flow of the search-range-outside region detection process.
FIG. 13 shows an example of the flow of the occlusion area detection process.
FIG. 14 shows an example of the flow of the color difference / luminance determination process.
FIG. 15 shows an example of the flow of the process of combining the first color image and the second color image.
FIG. 16 shows an example of the flow of post-processing of a composite image.
FIG. 17 shows an example of an image pickup element having a three-layer structure.
FIG. 18 shows a configuration example of an image processing apparatus according to the present technology.
Hereinafter, preferred embodiments for carrying out the present technology will be described. Note that the embodiments described below are representative embodiments of the present technology, and the scope of the present technology is not limited to them. The description of the present technology proceeds in the following order.

1. First embodiment (image processing apparatus)
(1) Description of the first embodiment
(2) First example of the first embodiment (image processing apparatus)
2. Second embodiment (image processing apparatus)
3. Third embodiment (image processing method)
(1) Description of the third embodiment
(2) First example of the third embodiment (image processing method)
4. Fourth embodiment (image processing method)
5. Device configuration example
1. First embodiment (image processing apparatus)

(1) Description of the first embodiment
The image processing apparatus according to the present technology may include a first color image pickup device in which the color filter covering the light receiving surface includes neither a white region nor a filter-less region, and a second color image pickup device in which the color filter covering the light receiving surface includes a white region or a filter-less region. Thus, the image processing apparatus according to the present technology can include two different color image pickup devices.

Furthermore, the image processing apparatus according to the present technology includes: a parallax information acquisition unit that acquires information about the parallax between a first color image captured by the first color image pickup device and a second color image captured by the second color image pickup device; a synthesis information generation unit that generates information on how to combine the two images based on the parallax information acquired by the parallax information acquisition unit; and an image synthesis unit that combines the first color image with the second color image based on the synthesis information generated by the synthesis information generation unit. In this way, the image processing apparatus according to the present technology acquires parallax information between the two color images captured by the two color image pickup elements, generates information on how to combine the images based on the parallax information, and combines the two images based on the generated information.
The color filter covering the light receiving surface of the first color image pickup device includes neither a white region nor a filter-less region, whereas the color filter covering the light receiving surface of the second color image pickup device includes a white region or a filter-less region. Therefore, the second color image pickup device has higher sensitivity than the first color image pickup device.

In the image synthesis unit, the first color image captured by the first color image pickup device is combined with the second color image captured by the second color image pickup device, which has the higher sensitivity. That is, the first color image is combined with the second color image using the second color image, which has higher image quality than the first, as the reference. As a result, even if the first color image is not combined with some part of the second color image, that part still has high image quality.

Thus, according to the present technology, even if there is a region in the reference second color image with which the first color image is not combined, the combined image has high image quality.

This effect will be further explained below while comparing the present technology with other conceivable methods for combining two images. For this explanation, reference is made to FIG. 1.
When combining two images, it is conceivable, for example, to use a monochrome captured image as the reference in order to give the reference image high sensitivity, and to combine a color image with it. In such composition, if the monochrome image contains a region where combining the two images is determined to cause image quality degradation, one may choose not to perform image composition for that region. However, if the color image is adopted for the region where image composition is not performed, the viewpoint of the color image differs from that of the reference monochrome image, so a viewpoint shift occurs only in that region. To prevent the viewpoint shift, the monochrome image could be adopted for that region, but then only that region of the combined image would be monochrome, and such an image is not acceptable. If the images are not combined at all in order to avoid these problems, the low-sensitivity color image has to be selected and output.

For example, as shown in (a) of FIG. 1, assume that when the monochrome image a1 and the color image a2 are combined using the image a1 as the reference, the person portion in the two images cannot be combined. In this case, if the color image a2 is adopted for the person portion that cannot be combined, a viewpoint shift occurs only for the person portion in the combined image. Adopting the monochrome image a1 for that portion is not acceptable either. If the images are not combined, the low-sensitivity color image a2 has to be selected and output.

In the present technology, the reference image for image composition is a color image. Therefore, for a region where image composition is not performed, adopting the reference color image avoids the above viewpoint shift problem. Moreover, since the reference image is a color image, a region where image composition is not performed is in color, not monochrome.

For example, as shown in (c) of FIG. 1, assume that when the color image b2 and the color image c1, captured with higher sensitivity than b2, are combined using c1 as the reference, the person portion in the two images cannot be combined. The image obtained by the composition is c3. In this case, since the color image c1 is adopted for the person portion c10 that cannot be combined, the above viewpoint shift problem does not occur. Also, since the reference image for composition is the color image c1, the region where image composition is not performed is in color.
Also, when combining two images, it is conceivable to combine color images of the same sensitivity without using a monochrome image. In such composition, if the reference color image contains a region where combining the two images is determined to cause image quality degradation, one may choose not to perform image composition for that region. For a region where image composition is not performed, either the reference color image or the other color image is adopted. However, the image quality of a region where composition is not performed is degraded compared with the other regions where composition is performed.

For example, as shown in (b) of FIG. 1, assume that when the color image b1 and the color image b2 of the same sensitivity are combined using b1 as the reference, the person portion in the two images cannot be combined. The image obtained by the composition is b3. In this case, adopting the color image b1 for the person portion that cannot be combined avoids the viewpoint shift problem, but the image quality of the uncombined person portion b10 is degraded compared with the other regions where composition is performed.

In the present technology, of the two images to be combined, the reference color image is, as described above, an image captured by an image pickup element with higher sensitivity than the element that captured the other color image. In addition, the higher-sensitivity color image is adopted for regions where composition is not performed. As a result, in image composition according to the present technology, the degree of image quality degradation in regions where composition is not performed is smaller than when color images of the same sensitivity are combined.

For example, as shown in (c) of FIG. 1, assume that, when combining two color images as in (b), the color image c1, captured with higher sensitivity than the color image b2, is used for composition instead of the color image b1. That is, assume that the color image b2 and the higher-sensitivity color image c1 are combined using c1 as the reference. In this case, adopting the color image c1 for the person portion that cannot be combined avoids the viewpoint shift problem. In addition, since the reference color image c1 is captured by an image pickup element with higher sensitivity than that of the color image b1, the degree of image quality degradation in regions where composition is not performed is smaller than in the case of (b) of FIG. 1.
Further, in the present technology, the parallax information acquisition unit acquires information about the parallax between the first color image and the second color image, the synthesis information generation unit generates information on how to combine the two images based on that parallax information, and the image synthesis unit combines the first color image with the second color image based on the generated information. As a result, image composition can be skipped for image regions where composition would cause image quality degradation, while composition is performed for the other regions. Furthermore, since the reference image for composition is the higher-quality second color image, the higher-quality second color image is adopted for regions where image composition is not performed. Therefore, the degree of image quality degradation in regions where image composition is not performed is low.
The image processing apparatus according to the present technology includes at least two color image pickup elements. Of the two, the color filter covering the light receiving surface of one pickup element includes neither a white region nor a filter-less region, and the color filter covering the light receiving surface of the other includes a white region or a filter-less region. The former is called the first color image pickup device, and the latter the second color image pickup device. Because of this difference in color filters, the second color image pickup device has higher sensitivity than the first.
In the present technology, the first color image pickup element and the second color image pickup element are preferably arranged side by side on one surface of the image processing apparatus. Preferably, the first and second color image pickup elements are arranged so that they can image in the same direction.

For example, as shown in FIG. 2, when the image processing apparatus according to the present technology is a smartphone 10, the first color image pickup element 11-1 and the second color image pickup element 11-2 are arranged side by side on the surface opposite to the display surface 12 of the smartphone 10 so that they can image in the same direction. The first and second color image pickup elements may also be provided on the display surface, not only on the opposite surface.

When two image pickup elements are arranged in this way, their viewpoints differ from each other, so parallax occurs between the color images captured by the two elements. The image processing apparatus of the present technology acquires information about this parallax with the parallax information acquisition unit. Based on the parallax information, the synthesis information generation unit generates information on how to combine the two images. Then, based on that information, the image synthesis unit combines the two images.
The image processing apparatus according to the present technology may be, for example, a smartphone as described above. It may also be an information processing apparatus other than a smartphone, for example a mobile phone other than a smartphone, a video processing apparatus (in particular a portable one), a game device (in particular a portable one), a notebook PC (Personal Computer), or a tablet PC. That is, an information processing apparatus provided with the first and second color image pickup elements and configured to perform image composition according to the present technology is included in the image processing apparatus according to the present technology.
(2) First example of the first embodiment (image processing apparatus)
Hereinafter, an example of an image processing apparatus according to the present technology will be described with reference to FIG. 3. FIG. 3 is a block diagram of an example of an image processing apparatus according to the present technology.
As shown in FIG. 3, the image processing apparatus 300 includes a first color image imaging element 301-1, a second color image imaging element 301-2, a preprocessing unit 302-1, a preprocessing unit 302-2, a parallax information acquisition unit 303, a parallax compensation unit 304, a synthesis information generation unit 305, an image composition unit 306, and a post-processing unit 307.
The color filter covering the light receiving surface of the first color image pickup element 301-1 includes neither a white region nor a filter-less region. The first color image pickup element 301-1 can be composed of, for example, red (R) pixels, blue (B) pixels, and green (G) pixels. That is, the color filter covering its light receiving surface consists of red (R), blue (B), and green (G) filters. In a more preferred embodiment, the first color image pickup element 301-1 is composed only of R, B, and G pixels; that is, the color filter covering its light receiving surface consists only of red (R), blue (B), and green (G) filters.

The number of G pixels in the first color image pickup element 301-1 is preferably larger than the number of R pixels and the number of B pixels. The numbers of R and B pixels may be equal. More preferably, a Bayer pixel array can be employed for the first color image pickup element 301-1.

That is, in the present technology, the color filter covering the light receiving surface of the first color image pickup element 301-1 can preferably be a Bayer array color filter. An example of the Bayer array is shown in FIG. 4(b). As shown in FIG. 4(b), the ratio of R, B, and G pixels in the Bayer array can be R:B:G = 1:1:2.

In the present technology, the first color image pickup element 301-1 is not limited to one having a Bayer array, and may be an image pickup element having a three-layer structure as shown in FIG. 17.

As components other than the color filter of the first color image pickup element 301-1, components used in image pickup elements known in the art may be employed, for example components used in known CMOS image sensors or CCD image sensors.
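As an aside to the Bayer-array description, the following minimal sketch shows how a Bayer color filter assigns a color to each pixel coordinate and confirms the R:B:G = 1:1:2 ratio of FIG. 4(b). The RGGB phase is an illustrative assumption; other phases simply shift the grid.

```python
def bayer_color(row, col):
    """Return the color filter at pixel (row, col) of a Bayer array
    (RGGB phase assumed for illustration)."""
    if row % 2 == 0:
        return "R" if col % 2 == 0 else "G"
    return "G" if col % 2 == 0 else "B"

# In any 2x2 tile the counts are R:B:G = 1:1:2, as in FIG. 4(b):
counts = {"R": 0, "G": 0, "B": 0}
for r in range(2):
    for c in range(2):
        counts[bayer_color(r, c)] += 1
print(counts)  # {'R': 1, 'G': 2, 'B': 1}
```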
 第二のカラー画像撮像素子301-2の受光面を覆うカラーフィルタは、ホワイト領域又はフィルタ無し領域を含み、好ましくはホワイト領域を含む。すなわち、第二のカラー画像撮像素子301-2は、ホワイト(W)画素又はフィルタ無し(C)画素を含み、好ましくはW画素を含む。W画素は、透明画素とも呼ばれる。
 ホワイト領域及びフィルタ無し領域に該当する画素の総数は、全画素のうち例えば1/8以上を占めてよく、好ましくは1/4以上又は1/2以上を占めてよい。
 例えば、図4(a)に示されるように、W画素、R画素、G画素、及びB画素が1:1:1:1で設けられている撮像素子が、第二のカラー画像撮像素子301-2として用いられうる。すなわち、WRGBセンサが第二のカラー画像撮像素子301-2として用いられうる。この場合、第二のカラー画像撮像素子に含まれるカラーフィルタは、WRGB配列の領域を含むものであってよい。
 代替的には、図5(a)に示されるように、W画素、R画素、及びG画素が、2:1:1で設けられている撮像素子が、第二のカラー画像撮像素子301-2として用いられうる。すなわち、WRGセンサが第二のカラー画像撮像素子301-2として用いられうる。この場合、第二のカラー画像撮像素子に含まれるカラーフィルタは、WRG配列の領域を含むものであってよい。
 代替的には、図5(b)に示されるとおり、黄色(Y)画素、C画素、マゼンタ(M)画素、及びG画素が、1:1:1:1で設けられている撮像素子が、第二のカラー画像撮像素子301-2として用いられうる。すなわち、YCMGセンサが第二のカラー画像撮像素子301-2として用いられうる。この場合、第二のカラー画像撮像素子に含まれるカラーフィルタは、YCMG配列の領域を含むものであってよい。
 代替的には、図5(c)に示されるとおり、W画素、Y画素、及びシアン(Cy)画素が、2:1:1で設けられている撮像素子が、第二のカラー画像撮像素子301-2として用いられうる。すなわち、WYCyセンサが第二のカラー画像撮像素子301-2として用いられうる。この場合、第二のカラー画像撮像素子に含まれるカラーフィルタは、WYCy配列の領域を含むものであってよい。
 代替的には、図5(d)に示されるように、W画素、R画素、及びB画素から構成される撮像素子が、第二のカラー画像撮像素子301-2として用いられうる。図5(d)に示される画素構成は、クリアビット配列にW画素が組み込まれている。すなわち、Wクリアビットセンサが第二のカラー画像撮像素子301-2として用いられうる。クリアビット配列では、画素の配列方向が、通常の配列方向に対して45度回転されている。
 本技術の好ましい実施態様に従い、第二のカラー画像撮像素子に含まれるカラーフィルタは、WRGB配列又はWRG配列の領域を含む。すなわち、第二のカラー画像撮像素子は、好ましくはWRGBセンサ又はWRGセンサでありうる。
 第二のカラー画像撮像素子301-2を構成するカラーフィルタ以外の構成要素として、当技術分野で既知の撮像素子において用いられる構成要素が採用されてよく、例えば既知のCMOSイメージセンサ又はCCDイメージセンサにおいて用いられる構成要素が採用されてよい。
The color filter covering the light-receiving surface of the second color image pickup element 301-2 includes a white region or a filter-free region, and preferably includes a white region. That is, the second color image pickup element 301-2 includes white (W) pixels or filter-free (C) pixels, and preferably includes W pixels. A W pixel is also called a transparent pixel.
The total number of pixels corresponding to the white region and the filter-free region may occupy, for example, 1/8 or more, preferably 1/4 or more, or 1/2 or more of all pixels.
For example, as illustrated in FIG. 4A, an image pickup element in which W pixels, R pixels, G pixels, and B pixels are provided at a ratio of 1:1:1:1 can be used as the second color image pickup element 301-2. That is, a WRGB sensor can be used as the second color image pickup element 301-2. In this case, the color filter included in the second color image pickup element may include a region of a WRGB array.
Alternatively, as shown in FIG. 5A, an image pickup element in which W pixels, R pixels, and G pixels are provided at a ratio of 2:1:1 can be used as the second color image pickup element 301-2. That is, a WRG sensor can be used as the second color image pickup element 301-2. In this case, the color filter included in the second color image pickup element may include a region of a WRG array.
Alternatively, as shown in FIG. 5B, an image pickup element in which yellow (Y) pixels, C pixels, magenta (M) pixels, and G pixels are provided at a ratio of 1:1:1:1 can be used as the second color image pickup element 301-2. That is, a YCMG sensor can be used as the second color image pickup element 301-2. In this case, the color filter included in the second color image pickup element may include a region of a YCMG array.
Alternatively, as shown in FIG. 5C, an image pickup element in which W pixels, Y pixels, and cyan (Cy) pixels are provided at a ratio of 2:1:1 can be used as the second color image pickup element 301-2. That is, a WYCy sensor can be used as the second color image pickup element 301-2. In this case, the color filter included in the second color image pickup element may include a region of a WYCy array.
Alternatively, as shown in FIG. 5D, an image pickup element composed of W pixels, R pixels, and B pixels can be used as the second color image pickup element 301-2. In the pixel configuration shown in FIG. 5D, W pixels are incorporated into a clear-bit array. That is, a W clear-bit sensor can be used as the second color image pickup element 301-2. In a clear-bit array, the pixel arrangement direction is rotated by 45 degrees with respect to the normal arrangement direction.
According to a preferred embodiment of the present technology, the color filter included in the second color image pickup element includes a region of a WRGB array or a WRG array. That is, the second color image pickup element can preferably be a WRGB sensor or a WRG sensor.
As components other than the color filter constituting the second color image pickup element 301-2, components used in image pickup elements known in this technical field may be employed; for example, components used in a known CMOS image sensor or CCD image sensor may be employed.
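For illustration only (the publication itself contains no program code), the filter arrays discussed above can be written down as small lookup tables. The sketch below is a minimal Python example; the 2x2 tile layouts are hypothetical simplifications chosen only to satisfy the stated pixel ratios, not the actual layouts of FIGS. 4 and 5.

    # Hypothetical 2x2 filter-array tiles consistent with the stated
    # pixel ratios; the real layouts in the figures may differ.
    BAYER_RGGB = [["R", "G"],
                  ["G", "B"]]   # no white region (first pickup element)
    WRGB = [["W", "R"],
            ["G", "B"]]         # W:R:G:B = 1:1:1:1
    WRG = [["W", "R"],
           ["G", "W"]]          # W:R:G = 2:1:1

    def white_fraction(tile):
        """Fraction of pixels that are white (W) or filter-free (C)."""
        cells = [c for row in tile for c in row]
        return sum(c in ("W", "C") for c in cells) / len(cells)

    print(white_fraction(WRGB))  # 0.25 -> the "1/4 or more" example
    print(white_fraction(WRG))   # 0.5  -> the "1/2 or more" example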
According to a preferred embodiment of the present technology, the first color image captured by the first color image pickup element 301-1 and the second color image captured by the second color image pickup element 301-2 are captured simultaneously. Combining two simultaneously captured images enables a more appropriate image composition.
The preprocessing units 302-1 and 302-2 can perform preprocessing of the kind performed in an ordinary imaging apparatus. Examples of the preprocessing include gain adjustment, white balance correction, noise reduction, demosaic processing, scaling, and module-misalignment correction processing. The preprocessing units 302-1 and 302-2 can perform one or more of these processes.
The parallax information acquisition unit 303 acquires information on the parallax between the first color image captured by the first color image pickup element 301-1 and the second color image captured by the second color image pickup element 301-2. The information on the parallax can include the parallax of each region in the image, for example, the parallax for each pixel and/or the parallax for each pixel block. A pixel block is a block composed of a plurality of pixels. The information on the parallax can be acquired by corresponding-point detection processing such as block matching.
As described above, the image processing apparatus according to the present technology can perform composition with the second color image, captured by the higher-sensitivity second color image pickup element 301-2, as the reference. Therefore, the parallax information acquisition unit 303 preferably acquires information on the parallax of the first color image relative to the second color image.
The parallax information acquisition unit 303 may also generate a parallax map. A parallax map is a map showing the distribution of the parallax between two color images. The parallax map can be used, for example, by the composition information generation unit 305 described below, in particular by the out-of-search-range region detection unit 601 and/or the occlusion region detection unit 602 included in the composition information generation unit 305.
The parallax information acquisition unit 303 can acquire, for example, a parallax map indicating the distribution of the parallax of the first color image relative to the second color image serving as the reference for composition. In addition to this map, the parallax information acquisition unit 303 may also acquire a parallax map indicating the distribution of the parallax of the second color image relative to the first color image. Using two parallax maps based on the images from the two viewpoints in this way, the occlusion region detection unit 602 described below can also detect an occlusion region.
The parallax information acquisition unit 303 outputs the information on the parallax to the parallax compensation unit 304 and/or the composition information generation unit 305.
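As an informal illustration of the corresponding-point detection mentioned above, the following Python/NumPy sketch computes a per-block parallax map by SAD block matching. All names and parameters (disparity_map, block, max_disp) are assumptions for illustration, as is the horizontal-baseline geometry; this is a minimal sketch, not the matcher actually used by the apparatus.

    import numpy as np

    def disparity_map(ref, other, block=8, max_disp=32):
        """Per-block horizontal disparity of `other` relative to `ref`
        (grayscale arrays of equal shape), by SAD block matching."""
        h, w = ref.shape
        disp = np.zeros((h // block, w // block), dtype=np.int32)
        for by in range(h // block):
            for bx in range(w // block):
                y, x = by * block, bx * block
                patch = ref[y:y + block, x:x + block].astype(np.float32)
                best, best_d = np.inf, 0
                for d in range(max_disp + 1):
                    if x + d + block > w:
                        break  # candidate window would leave the image
                    cand = other[y:y + block,
                                 x + d:x + d + block].astype(np.float32)
                    sad = np.abs(patch - cand).sum()
                    if sad < best:
                        best, best_d = sad, d
                disp[by, bx] = best_d
        return disp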
The parallax compensation unit 304 can perform parallax compensation of the first color image based on the information on the parallax acquired by the parallax information acquisition unit 303. For example, the parallax compensation unit 304 can move the positions of pixels or pixel blocks in the image data of the first color image based on the information on the parallax. The movement may be performed so that the parallax between the first color image and the second color image is eliminated. In particular, the parallax compensation unit 304 can move a pixel or pixel block of the first color image to the position of the corresponding pixel or pixel block in the second color image. The parallax compensation unit 304 thereby generates a parallax-compensated first color image. Through the parallax compensation, each region of the first color image is moved to the corresponding region in the second color image, and as a result, appropriate image composition becomes possible.
The parallax compensation unit 304 outputs the parallax-compensated first color image to the image composition unit 306.
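A minimal sketch of the block-wise movement described above, assuming the per-block disparity map of the previous sketch; the shift direction depends on the camera geometry and is an assumption here, as are all names.

    def compensate_parallax(first, disp, block=8):
        """Shift each block of the first image by its disparity so it
        aligns with the reference (second) image; unfilled pixels stay
        zero."""
        import numpy as np
        out = np.zeros_like(first)
        for by in range(disp.shape[0]):
            for bx in range(disp.shape[1]):
                y, x = by * block, bx * block
                d = int(disp[by, bx])  # assumed shift toward the reference
                if 0 <= x - d and x - d + block <= first.shape[1]:
                    out[y:y + block, x - d:x - d + block] = \
                        first[y:y + block, x:x + block]
        return out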
The composition information generation unit 305 generates information on how to combine the first color image and the second color image. More preferably, the composition information generation unit 305 generates information on how to combine the parallax-compensated first color image with the second color image. For example, the composition information generation unit 305 can generate, for each region constituting the second color image, composition information indicating that the first color image (in particular, the parallax-compensated first color image) should or should not be combined. The composition information can be generated for each pixel or for each pixel block. As a result of this determination, the composition information generation unit 305 may generate, for each pixel or pixel block, information indicating that composition should or should not be performed.
The composition information generation unit 305 outputs the information on the manner of composition to the image composition unit 306.
FIG. 6 shows a block diagram of a more detailed configuration example of the composition information generation unit 305. As illustrated in FIG. 6, the composition information generation unit 305 may include an out-of-search-range region detection unit 601, an occlusion region detection unit 602, and a color-difference/luminance determination unit 603. The composition information generation unit 305 may include any one, two, or three of these units. As also illustrated in FIG. 6, the composition information generation unit 305 may further include a comprehensive determination unit 604.
Each of the components that can be included in the composition information generation unit 305 is described below.
The out-of-search-range region detection unit 601 can detect, in a parallax map created based on the information on the parallax, a region having a parallax value larger than a predetermined search range. When image composition is performed on a region outside the search range, the possibility of image-quality deterioration increases; it is therefore preferable not to perform image composition in such a region. The region may be a pixel or a pixel block having a parallax value larger than the predetermined search range. The out-of-search-range region detection unit 601 may, for example, detect such a region in the parallax map generated by the parallax information acquisition unit 303.
The information on the parallax used by the out-of-search-range region detection unit 601 is not limited to the parallax map generated by the parallax information acquisition unit 303. For example, the out-of-search-range region detection unit 601 may detect pixels or pixel blocks having a parallax value larger than the predetermined search range using information on the parallax (in particular, the parallax itself) obtained by a distance-measuring sensor, such as a ToF (Time of Flight) sensor.
The out-of-search-range region detection unit 601 can output the regions having a parallax value larger than the predetermined search range to, for example, the comprehensive determination unit 604.
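As a purely illustrative aside (the function name and threshold are hypothetical), this detection reduces to a simple threshold on the parallax map:

    import numpy as np

    def out_of_search_range_mask(disp, max_search=32):
        """True where the parallax value exceeds the matcher's search
        range; such regions are candidates for exclusion from
        composition."""
        return np.asarray(disp) > max_search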
The occlusion region detection unit 602 can detect an occlusion region by comparing the first color image with the second color image. An occlusion region is a region that is captured by one of the two pickup elements (the first color image pickup element and the second color image pickup element) but not by the other. For example, as shown in FIG. 7, when imaging from the image pickup element 701 in the direction of the dotted line, the region A of the background B is not blocked by the foreground F and is therefore captured. On the other hand, when imaging from the image pickup element 702 in the direction of the dash-dot line, the region A of the background B is blocked by the foreground F and is therefore not captured. The region A is an occlusion region. When image composition is performed on an occlusion region, the possibility of image-quality deterioration increases; it is therefore preferable not to perform image composition in such a region.
The occlusion region can be detected using bidirectional parallax maps: the left image is moved to the position of the right image and then moved back toward the left image, and the regions that do not return to their original positions are detected. Alternatively, adjacent parallax values in a parallax map of a single viewpoint may be compared, and a region whose difference exceeds a predetermined range may be detected as an occlusion region. This allows the occlusion region to be detected more simply.
The occlusion region detection unit 602 can output the detected occlusion region to, for example, the comprehensive determination unit 604.
More specifically, the occlusion region may be detected using two parallax maps: a first parallax map of the second color image relative to the first color image, and a second parallax map of the first color image relative to the second color image. In this case, the occlusion region detection unit 602 can move the first color image based on the first parallax map, then move the moved first color image based on the second parallax map, and detect as an occlusion region any pixel or pixel block that does not return to its position in the original first color image. Alternatively, the occlusion region detection unit 602 can move the second color image based on the second parallax map, then move the moved second color image based on the first parallax map, and detect as an occlusion region any pixel or pixel block that does not return to its position in the original second color image.
An example of an occlusion-region detection technique using two parallax maps is described with reference to FIG. 8. FIG. 8 shows an example in which an occlusion region is detected using a left image L captured by a left image pickup element and a right image R captured by a right image pickup element. As shown in FIG. 8(a), a region that is not an occlusion region returns to its original position when the left image L is moved based on the parallax map of the right image R relative to the left image L, and the moved left image L is then moved based on the parallax map of the left image L relative to the right image R. On the other hand, as shown in FIG. 8(b), an occlusion region does not return to its original position when these two movements are performed. In this way, the occlusion region detection unit 602 can detect an occlusion region based on two parallax maps.
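The round-trip check of FIG. 8 is, in essence, a left-right consistency test. The sketch below is one informal NumPy rendering under assumed names and an assumed disparity sign convention; it is not the detector used by the apparatus.

    import numpy as np

    def occlusion_mask(disp_lr, disp_rl, tol=1):
        """A left-image pixel is occluded when warping it to the right
        image and back does not return to its original column."""
        h, w = disp_lr.shape
        occ = np.zeros((h, w), dtype=bool)
        for y in range(h):
            for x in range(w):
                xr = x - int(disp_lr[y, x])  # assumed sign convention
                if 0 <= xr < w:
                    back = xr + int(disp_rl[y, xr])  # map back to left
                    occ[y, x] = abs(back - x) > tol
                else:
                    occ[y, x] = True  # warped outside the other view
        return occ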
The color-difference/luminance determination unit 603 calculates, for each predetermined region in the first color image or the second color image, the value obtained by dividing the color-difference variance by the luminance variance, and can detect regions in which this quotient is equal to or greater than a predetermined value. For example, the color-difference/luminance determination unit 603 can detect a region that has a color difference but no luminance difference.
For example, assume a region 901 in which a red region Ar and a blue region Ab are adjacent, as shown in FIG. 9. The red region Ar and the blue region Ab have the same luminance signal but different color-difference signals. When the parallax is obtained by luminance-based corresponding-point detection, the parallax value between the red region Ar and the blue region Ab cannot be obtained accurately, and as a result, the color edge may not be identified accurately. Therefore, when the first color image and the second color image are combined, image-quality deterioration may occur in the boundary region 902 between the red region Ar and the blue region Ab. The color-difference variance divided by the luminance variance is larger for the region 901 than for a region consisting only of the red region Ar. The color-difference/luminance determination unit 603 therefore obtains this quotient for each region and can detect regions in which the value is larger than a predetermined value. The color-difference/luminance determination unit 603 can output the detected regions to the comprehensive determination unit 604.
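A minimal sketch of the variance-ratio test, assuming a YCbCr input and hypothetical names and threshold; the epsilon guards against division by zero in flat regions.

    import numpy as np

    def color_edge_mask(ycbcr, block=8, thresh=4.0):
        """Flag blocks whose chrominance variance is large relative to
        their luminance variance (e.g. an equal-brightness red/blue
        boundary such as region 901)."""
        y, cb, cr = ycbcr[..., 0], ycbcr[..., 1], ycbcr[..., 2]
        h, w = y.shape
        mask = np.zeros((h // block, w // block), dtype=bool)
        for by in range(h // block):
            for bx in range(w // block):
                sl = np.s_[by * block:(by + 1) * block,
                           bx * block:(bx + 1) * block]
                chroma_var = cb[sl].var() + cr[sl].var()
                mask[by, bx] = chroma_var / (y[sl].var() + 1e-6) >= thresh
        return mask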
The comprehensive determination unit 604 can determine, for each region in the image, whether composition should be performed, based on the information output from the components that detect regions where image composition should or should not be performed. Based on the result of this determination, the comprehensive determination unit 604 can generate composition information on how to combine the first color image and the second color image. The components may be those included in the composition information generation unit 305, for example any one, two, or three of the out-of-search-range region detection unit 601, the occlusion region detection unit 602, and the color-difference/luminance determination unit 603 described above. The comprehensive determination unit 604 can determine that a region detected by at least one of these components is a region where the first color image should not be combined with the second color image, and can generate, for that region, composition information indicating that the first color image is not to be combined with the second color image.
Preferably, when the composition information generation unit 305 includes all of the out-of-search-range region detection unit 601, the occlusion region detection unit 602, and the color-difference/luminance determination unit 603, it can generate, for a region detected by any of these components, composition information indicating that the first color image should not be combined with the second color image.
When the proportion of the second color image occupied by regions where the first color image should not be combined exceeds a predetermined value, the composition information generation unit 305 may generate composition information indicating that the first color image should not be combined in any region of the second color image. When such composition information is generated, the first color image is not combined with the second color image, and the second color image itself is output. In the present technology, even in such a case, the degree of image-quality deterioration can be suppressed because the second color image itself has high image quality. For example, when the first color image is completely different from the second color image because an abnormality has occurred in the first color image pickup element, generating such composition information avoids image-quality deterioration.
According to another embodiment of the present technology, the composition information generation unit 305 may determine a composition rate α [0 ≤ α ≤ 1] for each region in the second color image. When α = 0, the first color image is not combined with the second color image. When α = 1, the first color image can be combined with the second color image at the maximum proportion. The maximum proportion may be set appropriately by those skilled in the art, or according to factors such as the types of the two images being combined, and may be, for example, 30% to 70%, in particular 40% to 60%. For example, when α = 1, the first color image and the second color image may be combined at a ratio of 1:1 (that is, 50%). The composition information generation unit 305 can output the composition rate α determined for each region to the image composition unit 306 as composition information.
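The composition rate α can be read as a per-region blending weight. The sketch below is one hedged interpretation under assumed names, with float image arrays and a per-pixel α map; the 50% maximum mirrors the 1:1 example above but is otherwise an assumption.

    import numpy as np

    def blend(second, first_comp, alpha, max_ratio=0.5):
        """alpha = 0 keeps the reference (second) image unchanged;
        alpha = 1 mixes in the parallax-compensated first image at
        `max_ratio` (0.5 corresponds to the 1:1 example)."""
        w = np.asarray(alpha)[..., None] * max_ratio  # per-pixel weight
        return (1.0 - w) * second + w * first_comp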
The image composition unit 306 can combine the first color image with the second color image, using the second color image as the reference, based on the composition information generated by the composition information generation unit 305. For example, among the regions constituting the second color image, the first color image is combined with the second color image in regions where the first color image should be combined, while in regions where the first color image should not be combined, the second color image itself is adopted without composition.
When the composition information includes the composition rate α, the image composition unit 306 can perform image composition for each region in the image according to the composition rate.
The post-processing unit 307 can perform processing for reducing differences between the first color image pickup element and the second color image pickup element. The post-processing performed by the post-processing unit 307 on regions where the first color image has been combined with the second color image may differ from the post-processing performed on regions where no composition has been performed.
For example, when there is a tint difference between the first color image and the second color image due to the spectral difference between the two color image pickup elements, the post-processing unit 307 can perform matrix conversion processing and/or multi-axis color conversion processing for each region to compensate for the tint difference. When there is an SN difference between the first color image and the second color image, the post-processing unit 307 can perform noise reduction (NR) with a different NR strength for each region in order to compensate for the SN difference.
When the composition information includes the composition rate α, the post-processing unit 307 can perform post-processing according to the composition rate for each region.
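As a hedged illustration of the region-wise NR-strength switch, the sketch below applies stronger smoothing only to non-composited regions; a 3x3 box filter stands in for a real noise-reduction filter, and all names are hypothetical.

    import numpy as np

    def box3(img):
        """3x3 box filter with edge padding (placeholder NR)."""
        p = np.pad(img, 1, mode="edge")
        return sum(p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
                   for dy in range(3) for dx in range(3)) / 9.0

    def region_wise_nr(img, composed_mask):
        """Smooth only the pixels where no composition occurred, since
        those regions retain the noisier single-sensor signal."""
        out = img.astype(np.float32).copy()
        smoothed = box3(out)
        out[~composed_mask] = smoothed[~composed_mask]
        return out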
Specific image processing performed by the image processing apparatus according to the present technology is described below in “3. Third Embodiment (Image Processing Method)”.
2. Second Embodiment (Image Processing Apparatus)
The image processing apparatus according to the present technology may include a first color image pickup element and a second color image pickup element having higher sensitivity than the first color image pickup element. In this way, the image processing apparatus according to the present technology may include two color image pickup elements having different sensitivities.
The image processing apparatus according to the present technology further includes a parallax information acquisition unit that acquires information on the parallax between a first color image captured by the first color image pickup element and a second color image captured by the second color image pickup element; a composition information generation unit that generates information on how to combine the two images based on the information on the parallax acquired by the parallax information acquisition unit; and an image composition unit that combines the first color image with the second color image based on the composition information generated by the composition information generation unit. In this way, the image processing apparatus according to the present technology acquires parallax information between the two color images captured by the two color image pickup elements, generates information on the manner of composition based on the parallax information, and combines the two images based on the generated information.
This image processing apparatus achieves the effects described above in “1. First Embodiment (Image Processing Apparatus)”.
The sensitivity difference between the two color image pickup elements can be introduced, for example, by selecting the type of color filter covering the light-receiving surface of each pickup element. The color filter of each pickup element may be, for example, as described in “1. First Embodiment (Image Processing Apparatus)”, and the specific color filters described there may also be employed in this embodiment. For example, the color filter covering the light-receiving surface of the first color image pickup element includes neither a white region nor a filter-free region, while the color filter covering the light-receiving surface of the second color image pickup element includes a white region or a filter-free region, preferably a white region.
The description given in “1. First Embodiment (Image Processing Apparatus)” also applies to this embodiment with respect to the arrangement of the two color image pickup elements, specific examples of the image processing apparatus, the components constituting the image processing apparatus (for example, the pickup elements, the preprocessing units, the parallax information acquisition unit, the parallax compensation unit, the composition information generation unit, the image composition unit, and the post-processing unit), and the specific processing of the image processing apparatus. The description of these is therefore omitted here.
3. Third Embodiment (Image Processing Method)
(1) Description of the Third Embodiment
The image processing method according to the present technology includes a first image acquisition step of acquiring a first color image with a first color image pickup element whose light-receiving surface is covered by a color filter including neither a white region nor a filter-free region, and a second image acquisition step of acquiring a second color image with a second color image pickup element whose light-receiving surface is covered by a color filter including a white region or a filter-free region. In this way, the image processing method according to the present technology includes color image capturing steps using two different color image pickup elements.
The image processing method according to the present technology further includes a parallax information acquisition step of acquiring information on the parallax between the first color image and the second color image; a composition information generation step of generating information on how to combine the two images based on the information on the parallax acquired in the parallax information acquisition step; and an image composition step of combining the first color image with the second color image based on the composition information generated in the composition information generation step. In this way, the image processing method according to the present technology acquires parallax information between the two color images captured in the two color image capturing steps, generates information on the manner of composition based on the parallax information, and combines the two images based on the generated information.
The image processing method according to the present technology achieves the effects described above in “1. First Embodiment (Image Processing Apparatus)”. As an image processing apparatus for performing the image processing method according to the present technology, for example, an image processing apparatus as described in “1. First Embodiment (Image Processing Apparatus)” or “2. Second Embodiment (Image Processing Apparatus)” may be used.
(2) First Example of the Third Embodiment (Image Processing Method)
An image processing method according to the present technology is described below with reference to FIGS. 3 and 10. FIG. 3 is as described in “(2) First Example of the First Embodiment (Image Processing Apparatus)” in section 1 above. FIG. 10 is a diagram showing an example of the flow of an image processing method according to the present technology.
In step S101, the image processing apparatus 300 starts the image processing according to the present technology.
In step S102, the image processing apparatus 300 performs imaging with the first color image pickup element 301-1 and the second color image pickup element 301-2. Imaging with these pickup elements is preferably performed simultaneously. As a result of this imaging, a first color image (more particularly, first color image data) and a second color image (more particularly, second color image data) are acquired.
In step S103, the image processing apparatus 300 can perform preprocessing on each of the first color image and the second color image as necessary. More particularly, the preprocessing can be performed by the preprocessing units 302-1 and 302-2, respectively. The preprocessing may be one or more of the preprocessing examples described for the preprocessing units 302-1 and 302-2 in “(2) First Example of the First Embodiment (Image Processing Apparatus)” in section 1 above. The preprocessing performed on the first color image and that performed on the second color image may be different or the same.
In step S104, the image processing apparatus 300 acquires information on the parallax between the first color image captured by the first color image pickup element 301-1 and the second color image captured by the second color image pickup element 301-2. More particularly, the acquisition of the information on the parallax can be performed by the parallax information acquisition unit 303, for example as described in “(2) First Example of the First Embodiment (Image Processing Apparatus)” in section 1 above. The information on the parallax can include the parallax of each region in the image, for example, the parallax for each pixel and/or the parallax for each pixel block.
The information on the parallax may include, for example, a parallax map. The parallax information acquisition unit 303 can acquire, for example, a parallax map indicating the distribution of the parallax of the first color image relative to the second color image serving as the reference for composition. In addition to this map, the parallax information acquisition unit 303 may also acquire a parallax map indicating the distribution of the parallax of the second color image relative to the first color image.
In step S105, the image processing apparatus 300 can perform parallax compensation of the first color image based on the information on the parallax acquired in step S104. The parallax compensation can be performed by the parallax compensation unit 304, for example as described in “(2) First Example of the First Embodiment (Image Processing Apparatus)” in section 1 above.
In step S105, the positions of pixels or pixel blocks in the image data of the first color image can be moved based on the information on the parallax. The movement may be performed so that the parallax between the first color image and the second color image is eliminated. In particular, each region (in particular, each pixel and/or pixel block) of the first color image can be moved to the position of the corresponding region in the second color image. The parallax compensation generates a parallax-compensated first color image. Through the parallax compensation, each region of the first color image is moved to the corresponding region in the second color image, and as a result, appropriate image composition becomes possible.
In step S106, the image processing apparatus 300 generates information on how to combine the first color image and the second color image. More preferably, in step S106, the image processing apparatus 300 generates information on how to combine the parallax-compensated first color image with the second color image. The generation of the information on the manner of composition can be performed by the composition information generation unit 305, for example as described in “(2) First Example of the First Embodiment (Image Processing Apparatus)” in section 1 above. More specific processing of step S106 is described separately below with reference to FIG. 11.
In step S107, the image processing apparatus 300 combines the parallax-compensated first color image with the reference second color image based on the information on the manner of composition generated in step S106. The composition is performed by the image composition unit 306, for example as described in “(2) First Example of the First Embodiment (Image Processing Apparatus)” in section 1 above. More specific processing of step S107 is described separately below with reference to FIG. 15.
In step S108, the image processing apparatus 300 performs post-processing on the image obtained as a result of the composition in step S107. The post-processing can be performed by the post-processing unit 307, for example as described in “(2) First Example of the First Embodiment (Image Processing Apparatus)” in section 1 above. More specific processing of step S108 is described separately below with reference to FIG. 16.
In step S109, the image processing apparatus 300 ends the image processing according to the present technology.
<Composition Information Generation Processing>
An example of the composition information generation processing performed by the composition information generation unit in step S106 above is described below with reference to FIG. 11. FIG. 11 is an example of the flow of the composition information generation processing.
In step S201, the composition information generation unit 305 starts the composition information generation processing.
In step S202, the composition information generation unit 305 detects out-of-search-range regions. This detection can be performed in particular by the out-of-search-range region detection unit 601. In step S202, for example, the information on the parallax acquired in step S104 can be used. More specific processing of the out-of-search-range region detection by the out-of-search-range region detection unit 601 is described separately below with reference to FIG. 12.
In step S203, the composition information generation unit 305 detects occlusion regions. This detection can be performed in particular by the occlusion region detection unit 602. In step S203, for example, the information on the parallax acquired in step S104 can be used. More specific processing of the occlusion region detection by the occlusion region detection unit 602 is described separately below with reference to FIG. 13.
In step S204, the composition information generation unit 305 calculates, for each predetermined region in the first color image or the second color image, the value obtained by dividing the color-difference variance by the luminance variance, and detects regions in which this quotient is equal to or greater than a predetermined value. This detection can be performed in particular by the color-difference/luminance determination unit 603. More specific processing of the color-difference/luminance determination by the color-difference/luminance determination unit 603 is described separately below with reference to FIG. 14.
In step S205, the composition information generation unit 305 determines, based on the results of the processing in steps S202 to S204, whether each region in the image should be composited. For example, for each region in the image, it determines that a region detected in at least one of steps S202 to S204 should not be composited. The composition information generation unit 305 outputs the determination results as the information on how to combine the two images.
In step S206, the composition information generation unit 305 ends the composition information generation processing.
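The determination in step S205 can be pictured as a simple mask combination. The sketch below is an informal NumPy rendering of the FIG. 11 flow under assumed names; the three inputs are Boolean masks such as those produced by the earlier sketches.

    import numpy as np

    def composition_mask(out_of_range, occluded, color_edge):
        """S205: a region is composited only if none of the three
        detectors from S202-S204 flagged it."""
        return ~(np.asarray(out_of_range) | np.asarray(occluded)
                 | np.asarray(color_edge))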
<Out-of-Search-Range Region Detection Processing>
An example of the out-of-search-range region detection processing performed by the out-of-search-range region detection unit 601 in step S202 above is described below with reference to FIG. 12. FIG. 12 is an example of the flow of the out-of-search-range region detection processing.
In step S301 in FIG. 12, the out-of-search-range region detection unit 601 starts the out-of-search-range region detection processing.
In step S302, the out-of-search-range region detection unit 601 determines a region of interest in, for example, the parallax map generated by the parallax information acquisition unit 303.
In step S303, the out-of-search-range region detection unit 601 acquires the parallax value of the region of interest.
In step S304, the out-of-search-range region detection unit 601 determines whether the acquired parallax value is equal to or greater than a predetermined threshold. If the acquired parallax value is equal to or greater than the predetermined threshold, the out-of-search-range region detection unit 601 advances the processing to step S305. If the acquired parallax value is less than the predetermined threshold, the out-of-search-range region detection unit 601 advances the processing to step S306.
In step S305, the out-of-search-range region detection unit 601 outputs the region whose parallax value is equal to or greater than the predetermined threshold to the comprehensive determination unit 604 as an out-of-search-range region.
In step S306, the out-of-search-range region detection unit 601 determines whether there is a region in the parallax map for which the out-of-search-range region detection processing has not yet been performed. If such a region exists, the out-of-search-range region detection unit 601 returns the processing to step S302 and performs the out-of-search-range region detection processing on that region. If no such region exists, the out-of-search-range region detection unit 601 advances the processing to step S307.
In step S307, the out-of-search-range region detection unit 601 ends the out-of-search-range region detection processing.
<Occlusion Region Detection Processing>
An example of the occlusion region detection processing performed by the occlusion region detection unit 602 in step S203 above is described below with reference to FIG. 13. FIG. 13 is an example of the flow of the occlusion region detection processing.
In step S401 in FIG. 13, the occlusion region detection unit 602 starts the occlusion region detection processing.
In step S402, the occlusion region detection unit 602 moves each region of the first color image based on the first parallax map, that is, the parallax map of the second color image relative to the first color image.
In step S403, the occlusion region detection unit 602 moves each region of the first color image moved in step S402 again, based on the second parallax map, that is, the parallax map of the first color image relative to the second color image.
In step S404, the occlusion region detection unit 602 detects, as occlusion regions, the regions that do not return to their positions in the original first color image. The occlusion region detection unit 602 outputs the detected occlusion regions to the comprehensive determination unit 604.
In step S405, the occlusion region detection unit 602 ends the occlusion region detection processing.
<Color-Difference/Luminance Determination Processing>
An example of the color-difference/luminance determination processing performed by the color-difference/luminance determination unit 603 in step S204 above is described below with reference to FIG. 14. FIG. 14 is an example of the flow of the color-difference/luminance determination processing.
In step S501 in FIG. 14, the color-difference/luminance determination unit 603 starts the color-difference/luminance determination processing.
In step S502, the color-difference/luminance determination unit 603 determines a region of interest in, for example, the second color image.
In step S503, the color-difference/luminance determination unit 603 calculates, for the region of interest, the value of the color-difference variance divided by the luminance variance.
In step S504, the color-difference/luminance determination unit 603 determines whether the calculated quotient is equal to or greater than a predetermined threshold. If the calculated value is equal to or greater than the predetermined threshold, the color-difference/luminance determination unit 603 advances the processing to step S505. If the calculated value is less than the predetermined threshold, the color-difference/luminance determination unit 603 advances the processing to step S506.
In step S505, the color-difference/luminance determination unit 603 outputs the region whose calculated value is equal to or greater than the predetermined threshold to the comprehensive determination unit 604 as a region in which image-quality deterioration may occur.
In step S506, the color-difference/luminance determination unit 603 determines whether there is a region in the second color image for which the processing of steps S502 to S505 has not yet been performed. If such a region exists, the color-difference/luminance determination unit 603 returns the processing to step S502 and performs the color-difference/luminance determination processing on that region. If no such region exists, the color-difference/luminance determination unit 603 advances the processing to step S507.
In step S507, the color-difference/luminance determination unit 603 ends the color-difference/luminance determination processing.
<Image Composition Processing>
An example of the image composition processing performed by the image composition unit 306 in step S107 above is described below with reference to FIG. 15. FIG. 15 is an example of the flow of the image composition processing.
In step S601 in FIG. 15, the image composition unit 306 starts the processing of combining the parallax-compensated first color image with the second color image.
In step S602, the image composition unit 306 determines a region of interest (for example, a pixel of interest or a pixel block of interest) in, for example, the second color image.
In step S603, the image composition unit 306 refers to the composition information generated by the composition information generation unit 305.
In step S604, if, as a result of referring to the composition information generated by the composition information generation unit 305, the parallax-compensated first color image should be combined with the second color image in the region of interest, the image composition unit 306 advances the processing to step S605. If it should not be combined, the processing advances to step S606.
In step S605, the image composition unit 306 combines the parallax-compensated first color image with the second color image in the region of interest. For the region of interest, the image composition unit 306 outputs the image obtained by the composition.
In step S606, the image composition unit 306 outputs the second color image for the region of interest.
In step S607, the image composition unit 306 determines whether there is a region in the second color image for which the processing of steps S602 to S606 has not yet been performed. If such a region exists, the image composition unit 306 returns the processing to step S602 and performs the image composition processing on that region. If no such region exists, the image composition unit 306 advances the processing to step S608.
In step S608, the image composition unit 306 ends the image composition processing.
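The per-region loop of FIG. 15 can be sketched informally as follows, assuming float image arrays, a per-block Boolean mask such as the one from the S205 sketch, and a 1:1 mixing ratio for composited blocks; all names are hypothetical.

    import numpy as np

    def compose(second, first_comp, should_compose, block=8):
        """FIG. 15 flow: blend 1:1 where composition is allowed
        (S605), otherwise keep the reference second image (S606)."""
        out = second.copy()
        for by in range(should_compose.shape[0]):
            for bx in range(should_compose.shape[1]):
                if should_compose[by, bx]:
                    sl = np.s_[by * block:(by + 1) * block,
                               bx * block:(bx + 1) * block]
                    out[sl] = 0.5 * second[sl] + 0.5 * first_comp[sl]
        return out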
<Post-Processing>
An example of the post-processing performed by the post-processing unit 307 in step S108 above is described below with reference to FIG. 16. FIG. 16 is an example of the flow of the post-processing.
In step S701 in FIG. 16, the post-processing unit 307 starts the post-processing of the composite image obtained as a result of the composition by the image composition unit 306.
In step S702, the post-processing unit 307 determines a region of interest (for example, a pixel of interest or a pixel block of interest) in, for example, the composite image.
In step S703, the post-processing unit 307 refers to the composition information generated by the composition information generation unit 305.
In step S704, if, as a result of referring to the composition information generated by the composition information generation unit 305, the parallax-compensated first color image should be combined with the second color image in the region of interest, the post-processing unit 307 advances the processing to step S705. If it should not be combined, the processing advances to step S706.
In step S705, the post-processing unit 307 performs signal processing suitable for a composited image on the region of interest.
In step S706, the post-processing unit 307 performs signal processing suitable for a non-composited image on the region of interest.
In step S707, the post-processing unit 307 determines whether there is a region in the composite image for which the post-processing has not yet been performed. If such a region exists, the post-processing unit 307 returns the processing to step S702 and performs the post-processing on that region. If no such region exists, the post-processing unit 307 advances the processing to step S708.
In step S708, the post-processing unit 307 ends the post-processing.
4. Fourth Embodiment (Image Processing Method)
In the image processing method according to the present technology, two color images are acquired using a first color image pickup device and a second color image pickup device having higher sensitivity than the first color image pickup device. In this way, the image processing method according to the present technology acquires two color images using two color image pickup devices having different sensitivities.
Furthermore, the image processing method according to the present technology includes a parallax information acquisition step of acquiring information on the parallax between a first color image captured by the first color image pickup device and a second color image captured by the second color image pickup device, a synthesis information generation step of generating information on how to combine the two images based on the parallax information acquired in the parallax information acquisition step, and an image synthesis step of combining the first color image with the second color image based on the synthesis information generated in the synthesis information generation step. In this way, in the image processing method according to the present technology, parallax information between the two color images captured by the two color image pickup devices is acquired, information on how to combine the images is generated based on that parallax information, and the two images are then combined based on the generated synthesis information.
 当該画像処理方法によって、上記「1.第1の実施形態(画像処理装置)」で述べた効果が奏される。
 また、当該画像処理方法における各工程における具体的な処理は、例えば上記「3.第3の実施形態(画像処理方法)」において述べたとおりのものであってよい。そのため、各工程における具体的な処理についての説明は省略する。
This image processing method produces the effects described above in "1. First embodiment (image processing apparatus)".
The specific processing in each step of this image processing method may be, for example, as described above in "3. Third embodiment (image processing method)". A description of the specific processing in each step is therefore omitted.
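As a rough end-to-end illustration of the three steps, the following Python sketch acquires a block-wise parallax map by SAD block matching on single-channel (for example, luminance) arrays, derives a trivial combine/do-not-combine decision from it, and blends the parallax-compensated first image into the second. The block matching, the plain averaging, and every name in it are assumptions made for illustration; the disclosed method leaves the concrete parallax estimation and synthesis processing open.

```python
import numpy as np

BLOCK = 8          # matching block size (assumption)
SEARCH_RANGE = 16  # maximum horizontal disparity searched, in pixels (assumption)

def block_disparity(first, second):
    """Parallax information acquisition: per-block horizontal disparity by SAD matching."""
    h, w = second.shape
    disp = np.zeros((h // BLOCK, w // BLOCK), dtype=np.int32)
    for by in range(h // BLOCK):
        for bx in range(w // BLOCK):
            y, x = by * BLOCK, bx * BLOCK
            block = second[y:y + BLOCK, x:x + BLOCK].astype(np.int32)
            best, best_d = None, 0
            for d in range(0, min(SEARCH_RANGE, x) + 1):
                cand = first[y:y + BLOCK, x - d:x - d + BLOCK].astype(np.int32)
                sad = np.abs(block - cand).sum()
                if best is None or sad < best:
                    best, best_d = sad, d
            disp[by, bx] = best_d
    return disp

def synthesis_info(disp):
    """Synthesis information generation: combine only where the match is trustworthy."""
    return disp < SEARCH_RANGE  # a disparity pinned at the range limit is suspect

def compose(first, second, disp, combine):
    """Image synthesis: parallax-compensate `first` toward `second`, then combine."""
    out = second.copy()
    for by in range(disp.shape[0]):
        for bx in range(disp.shape[1]):
            if not combine[by, bx]:
                continue  # keep the second (higher-sensitivity) image as-is
            y, x, d = by * BLOCK, bx * BLOCK, int(disp[by, bx])
            shifted = first[y:y + BLOCK, x - d:x - d + BLOCK]  # parallax compensation
            # plain averaging is a stand-in for the disclosure's synthesis processing
            out[y:y + BLOCK, x:x + BLOCK] = (
                (shifted.astype(np.float32) + second[y:y + BLOCK, x:x + BLOCK]) / 2
            ).astype(second.dtype)
    return out
```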
5.装置の構成例 5. Device configuration example
 以下で、図18を参照しながら、本技術に従う画像処理装置の構成の一例を説明する。図18は、本技術に従う画像処理装置の概略的な構成の一例を示す図である。 Hereinafter, an example of the configuration of the image processing apparatus according to the present technology will be described with reference to FIG. FIG. 18 is a diagram illustrating an example of a schematic configuration of an image processing device according to the present technology.
 図18に示される画像処理装置1001は、CPU(中央演算処理装置)1002及びRAM1003を備えている。CPU1002及びRAM1003は、バス1005を介して相互に接続されており、また、画像処理装置1001の他の構成要素ともバス1005を介して接続されている。画像処理装置1001には、撮像装置1011が備えられている。撮像装置1011も、同様に他の構成要素と接続されていてよい。撮像装置1011は、上記「1.第1の実施形態(画像処理装置)」又は「2.第2の実施形態(画像処理装置)」で述べた第一のカラー画像撮像素子及び第二のカラー画像撮像素子を含む。撮像装置1011により、本技術に従う画像処理方法が行われてよい。
 CPU1002は、画像処理装置1001の制御及び演算を行う。CPU1002として、任意のプロセッサを用いることができ、その例としてXeon(登録商標)シリーズ、Core(商標)シリーズ、又はAtom(商標)シリーズのプロセッサを挙げることができる。本技術に従う画像処理は、例えばCPU1002により実現されてもよい。
 RAM1003は、例えばキャッシュ・メモリ及びメイン・メモリを含み、CPU1002により使用されるプログラム、例えば本技術に従う画像処理方法を装置に実行させるためのプログラムなどを一時記憶しうる。
An image processing apparatus 1001 shown in FIG. 18 includes a CPU (central processing unit) 1002 and a RAM 1003. The CPU 1002 and the RAM 1003 are connected to each other via a bus 1005, and are also connected to the other components of the image processing apparatus 1001 via the bus 1005. The image processing apparatus 1001 includes an imaging device 1011, which may likewise be connected to the other components. The imaging device 1011 includes the first color image pickup element and the second color image pickup element described in "1. First embodiment (image processing apparatus)" or "2. Second embodiment (image processing apparatus)". The image processing method according to the present technology may be performed using the imaging device 1011.
The CPU 1002 performs control and arithmetic operations of the image processing apparatus 1001. An arbitrary processor can be used as the CPU 1002; examples include Xeon (registered trademark) series, Core (trademark) series, and Atom (trademark) series processors. The image processing according to the present technology may be realized by the CPU 1002, for example.
The RAM 1003 includes, for example, a cache memory and a main memory, and can temporarily store a program used by the CPU 1002, for example, a program for causing the apparatus to execute an image processing method according to the present technology.
 画像処理装置1001は、ディスク1004、通信装置1006、出力装置1007、及びドライブ1009を備えていてもよい。これらの構成要素はいずれもバス1005に接続されうる。
 ディスク1004には、オペレーティング・システム(例えば、WINDOWS(登録商標)、UNIX(登録商標)、又はLINUX(登録商標)など)、本技術に従う画像処理方法を実行するためのプログラム、並びに各種データ(例えば画像データ)が格納されうる。
 通信装置1006は、画像処理装置1001をネットワーク1010に有線又は無線で接続する。通信装置1006により、画像処理装置1001は、ネットワーク1010を介して各種データ（例えば画像データなど）を取得することができる。取得したデータは、例えばディスク1004に格納されうる。通信装置1006の種類は当業者により適宜選択されてよい。
 出力装置1007は、本技術に従う画像処理によって得られた画像を出力しうる。出力装置1007は、例えばディスプレイであってよい。
 入力装置1008は、ユーザが画像処理装置1001を操作するための装置である。例えば画像処理装置1001がスマートフォンである場合、ディスプレイが入力装置1008として機能してよい。
 ドライブ1009は、記録媒体に記録されている情報を読み出して、RAM1003に出力することができる。記録媒体は、例えば、SDメモリカード又はフラッシュメモリであるが、これらに限定されない。
The image processing apparatus 1001 may include a disk 1004, a communication apparatus 1006, an output apparatus 1007, and a drive 1009. Any of these components can be connected to the bus 1005.
The disk 1004 can store an operating system (for example, WINDOWS (registered trademark), UNIX (registered trademark), or LINUX (registered trademark)), a program for executing the image processing method according to the present technology, and various data (for example, image data).
The communication device 1006 connects the image processing apparatus 1001 to the network 1010 by wire or wirelessly, enabling the image processing apparatus 1001 to acquire various data (for example, image data) via the network 1010. The acquired data can be stored in the disk 1004, for example. The type of the communication device 1006 may be appropriately selected by those skilled in the art.
The output device 1007 can output an image obtained by image processing according to the present technology. The output device 1007 may be a display, for example.
The input device 1008 is a device for the user to operate the image processing device 1001. For example, when the image processing apparatus 1001 is a smartphone, the display may function as the input apparatus 1008.
The drive 1009 can read information recorded on the recording medium and output the information to the RAM 1003. The recording medium is, for example, an SD memory card or a flash memory, but is not limited thereto.
 以上で説明した本技術に関して、当業者は、本技術の範囲内又はその均等物の範囲内において、種々の変更、コンビネーション、サブコンビネーション、又は代替が、例えば設計上の要請又は他の要因などに応じて可能であることを理解する。 With regard to the present technology described above, those skilled in the art will understand that various modifications, combinations, sub-combinations, and alternatives are possible within the scope of the present technology or its equivalents, for example in accordance with design requirements or other factors.
 なお、本技術は、以下のような構成をとることもできる。
〔1〕受光面を覆うカラーフィルタが、ホワイト領域及びフィルタ無し領域のいずれも含まない、第一のカラー画像撮像素子と、
 受光面を覆うカラーフィルタが、ホワイト領域又はフィルタ無し領域を含む、第二のカラー画像撮像素子と、
 前記第一のカラー画像撮像素子により撮像された第一のカラー画像と前記第二のカラー画像撮像素子により撮像された第二のカラー画像との間の視差に関する情報を取得する視差情報取得部と、
 前記視差情報取得部により取得された視差に関する情報に基づき、両画像の合成の仕方に関する情報を生成する合成情報生成部と、
 前記合成情報生成部により生成された合成情報に基づき、前記第二のカラー画像に、前記第一のカラー画像を合成する画像合成部と、
 を備えている画像処理装置。
〔2〕前記第一のカラー画像撮像素子に含まれるカラーフィルタが、ベイヤ配列のカラーフィルタである、〔1〕に記載の画像処理装置。
〔3〕前記第二のカラー画像撮像素子に含まれるカラーフィルタが、ホワイト領域を含む、〔1〕又は〔2〕に記載の画像処理装置。
〔4〕前記第二のカラー画像撮像素子に含まれるカラーフィルタが、WRGB配列又はWRG配列の領域を含む、〔1〕~〔3〕のいずれか一つに記載の画像処理装置。
〔5〕前記視差情報取得部が、前記第一のカラー画像と前記第二のカラー画像との間で、画素毎又は画素ブロック毎の視差を取得する、〔1〕~〔4〕のいずれか一つに記載の画像処理装置。
〔6〕前記視差に関する情報に基づき、前記第一のカラー画像に対して視差補償を行う視差補償部をさらに備えている、〔1〕~〔5〕のいずれか一つに記載の画像処理装置。
〔7〕前記合成情報生成部が、視差に関する情報に基づき作成された視差マップ内で所定の探索範囲より大きな視差値を有する領域を検出する探索範囲外領域検出部を含む、〔1〕~〔6〕のいずれか一つに記載の画像処理装置。
〔8〕前記合成情報生成部が、前記第一のカラー画像と前記第二のカラー画像とを対比してオクルージョン領域を検出するオクルージョン領域検出部を含む、〔1〕~〔7〕のいずれか一つに記載の画像処理装置。
〔9〕前記合成情報生成部が、前記第一のカラー画像又は前記第二のカラー画像内の所定の領域毎に色差分散値を輝度分散値で除した値を算出し、当該除した値が所定の値以上である領域を検出する色差/輝度判定部を含む、〔1〕~〔8〕のいずれか一つに記載の画像処理装置。
〔10〕前記合成情報生成部が、
 視差に関する情報に基づき作成された視差マップ内で所定の探索範囲より大きな視差値を有する画素又は画素ブロックを検出する探索範囲外領域検出部、
 前記第一のカラー画像と前記第二のカラー画像とを対比してオクルージョン領域を検出するオクルージョン領域検出部、及び
 前記第一のカラー画像又は前記第二のカラー画像内の所定の領域毎に色差分散値を輝度分散値で除した値を算出し、当該除した値が所定の値以上である領域を検出する色差/輝度判定部、
 から選ばれる少なくとも1つの構成要素を含み、
 いずれかの構成要素において検出された領域について、前記第二のカラー画像に前記第一のカラー画像を合成すべきでないとする合成情報を生成する、
 〔1〕~〔9〕のいずれか一つに記載の画像処理装置。
〔11〕前記合成情報生成部が、前記第二のカラー画像を構成する領域毎に、前記第一のカラー画像を合成すべきとする又は合成すべきでないとする合成情報を生成する、〔1〕~〔10〕のいずれか一つに記載の画像処理装置。
〔12〕前記合成情報生成部が、前記第二のカラー画像のうちの、前記第一のカラー画像を合成すべきでないとする領域の割合が所定の値を超えた場合に、前記第二のカラー画像の全ての領域について前記第一のカラー画像を合成すべきでないとする合成情報を生成する、〔1〕~〔11〕のいずれか一つに記載の画像処理装置。
〔13〕前記合成情報生成部が、前記第二のカラー画像を構成する領域毎に、前記第一のカラー画像を合成する合成率を付与する、〔1〕~〔12〕のいずれか一つに記載の画像処理装置。
〔14〕第一のカラー画像撮像素子と、
 前記第一のカラー画像撮像素子よりも感度が高い、第二のカラー画像撮像素子と、
 前記第一のカラー画像撮像素子により撮像された第一のカラー画像と前記第二のカラー画像撮像素子により撮像された第二のカラー画像との間の視差に関する情報を取得する視差情報取得部と、
 前記視差情報取得部により取得された視差に関する情報に基づき、両画像の合成の仕方に関する情報を生成する合成情報生成部と、
 前記合成情報生成部により生成された合成情報に基づき、前記第二のカラー画像に、前記第一のカラー画像を合成する画像合成部と、
 を備えている画像処理装置。
〔15〕受光面を覆うカラーフィルタが、ホワイト領域及びフィルタ無し領域のいずれも含まない、第一のカラー画像撮像素子により第一のカラー画像を取得する第一画像取得工程と、
 受光面を覆うカラーフィルタが、ホワイト領域又はフィルタ無し領域を含む、第二のカラー画像撮像素子により第二のカラー画像を取得する第二画像取得工程と、
 前記第一のカラー画像と前記第二のカラー画像との間の視差に関する情報を取得する視差情報取得工程と、
 前記視差情報取得工程において取得された視差に関する情報に基づき、両画像の合成の仕方に関する情報を生成する合成情報生成工程と、
 前記合成情報生成工程において生成された合成情報に基づき、前記第二のカラー画像に、前記第一のカラー画像を合成する画像合成工程と、
 を含む画像処理方法。
〔16〕第一のカラー画像撮像素子により第一のカラー画像を取得する第一画像取得工程と、
 前記第一のカラー画像撮像素子よりも感度が高い、第二のカラー画像撮像素子により第二のカラー画像を取得する第二画像取得工程と、
 前記第一のカラー画像と前記第二のカラー画像との間の視差に関する情報を取得する視差情報取得工程と、
 前記視差情報取得工程において取得された視差に関する情報に基づき、両画像の合成の仕方に関する情報を生成する合成情報生成工程と、
 前記合成情報生成工程において生成された合成情報に基づき、前記第二のカラー画像に、前記第一のカラー画像を合成する画像合成工程と、
 を含む画像処理方法。
The present technology may also have the following configurations.
[1] An image processing apparatus including:
a first color image pickup device in which a color filter covering a light receiving surface includes neither a white region nor a filter-less region;
a second color image pickup device in which a color filter covering a light receiving surface includes a white region or a filter-less region;
a parallax information acquisition unit that acquires information on the parallax between a first color image captured by the first color image pickup device and a second color image captured by the second color image pickup device;
a synthesis information generation unit that generates information on how to combine the two images based on the parallax information acquired by the parallax information acquisition unit; and
an image synthesis unit that combines the first color image with the second color image based on the synthesis information generated by the synthesis information generation unit.
[2] The image processing apparatus according to [1], wherein the color filter included in the first color image pickup device is a Bayer array color filter.
[3] The image processing device according to [1] or [2], wherein the color filter included in the second color image pickup device includes a white region.
[4] The image processing apparatus according to any one of [1] to [3], wherein the color filter included in the second color image pickup device includes a region of a WRGB array or a WRG array.
[5] The image processing apparatus according to any one of [1] to [4], wherein the parallax information acquisition unit acquires the parallax for each pixel or each pixel block between the first color image and the second color image.
[6] The image processing apparatus according to any one of [1] to [5], further including a parallax compensation unit that performs parallax compensation on the first color image based on the information on the parallax.
[7] The image processing apparatus according to any one of [1] to [6], wherein the synthesis information generation unit includes an out-of-search-range region detection unit that detects a region having a parallax value larger than a predetermined search range in a parallax map created based on the information on the parallax.
[8] The image processing apparatus according to any one of [1] to [7], wherein the synthesis information generation unit includes an occlusion region detection unit that detects an occlusion region by comparing the first color image and the second color image.
[9] The image processing apparatus according to any one of [1] to [8], wherein the synthesis information generation unit includes a color-difference/luminance determination unit that calculates, for each predetermined region in the first color image or the second color image, a value obtained by dividing the color difference variance by the luminance variance, and detects regions in which the resulting value is equal to or greater than a predetermined value.
[10] The image processing apparatus according to any one of [1] to [9], wherein the synthesis information generation unit includes at least one component selected from:
an out-of-search-range region detection unit that detects a pixel or pixel block having a parallax value larger than a predetermined search range in a parallax map created based on the information on the parallax;
an occlusion region detection unit that detects an occlusion region by comparing the first color image and the second color image; and
a color-difference/luminance determination unit that calculates, for each predetermined region in the first color image or the second color image, a value obtained by dividing the color difference variance by the luminance variance, and detects regions in which the resulting value is equal to or greater than a predetermined value,
and generates, for a region detected by any of these components, synthesis information indicating that the first color image should not be combined with the second color image. (See the sketch following this list.)
[11] The image processing apparatus according to any one of [1] to [10], wherein the synthesis information generation unit generates, for each region constituting the second color image, synthesis information indicating that the first color image should or should not be combined.
[12] The image processing apparatus according to any one of [1] to [11], wherein, when the proportion of regions of the second color image for which the first color image should not be combined exceeds a predetermined value, the synthesis information generation unit generates synthesis information indicating that the first color image should not be combined for any region of the second color image.
[13] The image processing apparatus according to any one of [1] to [12], wherein the synthesis information generation unit assigns, to each region constituting the second color image, a synthesis rate at which the first color image is to be combined.
[14] An image processing apparatus including:
a first color image pickup device;
a second color image pickup device having higher sensitivity than the first color image pickup device;
a parallax information acquisition unit that acquires information on the parallax between a first color image captured by the first color image pickup device and a second color image captured by the second color image pickup device;
a synthesis information generation unit that generates information on how to combine the two images based on the parallax information acquired by the parallax information acquisition unit; and
an image synthesis unit that combines the first color image with the second color image based on the synthesis information generated by the synthesis information generation unit.
[15] An image processing method including:
a first image acquisition step of acquiring a first color image with a first color image pickup device in which a color filter covering a light receiving surface includes neither a white region nor a filter-less region;
a second image acquisition step of acquiring a second color image with a second color image pickup device in which a color filter covering a light receiving surface includes a white region or a filter-less region;
a parallax information acquisition step of acquiring information on the parallax between the first color image and the second color image;
a synthesis information generation step of generating information on how to combine the two images based on the parallax information acquired in the parallax information acquisition step; and
an image synthesis step of combining the first color image with the second color image based on the synthesis information generated in the synthesis information generation step.
[16] An image processing method including:
a first image acquisition step of acquiring a first color image with a first color image pickup device;
a second image acquisition step of acquiring a second color image with a second color image pickup device having higher sensitivity than the first color image pickup device;
a parallax information acquisition step of acquiring information on the parallax between the first color image and the second color image;
a synthesis information generation step of generating information on how to combine the two images based on the parallax information acquired in the parallax information acquisition step; and
an image synthesis step of combining the first color image with the second color image based on the synthesis information generated in the synthesis information generation step.
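To make the detectors of configurations [7] to [10] and the whole-image fallback of configuration [12] concrete, here is a minimal Python sketch assuming a block disparity map, a boolean occlusion map, and the second color image in YCbCr; every threshold and name in it is an assumption introduced for illustration, not taken from the disclosure.

```python
import numpy as np

BLOCK = 16             # region size (assumption)
SEARCH_RANGE = 64      # disparity search range in pixels (assumption)
RATIO_THRESHOLD = 2.0  # chroma-variance / luma-variance threshold (assumption)

def chroma_luma_ratio(region_ycbcr):
    """Configuration [9]: color difference variance divided by luminance variance."""
    y = region_ycbcr[..., 0].astype(np.float64)
    cb = region_ycbcr[..., 1].astype(np.float64)
    cr = region_ycbcr[..., 2].astype(np.float64)
    return (cb.var() + cr.var()) / (y.var() + 1e-9)  # epsilon guards flat regions

def synthesis_mask(disparity, occlusion, second_ycbcr):
    """Per-region decision: True means the first color image may be combined.

    disparity:    (H//BLOCK, W//BLOCK) block disparity map
    occlusion:    same shape, True where left/right matching failed
    second_ycbcr: (H, W, 3) second color image in YCbCr
    """
    bh, bw = disparity.shape
    mask = np.ones((bh, bw), dtype=bool)
    for by in range(bh):
        for bx in range(bw):
            region = second_ycbcr[by * BLOCK:(by + 1) * BLOCK,
                                  bx * BLOCK:(bx + 1) * BLOCK]
            if (disparity[by, bx] >= SEARCH_RANGE                      # [7]
                    or occlusion[by, bx]                               # [8]
                    or chroma_luma_ratio(region) >= RATIO_THRESHOLD):  # [9]
                mask[by, bx] = False  # [10]/[11]: do not combine in this region
    if (~mask).mean() > 0.5:          # [12]: the 50% fallback threshold is an assumption
        mask[:] = False               # disable combination for the whole image
    return mask
```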
300 画像処理装置
301-1 第一のカラー画像撮像素子
301-2 第二のカラー画像撮像素子
302-1 前処理部
302-2 前処理部
303 視差情報取得部
304 視差補償部
305 合成情報生成部
306 画像合成部
307 後処理部
300 Image processing apparatus
301-1 First color image pickup element
301-2 Second color image pickup element
302-1 Preprocessing unit
302-2 Preprocessing unit
303 Parallax information acquisition unit
304 Parallax compensation unit
305 Synthesis information generation unit
306 Image synthesis unit
307 Post-processing unit

Claims (16)

  1.  受光面を覆うカラーフィルタが、ホワイト領域及びフィルタ無し領域のいずれも含まない、第一のカラー画像撮像素子と、
     受光面を覆うカラーフィルタが、ホワイト領域又はフィルタ無し領域を含む、第二のカラー画像撮像素子と、
     前記第一のカラー画像撮像素子により撮像された第一のカラー画像と前記第二のカラー画像撮像素子により撮像された第二のカラー画像との間の視差に関する情報を取得する視差情報取得部と、
     前記視差情報取得部により取得された視差に関する情報に基づき、両画像の合成の仕方に関する情報を生成する合成情報生成部と、
     前記合成情報生成部により生成された合成情報に基づき、前記第二のカラー画像に、前記第一のカラー画像を合成する画像合成部と、
     を備えている画像処理装置。
    An image processing apparatus comprising:
    a first color image pickup device in which a color filter covering a light receiving surface includes neither a white region nor a filter-less region;
    a second color image pickup device in which a color filter covering a light receiving surface includes a white region or a filter-less region;
    a parallax information acquisition unit that acquires information on the parallax between a first color image captured by the first color image pickup device and a second color image captured by the second color image pickup device;
    a synthesis information generation unit that generates information on how to combine the two images based on the parallax information acquired by the parallax information acquisition unit; and
    an image synthesis unit that combines the first color image with the second color image based on the synthesis information generated by the synthesis information generation unit.
  2.  前記第一のカラー画像撮像素子に含まれるカラーフィルタが、ベイヤ配列のカラーフィルタである、請求項1に記載の画像処理装置。 The image processing apparatus according to claim 1, wherein the color filter included in the first color image pickup device is a Bayer array color filter.
  3.  前記第二のカラー画像撮像素子に含まれるカラーフィルタが、ホワイト領域を含む、請求項1に記載の画像処理装置。 The image processing apparatus according to claim 1, wherein the color filter included in the second color image pickup device includes a white region.
  4.  前記第二のカラー画像撮像素子に含まれるカラーフィルタが、WRGB配列又はWRG配列の領域を含む、請求項1に記載の画像処理装置。 The image processing apparatus according to claim 1, wherein the color filter included in the second color image pickup device includes a region of a WRGB array or a WRG array.
  5.  前記視差情報取得部が、前記第一のカラー画像と前記第二のカラー画像との間で、画素毎又は画素ブロック毎の視差を取得する、請求項1に記載の画像処理装置。 The image processing apparatus according to claim 1, wherein the parallax information acquisition unit acquires parallax for each pixel or each pixel block between the first color image and the second color image.
  6.  前記視差に関する情報に基づき、前記第一のカラー画像に対して視差補償を行う視差補償部をさらに備えている、請求項1に記載の画像処理装置。 The image processing apparatus according to claim 1, further comprising a parallax compensation unit that performs parallax compensation on the first color image based on information on the parallax.
  7.  前記合成情報生成部が、視差に関する情報に基づき作成された視差マップ内で所定の探索範囲より大きな視差値を有する領域を検出する探索範囲外領域検出部を含む、請求項1に記載の画像処理装置。 The image processing apparatus according to claim 1, wherein the synthesis information generation unit includes an out-of-search-range region detection unit that detects a region having a parallax value larger than a predetermined search range in a parallax map created based on the information on the parallax.
  8.  前記合成情報生成部が、前記第一のカラー画像と前記第二のカラー画像とを対比してオクルージョン領域を検出するオクルージョン領域検出部を含む、請求項1に記載の画像処理装置。 The image processing apparatus according to claim 1, wherein the composite information generation unit includes an occlusion region detection unit that detects an occlusion region by comparing the first color image and the second color image.
  9.  前記合成情報生成部が、前記第一のカラー画像又は前記第二のカラー画像内の所定の領域毎に色差分散値を輝度分散値で除した値を算出し、当該除した値が所定の値以上である領域を検出する色差/輝度判定部を含む、請求項1に記載の画像処理装置。 The image processing apparatus according to claim 1, wherein the synthesis information generation unit includes a color-difference/luminance determination unit that calculates, for each predetermined region in the first color image or the second color image, a value obtained by dividing the color difference variance by the luminance variance, and detects regions in which the resulting value is equal to or greater than a predetermined value.
  10.  前記合成情報生成部が、
     視差に関する情報に基づき作成された視差マップ内で所定の探索範囲より大きな視差値を有する画素又は画素ブロックを検出する探索範囲外領域検出部、
     前記第一のカラー画像と前記第二のカラー画像とを対比してオクルージョン領域を検出するオクルージョン領域検出部、及び
     前記第一のカラー画像又は前記第二のカラー画像内の所定の領域毎に色差分散値を輝度分散値で除した値を算出し、当該除した値が所定の値以上である領域を検出する色差/輝度判定部、
     から選ばれる少なくとも1つの構成要素を含み、
     いずれかの構成要素において検出された領域について、前記第二のカラー画像に前記第一のカラー画像を合成すべきでないとする合成情報を生成する、
     請求項1に記載の画像処理装置。
    The image processing apparatus according to claim 1, wherein the synthesis information generation unit includes at least one component selected from:
    an out-of-search-range region detection unit that detects a pixel or pixel block having a parallax value larger than a predetermined search range in a parallax map created based on the information on the parallax;
    an occlusion region detection unit that detects an occlusion region by comparing the first color image and the second color image; and
    a color-difference/luminance determination unit that calculates, for each predetermined region in the first color image or the second color image, a value obtained by dividing the color difference variance by the luminance variance, and detects regions in which the resulting value is equal to or greater than a predetermined value,
    and generates, for a region detected by any of these components, synthesis information indicating that the first color image should not be combined with the second color image.
  11.  前記合成情報生成部が、前記第二のカラー画像を構成する領域毎に、前記第一のカラー画像を合成すべきとする又は合成すべきでないとする合成情報を生成する、請求項1に記載の画像処理装置。 The image processing apparatus according to claim 1, wherein the synthesis information generation unit generates, for each region constituting the second color image, synthesis information indicating that the first color image should or should not be combined.
  12.  前記合成情報生成部が、前記第二のカラー画像のうちの、前記第一のカラー画像を合成すべきでないとする領域の割合が所定の値を超えた場合に、前記第二のカラー画像の全ての領域について前記第一のカラー画像を合成すべきでないとする合成情報を生成する、請求項1に記載の画像処理装置。 The image processing apparatus according to claim 1, wherein, when the proportion of regions of the second color image for which the first color image should not be combined exceeds a predetermined value, the synthesis information generation unit generates synthesis information indicating that the first color image should not be combined for any region of the second color image.
  13.  前記合成情報生成部が、前記第二のカラー画像を構成する領域毎に、前記第一のカラー画像を合成する合成率を付与する、請求項1に記載の画像処理装置。 The image processing apparatus according to claim 1, wherein the synthesis information generation unit assigns a synthesis rate for synthesizing the first color image for each region constituting the second color image.
  14.  第一のカラー画像撮像素子と、
     前記第一のカラー画像撮像素子よりも感度が高い、第二のカラー画像撮像素子と、
     前記第一のカラー画像撮像素子により撮像された第一のカラー画像と前記第二のカラー画像撮像素子により撮像された第二のカラー画像との間の視差に関する情報を取得する視差情報取得部と、
     前記視差情報取得部により取得された視差に関する情報に基づき、両画像の合成の仕方に関する情報を生成する合成情報生成部と、
     前記合成情報生成部により生成された合成情報に基づき、前記第二のカラー画像に、前記第一のカラー画像を合成する画像合成部と、
     を備えている画像処理装置。
    An image processing apparatus comprising:
    a first color image pickup device;
    a second color image pickup device having higher sensitivity than the first color image pickup device;
    a parallax information acquisition unit that acquires information on the parallax between a first color image captured by the first color image pickup device and a second color image captured by the second color image pickup device;
    a synthesis information generation unit that generates information on how to combine the two images based on the parallax information acquired by the parallax information acquisition unit; and
    an image synthesis unit that combines the first color image with the second color image based on the synthesis information generated by the synthesis information generation unit.
  15.  受光面を覆うカラーフィルタが、ホワイト領域及びフィルタ無し領域のいずれも含まない、第一のカラー画像撮像素子により第一のカラー画像を取得する第一画像取得工程と、
     受光面を覆うカラーフィルタが、ホワイト領域又はフィルタ無し領域を含む、第二のカラー画像撮像素子により第二のカラー画像を取得する第二画像取得工程と、
     前記第一のカラー画像と前記第二のカラー画像との間の視差に関する情報を取得する視差情報取得工程と、
     前記視差情報取得工程において取得された視差に関する情報に基づき、両画像の合成の仕方に関する情報を生成する合成情報生成工程と、
     前記合成情報生成工程において生成された合成情報に基づき、前記第二のカラー画像に、前記第一のカラー画像を合成する画像合成工程と、
     を含む画像処理方法。
    An image processing method comprising:
    a first image acquisition step of acquiring a first color image with a first color image pickup device in which a color filter covering a light receiving surface includes neither a white region nor a filter-less region;
    a second image acquisition step of acquiring a second color image with a second color image pickup device in which a color filter covering a light receiving surface includes a white region or a filter-less region;
    a parallax information acquisition step of acquiring information on the parallax between the first color image and the second color image;
    a synthesis information generation step of generating information on how to combine the two images based on the parallax information acquired in the parallax information acquisition step; and
    an image synthesis step of combining the first color image with the second color image based on the synthesis information generated in the synthesis information generation step.
  16.  第一のカラー画像撮像素子により第一のカラー画像を取得する第一画像取得工程と、
     前記第一のカラー画像撮像素子よりも感度が高い、第二のカラー画像撮像素子により第二のカラー画像を取得する第二画像取得工程と、
     前記第一のカラー画像と前記第二のカラー画像との間の視差に関する情報を取得する視差情報取得工程と、
     前記視差情報取得工程において取得された視差に関する情報に基づき、両画像の合成の仕方に関する情報を生成する合成情報生成工程と、
     前記合成情報生成工程において生成された合成情報に基づき、前記第二のカラー画像に、前記第一のカラー画像を合成する画像合成工程と、
     を含む画像処理方法。
    An image processing method comprising:
    a first image acquisition step of acquiring a first color image with a first color image pickup device;
    a second image acquisition step of acquiring a second color image with a second color image pickup device having higher sensitivity than the first color image pickup device;
    a parallax information acquisition step of acquiring information on the parallax between the first color image and the second color image;
    a synthesis information generation step of generating information on how to combine the two images based on the parallax information acquired in the parallax information acquisition step; and
    an image synthesis step of combining the first color image with the second color image based on the synthesis information generated in the synthesis information generation step.
PCT/JP2019/004143 2018-02-27 2019-02-06 Image processing device and image processing method WO2019167571A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2018-032823 2018-02-27
JP2018032823 2018-02-27

Publications (1)

Publication Number Publication Date
WO2019167571A1 true WO2019167571A1 (en) 2019-09-06

Family

ID=67805254

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2019/004143 WO2019167571A1 (en) 2018-02-27 2019-02-06 Image processing device and image processing method

Country Status (1)

Country Link
WO (1) WO2019167571A1 (en)


Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010157863A (en) * 2008-12-26 2010-07-15 Fujifilm Corp Compound-eye camera and image processing method
JP2013092552A (en) * 2011-10-24 2013-05-16 Toshiba Corp Solid state image pickup device and camera module
JP2017069926A (en) * 2015-10-02 2017-04-06 ソニー株式会社 Image processing device, and image processing method, and program
WO2017208536A1 (en) * 2016-06-02 2017-12-07 ソニー株式会社 Image processing apparatus and image processing method, and learning apparatus and learning method

Cited By (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2022550191A (en) * 2019-09-30 2022-11-30 アークソフト コーポレイション リミテッド IMAGE PROCESSING METHOD, IMAGE PROCESSING DEVICE, AND ELECTRONIC DEVICE USING IT
JP7412545B2 (en) 2019-09-30 2024-01-12 アークソフト コーポレイション リミテッド Image processing method, image processing device, and electronic equipment applying the same


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 19760069; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 19760069; Country of ref document: EP; Kind code of ref document: A1)
NENP Non-entry into the national phase (Ref country code: JP)