WO2024016288A1 - Imaging device and control method - Google Patents

Imaging device and control method

Info

Publication number
WO2024016288A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
light
sub
pixels
pixel
Prior art date
Application number
PCT/CN2022/107202
Other languages
English (en)
French (fr)
Inventor
山下雄一郎
小林篤
Original Assignee
北京小米移动软件有限公司
Priority date
Filing date
Publication date
Application filed by 北京小米移动软件有限公司 filed Critical 北京小米移动软件有限公司
Priority to CN202280002739.9A priority Critical patent/CN117769841A/zh
Priority to JP2022550177A priority patent/JP2024530080A/ja
Priority to PCT/CN2022/107202 priority patent/WO2024016288A1/zh
Publication of WO2024016288A1 publication Critical patent/WO2024016288A1/zh

Definitions

  • the present disclosure relates to an imaging device and a control method.
  • Patent Document 1 discloses an aperture mechanism in which a drive ring rotated by a stepping motor controls the aperture diameter by rotating a plurality of blades in the same direction.
  • Patent Document 1 Patent No. 4618860
  • an object of the present disclosure is to provide an imaging technology that can determine, within a single shot, whether to combine a plurality of images based on light received at different portions of the pixels.
  • An imaging device includes: a plurality of microlenses; an imaging element including a plurality of pixels that are arranged on each of the plurality of microlenses and receive light from the plurality of microlenses; and an image generating unit that generates an image based on the light received at the plurality of pixels. Each of the plurality of pixels has a first part including a central part and a second part surrounding the central part, and the image generating unit determines, based on a prescribed condition, whether to combine a first image based on the light received in the first part with a second image based on the light received in the second part.
  • a control method is a control method executed by a processor included in an imaging device, and includes a step of generating an image based on light from a plurality of microlenses received at a plurality of pixels arranged on each of the plurality of microlenses provided in the imaging device.
  • Each of the plurality of pixels has a first part including a central part and a second part surrounding the central part.
  • the generating step determines, based on a prescribed condition, whether to combine the first image based on the light received in the first part with a second image based on the light received in the second part.
  • FIG. 1 is a diagram showing an example of the configuration of an imaging device according to an embodiment of the present disclosure.
  • FIG. 2 is a diagram illustrating how light is received at the main lens, viewed from direction II in FIG. 1, according to the embodiment of the present disclosure.
  • FIG. 3 is a diagram illustrating how light is received at a pixel, viewed from direction III in FIG. 1, according to the embodiment of the present disclosure.
  • FIG. 4 is a diagram showing an example of the structure of an image sensor according to an embodiment of the present disclosure.
  • FIG. 5 is a diagram showing an example of the functional configuration of the control unit according to the embodiment of the present disclosure.
  • FIG. 6 is a flowchart showing an example of image generation processing according to the embodiment of the present disclosure.
  • FIG. 7 is a diagram illustrating an example of a process of electronically adjusting background defocus according to the embodiment of the present disclosure.
  • FIG. 8 is a diagram illustrating an example of how light travels due to the wave nature of light according to the embodiment of the present disclosure.
  • FIG. 9 is a diagram showing an example in which the optical path length from the light exit surface of the microlens to the light entrance surface of the first part of the pixel differs from the optical path length from the exit surface to the light entrance surface of the second part of the pixel, according to the embodiment of the present disclosure.
  • FIG. 10 is a diagram showing another example in which the optical path length from the light exit surface of the microlens to the light entrance surface of the first part of the pixel differs from the optical path length from the exit surface to the light entrance surface of the second part of the pixel, according to the embodiment of the present disclosure.
  • FIG. 11(A) is a diagram showing an example of the structure of a sub-pixel according to the embodiment of the present disclosure.
  • FIG. 11(B) is a diagram showing another example of the structure of a sub-pixel according to the embodiment of the present disclosure.
  • FIG. 1 is a diagram showing an example of the configuration of an imaging device according to an embodiment of the present disclosure.
  • the imaging device 100 includes an image sensor 10 (imaging element), an optical system 20 , and a control unit 30 .
  • the image sensor 10 is a device that receives light emitted from the subject S and converts the brightness and darkness of the light into electrical information.
  • the image sensor 10 includes at least a pixel group composed of a plurality of pixels 2 and a control circuit 1 that controls and drives the pixel group, reads out data based on the light signals accumulated in the pixel group, and outputs the data to the outside of the image sensor 10 .
  • the plurality of pixels 2 in the pixel group are arranged on each of a plurality of microlenses 7 described below, and receive light from the plurality of microlenses 7 .
  • the specific structure of the image sensor 10 will be described with reference to FIG. 4 .
  • as described above, the image sensor 10 may include the pixel group, or the optical system 20 may include the pixel group.
  • the control unit 30 generates an image by analyzing data and the like based on the data output from the image sensor 10 .
  • the specific structure of the control unit 30 will be described with reference to FIG. 5 .
  • the optical system 20 includes one or more devices that condense the light emitted from the subject S.
  • the optical system 20 includes, for example, a main lens 6 , a micro lens 7 , and a color filter 8 .
  • the main lens 6 functions as a photographic lens, for example.
  • the main lens 6 has a central area LIA and a peripheral area LOA.
  • the setting method of the central area LIA and the peripheral area LOA in the main lens 6 is arbitrary, and can be appropriately set based on the properties and layout of each component of the imaging device 100 .
  • the microlens 7 is, for example, a condenser lens.
  • One or more microlenses 7 are arranged above or in front of the pixel group, and have the function of condensing the desired light onto each of the plurality of pixels included in the pixel group.
  • Each microlens 7 corresponds to a pixel of the color filter 8 described below (for example, each of a plurality of pixels included in the pixel group 2).
  • the color filter 8 corresponds to any one of red, green, and blue as primary colors, for example.
  • the color filter 8 may instead be a complementary-color filter (for example, yellow, light blue (cyan), and magenta), and may be selected appropriately according to the application.
  • the color filter 8 is, for example, of an on-chip type, but is not limited thereto and may be of an attached type or another form.
  • the color filter 8 may have a different structure from the microlens 7 or may be a part of the microlens 7 .
  • FIG. 2 is a diagram illustrating how light is received in the main lens 6 when viewed from the direction II in FIG. 1 according to the embodiment of the present disclosure.
  • the light emitted from the subject S passes through the central area LIA and the peripheral area LOA of the main lens 6 , respectively, and enters the microlens 7 .
  • FIG. 3 is a diagram illustrating how light is received in a pixel viewed from the direction III in FIG. 1 according to the embodiment of the present disclosure.
  • each of the plurality of pixels 2 has a central sub-pixel 2 a (first part) including the central part, and peripheral sub-pixels 2 b (second part) surrounding the central part.
  • the central sub-pixel 2 a receives an optical signal (main signal PS) of light passing through the central area LIA of the main lens 6 .
  • the peripheral sub-pixel 2 b receives an optical signal (auxiliary signal SS) of light passing through the peripheral area LOA of the main lens 6 .
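The split between the central sub-pixel 2a (main signal PS) and the surrounding peripheral sub-pixels 2b (sub signal SS) can be sketched with array slicing. This is a minimal sketch: the 3×3 sub-pixel layout, array shapes, and function name are illustrative assumptions; the patent only requires a central part and a surrounding part.

```python
import numpy as np

def split_subpixels(raw):
    """Split a (H, W, 3, 3) array of sub-pixel readouts into the
    main signal PS (central sub-pixel 2a) and the sub signal SS
    (sum of the 8 surrounding sub-pixels 2b).

    The 3x3 sub-pixel layout is a hypothetical example used only
    for illustration.
    """
    ps = raw[..., 1, 1]               # central sub-pixel value per pixel
    ss = raw.sum(axis=(-2, -1)) - ps  # ring total = all sub-pixels minus center
    return ps, ss

# toy example: 2x2 pixels, uniform illumination of 1.0 per sub-pixel
raw = np.ones((2, 2, 3, 3))
ps, ss = split_subpixels(raw)
# ps is 1.0 everywhere, ss is 8.0 everywhere
```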
  • FIG. 4 is a diagram showing an example of the structure of an image sensor according to an embodiment of the present disclosure.
  • the image sensor 10 is, for example, a CMOS image sensor or the like.
  • the image sensor 10 includes, for example, the control circuit 1 shown in FIG. 1 , a pixel group of a plurality of two-dimensionally arranged pixels 2 , a signal line 3 , a readout circuit 4 , and a digital signal processing unit (DSP) 5 .
  • the configuration of the plurality of pixels 2 is arbitrary.
  • a plurality of pixels 2 can be grouped by collecting a plurality of individual pixels to form a pixel group (unit pixel group).
  • the plurality of pixels 2 may be grouped into 4 (2 ⁇ 2) pixels, for example, to form one pixel group.
  • the plurality of pixels 2 may include, for example, 3 (3 ⁇ 1) pixels, 8 (4 ⁇ 2) pixels, 9 (3 ⁇ 3) pixels, and 16 (4 ⁇ 4) pixels as unit pixel groups.
  • a plurality of pixels 2 are two-dimensionally arranged; based on control signals from the control circuit 1 and control signals generated by the pixels 2 themselves, the light signals brought to the image sensor 10 are accumulated and read out as data (electrical signals) based on those light signals.
  • the electrical signals read out from the plurality of pixels 2 are transmitted to the readout circuit 4 through the signal lines 3 (typically, column signal lines parallel to the column direction), and the electrical signals are converted from analog to digital.
  • the digital signal processing unit (DSP) 5 processes the digital signal converted from analog to digital by the readout circuit 4 . Then, the processed digital signal is transmitted to the processor, memory, etc. included in the photographing device through the data bus.
  • the DSP 5 is not limited to this configuration.
  • the image sensor 10 may not include the DSP 5 and a subsequent-stage processor (for example, the control unit 30 ) may include the DSP.
  • alternatively, the DSP 5 of the image sensor 10 and a DSP included in a subsequent-stage processor or the like may each handle part of the digital signal processing in the image processing.
  • the location of the DSP in the present disclosure is not limited to the specified location.
  • FIG. 5 is a diagram showing an example of the functional configuration of the control unit according to the embodiment of the present disclosure.
  • the control unit 30 (for example, a processor) functionally includes an analysis unit 32 , a filter processing unit 34 , and an image generation unit 36 .
  • the above-mentioned components of the control unit 30 can be realized, for example, by using a storage area such as a memory or a hard disk provided in the imaging device 100 or by using a processor to execute a program stored in the storage area.
  • the analysis unit 32 analyzes data based on the data output from the image sensor 10 .
  • the analysis unit 32 analyzes the main signal PS or the first image (for example, the main image generated based on the main signal PS), and acquires information on the depth of field, sensitivity to light, and the like.
  • the analysis unit 32 analyzes the secondary signal SS or the second image (for example, a secondary image generated based on the secondary signal SS), and obtains information such as whether the secondary signal SS contains a flare component and whether it loses sharpness.
  • the analysis unit 32 may acquire and analyze position information of each image, position information of pixels corresponding to each image, and the like.
  • the analysis unit 32 may calculate the mutual relationship between a plurality of images and identify portions with large correlation (for example, in-focus portions) or portions with small correlation (for example, out-of-focus portions).
  • the filter processing unit 34 performs filter processing on the generated image based on the analysis result of the analysis unit 32 .
  • the filter processing unit 34 may perform a predetermined spatial filtering process on the second image generated based on the position information and the like of the image acquired by the analysis unit 32 .
  • the filter processing unit 34 may also perform a predetermined low-pass filtering process on a portion with a small correlation based on the relationship between the plurality of images analyzed by the analysis unit 32 .
  • the image generating section 36 can generate one or more images based on the light received in the plurality of pixels 2 .
  • the image generation unit 36 determines, based on predetermined conditions, whether to combine the main image (first image) based on the light (main signal PS) received at the central sub-pixel 2a shown in FIGS. 1 and 3 with the sub-image (second image) based on the light (sub signal SS) received at the peripheral sub-pixels 2b.
  • the “second image” includes a sub-image generated based on only the sub-signal SS, but is not limited thereto and may include a sub-image generated based on the main signal PS and the sub-signal SS.
  • the “predetermined conditions” include conditions related to the main image (first image), but are not limited thereto.
  • a condition related to the sub-image (second image) may be used instead of, or in addition to, the condition related to the main image.
  • as the "predetermined condition", whether or not the imaging device 100 combines the main image (first image) and the sub-image (second image) may be set in advance.
  • the "prescribed conditions” may be fixed, or may be appropriately changed based on the user's usage conditions, etc.
  • the usage of the main signal PS (main image), the sub signal SS (sub-image), and the two signals together is classified into the following cases (1) to (3).
  • in usage case (1), where the sub signal SS (sub-image) is not needed, the image generation unit 36 generates a main image based only on the main signal PS and uses the generated main image as the final image.
  • assume a scenario in which the analysis unit 32 analyzes the main signal PS for at least one of depth of field and sensitivity to light, and detects, for example, at least one of a depth of field at or above a predetermined threshold and a sensitivity at or below a predetermined threshold.
  • the predetermined threshold value related to the depth of field and the predetermined threshold value related to the sensitivity are arbitrary values and may be fixed values or values that can be changed appropriately depending on the design of the imaging device 100 or the like.
  • the predetermined conditions regarding the main image include, for example, at least one of whether the depth of field of the main signal PS (main image) is at or above a predetermined threshold and whether the sensitivity to light of the main signal PS (main image) is at or below a predetermined threshold. For example, when the depth of field of a main signal PS (main image) is at or above the predetermined threshold, the condition is satisfied; therefore, the sub-image based on the sub signal SS is not used, and only the main image based on the main signal PS is used (see steps S3 and S4 in FIG. 6 below).
  • the imaging device 100 can perform photography with a high depth of field and low sensitivity to light using only the main signal PS (main image).
  • the imaging device 100 can perform so-called electronic aperture processing and does not require a conventional mechanical aperture mechanism.
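The threshold check for usage case (1) above can be sketched as a simple predicate. The function name and threshold values are hypothetical illustrations, not values from the patent.

```python
def use_main_image_only(depth_of_field, sensitivity,
                        dof_threshold=8.0, sens_threshold=0.2):
    """Return True when the predetermined condition on the main image
    holds, i.e. the sub-image is not needed (usage case (1)):
    depth of field at or above a threshold, or sensitivity to light
    at or below a threshold. Threshold values are illustrative
    placeholders, not values from the patent."""
    return depth_of_field >= dof_threshold or sensitivity <= sens_threshold

# high depth of field -> only the main image is used (electronic aperture)
print(use_main_image_only(10.0, 0.5))  # True
```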
  • usage method (2) may also include the following subcategories (i), (ii) and (iii).
  • in usage case (2), the image generation unit 36 combines the main image and the sub-image based on information related to the sub-image (for example, a flare component of the sub signal SS or a component related to non-ideal optical characteristics of the main lens 6 or the like).
  • when the sub signal SS contains a flare component caused by, for example, unwanted reflections inside the camera optical system, the analysis unit 32 detects the flare component.
  • the image generation unit 36 reconstructs a sub-image based on the analysis result of the flare component.
  • the image generation unit 36 generates a final image by adding the reconstructed sub-image to the main image, which can improve the image quality of the final image and the like.
  • when the sub signal SS contains a component related to non-ideal optical characteristics (for example, manufacturing constraints and manufacturing variations) of the main lens 6 or the like, such as a loss of sharpness, the analysis unit 32 detects that component.
  • the image generation unit 36 reconstructs the sub-image based on the analysis result of the component related to the non-ideal optical characteristics.
  • the image generation unit 36 generates the final image by adding the reconstructed sub-image to the main image, which can improve the image quality of the final image and the like.
  • the SNR can be restored by performing sharpness processing on the sub-signal SS and then adding it to the main signal PS.
  • the sharpness processing may include unsharp masking, deconvolution processing, optimization processing using the main signal PS and the amount of defocus as reference information, processing through a neural network, and the like.
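As a rough sketch of one of the sharpness-processing options listed above, the sub signal SS can be unsharp-masked and then added to the main signal PS. This is a minimal illustration assuming 2-D image arrays; the box blur, gain, and function names are assumptions, not the patent's implementation.

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box blur used as the smoothing step of the mask."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def restore_with_subsignal(ps, ss, amount=1.0):
    """Unsharp-mask the sub signal SS and add it to the main signal PS,
    one way to recover SNR without losing sharpness. `amount` is an
    illustrative gain, not a value from the patent."""
    sharpened_ss = ss + amount * (ss - box_blur(ss))
    return ps + sharpened_ss
```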
  • image reconstruction methods include methods such as modeling optical properties, using analytical inverse functions or using inverse functions prepared as look-up tables.
  • the image reconstruction method can also include modeling the optical characteristics, separately calculating the point spread function (PSF), and performing deconvolution processing.
  • the image reconstruction method can also include simplifying the physical optical characteristics to some extent and modeling them for regularization, normalization, or optimization, or using AI techniques (for example, deep learning) to generate the final image.
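One of the reconstruction routes above, deconvolution with a separately calculated PSF, can be sketched as frequency-domain Wiener deconvolution. This is a sketch under stated assumptions: the PSF layout convention and regularization constant are illustrative, and the patent does not prescribe this particular formula.

```python
import numpy as np

def wiener_deconvolve(blurred, psf, noise_power=1e-3):
    """Frequency-domain Wiener deconvolution using a separately
    modeled PSF. `psf` is assumed to have the same shape as
    `blurred` with its peak at index (0, 0); `noise_power` is an
    illustrative regularization constant."""
    h = np.fft.fft2(psf)        # optical transfer function
    g = np.fft.fft2(blurred)    # observed image spectrum
    # Wiener filter: H* G / (|H|^2 + noise power)
    f_hat = np.conj(h) * g / (np.abs(h) ** 2 + noise_power)
    return np.real(np.fft.ifft2(f_hat))
```

With a delta-function PSF and zero regularization, the filter is the identity, which is a convenient sanity check for the conventions used here.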
  • the usage method (3) includes the method of controlling the background defocus (Bokeh) of the image described with reference to FIG. 7 , and the like.
  • the imaging device 100 includes: a plurality of microlenses 7; an imaging element including a plurality of pixels 2 that are arranged on each of the plurality of microlenses 7 and receive light from the plurality of microlenses 7; and an image generation unit 36 that generates an image based on the light received at the plurality of pixels 2.
  • each of the plurality of pixels 2 has a central sub-pixel 2a including a central portion, and peripheral sub-pixels 2b surrounding the central portion.
  • the image generation unit 36 determines, based on predetermined conditions regarding the main image based on the light received at the central sub-pixel 2a, whether to combine the main image with the sub-image based on the light received at the peripheral sub-pixels 2b.
  • the photographing device 100 does not require the aperture function of a mechanical lens.
  • multiple images with different aperture values can be obtained through electronic processing.
  • the imaging device 100 may, for example, perform image processing after shooting using the acquired images with different aperture values, and may also electronically change at least one of the depth of field (for example, making it deeper or shallower) and the amount of incident light.
  • FIG. 6 is a flowchart showing an example of image generation processing according to the embodiment of the present disclosure.
  • the imaging device 100 receives the main signal PS in the central sub-pixel 2 a shown in FIGS. 1 and 3 , and receives the sub-signal SS in the peripheral sub-pixel 2 b (step S1 ).
  • the imaging device 100 generates a main image (first image) based on the main signal PS (step S2).
  • the imaging device 100 determines whether the main image satisfies the predetermined conditions (step S3). If the predetermined condition regarding the main image is satisfied ("No" in step S3, i.e., no synthesis is required), the process proceeds to step S4.
  • the imaging device 100 uses the generated main image as the final image (step S4).
  • in step S3, if the main image does not satisfy the predetermined conditions ("Yes" in step S3), the process proceeds to step S5.
  • the imaging device 100 generates a sub-image (second image) based on the sub-signal SS (step S5).
  • the imaging device 100 generates a final image (third image) based on the generated main image and sub-image (step S6).
  • the generation of the sub-image based on the sub signal SS (step S5) may also be performed together with the generation of the main image based on the main signal PS in step S2. In that case, if the result of step S3 is "Yes", step S5 is skipped and step S6 is executed; if the result is "No", the already generated sub-image is not used, and only the main image is used as the final image.
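Steps S2 to S6 above can be sketched as a small control flow. The function names are hypothetical, and the image-generation steps are reduced to pass-throughs for brevity; `combine` stands in for whatever synthesis step S6 performs.

```python
def generate_final_image(main_signal_ps, sub_signal_ss,
                         condition_satisfied, combine):
    """Sketch of steps S2-S6 of the flowchart: generate the main image
    from the main signal PS (S2); if the predetermined condition holds
    (result of step S3), the main image is the final image (S4);
    otherwise generate the sub-image from the sub signal SS (S5) and
    combine the two (S6). Demosaicing etc. are omitted, so the signals
    stand in directly for the generated images."""
    main_image = main_signal_ps        # S2: main image from main signal
    if condition_satisfied:            # S3 -> S4: no synthesis needed
        return main_image
    sub_image = sub_signal_ss          # S5: sub-image from sub signal
    return combine(main_image, sub_image)  # S6: final (third) image
```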
  • FIG. 7 is a diagram illustrating an example of a process of electronically adjusting background defocus according to the embodiment of the present disclosure.
  • the analysis unit 32 shown in FIG. 5 analyzes, for example, a main image (primary image) based on the main signal PS, and a sub-image (secondary image) based on the main signal PS and the sub signal SS.
  • the analysis unit 32 performs spatial frequency analysis on the main image and the sub-image. For example, the analysis unit 32 calculates the mutual relationship between the main image and the sub-image, and identifies portions with large correlation (for example, in-focus portions) or portions with small correlation (for example, out-of-focus portions).
  • the analysis unit 32 generates a defocus map based on the analysis results of the main image and the sub-image. Then, based on the defocus map, the filter processing unit 34 performs low-pass filtering on, for example, the portions of the sub-image with small correlation. In this way, the imaging device 100 can generate a final image with a shallow depth of field by combining the main image and the sub-image.
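The defocus-map flow described above (correlate main and sub images, then low-pass filter the low-correlation parts before combining) can be sketched as follows. The correlation measure, box blur, and equal-weight blend are illustrative assumptions, not the patent's method.

```python
import numpy as np

def box_blur(img, k=3):
    """k x k box blur; stands in for the low-pass filter."""
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def defocus_map(main_img, sub_img, k=3):
    """Similarity map in (0, 1]: near 1 where the two images agree
    (in-focus parts), lower where they differ (out-of-focus parts).
    Based on a local mean absolute difference, an illustrative
    stand-in for the spatial-frequency correlation in the text."""
    mad = box_blur(np.abs(main_img - sub_img), k)
    return 1.0 / (1.0 + mad)

def shallow_dof(main_img, sub_img, k=5):
    """Low-pass filter the low-correlation parts of the sub-image and
    blend with the main image for a shallow depth-of-field look.
    The equal-weight blend is an illustrative choice."""
    w = defocus_map(main_img, sub_img)
    filtered_sub = w * sub_img + (1.0 - w) * box_blur(sub_img, k)
    return 0.5 * (main_img + filtered_sub)
```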
  • FIG. 8 is a diagram illustrating an example of how light travels due to the wave nature of light according to the embodiment of the present disclosure. For example, when the central sub-pixel 2a and the peripheral sub-pixels 2b are small, the wave nature of light may reduce the ability to separate light between the central sub-pixel 2a and the peripheral sub-pixels 2b, and the light-collection efficiency in the sub-pixels may decrease.
  • all the light incident on the boundary B between the central sub-pixel 2 a and the peripheral sub-pixel 2 b generally does not converge on one of the central sub-pixel 2 a or the peripheral sub-pixel 2 b.
  • light is divided by its wave nature and converges to both the central sub-pixel 2a and the peripheral sub-pixel 2b.
  • the imaging device 100 is configured such that the effective optical path length (first optical path length) of the light LR from the light exit surface S1 of the plurality of microlenses 7 to the photoelectric conversion surface S3 (first incident surface) of the central sub-pixel 2a differs from the effective optical path length (second optical path length) of the light LR from the exit surface S1 to the photoelectric conversion surface S5 (second incident surface) of the peripheral sub-pixel 2b. With this configuration, the light separation capability at the boundary between the central sub-pixel 2a and the peripheral sub-pixel 2b can be improved, and therefore the efficiency of light collection in the sub-pixels can be improved.
  • a convex lens 9 is arranged between the exit surface S1 and the photoelectric conversion surface S3 of the central sub-pixel 2a and the photoelectric conversion surface S5 of the peripheral sub-pixel 2b (for example, on the surface of the color filter 8 on the pixel-2 side).
  • by adjusting how the light travels (for example, its traveling direction) with the convex lens 9, the effective optical path length from the exit surface S1 to the photoelectric conversion surface S3 can be made different from the effective optical path length from the exit surface S1 to the photoelectric conversion surface S5.
  • the properties of the convex lens 9 are arbitrary as long as the propagation of light can be adjusted to an extent that improves the light separation capability at the boundary between the central sub-pixel 2a and the peripheral sub-pixel 2b.
  • the arrangement position of the convex lens 9 is arbitrary as long as it is between the exit surface S1 and the photoelectric conversion surface S3 of the light of the central sub-pixel 2a and the photoelectric conversion surface S5 of the light of the peripheral sub-pixel 2b.
  • the convex lens 9 may be disposed on the exit surface S1 of the microlens 7 , or may be disposed between the color filter 8 and the pixel 2 .
  • the pixels 2 (the central sub-pixel 2a and the peripheral sub-pixels 2b) are arranged such that the first distance from the emission surface S1 to the photoelectric conversion surface S3 differs from the second distance from the emission surface S1 to the photoelectric conversion surfaces S5 and S7.
  • the pixels 2 are arranged so that the photoelectric conversion surface S3 and the photoelectric conversion surfaces S5 and S7 are at different heights.
  • the pixel 2 is arranged so that the photoelectric conversion surface S3 is closer to the emission surface S1 of the microlens 7 than the photoelectric conversion surfaces S5 and S7 .
  • the photoelectric conversion surface S5 and the photoelectric conversion surface S7 may have the same height or different heights.
  • FIG. 11 is a diagram showing an example of the structure of a sub-pixel according to the embodiment of the present disclosure.
  • the plurality of pixels 2 in the unit pixel UP in this embodiment are respectively composed of sub-pixels corresponding to respective colors (for example, red, blue, and green).
  • the structure of the sub-pixels is arbitrary, but for example, as shown in (A) of FIG. 11 , the plurality of pixels 2 each include a central sub-pixel 2 a and a peripheral sub-pixel 2 b.
  • each of the plurality of pixels 2 may include a central subpixel 2a, a peripheral subpixel 2b, and a peripheral subpixel 2c. According to this configuration, it is possible to collect the light passing through the central area LIA of the main lens 6 shown in FIG. 1 in the central sub-pixel 2a, and simultaneously detect the defocus information in the left peripheral sub-pixel 2b and the right peripheral sub-pixel 2c.
  • each of the plurality of pixels 2 may include three or more peripheral sub-pixels.
  • each of the plurality of pixels 2 may be provided with a plurality of central sub-pixels in addition to one or more peripheral sub-pixels.
  • the above-mentioned embodiment is intended to facilitate understanding of the present invention and is not to be construed as limiting the present invention.
  • the present invention can be changed/improved without departing from the gist thereof, and the present invention also includes equivalents thereof.
  • the present invention can form various disclosures by appropriately combining the plurality of constituent elements disclosed in the above embodiments. For example, some components may be deleted from all the components shown in the embodiment. Furthermore, the constituent elements may be appropriately combined in different embodiments.
  • the imaging device 100 of the present disclosure can be applied to digital cameras, and to terminal devices with camera functions such as smartphones, tablet terminals, and laptop personal computers.


Abstract

The imaging device (100) includes: a plurality of microlenses (7); an imaging element having a plurality of pixels (2) that are arranged on each of the plurality of microlenses (7) and receive light from the plurality of microlenses (7); and an image generation unit (36) that generates an image based on the light received at the plurality of pixels (2). Each of the plurality of pixels (2) has a first part including a central portion and a second part surrounding the central portion. The image generation unit (36) determines, based on a prescribed condition, whether to combine a first image based on the light received at the first part with a second image based on the light received at the second part. This makes it possible to determine, within a single shot, whether to combine a plurality of images based on light received at different portions of the pixels.

Description

Imaging device and control method
Technical Field
The present disclosure relates to an imaging device and a control method.
Background Art
Conventionally, in video cameras, digital cameras, and the like with a built-in solid-state imaging element, a mechanical aperture mechanism that controls the opening diameter has been provided in order to adjust the amount of light from the subject imaged on the solid-state imaging element or on film.
For example, Patent Document 1 discloses an aperture mechanism in which a drive ring rotated by a stepping motor controls the aperture diameter by rotating a plurality of blades in the same direction.
Prior Art Documents
Patent Documents
Patent Document 1: Patent No. 4618860
Summary of the Invention
Problems to Be Solved by the Invention
However, an imaging device equipped with the mechanical aperture mechanism described in Patent Document 1 inherently cannot generate a plurality of images with different apertures in a single shot. Therefore, determining whether to combine images with different apertures within a single shot was never contemplated.
Accordingly, an object of the present disclosure is to provide an imaging technology that can determine, within a single shot, whether to combine a plurality of images based on light received at different portions of the pixels.
Means for Solving the Problems
An imaging device according to one aspect of the present disclosure includes: a plurality of microlenses; an imaging element having a plurality of pixels that are arranged on each of the plurality of microlenses and receive light from the plurality of microlenses; and an image generation unit that generates an image based on the light received at the plurality of pixels. Each of the plurality of pixels has a first part including a central portion and a second part surrounding the central portion, and the image generation unit determines, based on a prescribed condition, whether to combine a first image based on the light received at the first part with a second image based on the light received at the second part.
A control method according to one aspect of the present disclosure is a control method executed by a processor included in an imaging device, and includes a step of generating an image based on light from a plurality of microlenses received at a plurality of pixels arranged on each of the plurality of microlenses provided in the imaging device. Each of the plurality of pixels has a first part including a central portion and a second part surrounding the central portion, and the generating step determines, based on a prescribed condition, whether to combine a first image based on the light received at the first part with a second image based on the light received at the second part.
Effects of the Invention
According to the present disclosure, it is possible to determine, within a single shot, whether to combine a plurality of images based on light received at different portions of the pixels.
Brief Description of the Drawings
FIG. 1 is a diagram showing an example of the configuration of an imaging device according to an embodiment of the present disclosure.
FIG. 2 is a diagram illustrating how light is received at the main lens, viewed from direction II in FIG. 1, according to the embodiment of the present disclosure.
FIG. 3 is a diagram illustrating how light is received at a pixel, viewed from direction III in FIG. 1, according to the embodiment of the present disclosure.
FIG. 4 is a diagram showing an example of the configuration of the image sensor according to the embodiment of the present disclosure.
FIG. 5 is a diagram showing an example of the functional configuration of the control unit according to the embodiment of the present disclosure.
FIG. 6 is a flowchart showing an example of image generation processing according to the embodiment of the present disclosure.
FIG. 7 is a diagram illustrating an example of a process of electronically adjusting background defocus according to the embodiment of the present disclosure.
FIG. 8 is a diagram showing an example of how light travels due to the wave nature of light according to the embodiment of the present disclosure.
FIG. 9 is a diagram showing an example in which the optical path length from the light exit surface of the microlens to the light entrance surface of the first part of the pixel differs from the optical path length from the exit surface to the light entrance surface of the second part of the pixel, according to the embodiment of the present disclosure.
FIG. 10 is a diagram showing another example in which the optical path length from the light exit surface of the microlens to the light entrance surface of the first part of the pixel differs from the optical path length from the exit surface to the light entrance surface of the second part of the pixel, according to the embodiment of the present disclosure.
FIG. 11(A) is a diagram showing an example of the structure of sub-pixels according to the embodiment of the present disclosure.
FIG. 11(B) is a diagram showing another example of the structure of sub-pixels according to the embodiment of the present disclosure.
Detailed Description of the Embodiments
Hereinafter, preferred embodiments of the present disclosure will be described in detail with reference to the drawings. The embodiments described below are merely specific examples for carrying out the present disclosure and are not to be construed as limiting the present disclosure. To facilitate understanding, the same components are denoted by the same reference signs in the drawings wherever possible, and duplicate descriptions are omitted.
FIG. 1 is a diagram showing an example of the configuration of the imaging device according to the embodiment of the present disclosure. For example, the imaging device 100 includes an image sensor 10 (imaging element), an optical system 20, and a control unit 30.
The image sensor 10 is a device that receives light emitted from the subject S and converts the brightness and darkness of the light and the like into electrical information. For example, the image sensor 10 includes at least a pixel group composed of a plurality of pixels 2 and a control circuit 1; the control circuit 1 controls and drives the pixel group, reads out data based on the light signals accumulated in the pixel group, and outputs the data to the outside of the image sensor 10.
The plurality of pixels 2 in the pixel group are arranged on each of a plurality of microlenses 7 described later, and receive light from the plurality of microlenses 7. The specific configuration of the image sensor 10 will be described with reference to FIG. 4. As described above, the image sensor 10 may include the pixel group, or the optical system 20 may include the pixel group.
The control unit 30 generates an image by analyzing the data and the like based on the data output from the image sensor 10. The specific configuration of the control unit 30 will be described with reference to FIG. 5.
The optical system 20 includes one or more devices that condense the light emitted from the subject S. The optical system 20 includes, for example, a main lens 6, microlenses 7, and a color filter 8.
The main lens 6 functions, for example, as a photographic lens. The main lens 6 has a central area LIA and a peripheral area LOA. How the central area LIA and the peripheral area LOA of the main lens 6 are set is arbitrary and can be set appropriately based on the properties, layout, and the like of each component of the imaging device 100.
The microlens 7 is, for example, a condenser lens. One or more microlenses 7 are arranged above or in front of the pixel group and have the function of condensing the desired light onto each of the plurality of pixels included in the pixel group. Each microlens 7 corresponds to a pixel of the color filter 8 described later (for example, to each of the plurality of pixels included in the pixel group).
The color filter 8 corresponds, for example, to one of the primary colors red, green, and blue. The color filter 8 may instead be a complementary-color filter (for example, yellow, light blue (cyan), and magenta) and may be selected appropriately according to the application. The color filter 8 is, for example, of an on-chip type, but is not limited thereto and may be of an attached type or another form. The color filter 8 may be configured separately from the microlens 7 or may be part of the microlens 7.
FIG. 2 is a diagram illustrating how light is received at the main lens 6, viewed from direction II in FIG. 1, according to the embodiment of the present disclosure. As shown in FIGS. 1 and 2, in the imaging device 100, the light emitted from the subject S passes through, for example, the central area LIA and the peripheral area LOA of the main lens 6, respectively, and enters the microlenses 7.
FIG. 3 is a diagram illustrating how light is received at a pixel, viewed from direction III in FIG. 1, according to the embodiment of the present disclosure. As shown in FIGS. 1 and 3, for example, each of the plurality of pixels 2 has a central sub-pixel 2a (first part) including the central portion and peripheral sub-pixels 2b (second part) surrounding the central portion. The central sub-pixel 2a receives the light signal (main signal PS) of the light passing through the central area LIA of the main lens 6. The peripheral sub-pixels 2b receive the light signal (sub signal SS) of the light passing through the peripheral area LOA of the main lens 6.
FIG. 4 is a diagram showing an example of the configuration of the image sensor according to the embodiment of the present disclosure. The image sensor 10 is, for example, a CMOS image sensor or the like. The image sensor 10 includes, for example, the control circuit 1 shown in FIG. 1, a pixel group of a plurality of two-dimensionally arranged pixels 2, signal lines 3, a readout circuit 4, and a digital signal processing unit (DSP) 5.
The configuration of the plurality of pixels 2 is arbitrary. For example, a plurality of individual pixels may be grouped together to form one pixel group (unit pixel group). As shown in FIG. 4, for example, 4 (2×2) pixels may be grouped into one pixel group. Furthermore, for example, 3 (3×1) pixels, 8 (4×2) pixels, 9 (3×3) pixels, or 16 (4×4) pixels may serve as the unit pixel group.
The plurality of pixels 2 are two-dimensionally arranged; based on control signals from the control circuit 1 and control signals generated by the pixels 2 themselves, the light signals brought to the image sensor 10 are accumulated and read out as data (electrical signals) based on those light signals.
The electrical signals read out from the plurality of pixels 2 are transmitted through the signal lines 3 (typically, column signal lines parallel to the column direction) to the readout circuit 4, where they are analog-to-digital converted.
The digital signal processing unit (DSP) 5 processes the digital signals analog-to-digital converted by the readout circuit 4. The processed digital signals are then transmitted through a data bus to the processor, memory, and the like of the imaging device.
Note that the DSP 5 is not limited to this arrangement; for example, the image sensor 10 may not include the DSP 5, and a subsequent-stage processor (for example, the control unit 30) may include a DSP. Alternatively, the DSP 5 of the image sensor 10 and a DSP included in a subsequent-stage processor or the like may each handle part of the digital signal processing in the image processing. In other words, the location of the DSP in the present disclosure is not limited to a specific location.
图5是示出本公开的实施方式的控制部的功能构成的一个例子的图。如图5所示,控制部30(例如处理器)在功能上,具备解析部32、滤波处理部34、以及图像生成部36。此外,控制部30的上述各部例如能够通过使用拍摄装置100所具备的存储器、硬盘等的存储区域,或处理器执行存储区域中储存的程序来实现。
解析部32基于从图像传感器10输出的数据来解析数据。例如,解析部32解析主信号PS或第一图像(例如,基于主信号PS生成的主图像),获取关于景深以及针对光的灵敏度的信息等。解析部32解析副信号SS或第二图像(例如,基于副信号SS生成的副图像),获取副信号SS是否包含光斑(Flare)成分、以及是否丢失清晰度(Sharpness)等信息。
例如,在生成一个或多个图像的情况下,解析部32也可以获取并解析各图像的位置信息以及与各图像对应的像素的位置信息等。解析部32也可以计算多个图像的相互关系,并指定相关性大的部分(例如合焦的部分)或小的部分(例如失焦的部分)。
滤波处理部34基于解析部32的解析结果,对生成的图像执行滤波处理。例如,滤波处理部34也可以针对基于解析部32获取到的图像的位置信息等生成的第二图像,执行规定的空间滤波处理。
滤波处理部34还可以基于解析部32解析出的多个图像的相互关系,对相关性小的部分执行规定的低通滤波处理。
图像生成部36可以基于在多个像素2接受的光生成一个或多个图像。例如,图像生成部36基于规定的条件,判断是否将基于在图1以及图3所示的中央子像素2a接受的光(主信号PS)的主图像(第一图像)与基于在周边子像素2b接受的光(副信号SS)的副图像(第二图像)合成。此外,如上所述,“第二图像”包含仅基于副信号SS生成的副图像,但不限于此,也可以包含基于主信号PS和副信号SS生成的副图像。
在此,“规定的条件”包含与主图像(第一图像)有关的条件,但不限于此。例如,在判断是否合成多个图像时,作为“规定的条件”,可以用与副图像(第二图像)有关的条件代替与主图像有关的条件,或在其基础上附加参照。而且,作为“规定的条件”,也可以预先设定拍摄装置100是否将主图像(第一图像)与副图像(第二图像)合成。此外,“规定的条件”可以是固定的,也可以基于用户的使用情况等适当变更。
在此,将本实施方式中的、主信号PS(主图像)、副信号SS(副图像)、以及主信号PS及副信号SS的使用方式分类如下(1)至(3)。
<(1)不需要副信号SS(副图像)的情况>
关于该使用方式(1),图像生成部36仅基于主信号PS生成主图像,并将生成的主图像作为最终图像。设想如下场景:解析部32对主信号PS解析景深和针对光的灵敏度中的至少一个,例如检出规定的阈值以上的景深和规定的阈值以下的灵敏度中的至少一个。与景深有关的规定的阈值以及与灵敏度有关的规定的阈值是任意的值,可以是固定的值,也可以是能够根据拍摄装置100的设计等适当变更的值。
在此,与主图像(第一图像)有关的规定的条件例如包含主信号PS(主图像)的景深是否在规定的阈值以上、以及针对主信号PS(主图像)的光的灵敏度是否在规定的阈值以下中的至少一个。例如,当某主信号PS(主图像)的景深在规定的阈值以上时,由于满足该条件,因此,不使用基于副信号SS的副图像,仅使用基于主信号PS的主图像(参照后述的图6中的步骤S3以及S4)。
在该使用方式(1)中,拍摄装置100可以仅使用主信号PS(主图像)来执行景深高、针对光的灵敏度低的摄影。另外,拍摄装置100可以进行所谓的电子光圈处理,不需要传统那样的机械光圈机构。
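使用方式(1)中“是否仅以主信号PS作为最终图像”的判断逻辑,可以用如下Python草案示意。阈值与函数名均为说明用的假设值,实际条件以本公开的“规定的条件”为准:

```python
# 电子光圈处理的判断逻辑的最简草案(阈值为假设值)。
DEPTH_THRESHOLD = 0.8        # 与景深有关的规定的阈值(假设)
SENSITIVITY_THRESHOLD = 0.2  # 与灵敏度有关的规定的阈值(假设)

def needs_secondary(depth_of_field, sensitivity):
    """满足规定条件时仅使用主图像(返回 False 表示不合成副图像)。"""
    if depth_of_field >= DEPTH_THRESHOLD or sensitivity <= SENSITIVITY_THRESHOLD:
        return False  # 景深足够高或灵敏度足够低:仅主图像即为最终图像
    return True       # 否则需要与副图像合成
```

例如,`needs_secondary(0.9, 0.5)` 返回 `False`,对应于图6的步骤S3→S4的分支。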
<(2)使用副信号SS并且副信号SS包含例如光学上不优选的劣化等的情况>
在上述使用方式(1)的情况下,当景深增加时,最终图像所使用的光信号会减少,因此存在信噪比(SNR:Signal to Noise Ratio)恶化的权衡。特别是在被摄体的亮度低的情况下,该权衡的不良影响变得显著,因此,期望如后所述通过使用副信号SS来消除/降低该权衡的不良影响,同时改善最终图像的画质等。此外,画质等的改善例如包含改善景深、SNR、MTF、以及颜色再现性中的至少一个。
例如,使用方式(2)还可以包含以下的小分类(i)、(ii)以及(iii)。在使用方式(2)中,图像生成部36基于与副图像有关的信息(例如,副信号SS的光斑成分或与主透镜6等的非理想的光学特性有关的成分)合成主图像和副图像。
<<(i)副信号SS含有光斑成分的情况>>
当副信号SS包含由于摄像头光学系统内部的不希望的反射等导致的光斑成分时,解析部32检出该光斑成分。图像生成部36基于光斑成分的解析结果重建副图像。图像生成部36通过将重建的副图像添加到主图像来生成最终图像,从而可以改善最终图像的画质等。
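上述“检出光斑成分→重建副图像→与主图像相加”的流程,可以用如下一维信号的Python草案示意。光斑成分的估计方法在此被抽象为已知输入,函数名与数值均为说明用的假设:

```python
# 光斑成分去除与合成的示意草案(以一维信号代替图像)。

def reconstruct_secondary(ss, flare):
    """从副信号中减去解析部估计出的光斑成分,重建副图像。"""
    return [s - f for s, f in zip(ss, flare)]

def compose(ps, ss_clean):
    """将重建后的副图像加到主图像,生成最终图像。"""
    return [p + s for p, s in zip(ps, ss_clean)]

ps = [10, 10, 10]
ss = [5, 9, 5]      # 中间的样本混入了光斑
flare = [0, 4, 0]   # 解析部检出的光斑成分(假设)
final = compose(ps, reconstruct_secondary(ss, flare))  # [15, 15, 15]
```

重建后的副信号与主信号相加,使得混入光斑的位置不再产生局部偏亮。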
<<(ii)副信号SS产生清晰度损失的情况>>
当副信号SS包含与图1所示的主透镜6等的非理想的光学特性(例如,制造上的限制以及制造上的偏差)有关的成分(例如发生清晰度的损失)时,解析部32检出该成分。图像生成部36基于与非理想的光学特性有关的成分的解析结果重建副图像。图像生成部36通过将重建的副图像添加到主图像来生成最终图像,从而可以改善最终图像的画质等。
<<(iii)通过减少SNR下降的权衡来增加景深>>
在担心主信号PS的SNR下降的摄影条件下,通过在对副信号SS进行清晰度处理后将其加入到主信号PS,可以实现SNR的恢复。此外,例如,清晰度处理可以包含非锐化掩模、反卷积处理、将主信号PS及失焦量作为参考信息的优化处理、以及通过神经网络的处理等。
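作为清晰度处理之一的非锐化掩模(unsharp mask),可以用如下一维信号的Python草案示意。模糊核(3点移动平均)与增益均为说明用的假设值:

```python
# 非锐化掩模的一维示意草案:对副信号做清晰度处理后再加到主信号。

def box_blur(x):
    """简单的3点移动平均(边界重复)。"""
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3 for i in range(n)]

def unsharp(x, amount=1.0):
    """原信号 + amount ×(原信号 − 模糊信号),增强边缘成分。"""
    blurred = box_blur(x)
    return [v + amount * (v - b) for v, b in zip(x, blurred)]

def add_signals(ps, ss):
    """将清晰度处理后的副信号加入主信号。"""
    return [p + s for p, s in zip(ps, ss)]
```

例如 `unsharp([0, 0, 3, 0, 0])` 会在峰值两侧产生负的过冲,这正是边缘增强的表现。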
图像的重建的方法是任意的。例如,图像的重建方法包含如下方法:对光学特性进行建模,使用解析的反函数或使用作为查表准备的反函数。图像的重建方法也可以包含如下方法:对光学特性进行建模,单独计算点扩散函数(PSF)(Point Spread Function),并进行反卷积处理。图像的重建方法还可以包含如下方法:在一定程度上简化物理光学特性后对其进行建模,以执行正则化、归一化或优化,或者使用AI技术(例如,Deep Learning;深度学习)生成最终图像。
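其中“使用作为查表准备的反函数”的思路,可以用如下Python草案示意:将建模的(单调)光学响应离散采样为查找表,并用二分查找近似其反函数。响应函数与采样密度均为说明用的假设:

```python
# 基于查找表(LUT)的反函数的最简草案。
import bisect

def f(x):
    """建模的单调光学响应(仅作示例,实际应由光学特性建模得到)。"""
    return x * x

xs = [i / 1000 for i in range(1001)]  # 在[0, 1]上等间隔采样
ys = [f(x) for x in xs]               # f 单调递增,因此 ys 升序,可二分查找

def f_inverse(y):
    """在查找表中检索 y,返回对应的输入值,近似 f 的反函数。"""
    i = bisect.bisect_left(ys, y)
    return xs[min(i, len(xs) - 1)]

f_inverse(0.25)  # ≈ 0.5
```

反卷积处理、正则化/优化等其他重建方法同样可以基于这种预先建模的响应进行。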
<(3)将主信号PS以及副信号SS中的至少一个用作附加信息来改变主信号PS以及副信号SS的情况>
例如,使用方式(3)包含参照图7来说明的控制图像的背景散焦(Bokeh)的方法等。
根据本实施方式,拍摄装置100具备:多个微透镜7;拍摄元件,其具备多个像素2,多个像素2配置在多个微透镜7中的每一个上,并接受来自多个微透镜7的光;以及图像生成部36,其基于在多个像素2接受的光来生成图像。在拍摄装置100中,多个像素2分别具有包含中央部分的中央子像素2a、以及包围中央部分的周边子像素2b。在拍摄装置100中,图像生成部36基于与基于在中央子像素2a接受的光的主图像有关的规定的条件,判断是否将主图像与基于在周边子像素2b接受的光的副图像合成。
因此,在一次摄影中可以判断是否将基于在像素的不同部分接受的光的多个图像合成。另外,拍摄装置100与现有的拍摄装置不同,不需要机械透镜的光圈功能,例如,在一次摄影中,在同一时刻,通过电子处理可以获得多个不同光圈数值的图像。而且,拍摄装置100例如在摄影后可以使用获取到的多个不同光圈数值的图像进行图像处理,还可以分别以电子方式进行景深的变更(例如,改变深或浅)以及入射光量的变更中的至少一个。
图6是示出本公开的实施方式的图像生成处理的一个例子的流程图。如图6所示,拍摄装置100在图1以及图3所示的中央子像素2a接收主信号PS,在周边子像素2b接收副信号SS(步骤S1)。拍摄装置100基于主信号PS生成主图像(第一图像)(步骤S2)。拍摄装置100判断主图像是否满足规定的条件(步骤S3)。在满足与主图像有关的规定的条件的情况下(步骤S3中为No的情况下),进入步骤S4,拍摄装置100将生成的主图像作为最终图像(步骤S4)。
另一方面,在主图像不满足规定的条件的情况下(步骤S3中Yes的情况下)进入步骤S5。拍摄装置100基于副信号SS生成副图像(第二图像)(步骤S5)。拍摄装置100基于生成的主图像和副图像生成最终图像(第三图像)(步骤S6)。
此外,实施方式的图像生成处理的各步骤的顺序不受上述限制,可以适当地进行变更。例如,基于副信号SS的副图像的生成(步骤S5)可以与在步骤S2中基于主信号PS的主图像的生成一起执行。在这种情况下,在步骤S3中“Yes”的情况下,省略步骤S5,执行步骤S6。另一方面,在步骤S3中“No”的情况下,不使用基于已生成的副信号SS的副图像,仅使用主图像作为最终图像。
图7是说明本公开的实施方式的、电子调整背景散焦的处理的一个例子的图。图5所示的解析部32例如解析基于主信号PS的主图像(Primary image)、以及基于主信号PS和副信号SS生成的副图像(Secondary image)。解析部32对主图像以及副图像执行空间频率分析。例如,解析部32计算主图像和副图像的相互关系,指定相关性大的部分(例如合焦的部分)或小的部分(例如失焦的部分)。
解析部32基于对主图像以及副图像的解析结果,生成失焦图。并且,滤波处理部34基于解析部32生成的失焦图,例如对副图像中相关性小的部分执行低通滤波处理。这样,拍摄装置100可以通过主图像与副图像的合成生成景深浅的最终图像。
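“按失焦图对相关性小的部分选择性施加低通滤波”的处理,可以用如下一维信号的Python草案示意。滤波核、失焦图的取值与阈值均为说明用的假设:

```python
# 基于失焦图的选择性低通滤波的最简草案(一维信号)。

def box_blur(x):
    """简单的3点移动平均(边界重复),充当低通滤波器。"""
    n = len(x)
    return [(x[max(i - 1, 0)] + x[i] + x[min(i + 1, n - 1)]) / 3 for i in range(n)]

def selective_lowpass(image, defocus_map, threshold=0.5):
    """失焦量大(相关性小)的样本用低通结果替换,实现背景散焦。"""
    blurred = box_blur(image)
    return [b if d > threshold else v
            for v, b, d in zip(image, blurred, defocus_map)]
```

合焦部分(失焦量小)保持原样,失焦部分被模糊,从而得到景深浅的视觉效果。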
图8是示出本公开的实施方式的、光的波动性引起的光的行进情况的一个例子的图。例如,在中央子像素2a以及周边子像素2b的尺寸小的情况下,由于光的波动性,在中央子像素2a和周边子像素2b处的光的分离能力可能会下降,子像素中的聚光效率可能会下降。
具体地说,如图8所示,入射到中央子像素2a与周边子像素2b之间的边界B的光通常不会全部汇聚到中央子像素2a或周边子像素2b中的一个上。实际上,由于光的波动性,该光被分配并汇聚至中央子像素2a和周边子像素2b这两者。
因此,在本实施方式中,如图9以及图10所示,拍摄装置100构成为:与从多个微透镜7的光的出射面S1到中央子像素2a的光的光电转换面S3(第一入射面)的光线LR有关的有效光路长度(第一光路长度),不同于与从出射面S1到周边子像素2b的光的光电转换面S5(第二入射面)的光线LR有关的有效光路长度(第二光路长度)。根据该构成,能够提高中央子像素2a与周边子像素2b之间的边界部的光线分离能力,因此能够提高子像素中的聚光效率。
如图9所示,在拍摄装置100中,在出射面S1与中央子像素2a的光的光电转换面S3以及周边子像素2b的光的光电转换面S5之间(例如彩色滤光片8的像素2侧的面)配置有凸透镜9。根据凸透镜9的性质(例如,形状以及折射率),可以调整凸透镜9中的光的行进情况(例如行进方向)。因而,通过使用该凸透镜9,可以使从出射面S1到光电转换面S3的有效光路长度与从出射面S1到光电转换面S5的有效光路长度不同。只要可以将光的行进情况调整到能够提高中央子像素2a与周边子像素2b之间的边界部的光线分离能力的程度,凸透镜9的性质是任意的。
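作为补充,光学中一般将有效光路长度(光程)定义为各介质的折射率与几何路径长度之积的和(这是一般光学中的定义,并非本公开特有的公式):

```latex
\mathrm{OPL} = \sum_{i} n_i \, d_i
```

其中 $n_i$ 为光线经过的第 $i$ 段介质的折射率,$d_i$ 为该段的几何长度。由此可以理解,凸透镜9通过改变光线所经过介质的 $n_i$ 或 $d_i$,即可使到光电转换面S3与到光电转换面S5的有效光路长度互不相同。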
此外,凸透镜9的配置位置是任意的,只要位于出射面S1与中央子像素2a的光的光电转换面S3以及周边子像素2b的光的光电转换面S5之间即可。例如,凸透镜9可以配置在微透镜7的出射面S1上,也可以配置在彩色滤光片8与像素2之间。
如图10所示,在拍摄装置100中,以使得从出射面S1到光电转换面S3的第一距离与从出射面S1到光电转换面S5、S7的第二距离不同的方式,配置像素2(中央子像素2a以及周边子像素2b)。例如,在拍摄装置100中,以使光电转换面S3与光电转换面S5、S7为不同的高度的方式,配置像素2(中央子像素2a以及周边子像素2b)。
更具体地说,以使光电转换面S3相比光电转换面S5、S7更靠近微透镜7的出射面S1的方式配置像素2。此外,光电转换面S5和光电转换面S7可以是相同的高度,也可以是不同的高度。
图11是示出本公开的实施方式的子像素的构成的一个例子的图。例如,本实施方式中的单位像素UP中的多个像素2分别由与各个颜色(例如红色、蓝色以及绿色)对应的子像素构成。子像素的构成是任意的,例如,如图11的(A)所示,多个像素2分别具备中央子像素2a和周边子像素2b。
如图11的(B)所示,多个像素2也可以分别具备中央子像素2a、周边子像素2b、以及周边子像素2c。根据该构成,可以在中央子像素2a会聚穿过图1所示的主透镜6的中央区域LIA的光,同时在左侧的周边子像素2b和右侧的周边子像素2c探测失焦信息。
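利用左右周边子像素2b、2c的信号错位量来探测失焦信息的思路(与相位检测类似),可以用如下Python草案示意。相关性度量与搜索范围均为说明用的假设,并非本公开的实际实现:

```python
# 由左右子像素信号的错位量估计失焦量的最简草案(一维信号)。

def argmax_correlation(left, right, max_shift=2):
    """返回使左右信号对齐度最高的位移量(错位量与失焦量相关)。"""
    def score(shift):
        # 以负的平方误差和作为对齐度(越大越对齐)
        pairs = [(left[i], right[i + shift]) for i in range(len(left))
                 if 0 <= i + shift < len(right)]
        return -sum((a - b) ** 2 for a, b in pairs)
    return max(range(-max_shift, max_shift + 1), key=score)

left  = [0, 1, 5, 1, 0]
right = [0, 0, 1, 5, 1]   # 右侧信号相对左侧错位了1个样本
argmax_correlation(left, right)  # → 1
```

合焦时左右信号几乎重合(错位量约为0),失焦越大,错位量越大。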
此外,多个像素2中的每一个可以具备三个以上的周边子像素。另外,多个像素2中的每一个除了具备一个或多个周边子像素外,还可以具备多个中央子像素。
另外,上述实施方式用于使本发明的理解变得容易,并不限定本发明进行解释。本发明可以在不脱离其主旨的情况下进行变更/改良,并且本发明还包括其等价物。另外,本发明可以通过上述实施方式中公开的多个构成要素的适当组合来形成各种公开。例如,也可以从实施方式所示的全部构成要素中删除几个构成要素。进而,也可以在不同的实施方式中适当组合构成要素。
本公开的拍摄装置100可适用于数码摄像头、以及具有摄像头功能的智能手机、平板终端、以及膝上型个人计算机等终端装置。
附图标记说明:
1…控制电路,2…像素,3…信号线,4…读出电路,5…数字信号处理部(DSP),6…主透镜,7…微透镜,8…彩色滤光片,10…图像传感器,20…光学系统,30…控制部,32…解析部,34…滤波处理部,36…图像生成部,100…拍摄装置,S…被摄体

Claims (8)

  1. 一种拍摄装置,其特征在于,具备:
    多个微透镜;
    拍摄元件,其具备多个像素,该多个像素配置在所述多个微透镜中的每一个上,并接受来自所述多个微透镜的光;以及
    图像生成部,其基于在所述多个像素接受的光来生成图像,
    所述多个像素分别具有包含中央部分的第一部分、以及包围所述中央部分的第二部分,
    所述图像生成部基于规定的条件,判断是否将基于在所述第一部分接受的光的第一图像与基于在所述第二部分接受的光的第二图像合成。
  2. 根据权利要求1所述的拍摄装置,其中,
    所述图像生成部基于所述规定的条件,使用与所述第二图像相关的信息将所述第一图像与所述第二图像合成。
  3. 根据权利要求1所述的拍摄装置,其中,
    所述规定的条件包含关于与所述第一图像相关的景深和针对光的灵敏度中的至少一个的条件。
  4. 根据权利要求1所述的拍摄装置,其中,
    从所述多个微透镜的光的出射面到所述第一部分的光的第一入射面的第一光路长度与从所述出射面到所述第二部分的光的第二入射面的第二光路长度是不同的。
  5. 根据权利要求4所述的拍摄装置,其中,
    在所述出射面与所述第一入射面以及所述第二入射面之间配置有凸透镜。
  6. 根据权利要求4所述的拍摄装置,其中,
    从所述出射面到所述第一入射面的第一距离与从所述出射面到所述第二入射面的第二距离是不同的。
  7. 一种控制方法,其特征在于,是拍摄装置中包含的处理器所执行的控制方法,其包含:
    生成步骤,其基于在所述拍摄装置所具备的多个微透镜中的每一个上配置的多个像素接受的、来自所述多个微透镜的光来生成图像,
    所述多个像素分别具有包含中央部分的第一部分、以及包围所述中央部分的第二部分,
    所述生成步骤是基于规定的条件,判断是否将基于在所述第一部分接受的光的第一图像与基于在所述第二部分接受的光的第二图像合成。
  8. 一种终端,其特征在于,
    具备权利要求1~6中任意一项所述的拍摄装置。
PCT/CN2022/107202 2022-07-21 2022-07-21 拍摄装置以及控制方法 WO2024016288A1 (zh)

Priority Applications (3)

Application Number Priority Date Filing Date Title
CN202280002739.9A CN117769841A (zh) 2022-07-21 2022-07-21 拍摄装置以及控制方法
JP2022550177A JP2024530080A (ja) 2022-07-21 2022-07-21 撮像装置、及び、制御方法
PCT/CN2022/107202 WO2024016288A1 (zh) 2022-07-21 2022-07-21 拍摄装置以及控制方法

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/CN2022/107202 WO2024016288A1 (zh) 2022-07-21 2022-07-21 拍摄装置以及控制方法

Publications (1)

Publication Number Publication Date
WO2024016288A1 (zh)

Family

ID=89616717

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2022/107202 WO2024016288A1 (zh) 2022-07-21 2022-07-21 拍摄装置以及控制方法

Country Status (3)

Country Link
JP (1) JP2024530080A (zh)
CN (1) CN117769841A (zh)
WO (1) WO2024016288A1 (zh)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2009069704A (ja) * 2007-09-14 2009-04-02 Ricoh Co Ltd 撮像装置、合焦情報取得装置、合焦装置、撮像方法、合焦情報取得方法および合焦方法
JP4618860B2 (ja) 2000-10-13 2011-01-26 日本電産コパル株式会社 カメラ用絞り機構
CN105594197A (zh) * 2013-09-27 2016-05-18 富士胶片株式会社 摄像装置及摄像方法
JP2017034722A (ja) * 2016-11-04 2017-02-09 株式会社ニコン 画像処理装置および撮像装置
CN113596431A (zh) * 2015-05-08 2021-11-02 佳能株式会社 图像处理设备、摄像设备、图像处理方法和存储介质
CN114208145A (zh) * 2019-08-08 2022-03-18 麦克赛尔株式会社 摄像装置和方法

Also Published As

Publication number Publication date
CN117769841A (zh) 2024-03-26
JP2024530080A (ja) 2024-08-16

Legal Events

Date Code Title Description
WWE Wipo information: entry into national phase

Ref document number: 2022550177

Country of ref document: JP

Ref document number: 202280002739.9

Country of ref document: CN

WWE Wipo information: entry into national phase

Ref document number: 202347079442

Country of ref document: IN