WO2013166215A1 - CAMERA MODULES PATTERNED WITH π FILTER GROUPS

CAMERA MODULES PATTERNED WITH π FILTER GROUPS

Info

Publication number
WO2013166215A1
Authority
WO
WIPO (PCT)
Prior art keywords
camera
array
cameras
color
camera module
Prior art date
Application number
PCT/US2013/039155
Other languages
French (fr)
Inventor
Semyon Nisenzon
Kartik Venkataraman
Original Assignee
Pelican Imaging Corporation
Priority date
Filing date
Publication date
Application filed by Pelican Imaging Corporation filed Critical Pelican Imaging Corporation
Priority to JP2015510443A (published as JP2015521411A)
Priority to EP13785220.8A (published as EP2845167A4)
Priority to CN201380029203.7A (published as CN104335246B)
Publication of WO2013166215A1


Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 3/00 Geometric image transformation in the plane of the image
    • G06T 3/40 Scaling the whole image or part thereof
    • G06T 3/4015 Demosaicing, e.g. colour filter array [CFA], Bayer pattern
    • H ELECTRICITY
    • H04 ELECTRIC COMMUNICATION TECHNIQUE
    • H04N PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/10 Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
    • H04N 25/11 Arrangement of colour filter arrays [CFA]; Filter mosaics
    • H04N 25/13 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
    • H04N 25/134 Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
    • H04N 13/00 Stereoscopic video systems; Multi-view video systems; Details thereof
    • H04N 13/20 Image signal generators
    • H04N 13/204 Image signal generators using stereoscopic image cameras
    • H04N 13/239 Image signal generators using stereoscopic image cameras using two 2D image sensors having a relative position equal to or related to the interocular distance
    • H04N 13/271 Image signal generators wherein the generated image signals comprise depth maps or disparity maps
    • H04N 23/00 Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/10 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
    • H04N 23/12 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with one sensor only
    • H04N 23/13 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths with multiple sensors
    • H04N 23/45 Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from two or more image sensors being of different type or operating in different modes, e.g. with a CMOS sensor for moving images in combination with a charge-coupled device [CCD] for still images
    • H04N 2209/00 Details of colour television systems
    • H04N 2209/04 Picture signal generators
    • H04N 2209/041 Picture signal generators using solid-state devices
    • H04N 2209/048 Picture signal generators using solid-state devices having several pick-up sensors

Definitions

  • the present invention relates generally to digital cameras and more specifically to filter patterns utilized in camera modules of array cameras.
  • Conventional digital cameras typically include a single focal plane with a lens stack.
  • the focal plane includes an array of light sensitive pixels and is part of a sensor.
  • the lens stack creates an optical channel that forms an image of a scene upon the array of light sensitive pixels in the focal plane.
  • Each light sensitive pixel can generate image data based upon the light incident upon the pixel.
  • an array of color filters is typically applied to the pixels in the focal plane of the camera's sensor.
  • Typical color filters can include red, green and blue color filters.
  • a demosaicing algorithm can be used to interpolate a set of complete red, green and blue values for each pixel of image data captured by the focal plane given a specific color filter pattern.
  • One example of a camera color filter pattern is the Bayer filter pattern.
  • the Bayer filter pattern describes a specific pattern of red, green and blue color filters that results in 50% of the pixels in a focal plane capturing green light, 25% capturing red light and 25% capturing blue light.
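Under this pattern, a 2 x 2 tile repeats across the focal plane; a minimal sketch that generates the mosaic and confirms the 50/25/25 split (assuming a GRBG tile ordering, one of several common conventions):

```python
import numpy as np

def bayer_mosaic(rows, cols):
    """Tile a 2 x 2 GRBG Bayer pattern across a focal plane."""
    tile = np.array([["G", "R"],
                     ["B", "G"]])
    return np.tile(tile, (rows // 2, cols // 2))

cfa = bayer_mosaic(4, 4)
for color in "RGB":
    share = np.count_nonzero(cfa == color) / cfa.size
    print(color, share)   # G -> 0.5, R -> 0.25, B -> 0.25
```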
  • binocular disparity (or parallax) provides information that can be used to calculate depth in the visual scene, providing a major means of depth perception.
  • the impression of depth associated with stereoscopic depth perception can also be obtained under other conditions, such as when an observer views a scene with only one eye while moving.
  • the observed parallax can be utilized to obtain depth information for objects in the scene. Similar principles in machine vision can be used to gather depth information.
  • two cameras separated by a distance can take pictures of the same scene and the captured images can be compared by shifting the pixels of two or more images to find parts of the images that match.
  • the amount an object shifts between different camera views is called the disparity, which is inversely proportional to the distance to the object.
  • a disparity search that detects the shift of an object in multiple images can be used to calculate the distance to the object based upon the baseline distance between the cameras and the focal length of the cameras involved.
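For a rectified two-camera pair, this relationship is commonly written as depth = (baseline x focal length) / disparity; a minimal sketch with illustrative numbers (the baseline, focal length, and disparity values below are hypothetical):

```python
def depth_from_disparity(baseline_m, focal_length_px, disparity_px):
    """Distance to an object from the disparity observed between two views.

    Depth is inversely proportional to disparity: nearby objects shift
    more between viewpoints than distant ones.
    """
    if disparity_px <= 0:
        return float("inf")  # zero disparity -> object at infinity
    return baseline_m * focal_length_px / disparity_px

# Hypothetical values: 2 cm baseline, 1000 px focal length, 10 px shift.
print(depth_from_disparity(0.02, 1000.0, 10.0))  # -> 2.0 metres
```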
  • the approach of using two or more cameras to generate stereoscopic three-dimensional images is commonly referred to as multi-view stereo.
  • a pixel that captures image data concerning a portion of a scene, which is not visible in images captured of the scene from other viewpoints, can be referred to as an occluded pixel.
  • FIGS. 1A and 1B illustrate the principles of parallax and occlusion.
  • FIG. 1A depicts the image 100 captured by a first camera having a first field of view
  • FIG. 1B depicts the image 102 captured by a second adjacent camera having a second field of view.
  • a foreground object 104 appears slightly to the right of the background object 106.
  • the foreground object 104 appears shifted to the left hand side of the background object 106.
  • the disparity introduced by the different fields of view of the two cameras is equal to the difference between the location of the foreground object 104 in the image captured by the first camera (indicated in the image captured by the second camera by ghost lines 108) and its location in the image captured by the second camera.
  • the distance from the two cameras to the foreground object can be obtained by determining the disparity of the foreground object in the two captured images, and this is described in U.S. Patent Application Serial No. 61/780,906, entitled “Systems and Methods for Parallax Detection and Correction in Images Captured Using Array Cameras.”
  • the disclosure of U.S. Patent Application Serial No. 61/780,906 is incorporated by reference herein in its entirety.
  • the pixels contained within the ghost lines 108 in the image 102 can be considered to be occluded pixels (i.e. the pixels capture image data from a portion of the scene that is visible in the image 102 captured by the second camera and is not visible in the image 100 captured by the first camera).
  • the pixels of the foreground object 104 can be referred to as occluding pixels as they capture portions of the scene that occlude the pixels contained within the ghost lines 108 in the image 102.
  • an array camera module includes: an M x N imager array including a plurality of focal planes, each focal plane including an array of light sensitive pixels; an M x N optic array of lens stacks, where each lens stack corresponds to a focal plane, and where each lens stack forms an image of a scene on its corresponding focal plane; where each pairing of a lens stack and its corresponding focal plane thereby defines a camera; where at least one row in the M x N array of cameras includes at least one red color camera, at least one green color camera, and at least one blue color camera; and where at least one column in the M x N array of cameras includes at least one red color camera, at least one green color camera, and at least one blue color camera.
  • M and N are each greater than two and at least one of M and N is even; color filters are implemented within the cameras in the array camera module such that the array camera module is patterned with at least one π filter group including: a 3 x 3 array of cameras including: a reference camera at the center of the 3 x 3 array of cameras; two red color cameras located on opposite sides of the 3 x 3 array of cameras; two blue color cameras located on opposite sides of the 3 x 3 array of cameras; and four green color cameras surrounding the reference camera.
  • each of the four green color cameras surrounding the reference camera is disposed at a corner location of the 3 x 3 array of cameras.
  • the reference camera is a green color camera.
  • the reference camera is one of: a camera that incorporates a Bayer filter, a camera that is configured to capture infrared light, and a camera that is configured to capture ultraviolet light.
  • each of the two red color cameras is located at a corner location of the 3 x 3 array of cameras, and each of the two blue color cameras is located at a corner location of the 3 x 3 array of cameras.
  • At least one color filter is implemented on the imager array.
  • At least one color filter is implemented on a lens stack.
  • a 3 x 3 array camera module includes: a 3 x 3 imager array including a 3 x 3 arrangement of focal planes, each focal plane including an array of light sensitive pixels; a 3 x 3 optic array of lens stacks, where each lens stack corresponds to a focal plane, and where each lens stack forms an image of a scene on its corresponding focal plane; where each pairing of a lens stack and its corresponding focal plane thereby defines a camera; where the 3 x 3 array of cameras includes: a reference camera at the center of the 3 x 3 array of cameras; two red color cameras located on opposite sides of the 3 x 3 array of cameras; two blue color cameras located on opposite sides of the 3 x 3 array of cameras; and four green color cameras, each located at a corner location of the 3 x 3 array of cameras; where each of the color cameras is achieved using a color filter.
  • At least one color filter is implemented on the imager array to achieve a color camera.
  • At least one color filter is implemented within a lens stack to achieve a color camera.
  • the reference camera is a green color camera.
  • the reference camera is one of: a camera that incorporates a Bayer filter, a camera that is configured to capture infrared light, and a camera that is configured to capture ultraviolet light.
  • a method of patterning an array camera module with at least one π filter group includes: evaluating whether an imager array of M x N focal planes, where each focal plane comprises an array of light sensitive pixels, includes any defective focal planes; assembling an M x N array camera module using: the imager array of M x N focal planes; an M x N optic array of lens stacks, where each lens stack corresponds with a focal plane, where the M x N array camera module is assembled so that: each lens stack and its corresponding focal plane define a camera; color filters are implemented within the array camera module such that the array camera module is patterned with at least one π filter group including: a 3 x 3 array of cameras including: a reference camera at the center of the 3 x 3 array of cameras; two red color cameras located on opposite sides of the 3 x 3 array of cameras; two blue color cameras located on opposite sides of the 3 x 3 array of cameras; and four green color cameras surrounding the reference camera; and where the array camera module is patterned with the at least one π filter group including:
  • At least one color filter is implemented on the imager array.
  • At least one color filter is implemented within a lens stack.
  • the reference camera is a green color camera.
  • an array camera module includes: an imager array comprising M x N focal planes, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each active focal plane is contained within a region of the imager array that does not contain pixels from another focal plane; an optic array of M x N lens stacks, where an image is formed on each focal plane by a separate lens stack in the optic array of lens stacks; wherein the imager array and the optic array of lens stacks form an M x N array of cameras that are configured to independently capture an image of a scene; where at least one row in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera; and where at least one column in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera.
  • the red color camera is a camera that captures image data including electromagnetic waves having a wavelength within the range of 620 nm to 750 nm;
  • the green color camera is a camera that captures image data including electromagnetic waves having a wavelength within the range of 495 nm to 570 nm;
  • the blue color camera is a camera that captures image data including electromagnetic waves having a wavelength within the range of 450 nm to 495 nm.
  • the optics of each camera within the array camera module are configured so that each camera has a field of view of a scene that is shifted with respect to the fields of view of the other cameras so that each shift of the field of view of each camera with respect to the fields of view of the other cameras is configured to include a unique sub-pixel shifted view of the scene.
  • M and N are each greater than two and at least one of M and N is even; color filters are implemented within the cameras in the array camera module such that the array camera module is patterned with at least one π filter group including: a 3 x 3 array of cameras including: a reference camera at the center of the 3 x 3 array of cameras; two red color cameras located on opposite sides of the 3 x 3 array of cameras; two blue color cameras located on opposite sides of the 3 x 3 array of cameras; and four green color cameras surrounding the reference camera.
  • each of the four green color cameras surrounding the reference camera is disposed at a corner location of the 3 x 3 array of cameras.
  • M is four; N is four; the first row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a blue color camera, a green color camera, and a red color camera; the second row of cameras of the 4 x 4 array camera module includes, in the following order, a red color camera, a green color camera, a red color camera, and a green color camera; the third row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a blue color camera, a green color camera, and a blue color camera; and the fourth row of cameras of the 4 x 4 array camera module includes, in the following order, a blue color camera, a green color camera, a red color camera, and a green color camera.
  • the reference camera within the at least one π filter group is a green color camera.
  • the reference camera within the at least one π filter group is a camera that incorporates a Bayer filter.
  • the reference camera is one of: a camera that incorporates a Bayer filter, a camera that is configured to capture infrared light, and a camera that is configured to capture ultraviolet light.
  • each of the two red color cameras is located at a corner location of the 3 x 3 array of cameras, and wherein each of the two blue color cameras is located at a corner location of the 3 x 3 array of cameras.
  • At least one color filter is implemented on the imager array.
  • a 3 x 3 array camera module includes: a 3 x 3 imager array including a 3 x 3 arrangement of focal planes, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each active focal plane is contained within a region of the imager array that does not contain pixels from another focal plane; a 3 x 3 optic array of lens stacks, where an image is formed on each focal plane by a separate lens stack in the optic array of lens stacks; where the imager array and the optic array of lens stacks form a 3 x 3 array of cameras that are configured to independently capture an image of a scene; where the 3 x 3 array of cameras includes: a reference camera at the center of the 3 x 3 array of cameras; two red color cameras located on opposite sides of the 3 x 3 array of cameras; two blue color cameras located on opposite sides of the 3 x 3 array of cameras; and four green color cameras, each located at a corner location of the 3 x 3 array of cameras.
  • At least one color filter is implemented on the imager array to achieve a color camera.
  • At least one color filter is implemented within a lens stack to achieve a color camera.
  • the reference camera is a green color camera.
  • the reference camera is one of: a camera that incorporates a Bayer filter, a camera that is configured to capture infrared light, and a camera that is configured to capture ultraviolet light.
  • an array camera module includes: an imager array comprising M x N focal planes, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each active focal plane is contained within a region of the imager array that does not contain pixels from another focal plane; an optic array of M x N lens stacks, where an image is formed on each focal plane by a separate lens stack in the optic array of lens stacks; wherein the imager array and the optic array of lens stacks form an M x N array of cameras that are configured to independently capture an image of a scene; and wherein at least either one row or one column in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera.
  • an array camera includes: an array camera module, including: an imager array comprising M x N focal planes, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each active focal plane is contained within a region of the imager array that does not contain pixels from another focal plane; an optic array of M x N lens stacks, where an image is formed on each focal plane by a separate lens stack in the optic array of lens stacks; where the imager array and the optic array of lens stacks form an M x N array of cameras that are configured to independently capture an image of a scene; where at least one row in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera; and where at least one column in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera; and a processor that includes an image processing pipeline, the image processing pipeline including: a parallax detection module
  • FIGS. 1A and 1B illustrate the principles of parallax and occlusion as they pertain to image capture, and which can be addressed in accordance with embodiments of the invention.
  • FIG. 2 illustrates an array camera with a camera module and processor in accordance with an embodiment of the invention.
  • FIG. 3 illustrates a camera module with an optic array and imager array in accordance with an embodiment of the invention.
  • FIG. 4 illustrates an image processing pipeline in accordance with an embodiment of the invention.
  • FIG. 5A conceptually illustrates a 3 x 3 camera module patterned with a π filter group where red cameras are arranged horizontally and blue cameras are arranged vertically in accordance with an embodiment of the invention.
  • FIG. 5B conceptually illustrates a 3 x 3 camera module patterned with a π filter group where red cameras are arranged vertically and blue cameras are arranged horizontally in accordance with an embodiment of the invention.
  • FIG. 5C conceptually illustrates a 3 x 3 camera module patterned with a π filter group where red cameras and blue cameras are arranged at the corner locations of the 3 x 3 camera module in accordance with an embodiment of the invention.
  • FIGS. 5D and 5E conceptually illustrate a number of 3 x 3 camera modules patterned with a π filter group.
  • FIG. 6 conceptually illustrates a 4 x 4 camera module patterned with two π filter groups in accordance with an embodiment of the invention.
  • FIG. 7 conceptually illustrates a 4 x 4 camera module patterned with two π filter groups with two cameras that could each act as a reference camera in accordance with an embodiment of the invention.
  • FIG. 8A illustrates a process for testing an imager array for defective focal planes to create a camera module that reduces the effect of any defective focal plane in accordance with an embodiment of the invention.
  • FIG. 8B conceptually illustrates a 4 x 4 camera module patterned with two π filter groups where a faulty focal plane causes a loss of red coverage around possible reference cameras.
  • FIG. 8C conceptually illustrates the 4 x 4 camera module patterned with a different arrangement of π filter groups relative to FIG. 8B, where the faulty focal plane does not result in a loss of red coverage around possible reference cameras in accordance with an embodiment of the invention.
  • FIG. 9A conceptually illustrates use of a subset of cameras to produce a left virtual viewpoint for an array camera operating in 3D mode on a 4 x 4 camera module patterned with π filter groups in accordance with an embodiment of the invention.
  • FIG. 9B conceptually illustrates use of a subset of cameras to produce a right virtual viewpoint for an array camera operating in 3D mode on a 4 x 4 camera module patterned with π filter groups in accordance with an embodiment of the invention.
  • FIGS. 9C and 9D conceptually illustrate array camera modules that employ π filter groups to capture stereoscopic images with viewpoints that correspond to the viewpoints of reference cameras within the camera array.
  • FIG. 10 conceptually illustrates a 4 x 4 camera module patterned with π filter groups where nine cameras are utilized to capture image data used to synthesize frames of video in accordance with an embodiment of the invention.
  • FIG. 11 is a flow chart illustrating a process for generating color filter patterns including π filter groups in accordance with embodiments of the invention.
  • FIGS. 12A - 12D illustrate a process for generating a color filter pattern including π filter groups for a 5 x 5 array of cameras in accordance with embodiments of the invention.
  • FIGS. 13A - 13D illustrate a process for generating a color filter pattern including π filter groups for a 4 x 5 array of cameras in accordance with embodiments of the invention.
  • FIG. 14 illustrates a 7 x 7 array of cameras patterned using π filter groups in accordance with embodiments of the invention.
  • camera modules of an array camera are patterned with one or more π filter groups.
  • the term patterned here refers to the use of specific color filters in individual cameras within the camera module so that the cameras form a pattern of color channels within the array camera.
  • the term color channel or color camera can be used to refer to a camera that captures image data within a specific portion of the spectrum and is not necessarily limited to image data with respect to a specific color.
  • a 'red color camera' is a camera that captures image data that corresponds with electromagnetic waves (i.e., within the electromagnetic spectrum) that humans conventionally perceive as red, and similarly for 'blue color cameras', 'green color cameras', etc.
  • a red color camera may capture image data corresponding with electromagnetic waves having wavelengths between approximately 620 nm and 750 nm
  • a green color camera may capture image data corresponding with electromagnetic waves having wavelengths between approximately 495 nm and 570 nm
  • a blue color camera may capture image data corresponding with electromagnetic waves having wavelengths between approximately 450 nm and 495 nm.
  • the portions of the visible light spectrum that are captured by blue color cameras, green color cameras and red color cameras can depend upon the requirements of a specific application.
  • Bayer camera can be used to refer to a camera that captures image data using the Bayer filter pattern on the image plane.
  • a color channel can include a camera that captures infrared light, ultraviolet light, extended color and any other portion of the visible spectrum appropriate to a specific application.
  • π filter group refers to a 3 x 3 group of cameras including a central camera and color cameras distributed around the central camera to reduce occlusion zones in each color channel.
  • the central camera of a π filter group can be used as a reference camera when synthesizing an image using image data captured by an imager array.
  • a camera is a reference camera when its viewpoint is used as the viewpoint of the synthesized image.
  • the central camera of a π filter group is surrounded by color cameras in a way that minimizes occlusion zones for each color camera when the central camera is used as a reference camera.
  • Occlusion zones are areas surrounding foreground objects not visible to cameras that are spatially offset from the reference camera due to the effects of parallax.
  • a similar decrease in the likelihood that a portion of the scene visible from the reference viewpoint will be occluded in every other image captured within a specific color channel can be achieved using two cameras in the same color channel that are located on opposite sides of a reference camera or three cameras in each color channel that are distributed in three sectors around the reference camera. In other embodiments, cameras can be distributed in more than four sectors around the reference camera.
  • the central camera of a π filter group is a green camera while in other embodiments the central camera captures image data from any appropriate portion of the spectrum.
  • the central camera is a Bayer camera (i.e. a camera that utilizes a Bayer filter pattern to capture a color image).
  • a π filter group is a 3 x 3 array of cameras with a green color camera at each corner and a green color camera at the center which can serve as the reference camera with a symmetrical distribution of red and blue cameras around the central green camera.
  • the symmetrical distribution can include arrangements where either red color cameras are directly above and below the center green reference camera with blue color cameras directly to the left and right, or blue color cameras directly above and below the green center reference camera with red color cameras directly to the left and right.
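As a concrete illustration, the two symmetrical variants just described can be written out as 3 x 3 grids (a sketch; the grid encoding and the helper function are illustrative, not the patent's notation):

```python
# FIG. 5A variant: blue cameras above and below the green reference
# camera, red cameras to its left and right, green cameras at the corners.
PI_GROUP_5A = [["G", "B", "G"],
               ["R", "G", "R"],
               ["G", "B", "G"]]

# FIG. 5B variant: red cameras above and below, blue to the left and right.
PI_GROUP_5B = [["G", "R", "G"],
               ["B", "G", "B"],
               ["G", "R", "G"]]

def opposite_pairs_match(group):
    """Each camera sits opposite a same-color camera about the central
    reference camera, the symmetry that reduces occlusion zones."""
    return all(group[r][c] == group[2 - r][2 - c]
               for r in range(3) for c in range(3))

assert opposite_pairs_match(PI_GROUP_5A) and opposite_pairs_match(PI_GROUP_5B)
```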
  • Camera modules of dimensions greater than a 3 x 3 array of cameras can be patterned with π filter groups in accordance with many embodiments of the invention.
  • patterning a camera module with π filter groups enables an efficient distribution of cameras around a reference camera that reduces occlusion zones.
  • patterns of π filter groups can overlap with each other such that two overlapping π filter groups on a camera module share common cameras.
  • cameras that are not part of a π filter group can be assigned a color to reduce occlusion zones in the resulting camera array by distributing cameras in each color channel within each of a predetermined number of sectors surrounding a reference camera and/or multiple cameras that can act as reference cameras within the camera array.
  • camera modules can be patterned with π filter groups such that either at least one row in the camera module or at least one column in the camera module includes at least one red color camera, at least one green color camera, and at least one blue color camera.
  • at least one row and at least one column of the array camera module include at least one red color camera, at least one green color camera, and at least one blue color camera.
  • At least one row and at least one column of the array camera module include at least one cyan color camera, at least one magenta color camera, and at least one yellow color camera (e.g. color cameras that correspond with the CMYK color model).
  • at least one row and at least one column of the array camera module include at least one red color camera, at least one yellow color camera, and at least one blue color camera (e.g. color cameras that correspond with the RYB color model).
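This row/column constraint is easy to state as a predicate over a grid of color labels; a minimal sketch (the function name is hypothetical, and the grid below is one arrangement consistent with the FIG. 5C description, since the figure itself is not reproduced here; the color set defaults to RGB but could equally be CMY or RYB):

```python
def has_full_color_row_and_column(grid, colors=("R", "G", "B")):
    """True if at least one row and at least one column each contain
    every color in `colors` at least once."""
    rows_ok = any(set(colors) <= set(row) for row in grid)
    cols_ok = any(set(colors) <= set(col) for col in zip(*grid))
    return rows_ok and cols_ok

# FIG. 5C-style group: greens adjacent to the center, red and blue at
# opposite corners; its first and third rows and columns each hold R, G, B.
fig_5c = [["R", "G", "B"],
          ["G", "G", "G"],
          ["B", "G", "R"]]
assert has_full_color_row_and_column(fig_5c)

# The FIG. 5A group, by contrast, has no single row with all three colors.
assert not has_full_color_row_and_column([["G", "B", "G"],
                                          ["R", "G", "R"],
                                          ["G", "B", "G"]])
```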
  • camera modules of an M x N dimension may also be patterned with π filter groups in accordance with many embodiments of the invention.
  • These camera modules are distinct from M x N camera modules where both M and N are odd numbers insofar as, when at least one of M and N is even, none of the constituent cameras aligns with the center of the camera array.
  • when M and N are both odd, there is a central camera that corresponds with the center of the camera array. Cameras that align with the center of the camera array are typically selected as the reference camera of the camera module.
  • any suitable camera may be utilized as the reference camera of the camera module.
  • color cameras surrounding the reference camera need not be uniformly distributed but need only be distributed in a way to minimize or reduce occlusion zones of each color from the perspective of the reference camera. Utilization of a reference camera in a π filter group to synthesize an image from captured image data can be significantly less computationally intensive than synthesizing an image using the same image data from a virtual viewpoint.
  • High quality images or video can be captured by an array camera including a camera module patterned with π filter groups utilizing a subset of cameras within the camera module (i.e. not requiring that all cameras on a camera module be utilized). Similar techniques can also be used for efficient generation of stereoscopic 3D images utilizing image data captured by subsets of the cameras within the camera module.
  • Patterning camera modules with π filter groups also enables robust fault tolerance in camera modules with multiple π filter groups, as multiple possible reference cameras can be utilized if a reference camera begins to perform sub-optimally. Patterning camera modules with π filter groups also allows for yield improvement in manufacturing camera modules, as the impact of a defective focal plane on a focal plane array can be minimized by simply changing the pattern of the color lens stacks in an optic array. Various π filter groups and the patterning of camera modules with π filter groups in accordance with embodiments of the invention are discussed further below.
  • an array camera includes a camera module and a processor.
  • An array camera with a camera module patterned with π filter groups in accordance with an embodiment of the invention is illustrated in FIG. 2.
  • the array camera 200 includes a camera module 202 as an array of individual cameras 204 where each camera 204 includes a focal plane with a corresponding lens stack.
  • An array of individual cameras refers to a plurality of cameras in a particular arrangement, such as (but not limited to) the square arrangement utilized in the illustrated embodiment.
  • the camera module 202 is connected 206 to a processor 208.
  • a camera 204 labeled as "R" refers to a red camera with a red filtered color channel, "G" refers to a green camera with a green filtered color channel, and "B" refers to a blue camera with a blue filtered color channel.
  • Array camera modules in accordance with embodiments of the invention can be constructed from an imager array or sensor including an array of focal planes and an optic array including a lens stack for each focal plane in the imager array. Sensors including multiple focal planes are discussed in U.S. Patent Application Serial No. 13/106,797 entitled “Architectures for System on Chip Array Cameras", to Pain et al., the disclosure of which is incorporated herein by reference in its entirety.
  • Light filters can be used within each optical channel formed by the lens stacks in the optic array to enable different cameras within an array camera module to capture image data with respect to different portions of the electromagnetic spectrum.
  • the camera module 300 includes an imager array 330 including an array of focal planes 340 along with a corresponding optic array 310 including an array of lens stacks 320.
  • each lens stack 320 creates an optical channel that forms an image of a scene on an array of light sensitive pixels within a corresponding focal plane 340.
  • Each pairing of a lens stack 320 and focal plane 340 forms a single camera 204 within the camera module, and thereby an image is formed on each focal plane by a separate lens stack in the optic array of lens stacks.
  • Each pixel within a focal plane 340 of a camera 204 generates image data that can be sent from the camera 204 to the processor 208.
  • the lens stack within each optical channel is configured so that pixels of each focal plane 340 sample the same object space or region within the scene.
  • the lens stacks are configured so that the pixels that sample the same object space do so with sub-pixel offsets to provide sampling diversity that can be utilized to recover increased resolution through the use of super-resolution processes.
  • the optics of each camera module can be configured so that each camera within the camera module has a field of view of a scene that is shifted with respect to the fields of view of the other cameras within the camera module so that each shift of the field of view of each camera with respect to the fields of view of the other cameras is configured to include a unique sub-pixel shifted view of the scene.
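To see why such sub-pixel shifts add information, consider an idealized one-dimensional sketch: two low-resolution samplings of the same signal offset by half a pixel interleave into a sampling at twice the rate (illustrative only; real super-resolution must fuse two-dimensional, noisy, parallax-corrected data):

```python
import numpy as np

x_hr = np.linspace(0, 1, 16)         # "scene" positions
signal = np.sin(2 * np.pi * x_hr)    # underlying scene intensity

cam_a = signal[0::2]   # camera A samples the even positions
cam_b = signal[1::2]   # camera B's half-pixel shift samples the odd ones

fused = np.empty(16)
fused[0::2], fused[1::2] = cam_a, cam_b   # interleaving recovers full rate
assert np.allclose(fused, signal)
```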
  • the focal planes are configured in a 5 x 5 array.
  • Each focal plane 340 on the sensor is capable of capturing an image of the scene.
  • each focal plane includes a plurality of rows of pixels that also forms a plurality of columns of pixels, and each focal plane is contained within a region of the imager that does not contain pixels from another focal plane.
  • image data capture and readout of each focal plane can be independently controlled.
  • the imager array and the optic array of lens stacks form an array of cameras that can be configured to independently capture an image of a scene.
  • image capture settings including (but not limited to) the exposure times and analog gains of pixels within a focal plane can be determined independently to enable image capture settings to be tailored based upon factors including (but not limited to) a specific color channel and/or a specific portion of the scene dynamic range.
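A minimal sketch of what per-focal-plane control might look like (the data structure, field names, and example values are hypothetical, not drawn from the patent):

```python
from dataclasses import dataclass

@dataclass
class FocalPlaneSettings:
    """Hypothetical per-focal-plane capture settings; each focal plane in
    the imager array can be controlled independently."""
    exposure_time_ms: float
    analog_gain: float

# e.g. a longer exposure for a blue focal plane to balance channel SNR,
# a shorter exposure for a green focal plane covering scene highlights.
settings = {
    "green_0": FocalPlaneSettings(exposure_time_ms=8.0, analog_gain=1.0),
    "blue_0":  FocalPlaneSettings(exposure_time_ms=16.0, analog_gain=2.0),
}
```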
  • the sensor elements utilized in the focal planes can be individual light sensing elements such as, but not limited to, traditional CIS (CMOS Image Sensor) pixels, CCD (charge-coupled device) pixels, high dynamic range sensor elements, multispectral sensor elements and/or any other structure configured to generate an electrical signal indicative of light incident on the structure.
  • the sensor elements of each focal plane have similar physical properties and receive light via the same optical channel and color filter (where present).
  • the sensor elements have different characteristics and, in many instances, the characteristics of the sensor elements are related to the color filter applied to each sensor element.
  • color filters in individual cameras can be used to pattern the camera module with π filter groups. These cameras can be used to capture data with respect to different colors, or a specific portion of the spectrum.
  • color filters in many embodiments of the invention are included in the lens stack.
  • a green color camera can include a lens stack with a green light filter that allows green light to pass through the optical channel.
  • the pixels in each focal plane are the same and the light information captured by the pixels is differentiated by the color filters in the corresponding lens stack for each focal plane.
  • camera modules including π filter groups can be implemented in a variety of ways including (but not limited to) by applying color filters to the pixels of the focal planes of the camera module similar to the manner in which color filters are applied to the pixels of a conventional color camera.
  • at least one of the cameras in the camera module can include uniform color filters applied to the pixels in its focal plane.
  • a Bayer filter pattern is applied to the pixels of one of the cameras in a camera module.
  • camera modules are constructed in which color filters are utilized in both the lens stacks and on the pixels of the imager array.
  • an array camera generates image data from the multiple focal planes and uses a processor to synthesize one or more images of a scene.
  • the image data captured by a single focal plane in the sensor array can constitute a low resolution image, or an "LR image” (the term low resolution here is used only to contrast with higher resolution images or super-resolved images, alternatively a "HR image” or "SR image”), which the processor can use in combination with other low resolution image data captured by the camera module to construct a higher resolution image through Super Resolution processing.
  • Super Resolution processes that can be used to synthesize high resolution images using low resolution images captured by an array camera are discussed in U.S. Patent Application No. 12/967,807 entitled "Systems and Methods for Synthesizing High Resolution Images Using Super-Resolution Processes".
  • any of a variety of regular or irregular layouts of imagers including imagers that sense visible light, portions of the visible light spectrum, near-IR light, other portions of the spectrum and/or combinations of different portions of the spectrum can be utilized to capture LR images that provide one or more channels of information for use in SR processes in accordance with embodiments of the invention.
  • the processing of captured LR images is discussed further below.
  • the processing of LR images to obtain an SR image in accordance with embodiments of the invention typically occurs in an array camera's image processing pipeline.
  • the image processing pipeline performs processes that register the LR images prior to performing SR processes on the LR images.
  • the image processing pipeline also performs processes that eliminate problem pixels and compensate for parallax.
  • An image processing pipeline incorporating an SR module for fusing information from LR images to obtain a synthesized HR image in accordance with an embodiment of the invention is illustrated in FIG. 4.
  • pixel information is read out from focal planes 340 and is provided to a photometric conversion module 402 for photometric normalization.
  • the photometric conversion module can perform any of a variety of photometric image processing processes including but not limited to one or more of photometric normalization, Black Level calculation and adjustments, vignetting correction, and lateral color correction.
  • the photometric conversion module also performs temperature normalization.
  • the inputs of the photometric normalization module are photometric calibration data 401 and the captured LR images.
  • the photometric calibration data is typically captured during an offline calibration process.
  • the output of the photometric conversion module 402 is a set of photometrically normalized LR images. These photometrically normalized images are provided to a parallax detection module 404 and to a super-resolution module 406.
  • Prior to performing SR processing, the image processing pipeline detects parallax that becomes more apparent as objects in the scene captured by the imager array approach the imager array.
  • parallax (or disparity) detection is performed using the parallax detection module 404.
  • the parallax detection module 404 generates an occlusion map for the occlusion zones around foreground objects.
  • the occlusion maps are binary maps created for pairs of LR imagers.
  • occlusion maps are generated to illustrate whether a point in the scene is visible in the field of view of a reference LR imager and whether points in the scene visible within the field of view of the reference imager are visible in the field of view of other imagers.
  • the use of π filter groups can increase the likelihood that a pixel visible in a reference LR image is visible (i.e. not occluded) in at least one other LR image.
  • the parallax detection module 404 performs scene independent geometric corrections to the photometrically normalized LR images using geometric calibration data 408 obtained via an address conversion module 410.
  • parallax detection module 404 can then compare the geometrically and photometrically corrected LR images to detect the presence of scene dependent geometric displacements between LR images.
  • Information concerning these scene dependent geometric displacements can be referred to as parallax information and can be provided to the super-resolution module 406 in the form of scene dependent parallax corrections and occlusion maps.
  • parallax information can also include generated depth maps which can also be provided to the super-resolution module 406.
  • Geometric calibration (or scene-independent geometric correction) data 408 can be generated using an offline calibration process or a subsequent recalibration process.
  • the scene- independent correction information along with the scene-dependent geometric correction information (parallax) and occlusion maps, form the geometric correction information for the LR images.
  • the parallax information and the photometrically normalized LR images are provided to the super-resolution module 406 for use in the synthesis of one or more HR images 420.
  • the super-resolution module 406 performs scene independent and scene dependent geometric corrections (i.e. geometric corrections) using the parallax information and geometric calibration data 408 obtained via the address conversion module 410.
  • the photometrically normalized and geometrically registered LR images are then utilized in the synthesis of an HR image.
  • the synthesized HR image may then be fed to a downstream color processing module 412, which can be implemented using any standard color processing module configured to perform color correction and/or chroma level adjustment.
  • the color processing module performs operations including but not limited to one or more of white balance, color correction, gamma correction, and RGB to YUV correction.
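Taken together, the stages described above suggest a pipeline of the following shape. This is a structural sketch only: every function name and all of the placeholder math below are hypothetical stand-ins, not the patented algorithms:

```python
import numpy as np

def photometric_normalize(img, cal):
    # Stand-in for black level, vignetting and lateral color corrections.
    return (img - cal["black_level"]) * cal["gain"]

def detect_parallax(images):
    # Stand-in: real detection searches along epipolar lines and also
    # produces occlusion maps; here every disparity is simply zero.
    return [np.zeros_like(img) for img in images]

def super_resolve(images, disparities):
    # Stand-in fusion: average the (here unshifted) LR images.
    return np.mean(np.stack(images), axis=0)

def color_process(img):
    # Stand-in for white balance, color correction, gamma, RGB to YUV.
    return np.clip(img, 0.0, 1.0) ** (1 / 2.2)

lr_images = [np.random.rand(8, 8) for _ in range(9)]   # a 3 x 3 camera array
cal = {"black_level": 0.0, "gain": 1.0}
normalized = [photometric_normalize(img, cal) for img in lr_images]
hr_image = color_process(super_resolve(normalized, detect_parallax(normalized)))
print(hr_image.shape)   # (8, 8); a real SR process would increase resolution
```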
  • image processing pipelines in accordance with embodiments of the invention include a dynamic refocus module.
  • the dynamic refocus module enables the user to specify a focal plane within a scene for use when synthesizing an HR image.
  • the dynamic refocus module builds an estimated HR depth map for the scene.
  • the dynamic refocus module can use the HR depth map to blur the synthesized image so that portions of the scene that do not lie on the focal plane appear out of focus.
  • the SR processing is limited to pixels lying on the focal plane and within a specified Z-range around the focal plane.
  • the synthesized high resolution image 420 is encoded using any of a variety of standards based or proprietary encoding processes including but not limited to encoding the image in accordance with the JPEG standard developed by the Joint Photographic Experts Group.
  • the encoded image can then be stored in accordance with a file format appropriate to the encoding technique used including but not limited to the JPEG Interchange Format (JIF), the JPEG File Interchange Format (JFIF), or the Exchangeable image file format (Exif).
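As a trivial sketch of this final step, assuming the synthesized HR image is available as an 8-bit RGB array, the Pillow library can encode and store it as JPEG:

```python
import numpy as np
from PIL import Image

# Hypothetical stand-in for the synthesized HR image (8-bit RGB).
hr_image = (np.random.rand(64, 64, 3) * 255).astype(np.uint8)
Image.fromarray(hr_image).save("synthesized.jpg", format="JPEG", quality=90)
```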
  • parallax information can be used to generate depth maps as well as occlusion maps, and this is discussed below.
  • Array cameras in accordance with many embodiments of the invention use disparity observed in images captured by the array cameras to generate a depth map.
  • a depth map is typically regarded as being a layer of meta-data concerning an image (often a reference image captured by a reference camera) that describes the distance from the camera to specific pixels or groups of pixels within the image (depending upon the resolution of the depth map relative to the resolution of the original input images).
  • Array cameras in accordance with a number of embodiments of the invention use depth maps for a variety of purposes including (but not limited to) generating scene dependent geometric shifts during the synthesis of a high resolution image and/or performing dynamic refocusing of a synthesized image.
  • the process of determining the depth of a portion of scene based upon pixel disparity is theoretically straightforward.
  • the viewpoint of a specific camera in the array camera is chosen as a reference viewpoint
  • the distance to a portion of the scene visible from the reference viewpoint can be determined using the disparity between the corresponding pixels in some or all of the other images captured by the camera array (often referred to as alternate view images).
  • a pixel corresponding to a pixel in the reference image captured from the reference viewpoint will be located in each alternate view image along an epipolar line (i.e. a line parallel to the baseline vector between the two cameras).
  • the distance along the epipolar line of the disparity corresponds to the distance between the camera and the portion of the scene captured by the pixels. Therefore, by comparing the pixels in the captured reference image and alternate view image(s) that are expected to correspond at a specific depth, a search can be conducted for the depth that yields the pixels having the highest degree of similarity. The depth at which the corresponding pixels in the reference image and the alternate view image(s) have the highest degree of similarity can be selected as the most likely distance between the camera and the portion of the scene captured by the pixel.
  • Many challenges exist, however, in determining an accurate depth map using the method outlined above. In several embodiments, the cameras in an array camera are similar but not the same.
  • image characteristics including (but not limited to) optical characteristics, different sensor characteristics (such as variations in sensor response due to offsets, different transmission or gain responses, non-linear characteristics of pixel response), noise in the captured images, and/or warps or distortions related to manufacturing tolerances related to the assembly process can vary between the images reducing the similarity of corresponding pixels in different images.
  • super-resolution processes rely on sampling diversity in the images captured by an imager array in order to synthesize higher resolution images.
  • increasing sampling diversity can also involve decreasing similarity between corresponding pixels in captured images in a light field. Given that the process for determining depth outlined above relies upon the similarity of pixels, the presence of photometric differences and sampling diversity between the captured images can reduce the accuracy with which a depth map can be determined.
  • occlusions occur when a pixel that is visible from the reference viewpoint is not visible in one or more of the captured images.
  • the effect of an occlusion is that at the correct depth, the pixel location that would otherwise be occupied by a corresponding pixel is occupied by a pixel sampling another portion of the scene (typically an object closer to the camera).
  • the occluding pixel is often very different to the occluded pixel. Therefore, a comparison of the similarity of the pixels at the correct depth is less likely to result in a significantly higher degree of similarity than at other depths.
  • the occluding pixel acts as a strong outlier masking the similarity of those pixels which in fact correspond at the correct depth. Accordingly, the presence of occlusions can introduce a strong source of error into a depth map. Furthermore, use of π filter groups to increase the likelihood that a pixel visible in an image captured by a reference camera is visible in alternate view images captured by other cameras within the array can reduce error in a depth map generated in the manner described above.
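A brute-force sketch of the depth search described above, for a rectified horizontal two-camera pair (names and numbers are illustrative; as discussed, a practical implementation would exclude occluded alternate-view pixels from the comparison rather than let them corrupt the cost):

```python
import numpy as np

def depth_search(ref, alt, baseline_m, focal_px, candidate_depths_m):
    """For each candidate depth, shift the alternate view by the disparity
    that depth implies and keep, per pixel, the depth with the lowest
    dissimilarity against the reference view."""
    best_depth = np.zeros(ref.shape)
    best_cost = np.full(ref.shape, np.inf)
    for depth in candidate_depths_m:
        disparity = int(round(baseline_m * focal_px / depth))
        # np.roll wraps at the image border; a simplification for brevity.
        shifted = np.roll(alt, -disparity, axis=1)
        cost = np.abs(ref - shifted)          # per-pixel dissimilarity
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_depth[better] = depth
    return best_depth

# Hypothetical inputs: 2 cm baseline, 500 px focal length, and an alternate
# view simulated as a 2 px shift of the reference image.
ref = np.random.rand(16, 16)
alt = np.roll(ref, 2, axis=1)
depth_map = depth_search(ref, alt, 0.02, 500.0, [1.0, 2.0, 5.0, 10.0])
print(np.median(depth_map))   # -> 5.0, since 0.02 * 500 / 5.0 = 2 px
```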
  • Processes for generating depth maps in accordance with many embodiments of the invention attempt to reduce sources of error that can be introduced into a depth map by sources including (but not limited to) those outlined above.
  • U.S. Patent Application Serial No. 61/780,906 entitled “Systems and Methods for Parallax Detection and Correction in Images Captured Using Array Cameras” discloses such processes.
  • U.S. Patent Application Serial No. 61/780,906 is incorporated by reference herein in its entirety.
  • use of π filter groups can significantly decrease the likelihood that a pixel visible from the viewpoint of a reference camera is occluded within all cameras within a color channel.
  • Many different array cameras are capable of utilizing π filter groups in accordance with embodiments of the invention. Camera modules utilizing π filter groups in accordance with embodiments of the invention are described in further detail below.
  • Camera modules can be patterned with π filter groups in accordance with embodiments of the invention.
  • π filter groups utilized as part of a camera module can each include a central camera that can function as a reference camera surrounded by color cameras in a way that reduces occlusion zones for each color.
  • the camera module is arranged in a rectangular format utilizing the RGB color model where a reference camera is a green camera surrounded by red, green and blue cameras.
  • a number of green cameras that is twice the number of red cameras and twice the number of blue cameras surround the reference camera.
  • red color cameras and blue color cameras are located in opposite positions on the 3 x 3 array of cameras.
  • any set of colors from any color model can be utilized to detect a useful range of colors in addition to the RGB color model, such as the cyan, magenta, yellow and key (CMYK) color model or red, yellow and blue (RYB) color model.
  • two π filter groups can be utilized in the patterning of a camera module when the RGB color model is used.
  • One π filter group is illustrated in FIG. 5A and the other π filter group is illustrated in FIG. 5B. Either of these π filter groups can be used to pattern any camera module with dimensions greater than a 3 x 3 array of cameras.
  • patterning of the camera module with a π filter group includes only a single π filter group.
  • a π filter group on a 3 x 3 camera module in accordance with an embodiment of the invention is illustrated in FIG. 5A.
  • the π filter group 500 includes a green camera at each corner, a green reference camera in the center notated within a box 502, blue cameras above and below the reference camera, and red cameras to the left and right sides of the reference camera.
  • the number of green cameras surrounding the central reference camera is twice the number of red cameras and twice the number of blue cameras.
  • red cameras are located in opposite locations relative to the center of the 3 x 3 array of cameras to reduce occlusions.
  • FIG. 5B An alternative to the ⁇ filter group described in FIG. 5A is illustrated in FIG. 5B in accordance with an embodiment of the invention.
  • This ⁇ filter group also includes green cameras at the corners with a green reference camera 552 at the center, as denoted with a box.
  • the red cameras shown in FIG. 5B are above and below, and the blue cameras are to the left and right side of the reference camera.
  • the ⁇ filter group in FIG. 5B includes a central reference camera surrounded by a number of green cameras that is twice the number of red cameras and twice the number of blue cameras.
  • the reference camera need not be a green camera.
  • the configurations in FIGS. 5A and 5B can be modified to include a central camera that employs a Bayer color filter.
  • the central camera is an infrared camera, an ultraviolet camera, an extended color camera and/or any other type of camera appropriate to a specific application.
  • any of a variety of color cameras can be distributed around the reference camera in opposite locations in the 3 x 3 array relative to the reference camera in a manner that reduces occlusion zones with respect to each color channel.
  • FIG. 5C depicts an embodiment where green color cameras are located above, below, to the left, and to the right of the central camera, while red and blue color cameras are disposed at the corner locations of the π filter group.
  • the first and third rows and columns each have a red, green, and blue color filter, and this arrangement can reduce instances of occlusions.
  • the configuration shown in FIG. 5C can include slightly larger occlusion zones in the red and blue color channels compared with the embodiments illustrated in FIGS. 5A and 5B, because the red and blue color cameras are slightly further away from the central reference camera.
  • FIGS. 5D and 5E depict embodiments where color cameras surround a central green camera such that the cameras in each color channel are located in opposite positions in a 3 x 3 array relative to the central reference camera.
  • the blue or red color channel in which the cameras are in the corners of the 3 x 3 array are likely to have slightly larger occlusion zones than the blue or red color channel in which the cameras are located closer to the central reference camera (i.e. the cameras are not located in the corners).
  • the central reference camera can be any suitable camera, e.g. not just a green camera, in accordance with embodiments of the invention.
  • many embodiments are similar to those seen in FIGS. 5D and 5E, except they utilize an arrangement that is the mirror image of those seen in FIGS. 5D and 5E.
  • numerous embodiments are similar to those seen in FIGS. 5D and 5E, except they utilize an arrangement that is rotated with respect to those seen in FIGS. 5D and 5E.
  • Any camera module with dimensions of 3 x 3 or greater can be patterned with one or more π filter groups, where cameras not within a π filter group are assigned a color that reduces or minimizes the likelihood of occlusion zones within the camera module given the color filter assignments of the π filter groups.
  • a 4 x 4 camera module patterned with two π filter groups in accordance with an embodiment of the invention is illustrated in FIG. 6.
  • the camera module 600 includes a first π filter group 602 of nine cameras centered on a reference green camera 604.
  • a second π filter group 610 is located diagonally, one camera shift to the lower right of the first π filter group.
  • the second π filter group shares the four center cameras 612 of the camera module 600 with the first π filter group.
  • the shared cameras serve different roles in each group (i.e. different cameras act as reference cameras in the two π filter groups).
  • the two cameras at the corners 606 and 608 of the camera module are not included in the two π filter groups, 602 and 610.
  • the color filters utilized within these cameras are determined based upon minimization of occlusion zones given the color filter assignments of the cameras that are part of the two π filter groups, 602 and 610. Due to the patterning of the π filter groups, there is an even distribution of blue color cameras around the reference camera, but there is no red color camera above the reference camera.
  • selecting the upper right corner camera 606 to be red provides red image data from a viewpoint above the reference camera, minimizing the likelihood of occlusion zones above and to the right of foreground objects in a scene for the reference camera 604 and for the center camera of the second π filter group.
  • selecting the lower left corner camera 608 to be blue provides blue image data from a viewpoint below and to the left of the reference camera, minimizing the likelihood of occlusion zones below and to the left of foreground objects in a scene for the reference camera 604 and for the center camera of the second π filter group.
  • a camera module with dimensions greater than 3 x 3 can be patterned with π filter groups, with colors assigned to cameras not included in any π filter group so as to reduce and/or minimize occlusion zones, as discussed above.
  • the camera array includes at least one row and at least one column that contain a blue color camera, a green color camera, and a red color camera.
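As a concrete illustration, the 4 x 4 layout described for FIG. 6 (and recited in the summary below): a first π filter group occupies the upper-left 3 x 3 cameras, a second, overlapping group sits one camera down and to the right, the upper-right corner is red, and the lower-left corner is blue. The sketch below writes the layout down and checks the stated row/column property; the helper name is illustrative only.

```python
MODULE_4X4 = [
    ["G", "B", "G", "R"],
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
    ["B", "G", "R", "G"],
]

def has_rgb_row_and_column(module):
    """True if at least one row and at least one column each contain a red,
    a green, and a blue camera."""
    row_ok = any({"R", "G", "B"} <= set(row) for row in module)
    col_ok = any({"R", "G", "B"} <= set(col) for col in zip(*module))
    return row_ok and col_ok

assert has_rgb_row_and_column(MODULE_4X4)
```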
  • A 4 x 4 camera module with two π filter groups in accordance with an embodiment of the invention is illustrated in FIG. 7.
  • the camera module 700 includes two π filter groups 702, 706 where the central camera of each π filter group 704, 708 can act as a reference camera. Irrespective of the reference camera that is selected, the distribution of cameras around the reference camera is equivalent due to the use of π filter groups.
  • if the camera module 700 detects a defect in reference camera 704, the camera module 700 can switch to using the camera 708 at the center of the other π filter group as the reference camera, avoiding the defects of the first reference camera 704. Furthermore, patterning with π filter groups does not require that the reference camera or a virtual viewpoint be at the center of a camera module, but rather that the reference camera be surrounded by color cameras in a way that reduces occlusion zones for each color. Although a specific camera module is discussed above, camera modules of any of a number of different dimensions can be utilized to create multiple reference camera options in accordance with embodiments of the invention.
  • Manufacturing processes inherently involve variations that can result in defects.
  • the manufacturing defects may be severe enough to render an entire focal plane within an imager array inoperable. If the failure of the focal plane results in the discarding of the imager array, then the cost to manufacture array cameras is increased.
  • Patterning camera modules with π filter groups can provide high manufacturing yield, because the allocation of color filters in the optical channels of the optic array can be used to reduce the impact that a faulty focal plane has on the creation of occlusion zones in the images synthesized using the image data captured by the array camera.
  • the light sensed by the pixels in a focal plane of an imager array is determined by a color filter included in the optical channel that focuses light onto the focal plane.
  • the color filter pattern of the optical channels in the optic array can be determined so that the defective focal plane does not result in an increase in the size of occlusion zones.
  • A process for detecting faulty focal planes before combining an optic array and an imager array to create a camera module in accordance with embodiments of the invention is illustrated in FIG. 8A.
  • the color filter patterns are patterned on the optic array and not on the pixels of the imager array.
  • a process can systematically choose a specific optic array to force the faulty focal plane to pair with a filter of a certain color, to ensure that the size of the occlusion zones in a given color channel is reduced and/or minimized.
  • the process 800 includes testing (802) an imager array for faulty focal planes.
  • a decision (804) is made as to whether a faulty focal plane is detected on the imager array. If a faulty focal plane is detected, then an optic array is selected based upon the location of the faulty focal plane (806). In many embodiments, an optic array is selected that reduces the effect of the faulty focal plane by assigning color filters to the operational focal planes in a way that minimizes the impact of the faulty focal plane on the creation of occlusion zones within images synthesized using image data captured by the imager array. Further discussion of selecting different optic arrays that reduce occlusion zones when there is a faulty focal plane is provided below with reference to FIGS. 8B and 8C.
  • the selected optic array is combined (808) with the imager array to create a camera module. If a faulty focal plane is not detected, then any of a variety of optic arrays including filter patterns based on π filter groups can be combined (808) with the tested imager array to create a camera module.
  • a typical process can use a default optic array including a first filter pattern based on π filter groups, and a second filter pattern based on π filter groups can be utilized when specific defects are detected that would otherwise result in the faulty focal plane reducing the number of color cameras (or of specific color cameras, such as the color cameras around the outside of the camera module) when the first filter pattern is used.
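A hedged sketch of the optic-array selection step (806): given the location of a faulty focal plane, prefer a candidate filter pattern that places a green filter over that focal plane, since losing one of several green cameras is less damaging to occlusion coverage than losing a red or blue camera. Generating the alternative by mirroring the default pattern follows the FIG. 8B/8C example discussed below; the selection rule itself and the names are illustrative assumptions.

```python
DEFAULT_PATTERN = [
    ["G", "B", "G", "R"],
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
    ["B", "G", "R", "G"],
]

def mirrored(pattern):
    """The pattern flipped along the central vertical axis."""
    return [list(reversed(row)) for row in pattern]

def select_optic_array(candidates, faulty_rc):
    """Return the first candidate that assigns green to the faulty focal
    plane, or the default (first) candidate if none does."""
    r, c = faulty_rc
    for pattern in candidates:
        if pattern[r][c] == "G":
            return pattern
    return candidates[0]

# Example: the focal plane at row 3, column 2 is faulty. The default pattern
# would make it a red camera; the mirrored pattern makes it a green camera.
chosen = select_optic_array([DEFAULT_PATTERN, mirrored(DEFAULT_PATTERN)], (3, 2))
assert chosen[3][2] == "G"
```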
  • The manner in which modifying color filter assignments can reduce the impact of a faulty focal plane is illustrated in FIGS. 8B and 8C.
  • a camera module with a faulty red camera is illustrated in FIG. 8B.
  • the camera module 820 includes a first π filter group 828 with a possible reference camera 822 at the center, a second π filter group 832 with a possible reference camera 830 at the center, and a faulty red camera 824 below both π filter groups 828 and 832.
  • an optic array including the filter pattern illustrated in FIG. 8B results in a defective red camera that prevents the capture of red color information below any reference camera, increasing the likelihood of occlusion zones below foreground objects.
  • an optic array patterned using π filter groups in different locations can result in all of the blue and red color filters being assigned to cameras that are active. In this way, the faulty focal plane only impacts the number of green cameras, and does so in a way that reduces the likelihood of occlusion zones in an image synthesized using the image data captured by the resulting camera module.
  • yield can therefore be improved under certain circumstances by combining the imager array that includes the faulty focal plane with an optic array that assigns the color filters of the active cameras based on π filter groups, such that color information is captured around the reference camera in a way that minimizes the likelihood of occlusion zones given the location of the faulty focal plane.
  • A camera module with the faulty focal plane of FIG. 8B, but with an optic array patterned with π filter groups in such a way that the faulty focal plane does not reduce the capture of red or blue image data around the reference camera, is illustrated in FIG. 8C.
  • Relative to the pattern of the optic array of FIG. 8B, the optic array of FIG. 8C is flipped along the central vertical bisecting axis 826 of the optic array and includes two π filter groups 828' and 832'.
  • in FIG. 8C, the lens stack associated with the faulty focal plane is green 854, as opposed to red 824 in FIG. 8B. As there are multiple green cameras below all possible reference cameras 852, 856 in FIG. 8C, the loss of the green camera 854 is less impactful than the loss of the red camera 824 in FIG. 8B. Therefore, the impact of faulty focal planes on an imager array can be reduced by combining the faulty imager array with an optic array specifically selected to assign color filters to the focal planes in the imager array in a manner that reduces the likelihood that the faulty focal plane will create an occlusion zone in any of the color channels captured by the resulting camera module.
  • although the example above discusses reducing red occlusion zones, the impact of a defective focal plane in any of the locations in an imager array can be similarly minimized by appropriate selection of a filter pattern based on π filter groups.
  • any of a variety of alternative color filter patterns including π filter groups can be utilized to increase manufacturing yield in accordance with embodiments of the invention.
  • Super Resolution processes can be used to synthesize high resolution images using low resolution images captured by an array camera including pairs of stereoscopic 3D images as disclosed in U.S. Patent Application No. 12/967,807 entitled “Systems and Methods for Synthesizing High Resolution Images Using Super-Resolution Processes", filed December 14, 2010, the disclosure of which is incorporated by reference above.
  • Stereoscopic 3D image pairs are two images of a scene from spatially offset viewpoints that can be combined to create a 3D representation of the scene.
  • the use of a filter pattern including π filter groups can enable the synthesis of stereoscopic 3D images in a computationally efficient manner.
  • Image data captured by less than all of the cameras in the array camera can be used to synthesize each of the images that form the stereoscopic 3D image pair.
  • Patterning with π filter groups enables an efficient distribution of cameras around a reference camera that reduces occlusion zones and reduces the amount of image data captured by the camera module that is utilized to synthesize each of the images in a stereoscopic 3D image pair.
  • different subsets of the cameras are used to capture each of the images that form the stereoscopic 3D image pair, and each of the subsets includes a π filter group.
  • each image that forms the stereoscopic 3D image pair is synthesized from a virtual viewpoint that is slightly offset from the camera at the center of a π filter group.
  • the central camera of a π filter group is surrounded by color cameras in a way that minimizes occlusion zones for each color camera when the central camera is used as a reference camera.
  • when the virtual viewpoint is proximate the center of a π filter group, the benefits of the distribution of color cameras around the virtual viewpoint are similar.
  • A left virtual viewpoint for a stereoscopic 3D image pair captured using a camera module patterned using π filter groups is illustrated in FIG. 9A.
  • the left virtual viewpoint 904 is taken from image data from the 12 circled cameras G1 - G3, G5 - G7, B1 - B2, B4, and R2 - R3 that form a 3 x 4 array.
  • the virtual viewpoint is offset relative to the green camera G3, which is the center of a π filter group 906.
  • A right virtual viewpoint used to capture the second image in the stereoscopic pair using the camera module shown in FIG. 7 is illustrated in FIG. 9B.
  • the right virtual viewpoint 954 is taken from image data from the 12 circled cameras B1 - B3, G2 - G4, G6 - G8, R1, and R3 - R4 that form a 3 x 4 array.
  • the virtual viewpoint is offset relative to the green camera G6, which is the center of a π filter group 956. Therefore, a single array camera can capture 3D images of a scene using image data from a subset of the cameras to synthesize each of the images that form the stereoscopic pair.
  • the computational complexity of generating the stereoscopic 3D image pair is reduced.
  • locating the viewpoint of each image proximate a camera that is the center of a π filter group reduces the likelihood of occlusion zones in the synthesized images.
  • the viewpoints need not be virtual viewpoints.
  • array camera modules can be constructed using π filter groups so that the viewpoints from which stereoscopic images are captured are reference viewpoints obtained from reference cameras within the camera array.
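A sketch of carving two overlapping 4 x 3 sub-arrays (12 cameras each) out of the 4 x 4 module to synthesize the two images of a stereoscopic pair. Each sub-array still contains a complete π filter group, preserving the occlusion-reducing distribution around each viewpoint, and the two sub-arrays are offset horizontally. The exact subsets circled in FIGS. 9A and 9B may differ; this is an illustration of the idea, with the FIG. 6 layout repeated for self-containment.

```python
MODULE_4X4 = [
    ["G", "B", "G", "R"],
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
    ["B", "G", "R", "G"],
]

def subgrid(module, rows, cols):
    """Extract the sub-array covering the given row and column indices."""
    return [[module[r][c] for c in cols] for r in rows]

left_subset = subgrid(MODULE_4X4, range(4), range(0, 3))   # columns 0-2
right_subset = subgrid(MODULE_4X4, range(4), range(1, 4))  # columns 1-3

# each 12-camera subset contains a full 3 x 3 pi filter group
assert subgrid(left_subset, range(0, 3), range(0, 3)) == [
    ["G", "B", "G"], ["R", "G", "R"], ["G", "B", "G"]]
assert subgrid(right_subset, range(1, 4), range(0, 3)) == [
    ["G", "R", "G"], ["B", "G", "B"], ["G", "R", "G"]]
```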
  • a 3 x 5 camera module is provided that includes two overlapping π filter groups.
  • a 3 x 5 camera module that includes two overlapping π filter groups centered on each of two reference green color cameras is illustrated in FIG. 9C.
  • the camera module 960 includes two overlapping π filter groups 962 and 964, each centered on one of two reference green color cameras 966 and 968 respectively. The two reference cameras 966 and 968 are used to provide the two reference viewpoints.
  • an array camera module is configured to capture stereoscopic images using non-overlapping π filter groups.
  • a 3 x 6 array camera module that includes two non-overlapping π filter groups, which can be used to capture stereoscopic images, is illustrated in FIG. 9D.
  • the array camera module 970 is similar to that seen in FIG. 9C, except that the two π filter groups 972 and 974 do not overlap.
  • the two π filter groups 972 and 974 are each centered on one of two green color cameras 976 and 978 respectively.
  • the two reference cameras 976 and 978 are used to provide the two reference viewpoints.
  • FIG. 9D demonstrates that π filter groups having different arrangements of cameras within each π filter group can be utilized to pattern an array camera module in accordance with embodiments of the invention.
  • the two π filter groups 972 and 974 use different 3 x 3 camera arrangements.
  • π filter groups incorporating different 3 x 3 arrangements of cameras can be utilized to construct any of a variety of camera arrays of different dimensions.
  • stereoscopic image pairs can be generated using subsets of cameras in any of a variety of camera modules in accordance with embodiments of the invention.
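A hedged sketch of locating π filter groups within an arbitrary camera array by scanning every 3 x 3 window for one of the two canonical arrangements. The 3 x 6 layout below is an illustrative guess at FIG. 9D, not a transcription of it: two non-overlapping groups using the two different arrangements, each centered on one of the two reference cameras.

```python
PI_ARRANGEMENTS = (
    [["G", "B", "G"], ["R", "G", "R"], ["G", "B", "G"]],  # FIG. 5A style
    [["G", "R", "G"], ["B", "G", "B"], ["G", "R", "G"]],  # FIG. 5B style
)

def find_pi_group_centers(module):
    """Yield the (row, col) of the reference camera of every 3 x 3 window
    matching a canonical pi filter group arrangement."""
    rows, cols = len(module), len(module[0])
    for r in range(rows - 2):
        for c in range(cols - 2):
            window = [row[c:c + 3] for row in module[r:r + 3]]
            if window in PI_ARRANGEMENTS:
                yield (r + 1, c + 1)

MODULE_3X6 = [
    ["G", "B", "G", "G", "R", "G"],
    ["R", "G", "R", "B", "G", "B"],
    ["G", "B", "G", "G", "R", "G"],
]

# two non-overlapping groups, centered on the two reference cameras
assert list(find_pi_group_centers(MODULE_3X6)) == [(1, 1), (1, 4)]
```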
  • Array cameras with camera modules patterned with π filter groups can utilize less than all of the available cameras in operation in accordance with many embodiments of the invention. In several embodiments, using fewer cameras can minimize the computational complexity of generating an image using an array camera and can reduce the power consumption of the array camera. Reducing the number of cameras used to capture image data can be useful for applications such as video, where frames of video can be synthesized using less than all of the image data that can be captured by a camera module. In a number of embodiments, a single π filter group can be utilized to capture an image. In many embodiments, image data captured by a single π filter group is utilized to capture a preview image prior to capturing image data with a larger number of cameras.
  • the cameras in a single π filter group capture video image data.
  • image data can be captured using additional cameras to increase resolution and/or provide additional color information and reduce occlusions.
  • A π filter group within a camera module that is utilized to capture image data from which an image can be synthesized is illustrated in FIG. 10.
  • the reference camera is boxed and utilized cameras are encompassed in a dotted line.
  • the camera module 1000 includes a π filter group of cameras generating image data G1 - G2, G5 - G6, B1 - B2 and R2 - R3, with reference camera G3.
  • Image data can be acquired using additional cameras for increased resolution and to provide additional color information in occlusion zones. Accordingly, any number and arrangement of cameras can be utilized to capture image data using a camera module in accordance with many different embodiments of the invention.
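A sketch of a reduced-power capture mode along these lines: read out only the nine cameras of a single π filter group (e.g. for preview images or frames of video) and leave the rest of the module idle. The flat-index convention and the function name are illustrative, not from the patent.

```python
def pi_group_indices(center_rc, module_cols):
    """Flat sensor indices of the 3 x 3 block of cameras around center_rc."""
    r0, c0 = center_rc
    return sorted(
        (r0 + dr) * module_cols + (c0 + dc)
        for dr in (-1, 0, 1)
        for dc in (-1, 0, 1)
    )

# capture with the pi filter group centered at row 1, column 1 of a 4 x 4 module
active = pi_group_indices((1, 1), module_cols=4)
assert len(active) == 9  # 9 of the 16 cameras are read out
print("active cameras:", active)  # [0, 1, 2, 4, 5, 6, 8, 9, 10]
```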
  • Color filter patterns for any array of cameras having dimensions greater than 3 x 3 can be constructed in accordance with embodiments of the invention.
  • processes for constructing color filter patterns typically involve assigning color filters to the cameras in a camera module to maximize the number of overlapping π filter groups.
  • color filters can be assigned to the cameras based upon minimizing occlusions around the camera that is to be used as the reference camera for the purposes of synthesizing high-resolution images.
  • A process for assigning color filters to cameras in a camera module in accordance with an embodiment of the invention is illustrated in FIG. 11.
  • the process 1100 includes selecting (1102) a corner of the array and assigning (1104) a π filter group to the selected corner.
  • the π filter group occupies a 3 x 3 grid.
  • Color filters can then be assigned (1106) to the remaining cameras in such a way as to maximize the number of overlapping π filter groups within the array.
  • any remaining cameras are assigned (1108) color filters that reduce the likelihood of occlusion zones in images synthesized from the viewpoint of a camera selected as the reference camera for the array. At this point, all of the cameras in the array have been assigned color filters.
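A hedged sketch of the pattern that this process arrives at for odd-dimensioned arrays, inferred from the 5 x 5 example that follows (FIGS. 12A - 12D): green cameras occupy one checkerboard of the array, and blue and red cameras alternate row by row on the other. Every 3 x 3 window centered on an interior green camera is then a complete π filter group, which is the sense in which the number of overlapping groups is maximized. The closed form is an illustration, not language from the patent.

```python
def checkerboard_pi_pattern(rows, cols):
    """Generate a filter pattern in which overlapping pi filter groups tile
    the array (assumed construction; see the lead-in note)."""
    def color(r, c):
        if (r + c) % 2 == 0:
            return "G"                      # greens on one checkerboard
        return "B" if r % 2 == 0 else "R"   # blue rows and red rows alternate
    return [[color(r, c) for c in range(cols)] for r in range(rows)]

for row in checkerboard_pi_pattern(5, 5):
    print(" ".join(row))
# G B G B G
# R G R G R
# G B G B G
# R G R G R
# G B G B G
```

The 5 x 5 result contains thirteen green, six blue and six red cameras, which matches the camera counts added step by step in the FIGS. 12A - 12D walkthrough below.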
  • The process of generating a simple filter pattern for a 5 x 5 array using π filter groups is illustrated in FIGS. 12A - 12D. The process starts with the selection of the top left corner of the array. A π filter group is assigned to the 3 x 3 group of cameras in the top left corner (cameras G1 - G5, B1 - B2, and R1 - R2).
  • a second overlapping π filter group is created by adding three green cameras, a blue camera, and a red camera (G6 - G8, B3, and R3).
  • a third overlapping π filter group is created by adding another three green cameras, a blue camera, and a red camera (G9 - G11, B4, and R4).
  • fourth and fifth π filter groups are created by adding a single green camera, blue camera, and red camera each (G12, B5, R5 and G13, B6, R6). In the event that the central camera (G6) fails, a camera at the center of another π filter group can be utilized as the reference camera (e.g. G3).
  • A similar process for generating a simple filter pattern for a 4 x 5 array using π filter groups is illustrated in FIGS. 13A - 13D.
  • the process is very similar, with the exception that two cameras are not included in π filter groups. Because there are no blue cameras below the camera G6 (which is the center of a π filter group), the cameras that do not form part of a π filter group are assigned as blue cameras (B5 and B6).
  • similar processes can be applied to any array larger than a 3 x 3 array to generate a color filter pattern incorporating π filter groups in accordance with embodiments of the invention.
  • the process outlined above can be utilized to construct larger arrays including the 7 x 7 array of cameras illustrated in FIG. 14.
  • the same process can also be utilized to construct even larger arrays of any dimensions including square arrays where the number of cameras in each of the dimensions of the array is odd. Accordingly, the processes discussed herein can be utilized to construct a camera module and/or an array camera including a camera array having any dimensions appropriate to the requirements of a specific application in accordance with embodiments of the invention.

Abstract

Systems and methods in accordance with embodiments of the invention pattern array camera modules with π filter groups. In one embodiment, an array camera module includes: an M x N imager array including a plurality of focal planes, where each focal plane includes an array of pixels; an M x N optic array of lens stacks, where each lens stack corresponds to a focal plane, and where each lens stack forms an image of a scene on its corresponding focal plane; where each pairing of a lens stack and focal plane thereby defines a camera; where at least one row in the M x N array of cameras includes at least one red camera, one green camera, and one blue camera; and where at least one column in the M x N array of cameras includes at least one red camera, one green camera, and one blue camera.

Description

CAMERA MODULES PATTERNED WITH pi FILTER GROUPS
FIELD OF THE INVENTION
[0001] The present invention relates generally to digital cameras and more specifically to filter patterns utilized in camera modules of array cameras.
BACKGROUND OF THE INVENTION
[0002] Conventional digital cameras typically include a single focal plane with a lens stack. The focal plane includes an array of light sensitive pixels and is part of a sensor. The lens stack creates an optical channel that forms an image of a scene upon the array of light sensitive pixels in the focal plane. Each light sensitive pixel can generate image data based upon the light incident upon the pixel.
[0003] In a conventional color digital camera, an array of color filters is typically applied to the pixels in the focal plane of the camera's sensor. Typical color filters can include red, green and blue color filters. A demosaicing algorithm can be used to interpolate a set of complete red, green and blue values for each pixel of image data captured by the focal plane given a specific color filter pattern. One example of a camera color filter pattern is the Bayer filter pattern. The Bayer filter pattern describes a specific pattern of red, green and blue color filters that results in 50% of the pixels in a focal plane capturing green light, 25% capturing red light and 25% capturing blue light.
[0004] Conventional photography can be enhanced with an understanding of binocular imaging. Binocular viewing of a scene creates two slightly different images of the scene due to the different fields of view of each eye. These differences, referred to as binocular disparity (or parallax), provide information that can be used to calculate depth in the visual scene, providing a major means of depth perception. The impression of depth associated with stereoscopic depth perception can also be obtained under other conditions, such as when an observer views a scene with only one eye while moving. The observed parallax can be utilized to obtain depth information for objects in the scene. Similar principles in machine vision can be used to gather depth information.
[0005] For example, two cameras separated by a distance can take pictures of the same scene and the captured images can be compared by shifting the pixels of two or more images to find parts of the images that match. The amount an object shifts between different camera views is called the disparity, which is inversely proportional to the distance to the object. A disparity search that detects the shift of an object in multiple images can be used to calculate the distance to the object based upon the baseline distance between the cameras and the focal length of the cameras involved. The approach of using two or more cameras to generate stereoscopic three- dimensional images is commonly referred to as multi-view stereo.
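The pinhole-camera relation behind this passage, as a hedged illustration: for two parallel cameras with baseline B and focal length f, an object at depth Z appears with disparity d = B * f / Z, so depth can be recovered as Z = B * f / d. The numbers in the example are illustrative only.

```python
def depth_from_disparity(baseline_m, focal_length_px, disparity_px):
    """Depth in metres, given the baseline in metres and the focal length
    and disparity in pixels."""
    return baseline_m * focal_length_px / disparity_px

# e.g. a 10 mm baseline, a 1000-pixel focal length and a 4-pixel disparity
print(depth_from_disparity(0.010, 1000.0, 4.0))  # 2.5 metres
```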
[0006] When multiple images of a scene are captured from different perspectives and the scene includes foreground objects, the disparity in the location of the foreground object in each of the images results in portions of the scene behind the foreground object being visible in some but not all of the images. A pixel that captures image data concerning a portion of a scene, which is not visible in images captured of the scene from other viewpoints, can be referred to as an occluded pixel.
[0007] FIGS. 1A and 1B illustrate the principles of parallax and occlusion. FIG. 1A depicts the image 100 captured by a first camera having a first field of view, whereas FIG. 1B depicts the image 102 captured by a second adjacent camera having a second field of view. In the image 100 captured by the first camera, a foreground object 104 appears slightly to the right of the background object 106. However, in the image 102 captured by the second camera, the foreground object 104 appears shifted to the left hand side of the background object 106. The disparity introduced by the different fields of view of the two cameras is equal to the difference between the location of the foreground object 104 in the image captured by the first camera (indicated in the image captured by the second camera by ghost lines 108) and its location in the image captured by the second camera. The distance from the two cameras to the foreground object can be obtained by determining the disparity of the foreground object in the two captured images, and this is described in U.S. Patent Application Serial No. 61/780,906, entitled "Systems and Methods for Parallax Detection and Correction in Images Captured Using Array Cameras." The disclosure of U.S. Patent Application Serial No. 61/780,906 is incorporated by reference herein in its entirety.
[0008] Additionally, referring to FIGS. 1A and 1B, when the viewpoint of the second camera, the field of view of which is depicted in FIG. 1B, is selected as a reference viewpoint, the pixels contained within the ghost lines 108 in the image 102 can be considered to be occluded pixels (i.e. the pixels capture image data from a portion of the scene that is visible in the image 102 captured by the second camera and is not visible in the image 100 captured by the first camera). In the second image 102, the pixels of the foreground object 104 can be referred to as occluding pixels as they capture portions of the scene that occlude the pixels contained within the ghost lines 108 in the image 102. Due to the occlusion of the pixels contained within the ghost lines 108 in the second image 102, the distance from the camera to portions of the scene visible within the ghost lines 108 cannot be determined from the two images as there are no corresponding pixels in the image 100 shown in FIG. 1A.
SUMMARY OF THE INVENTION
[0009] Systems and methods in accordance with embodiments of the invention pattern array camera modules with π filter groups. In one embodiment, an array camera module includes: an M x N imager array including a plurality of focal planes, each focal plane including an array of light sensitive pixels; an M x N optic array of lens stacks, where each lens stack corresponds to a focal plane, and where each lens stack forms an image of a scene on its corresponding focal plane; where each pairing of a lens stack and its corresponding focal plane thereby defines a camera; where at least one row in the M x N array of cameras includes at least one red color camera, at least one green color camera, and at least one blue color camera; and where at least one column in the M x N array of cameras includes at least one red color camera, at least one green color camera, and at least one blue color camera.
[0010] In another embodiment, M and N are each greater than two and at least one of M and N is even; color filters are implemented within the cameras in the array camera module such that the array camera module is patterned with at least one π filter group including: a 3 x 3 array of cameras including: a reference camera at the center of the 3 x 3 array of cameras; two red color cameras located on opposite sides of the 3 x 3 array of cameras; two blue color cameras located on opposite sides of the 3 x 3 array of cameras; and four green color cameras surrounding the reference camera.
[0011] In yet another embodiment, each of the four green color cameras surrounding the reference camera is disposed at a corner location of the 3 x 3 array of cameras.
[0012] In still another embodiment, M is four; N is four; the first row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a blue color camera, a green color camera, and a red color camera; the second row of cameras of the 4 x 4 array camera module includes, in the following order, a red color camera, a green color camera, a red color camera, and a green color camera; the third row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a blue color camera, a green color camera, and a blue color camera; and the fourth row of cameras of the 4 x 4 array camera module includes, in the following order, a blue color camera, a green color camera, a red color camera, and a green color camera.
[0013] In an even further embodiment, M is four; N is four; the first row of cameras of the 4 x 4 array camera module includes, in the following order, a red color camera, a green color camera, a blue color camera, and a green color camera; the second row of cameras of the 4 x 4 array camera module includes, in the following order a green color camera, a red color camera, a green color camera, and a red color camera; the third row of cameras of the 4 x 4 array camera module includes, in the following order, a blue color camera, a green color camera, a blue color camera, and a green color camera; and the fourth row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a red color camera, a green color camera, and a blue color camera.
[0014] In still another embodiment, the reference camera is a green color camera.
[0015] In still yet another embodiment, the reference camera is one of: a camera that incorporates a Bayer filter, a camera that is configured to capture infrared light, and a camera that is configured to capture ultraviolet light.
[0016] In a still yet further embodiment, each of the two red color cameras is located at a corner location of the 3 x 3 array of cameras, and each of the two blue color cameras is located at a corner location of the 3 x 3 array of cameras.
[0017] In another embodiment, at least one color filter is implemented on the imager array.
[0018] In a further embodiment, at least one color filter is implemented on a lens stack.
[0019] In another embodiment, a 3 x 3 array camera module includes: a 3 x 3 imager array including a 3 x 3 arrangement of focal planes, each focal plane including an array of light sensitive pixels; a 3 x 3 optic array of lens stacks, where each lens stack corresponds to a focal plane, and where each lens stack forms an image of a scene on its corresponding focal plane; where each pairing of a lens stack and its corresponding focal plane thereby defines a camera; where the 3 x 3 array of cameras includes: a reference camera at the center of the 3 x 3 array of cameras; two red color cameras located on opposite sides of the 3 x 3 array of cameras; two blue color cameras located on opposite sides of the 3 x 3 array of cameras; and four green color cameras, each located at a corner location of the 3 x 3 array of cameras; where each of the color cameras is achieved using a color filter.
[0020] In a further embodiment, at least one color filter is implemented on the imager array to achieve a color camera.
[0021] In a still yet further embodiment, at least one color filter is implemented within a lens stack to achieve a color camera.
[0022] In yet another embodiment, the reference camera is a green color camera.
[0023] In an even further embodiment, the reference camera is one of: a camera that incorporates a Bayer filter, a camera that is configured to capture infrared light, and a camera that is configured to capture ultraviolet light.
[0024] In another embodiment, a method of patterning an array camera module with at least one π filter group includes: evaluating whether an imager array of M x N focal planes, where each focal plane comprises an array of light sensitive pixels, includes any defective focal planes; assembling an M x N array camera module using: the imager array of M x N focal planes; an M x N optic array of lens stacks, where each lens stack corresponds with a focal plane, where the M x N array camera module is assembled so that: each lens stack and its corresponding focal plane define a camera; color filters are implemented within the array camera module such that the array camera module is patterned with at least one π filter group including: a 3 x 3 array of cameras including: a reference camera at the center of the 3 x 3 array of cameras; two red color cameras located on opposite sides of the 3 x 3 array of cameras; two blue color cameras located on opposite sides of the 3 x 3 array of cameras; and four green color cameras surrounding the reference camera; and where the array camera module is patterned with the at least one π filter group such that a camera that includes a defective focal plane is a green color camera.
[0025] In a further embodiment, at least one color filter is implemented on the imager array.
[0026] In a still further embodiment, at least one color filter is implemented within a lens stack.
[0027] In an even further embodiment, the reference camera is a green color camera.
[0028] In still yet another embodiment, the reference camera is one of: a camera that incorporates a Bayer filter, a camera that is configured to capture infrared light, and a camera that is configured to capture ultraviolet light.
[0029] In another embodiment, an array camera module includes: an imager array comprising M x N focal planes, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each active focal plane is contained within a region of the imager array that does not contain pixels from another focal plane; an optic array of M x N lens stacks, where an image is formed on each focal plane by a separate lens stack in the optic array of lens stacks; wherein the imager array and the optic array of lens stacks form an M x N array of cameras that are configured to independently capture an image of a scene; where at least one row in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera; and where at least one column in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera.
[0030] In yet another embodiment, the red color camera is a camera that captures image data including electromagnetic waves having a wavelength within the range of 620 nm and 750 nm; the green color camera is a camera that captures image data including electromagnetic waves having a wavelength within the range of 495 nm and 570 nm; and the blue color camera is a camera that captures image data including electromagnetic waves having a wavelength within the range of 450 nm and 495 nm.
[0031] In still another embodiment, the optics of each camera within the array camera module are configured so that each camera has a field of view of a scene that is shifted with respect to the fields of view of the other cameras so that each shift of the field of view of each camera with respect to the fields of view of the other cameras is configured to include a unique sub-pixel shifted view of the scene.
[0032] In a further embodiment, M and N are each greater than two and at least one of M and N is even; color filters are implemented within the cameras in the array camera module such that the array camera module is patterned with at least one π filter group including: a 3 x 3 array of cameras including: a reference camera at the center of the 3 x 3 array of cameras; two red color cameras located on opposite sides of the 3 x 3 array of cameras; two blue color cameras located on opposite sides of the 3 x 3 array of cameras; and four green color cameras surrounding the reference camera.
[0033] In a yet further embodiment, each of the four green color cameras surrounding the reference camera is disposed at a corner location of the 3 x 3 array of cameras.
[0034] In a yet still further embodiment, M is four; N is four; the first row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a blue color camera, a green color camera, and a red color camera; the second row of cameras of the 4 x 4 array camera module includes, in the following order, a red color camera, a green color camera, a red color camera, and a green color camera; the third row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a blue color camera, a green color camera, and a blue color camera; and the fourth row of cameras of the 4 x 4 array camera module includes, in the following order, a blue color camera, a green color camera, a red color camera, and a green color camera.
[0035] In an even further embodiment, M is four; N is four; the first row of cameras of the 4 x 4 array camera module includes, in the following order, a red color camera, a green color camera, a blue color camera, and a green color camera; the second row of cameras of the 4 x 4 array camera module includes, in the following order a green color camera, a red color camera, a green color camera, and a red color camera; the third row of cameras of the 4 x 4 array camera module includes, in the following order, a blue color camera, a green color camera, a blue color camera, and a green color camera; and the fourth row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a red color camera, a green color camera, and a blue color camera.
[0036] In another embodiment, the reference camera within the at least one π filter group is a green color camera.
[0037] In a further embodiment, the reference camera within the at least one π filter group is a camera that incorporates a Bayer filter.
[0038] In a still further embodiment, the reference camera is one of: a camera that incorporates a Bayer filter, a camera that is configured to capture infrared light, and a camera that is configured to capture ultraviolet light.
[0039] In yet another embodiment, each of the two red color cameras is located at a corner location of the 3 x 3 array of cameras, and wherein each of the two blue color cameras is located at a corner location of the 3 x 3 array of cameras.
[0040] In still yet another embodiment, at least one color filter is implemented on the imager array.
[0041] In a yet further embodiment, at least one color filter is implemented on a lens stack.
[0042] In another embodiment, a 3 x 3 array camera module includes: a 3 x 3 imager array including a 3 x 3 arrangement of focal planes, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each active focal plane is contained within a region of the imager array that does not contain pixels from another focal plane; a 3 x 3 optic array of lens stacks, where an image is formed on each focal plane by a separate lens stack in the optic array of lens stacks; where the imager array and the optic array of lens stacks form a 3 x 3 array of cameras that are configured to independently capture an image of a scene; where the 3 x 3 array of cameras includes: a reference camera at the center of the 3 x 3 array of cameras; two red color cameras located on opposite sides of the 3 x 3 array of cameras; two blue color cameras located on opposite sides of the 3 x 3 array of cameras; and four green color cameras, each located at a corner location of the 3 x 3 array of cameras; where each of the color cameras is achieved using a color filter.
[0043] In yet another embodiment, at least one color filter is implemented on the imager array to achieve a color camera.
[0044] In still yet another embodiment, at least one color filter is implemented within a lens stack to achieve a color camera.
[0045] In a further embodiment, the reference camera is a green color camera.
[0046] In a yet further embodiment, the reference camera is one of: a camera that incorporates a Bayer filter, a camera that is configured to capture infrared light, and a camera that is configured to capture ultraviolet light.
[0047] In another embodiment, an array camera module includes: an imager array comprising M x N focal planes, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each active focal plane is contained within a region of the imager array that does not contain pixels from another focal plane; an optic array of M x N lens stacks, where an image is formed on each focal plane by a separate lens stack in the optic array of lens stacks; wherein the imager array and the optic array of lens stacks form an M x N array of cameras that are configured to independently capture an image of a scene; and wherein at least either one row or one column in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera.
[0048] In yet another embodiment, M is three; N is three; the first row of cameras of the 3 x 3 array camera module includes, in the following order, a blue color camera, a green color camera, and a green color camera; the second row of cameras of the 3 x 3 array camera module includes, in the following order a red color camera, a green color camera, and a red color camera; and the third row of cameras of the 3 x 3 array camera module includes, in the following order, a green color camera, a green color camera, and a blue color camera.
[0049] In still yet another embodiment, M is three; N is three; the first row of cameras of the 3 x 3 array camera module includes, in the following order, a red color camera, a green color camera, and a green color camera; the second row of cameras of the 3 x 3 array camera module includes, in the following order a blue color camera, a green color camera, and a blue color camera; and the third row of cameras of the 3 x 3 array camera module includes, in the following order, a green color camera, a green color camera, and a red color camera.
[0050] In another embodiment, an array camera includes: an array camera module, including: an imager array comprising M x N focal planes, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each active focal plane is contained within a region of the imager array that does not contain pixels from another focal plane; an optic array of M x N lens stacks, where an image is formed on each focal plane by a separate lens stack in the optic array of lens stacks; where the imager array and the optic array of lens stacks form an M x N array of cameras that are configured to independently capture an image of a scene; where at least one row in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera; and where at least one column in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera; and a processor that includes an image processing pipeline, the image processing pipeline including: a parallax detection module; and a super-resolution module; where the parallax detection module is configured to obtain a reference low resolution image of a scene and at least one alternate view image of the scene from the camera module; where the parallax detection module is configured to compare the reference image and the at least one alternate view image to determine a depth map and an occlusion map for the reference image; and where the super-resolution module is configured to synthesize a high resolution image using at least the reference image, the depth map, the occlusion map and the at least one alternate view image.
BRIEF DESCRIPTION OF THE DRAWINGS
[0051] FIGS. 1A and 1B illustrate the principles of parallax and occlusion as they pertain to image capture, and which can be addressed in accordance with embodiments of the invention.
[0052] FIG. 2 illustrates an array camera with a camera module and processor in accordance with an embodiment of the invention.
[0053] FIG. 3 illustrates a camera module with an optic array and imager array in accordance with an embodiment of the invention.
[0054] FIG. 4 illustrates an image processing pipeline in accordance with an embodiment of the invention.
[0055] FIG. 5A conceptually illustrates a 3 x 3 camera module patterned with a π filter group where red cameras are arranged horizontally and blue cameras are arranged vertically in accordance with an embodiment of the invention.
[0056] FIG. 5B conceptually illustrates a 3 x 3 camera module patterned with a π filter group where red cameras are arranged vertically and blue cameras are arranged horizontally in accordance with an embodiment of the invention.
[0057] FIG. 5C conceptually illustrates a 3 x 3 camera module patterned with a π filter group where red cameras and blue cameras are arranged at the corner locations of the 3 x 3 camera module in accordance with an embodiment of the invention.
[0058] FIGS. 5D and 5E conceptually illustrate a number of 3 x 3 camera modules patterned with a π filter group.
[0059] FIG. 6 conceptually illustrates a 4 x 4 camera module patterned with two π filter groups in accordance with an embodiment of the invention.
[0060] FIG. 7 conceptually illustrates a 4 x 4 camera module patterned with two π filter groups with two cameras that could each act as a reference camera in accordance with an embodiment of the invention.
[0061] FIG. 8A illustrates a process for testing an imager array for defective focal planes to create a camera module that reduces the effect of any defective focal plane in accordance with an embodiment of the invention.
[0062] FIG. 8B conceptually illustrates a 4 x 4 camera module patterned with two π filter groups where a faulty focal plane causes a loss of red coverage around possible reference cameras.
[0063] FIG. 8C conceptually illustrates the 4 x 4 camera module patterned with a different arrangement of π filter groups relative to FIG. 8B where the faulty focal plane does not result in a loss of red coverage around possible reference cameras in accordance with an embodiment of the invention.
[0064] FIG. 9A conceptually illustrates use of a subset of cameras to produce a left virtual viewpoint for an array camera operating in 3D mode on a 4 x 4 camera module patterned with π filter groups in accordance with an embodiment of the invention.
[0065] FIG. 9B conceptually illustrates use of a subset of cameras to produce a right virtual viewpoint for an array camera operating in 3D mode on a 4 x 4 camera module patterned with π filter groups in accordance with an embodiment of the invention.
[0066] FIGS. 9C and 9D conceptually illustrate array camera modules that employ π filter groups to capture stereoscopic images with viewpoints that correspond to the viewpoints of reference cameras within the camera array.
[0067] FIG. 10 conceptually illustrates a 4 x 4 camera module patterned with π filter groups where nine cameras are utilized to capture image data used to synthesize frames of video in accordance with an embodiment of the invention.
[0068] FIG. 11 is a flow chart illustrating a process for generating color filter patterns including π filter groups in accordance with embodiments of the invention.
[0069] FIGS. 12A - 12D illustrate a process for generating a color filter pattern including π filter groups for a 5 x 5 array of cameras in accordance with embodiments of the invention.
[0070] FIGS. 13A - 13D illustrate a process for generating a color filter pattern including π filter groups for a 4 x 5 array of cameras in accordance with embodiments of the invention.
[0071] FIG. 14 illustrates a 7 x 7 array of cameras patterned using π filter groups in accordance with embodiments of the invention.
DETAILED DESCRIPTION
[0072] Turning now to the drawings, systems and methods for patterning array cameras with π filter groups in accordance with embodiments of the invention are illustrated. In many embodiments, camera modules of an array camera are patterned with one or more π filter groups. The term patterned here refers to the use of specific color filters in individual cameras within the camera module so that the cameras form a pattern of color channels within the array camera. The term color channel or color camera can be used to refer to a camera that captures image data within a specific portion of the spectrum and is not necessarily limited to image data with respect to a specific color. For example, a 'red color camera' is a camera that captures image data that corresponds with electromagnetic waves (i.e., within the electromagnetic spectrum) that humans conventionally perceive as red, and similarly for 'blue color cameras', 'green color cameras', etc. In other words, a red color camera may capture image data corresponding with electromagnetic waves having wavelengths of between approximately 620 nm and 750 nm; a green color camera may capture image data corresponding with electromagnetic waves having wavelengths of between approximately 495 nm and approximately 570 nm; and a blue color camera may capture image data corresponding with electromagnetic waves having wavelengths of between approximately 450 nm and 495 nm. In other embodiments, the portions of the visible light spectrum that are captured by blue color cameras, green color cameras and red color cameras can depend upon the requirements of a specific application. The term Bayer camera can be used to refer to a camera that captures image data using the Bayer filter pattern on the image plane. In many embodiments, a color channel can include a camera that captures infrared light, ultraviolet light, extended color and any other portion of the visible spectrum appropriate to a specific application. The term π filter group refers to a 3 x 3 group of cameras including a central camera and color cameras distributed around the central camera to reduce occlusion zones in each color channel. The central camera of a π filter group can be used as a reference camera when synthesizing an image using image data captured by an imager array. A camera is a reference camera when its viewpoint is used as the viewpoint of the synthesized image. The central camera of a π filter group is surrounded by color cameras in a way that minimizes occlusion zones for each color camera when the central camera is used as a reference camera. Occlusion zones are areas surrounding foreground objects not visible to cameras that are spatially offset from the reference camera due to the effects of parallax.
[0073] As is discussed further below, increasing the number of cameras capturing images of a scene from different viewpoints in complementary occlusion zones around the reference viewpoint increases the likelihood that every portion of the scene visible from the reference viewpoint is also visible from the viewpoint of at least one of the other cameras. When the array camera uses different cameras to capture different wavelengths of light (e.g. RGB), distributing at least one camera that captures each wavelength of light in the quadrants surrounding a reference viewpoint can significantly decrease the likelihood that a portion of the scene visible from the reference viewpoint will be occluded in every other image captured within a specific color channel. In a number of embodiments, a similar decrease in the likelihood that a portion of the scene visible from the reference viewpoint will be occluded in every other image captured within a specific color channel can be achieved using two cameras in the same color channel that are located on opposite sides of a reference camera or three cameras in each color channel that are distributed in three sectors around the reference camera. In other embodiments, cameras can be distributed in more than four sectors around the reference camera.
[0074] In several embodiments, the central camera of a π filter group is a green camera while in other embodiments the central camera captures image data from any appropriate portion of the spectrum. In a number of embodiments, the central camera is a Bayer camera (i.e. a camera that utilizes a Bayer filter pattern to capture a color image). In many embodiments, a π filter group is a 3 x 3 array of cameras with a green color camera at each corner and a green color camera at the center which can serve as the reference camera with a symmetrical distribution of red and blue cameras around the central green camera. The symmetrical distribution can include arrangements where either red color cameras are directly above and below the center green reference camera with blue color cameras directly to the left and right, or blue color cameras directly above and below the green center reference camera with red color cameras directly to the left and right.
[0075] Camera modules of dimensions greater than a 3 x 3 array of cameras can be patterned with π filter groups in accordance with many embodiments of the invention. In many embodiments, patterning a camera module with π filter groups enables an efficient distribution of cameras around a reference camera that reduces occlusion zones. In several embodiments, patterns of π filter groups can overlap with each other such that two overlapping π filter groups on a camera module share common cameras. When overlapping π filter groups do not span all of the cameras in the camera module, cameras that are not part of a π filter group can be assigned a color to reduce occlusion zones in the resulting camera array by distributing cameras in each color channel within each of a predetermined number of sectors surrounding a reference camera and/or multiple cameras that can act as reference cameras within the camera array.
[0076] In some embodiments, camera modules can be patterned with π filter groups such that either at least one row in the camera module or at least one column in the camera module includes at least one red color camera, at least one green color camera, and at least one blue color camera. In many embodiments, at least one row and at least one column of the array camera module include at least one red color camera, at least one green color camera, and at least one blue color camera. These arrangements can reduce instances of occlusion, as they result in the distribution of cameras that capture different wavelengths throughout the camera. Of course any suitable combination of cameras can be implemented using this scheme. For example, in several embodiments, at least one row and at least one column of the array camera module include at least one cyan color camera, at least one magenta color camera, and at least one yellow color camera (e.g. color cameras that correspond with the CMYK color model). In some embodiments, at least one row and at least one column of the array camera module include at least one red color camera, at least one yellow color camera, and at least one blue color camera (e.g. color cameras that correspond with the RYB color model).
[0077] Additionally, camera modules of an M x N dimension, where at least one of M and N is an even number, may also be patterned with π filter groups in accordance with many embodiments of the invention. These camera modules are distinct from an M x N camera module where both M and N are odd numbers insofar as, where at least one of M and N is even, none of the constituent cameras align with the center of the camera array. Conversely, where M and N are both odd, there is a camera that corresponds with the center of the camera array. For example, in the 3 x 3 camera module that employs a single π filter group, there is a central camera that corresponds with the center of the camera array. Cameras that align with the center of the camera array are typically selected as the reference camera of the camera module. Accordingly, where one of M and N is even, any suitable camera may be utilized as the reference camera of the camera module. Additionally, color cameras surrounding the reference camera need not be uniformly distributed but need only be distributed in a way that minimizes or reduces occlusion zones of each color from the perspective of the reference camera. Utilization of a reference camera in a π filter group to synthesize an image from captured image data can be significantly less computationally intensive than synthesizing an image using the same image data from a virtual viewpoint.
[0078] High quality images or video can be captured by an array camera including a camera module patterned with π filter groups utilizing a subset of cameras within the camera module (i.e. not requiring that all cameras on a camera module be utilized). Similar techniques can also
be used for efficient generation of stereoscopic 3D images utilizing image data captured by subsets of the cameras within the camera module.
[0079] Patterning camera modules with π filter groups also enables robust fault tolerance in camera modules with multiple π filter groups, as multiple possible reference cameras can be utilized if a reference camera begins to perform suboptimally. Patterning camera modules with π filter groups also allows for yield improvement in manufacturing camera modules, as the impact of a defective focal plane on a focal plane array can be minimized by simply changing the pattern of the color lens stacks in an optic array. Various π filter groups and the patterning of camera modules with π filter groups in accordance with embodiments of the invention are discussed further below.
Array Cameras
[0080] In many embodiments, an array camera includes a camera module and a processor. An array camera with a camera module patterned with π filter groups in accordance with an embodiment of the invention is illustrated in FIG. 2. The array camera 200 includes a camera module 202 as an array of individual cameras 204 where each camera 204 includes a focal plane with a corresponding lens stack. An array of individual cameras refers to a plurality of cameras in a particular arrangement, such as (but not limited to) the square arrangement utilized in the illustrated embodiment. The camera module 202 is connected 206 to a processor 208. In the illustrated embodiment, a camera 204 labeled as "R" refers to a red camera with a red filtered color channel, "G" refers to a green camera with a green filtered color channel and "B" refers to a blue camera with a blue filtered color channel. Although a specific array camera is illustrated in FIG. 2, any of a variety of different array camera configurations can be utilized in accordance with many different embodiments of the invention.
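Purely as an editorial illustration (not part of the disclosure), the R/G/B labeling convention of FIG. 2 can be captured in a minimal data structure. In the following Python sketch, the CameraModule class and its methods are hypothetical names; the example grid follows the 4 x 4 layout recited in claim 4 below.

```python
# Hypothetical sketch: a camera module modeled as a 2-D grid of color labels,
# mirroring the "R"/"G"/"B" camera notation of FIG. 2.
from dataclasses import dataclass
from typing import List

@dataclass
class CameraModule:
    grid: List[List[str]]  # each entry is "R", "G", or "B"

    def color_at(self, row: int, col: int) -> str:
        return self.grid[row][col]

    def count(self, color: str) -> int:
        return sum(row.count(color) for row in self.grid)

# A 4 x 4 module patterned with two overlapping pi filter groups (cf. FIG. 6).
module = CameraModule(grid=[
    ["G", "B", "G", "R"],
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
    ["B", "G", "R", "G"],
])
print(module.count("G"), module.count("R"), module.count("B"))  # -> 8 4 4
```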
Array Camera Modules
[0081] Array camera modules (or "camera modules") in accordance with embodiments of the invention can be constructed from an imager array or sensor including an array of focal planes and an optic array including a lens stack for each focal plane in the imager array. Sensors including multiple focal planes are discussed in U.S. Patent Application Serial No. 13/106,797 entitled "Architectures for System on Chip Array Cameras", to Pain et al., the disclosure of which is incorporated herein by reference in its entirety. Light filters can be used within each optical channel formed by the lens stacks in the optic array to enable different cameras within an array camera module to capture image data with respect to different portions of the electromagnetic spectrum.
[0082] A camera module in accordance with an embodiment of the invention is illustrated in FIG. 3. The camera module 300 includes an imager array 330 including an array of focal planes 340 along with a corresponding optic array 310 including an array of lens stacks 320. Within the array of lens stacks, each lens stack 320 creates an optical channel that forms an image of a scene on an array of light sensitive pixels within a corresponding focal plane 340. Each pairing of a lens stack 320 and focal plane 340 forms a single camera 204 within the camera module, and thereby an image is formed on each focal plane by a separate lens stack in the optic array of lens stacks. Each pixel within a focal plane 340 of a camera 204 generates image data that can be sent from the camera 204 to the processor 208. In many embodiments, the lens stack within each optical channel is configured so that pixels of each focal plane 340 sample the same object space or region within the scene. In several embodiments, the lens stacks are configured so that the pixels that sample the same object space do so with sub-pixel offsets to provide sampling diversity that can be utilized to recover increased resolution through the use of super-resolution processes. For example, the optics of each camera module can be configured so that the field of view of each camera is shifted with respect to the fields of view of the other cameras within the camera module, and each such shift includes a unique sub-pixel offset, so that each camera captures a unique sub-pixel shifted view of the scene.
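The sub-pixel sampling diversity described above can be illustrated with a toy example. The following sketch is a simplification under stated assumptions (a 1-D continuous scene, four cameras with known quarter-pixel offsets); all names are hypothetical.

```python
# Hypothetical sketch: four "cameras" sample the same 1-D scene at the same
# pixel pitch but with distinct sub-pixel offsets. Interleaving the shifted
# sample sets yields a denser sampling of the scene, which is the diversity
# that super-resolution processes exploit.
import math

def scene(x: float) -> float:
    return math.sin(2.0 * math.pi * x)        # toy continuous scene

PITCH = 0.1                                    # pixel pitch (arbitrary units)
OFFSETS = [0.0, 0.25, 0.5, 0.75]               # sub-pixel shift per camera

samples = {
    cam: [scene((i + off) * PITCH) for i in range(10)]
    for cam, off in enumerate(OFFSETS)
}
# Merging all samples in spatial order approximates a 4x denser sampling grid.
```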
[0083] In the illustrated embodiment, the focal planes are configured in a 5 x 5 array. Each focal plane 340 on the sensor is capable of capturing an image of the scene. Typically, each focal plane includes a plurality of rows of pixels that also forms a plurality of columns of pixels, and each focal plane is contained within a region of the imager that does not contain pixels from another focal plane. In many embodiments, image data capture and readout of each focal plane can be independently controlled. In other words, the imager array and the optic array of lens stacks form an array of cameras that can be configured to independently capture an image of a scene. In this way, image capture settings including (but not limited to) the exposure times and analog gains of pixels within a focal plane can be determined independently to enable image capture settings to be tailored based upon factors including (but not limited to) a specific color channel and/or a specific portion of the scene dynamic range. The sensor elements utilized in the focal planes can be individual light sensing elements such as, but not limited to, traditional CIS (CMOS Image Sensor) pixels, CCD (charge-coupled device) pixels, high dynamic range sensor elements, multispectral sensor elements and/or any other structure configured to generate an electrical signal indicative of light incident on the structure. In many embodiments, the sensor elements of each focal plane have similar physical properties and receive light via the same optical channel and color filter (where present). In other embodiments, the sensor elements have different characteristics and, in many instances, the characteristics of the sensor elements are related to the color filter applied to each sensor element.
[0084] In several embodiments, color filters in individual cameras can be used to pattern the camera module with π filter groups. These cameras can be used to capture data with respect to different colors, or a specific portion of the spectrum. In contrast to applying color filters to the pixels of the camera, color filters in many embodiments of the invention are included in the lens stack. For example, a green color camera can include a lens stack with a green light filter that allows green light to pass through the optical channel. In many embodiments, the pixels in each focal plane are the same and the light information captured by the pixels is differentiated by the color filters in the corresponding lens stack for each focal plane. Although a specific construction of a camera module with an optic array including color filters in the lens stacks is described above, camera modules including π filter groups can be implemented in a variety of ways including (but not limited to) by applying color filters to the pixels of the focal planes of the camera module similar to the manner in which color filters are applied to the pixels of a conventional color camera. In several embodiments, at least one of the cameras in the camera module can include uniform color filters applied to the pixels in its focal plane. In many embodiments, a Bayer filter pattern is applied to the pixels of one of the cameras in a camera module. In a number of embodiments, camera modules are constructed in which color filters are utilized in both the lens stacks and on the pixels of the imager array.
[0085] In several embodiments, an array camera generates image data from the multiple focal planes and uses a processor to synthesize one or more images of a scene. In certain embodiments, the image data captured by a single focal plane in the sensor array can constitute a low resolution image, or an "LR image" (the term low resolution here is used only to contrast with higher resolution images or super-resolved images, alternatively a "HR image" or "SR image"), which the processor can use in combination with other low resolution image data captured by the camera module to construct a higher resolution image through Super Resolution processing. Super Resolution processes that can be used to synthesize high resolution images using low resolution images captured by an array camera are discussed in U.S. Patent Application No. 12/967,807 entitled "Systems and Methods for Synthesizing High Resolution Images Using Super-Resolution Processes", filed December 14, 2010, the disclosure of which is hereby incorporated by reference in its entirety.
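The fusion step can be suggested with a minimal shift-and-add sketch. This is not the super-resolution process of U.S. Patent Application No. 12/967,807, which is considerably more sophisticated; it only illustrates, under the assumption of known per-image sub-pixel shifts, how registered LR samples populate a denser HR grid. All names are hypothetical.

```python
# Hypothetical sketch of shift-and-add fusion: place registered LR pixels
# onto a higher-resolution grid according to their sub-pixel shifts, then
# average wherever samples landed.
import numpy as np

def shift_and_add(lr_images, subpixel_shifts, factor=2):
    """lr_images: list of equally sized 2-D arrays; subpixel_shifts: per-image
    (dy, dx) offsets in LR pixel units, known from registration."""
    h, w = lr_images[0].shape
    acc = np.zeros((h * factor, w * factor))
    cnt = np.zeros_like(acc)
    for img, (dy, dx) in zip(lr_images, subpixel_shifts):
        ys = np.clip(np.arange(h) * factor + int(round(dy * factor)), 0, h * factor - 1)
        xs = np.clip(np.arange(w) * factor + int(round(dx * factor)), 0, w * factor - 1)
        acc[np.ix_(ys, xs)] += img
        cnt[np.ix_(ys, xs)] += 1
    return acc / np.maximum(cnt, 1)   # average where LR samples landed
```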
[0086] Although specific imager array configurations are disclosed above, any of a variety of regular or irregular layouts of imagers including imagers that sense visible light, portions of the visible light spectrum, near-IR light, other portions of the spectrum and/or combinations of different portions of the spectrum can be utilized to capture LR images that provide one or more channels of information for use in SR processes in accordance with embodiments of the invention. The processing of captured LR images is discussed further below.
Image Processing Pipelines
[0087] The processing of LR images to obtain an SR image in accordance with embodiments of the invention typically occurs in an array camera's image processing pipeline. In many embodiments, the image processing pipeline performs processes that register the LR images prior to performing SR processes on the LR images. In several embodiments, the image processing pipeline also performs processes that eliminate problem pixels and compensate for parallax.
[0088] An image processing pipeline incorporating a SR module for fusing information from LR images to obtain a synthesized HR image in accordance with an embodiment of the invention is illustrated in FIG. 4. In the illustrated image processing pipeline 400, pixel information is read out from focal planes 340 and is provided to a photometric conversion module 402 for photometric normalization. The photometric conversion module can perform any of a variety of photometric image processing processes including but not limited to one or more of photometric normalization, Black Level calculation and adjustments, vignetting correction, and lateral color correction. In several embodiments, the photometric conversion module also performs temperature normalization. In the illustrated embodiment, the inputs of the photometric normalization module are photometric calibration data 401 and the captured LR images. The photometric calibration data is typically captured during an offline calibration process. The output of the photometric conversion module 402 is a set of photometrically normalized LR images. These photometrically normalized images are provided to a parallax detection module 404 and to a super-resolution module 406.
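As one hedged illustration of the normalization step, the sketch below applies a black-level subtraction and a flat-field (vignetting) gain derived from offline calibration data. The function name and calibration format are assumptions for illustration, not the module's actual interface.

```python
# Hypothetical sketch of photometric normalization: subtract a per-camera
# black level and correct vignetting with a calibration-derived gain map.
import numpy as np

def photometric_normalize(raw, black_level, flat_field_gain):
    """raw: H x W capture; black_level: scalar from calibration;
    flat_field_gain: H x W map (e.g. inverse of a normalized white capture)."""
    corrected = (raw.astype(np.float32) - black_level) * flat_field_gain
    return np.clip(corrected, 0.0, None)   # clamp negative noise to zero
```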
[0089] Prior to performing SR processing, the image processing pipeline detects parallax that becomes more apparent as objects in the scene captured by the imager array approach the imager array. In the illustrated embodiment, parallax (or disparity) detection is performed using the parallax detection module 404. In several embodiments, the parallax detection module 404 generates an occlusion map for the occlusion zones around foreground objects. In many embodiments, the occlusion maps are binary maps created for pairs of LR imagers. In many embodiments, occlusion maps are generated to illustrate whether a point in the scene is visible in the field of view of a reference LR imager and whether points in the scene visible within the field of view of the reference imager are visible in the field of view of other imagers. As discussed above, the use of π filter groups can increase the likelihood that a pixel visible in a reference LR image is visible (i.e. not occluded) in at least one other LR image. In order to determine parallax, the parallax detection module 404 performs scene independent geometric corrections to the photometrically normalized LR images using geometric calibration data 408 obtained via an address conversion module 410. The parallax detection module 404 can then compare the geometrically and photometrically corrected LR images to detect the presence of scene dependent geometric displacements between LR images. Information concerning these scene dependent geometric displacements can be referred to as parallax information and can be provided to the super-resolution module 406 in the form of scene dependent parallax corrections and occlusion maps. As will be discussed in greater detail below, parallax information can also include generated depth maps, which can also be provided to the super-resolution module 406. Geometric calibration (or scene-independent geometric correction) data 408 can be generated using an offline calibration process or a subsequent recalibration process. The scene-independent correction information, along with the scene-dependent geometric correction information (parallax) and occlusion maps, form the geometric correction information for the LR images.

[0090] Once the parallax information has been generated, the parallax information and the photometrically normalized LR images are provided to the super-resolution module 406 for use in the synthesis of one or more HR images 420. In many embodiments, the super-resolution module 406 performs scene independent and scene dependent geometric corrections (i.e. geometric corrections) using the parallax information and geometric calibration data 408 obtained via the address conversion module 410. The photometrically normalized and geometrically registered LR images are then utilized in the synthesis of an HR image. The synthesized HR image may then be fed to a downstream color processing module 412, which can be implemented using any standard color processing module configured to perform color correction and/or chroma level adjustment. In several embodiments, the color processing module performs operations including but not limited to one or more of white balance, color correction, gamma correction, and RGB to YUV correction.
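A binary occlusion map of the kind described in [0089] can be suggested with the following sketch, which assumes a purely horizontal baseline and reduces visibility testing to a similarity threshold at a given disparity; real parallax detection modules are more elaborate, and all names here are hypothetical.

```python
# Hypothetical sketch: a binary occlusion map for a (reference, alternate)
# imager pair, marking whether each point visible in the reference view
# appears similar at its expected location in the alternate view.
import numpy as np

def binary_occlusion_map(reference, alternate, disparity, threshold=0.1):
    """disparity: per-pixel horizontal shift (in pixels) of the alternate
    view relative to the reference view."""
    h, w = reference.shape
    xs = np.clip(np.arange(w)[None, :] + disparity.astype(int), 0, w - 1)
    warped = alternate[np.arange(h)[:, None], xs]
    return np.abs(reference - warped) > threshold   # True = likely occluded
```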
[0091] In a number of embodiments, image processing pipelines in accordance with embodiments of the invention include a dynamic refocus module. The dynamic refocus module enables the user to specify a focal plane within a scene for use when synthesizing an HR image. In several embodiments, the dynamic refocus module builds an estimated HR depth map for the scene. The dynamic refocus module can use the HR depth map to blur the synthesized image so that portions of the scene that do not lie on the focal plane appear out of focus. In many embodiments, the SR processing is limited to pixels lying on the focal plane and within a specified Z-range around the focal plane.
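A crude version of depth-guided defocus can be sketched as follows. This assumes a scalar depth map aligned to the synthesized image and uses a box blur per depth band; the actual dynamic refocus module is not disclosed at this level of detail, so the code and its names are illustrative only.

```python
# Hypothetical sketch: blur pixels in proportion to their depth distance from
# a user-selected focal plane. Bands further from the focal plane receive a
# wider box blur, making them appear progressively more out of focus.
import numpy as np
from scipy.ndimage import uniform_filter

def dynamic_refocus(image, depth_map, focal_depth, band_width=0.1, max_radius=7):
    out = image.astype(np.float32).copy()
    for r in range(1, max_radius + 1):
        blurred = uniform_filter(image.astype(np.float32), size=2 * r + 1)
        band = np.abs(depth_map - focal_depth) > r * band_width
        out[band] = blurred[band]   # larger depth offsets keep wider blurs
    return out
```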
[0092] In several embodiments, the synthesized high resolution image 420 is encoded using any of a variety of standards-based or proprietary encoding processes including but not limited to encoding the image in accordance with the JPEG standard developed by the Joint Photographic Experts Group. The encoded image can then be stored in accordance with a file format appropriate to the encoding technique used including but not limited to the JPEG Interchange Format (JIF), the JPEG File Interchange Format (JFIF), or the Exchangeable image file format (Exif).
[0093] Processing pipelines similar to the processing pipeline illustrated in FIG. 4 that can also be utilized in an array camera in accordance with embodiments of the invention are described in PCT Publication WO 2009/151903. Although a specific image processing pipeline is described above, super-resolution processes in accordance with embodiments of the invention can be used within any of a variety of image processing pipelines that register the LR images prior to super-resolution processing in accordance with embodiments of the invention.
[0094] As alluded to above, parallax information can be used to generate depth maps as well as occlusion maps, and this is discussed below.
Using Disparity To Generate Depth Maps in Array Cameras
[0095] Array cameras in accordance with many embodiments of the invention use disparity observed in images captured by the array cameras to generate a depth map. A depth map is typically regarded as being a layer of meta-data concerning an image (often a reference image captured by a reference camera) that describes the distance from the camera to specific pixels or groups of pixels within the image (depending upon the resolution of the depth map relative to the resolution of the original input images). Array cameras in accordance with a number of embodiments of the invention use depth maps for a variety of purposes including (but not limited to) generating scene dependent geometric shifts during the synthesis of a high resolution image and/or performing dynamic refocusing of a synthesized image.
[0096] Based upon the discussion of disparity above, the process of determining the depth of a portion of a scene based upon pixel disparity is theoretically straightforward. When the viewpoint of a specific camera in the array camera is chosen as a reference viewpoint, the distance to a portion of the scene visible from the reference viewpoint can be determined using the disparity between the corresponding pixels in some or all of the other images captured by the camera array (often referred to as alternate view images). In the absence of occlusions, a pixel corresponding to a pixel in the reference image captured from the reference viewpoint will be located in each alternate view image along an epipolar line (i.e. a line parallel to the baseline vector between the two cameras). The distance along the epipolar line of the disparity corresponds to the distance between the camera and the portion of the scene captured by the pixels. Therefore, by comparing the pixels in the captured reference image and alternate view image(s) that are expected to correspond at a specific depth, a search can be conducted for the depth that yields the pixels having the highest degree of similarity. The depth at which the corresponding pixels in the reference image and the alternate view image(s) have the highest degree of similarity can be selected as the most likely distance between the camera and the portion of the scene captured by the pixel.

[0097] Many challenges exist, however, in determining an accurate depth map using the method outlined above. In several embodiments, the cameras in an array camera are similar but not the same. Therefore, image characteristics including (but not limited to) optical characteristics, different sensor characteristics (such as variations in sensor response due to offsets, different transmission or gain responses, non-linear characteristics of pixel response), noise in the captured images, and/or warps or distortions arising from manufacturing tolerances in the assembly process can vary between the images, reducing the similarity of corresponding pixels in different images. In addition, super-resolution processes rely on sampling diversity in the images captured by an imager array in order to synthesize higher resolution images. However, increasing sampling diversity can also involve decreasing similarity between corresponding pixels in captured images in a light field. Given that the process for determining depth outlined above relies upon the similarity of pixels, the presence of photometric differences and sampling diversity between the captured images can reduce the accuracy with which a depth map can be determined.
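The search described in [0096] can be sketched as a plane sweep. The code below assumes purely horizontal baselines (so the epipolar lines are image rows), a pinhole disparity model d = f·b/z, and a sum-of-absolute-differences similarity measure; none of these choices are mandated by the disclosure, and all names are hypothetical.

```python
# Hypothetical sketch: for each candidate depth, shift each alternate view
# along its (assumed horizontal) epipolar line and score similarity against
# the reference view; the best-scoring depth wins per pixel.
import numpy as np

def depth_search(reference, alternates, baselines, depths, focal=1.0):
    """reference: H x W array; alternates: list of H x W arrays; baselines:
    per-alternate baseline lengths; depths: candidate depths to sweep."""
    h, w = reference.shape
    best_cost = np.full((h, w), np.inf)
    best_depth = np.zeros((h, w))
    for z in depths:
        cost = np.zeros((h, w))
        for alt, b in zip(alternates, baselines):
            d = int(round(focal * b / z))        # disparity for this depth
            shifted = np.roll(alt, d, axis=1)    # shift along the epipolar line
            cost += np.abs(reference - shifted)  # SAD similarity measure
        better = cost < best_cost
        best_cost[better] = cost[better]
        best_depth[better] = z
    return best_depth
```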
[0098] The generation of a depth map is further complicated by occlusions. As discussed above, an occlusion occurs when a pixel that is visible from the reference viewpoint is not visible in one or more of the captured images. The effect of an occlusion is that at the correct depth, the pixel location that would otherwise be occupied by a corresponding pixel is occupied by a pixel sampling another portion of the scene (typically an object closer to the camera). The occluding pixel is often very different from the occluded pixel. Therefore, a comparison of the similarity of the pixels at the correct depth is less likely to result in a significantly higher degree of similarity than at other depths. Effectively, the occluding pixel acts as a strong outlier masking the similarity of those pixels which in fact correspond at the correct depth. Accordingly, the presence of occlusions can introduce a strong source of error into a depth map. Furthermore, use of π filter groups to increase the likelihood that a pixel visible in an image captured by a reference camera is visible in alternate view images captured by other cameras within the array can reduce error in a depth map generated in the manner described above.
[0099] Processes for generating depth maps in accordance with many embodiments of the invention attempt to reduce sources of error that can be introduced into a depth map by sources including (but not limited to) those outlined above. For example, U.S. Patent Application Serial No. 61/780,906, entitled "Systems and Methods for Parallax Detection and Correction in Images Captured Using Array Cameras" discloses such processes. As already stated above, the disclosure of U.S. Patent Application Serial No. 61/780,906 is incorporated by reference herein in its entirety. Additionally, as noted above, use of π filter groups can significantly decrease the likelihood that a pixel visible from the viewpoint of a reference camera is occluded within all cameras within a color channel. Many different array cameras are capable of utilizing π filter groups in accordance with embodiments of the invention. Camera modules utilizing π filter groups in accordance with embodiments of the invention are described in further detail below.
Patterning with π Filter Groups
[00100] Camera modules can be patterned with π filter groups in accordance with embodiments of the invention. In several embodiments, π filter groups utilized as part of a camera module can each include a central camera that can function as a reference camera surrounded by color cameras in a way that reduces occlusion zones for each color. In certain embodiments, the camera module is arranged in a rectangular format utilizing the RGB color model where a reference camera is a green camera surrounded by red, green and blue cameras. In several embodiments, a number of green cameras that is twice the number of red cameras and twice the number of blue cameras surround the reference camera. In many embodiments, red color cameras and blue color cameras are located in opposite positions on the 3 x 3 array of cameras. Of course, any set of colors from any color model can be utilized to detect a useful range of colors in addition to the RGB color model, such as the cyan, magenta, yellow and key (CMYK) color model or red, yellow and blue (RYB) color model.
[00101] In several embodiments, two π filter groups can be utilized in the patterning of a camera module when the RGB color model is used. One π filter group is illustrated in FIG. 5A and the other π filter group is illustrated in FIG. 5B. Either of these π filter groups can be used to pattern any camera module with dimensions greater than a 3 x 3 array of cameras.
[00102] In embodiments with a 3 x 3 camera module, patterning of the camera module with a π filter group includes only a single π filter group. A π filter group on a 3 x 3 camera module in accordance with an embodiment of the invention is illustrated in FIG. 5A. The π filter group 500 includes a green camera at each corner, a green reference camera in the center notated within a box 502, blue cameras above and below the reference camera, and red cameras to the left and right sides of the reference camera. In this configuration, the number of green cameras surrounding the central reference camera is twice the number of red cameras and twice the number of blue cameras. In addition, red cameras are located in opposite locations relative to the center of the 3 x 3 array of cameras to reduce occlusions. Similarly, blue cameras are located in opposite locations relative to the center of the 3 x 3 array of cameras to reduce occlusions. An alternative to the π filter group described in FIG. 5A is illustrated in FIG. 5B in accordance with an embodiment of the invention. This π filter group also includes green cameras at the corners with a green reference camera 552 at the center, as denoted with a box. However, unlike FIG. 5A, the red cameras shown in FIG. 5B are above and below, and the blue cameras are to the left and right side of the reference camera. As with the π filter group shown in FIG. 5A, the π filter group in FIG. 5B includes a central reference camera surrounded by a number of green cameras that is twice the number of red cameras and twice the number of blue cameras. As discussed above, the reference camera need not be a green camera. In several embodiments, the configurations in FIGS. 5A and 5B can be modified to include a central camera that employs a Bayer color filter. In other embodiments, the central camera is an infrared camera, a UV camera, an extended color camera, and/or any other type of camera appropriate to a specific application. In further embodiments, any of a variety of color cameras can be distributed around the reference camera in opposite locations in the 3 x 3 array relative to the reference camera in a manner that reduces occlusion zones with respect to each color channel.
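The two RGB π filter groups of FIGS. 5A and 5B can be written down as 3 x 3 grids, together with a check of the properties recited above (green corner cameras, a central reference camera, and red/blue pairs on opposite sides). The sketch and its function names are illustrative only.

```python
# Hypothetical sketch: the pi filter groups of FIGS. 5A and 5B as 3 x 3 grids,
# and a predicate verifying the recited structural properties.
PI_5A = [["G", "B", "G"],
         ["R", "G", "R"],
         ["G", "B", "G"]]

PI_5B = [["G", "R", "G"],
         ["B", "G", "B"],
         ["G", "R", "G"]]

def is_pi_filter_group(g):
    corners_green = all(g[r][c] == "G" for r in (0, 2) for c in (0, 2))
    opposite_pairs = g[0][1] == g[2][1] and g[1][0] == g[1][2]
    counts = {c: sum(row.count(c) for row in g) for c in "RGB"}
    return corners_green and opposite_pairs and counts == {"R": 2, "G": 5, "B": 2}

assert is_pi_filter_group(PI_5A) and is_pi_filter_group(PI_5B)
```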
[00103] For example, FIG. 5C depicts an embodiment where green color cameras are located above, below, to the left, and to the right of the central camera, while red and blue color cameras are disposed at the corner locations of the π filter group. Note that in this embodiment, the first and third rows and columns each have a red, green, and blue color filter, and this arrangement can reduce instances of occlusions. However, the configuration shown in FIG. 5C can include slightly larger occlusion zones in the red and blue color channels compared with the embodiments illustrated in FIGS. 5A and 5B, because the red and blue color cameras are slightly further away from the central reference camera. FIGS. 5D and 5E depict embodiments where color cameras surround a central green camera such that the cameras in each color channel are located in opposite positions in a 3 x 3 array relative to the central reference camera. In this configuration, the blue or red color channel in which the cameras are in the corners of the 3 x 3 array is likely to have slightly larger occlusion zones than the blue or red color channel in which the cameras are located closer to the central reference camera (i.e. the cameras are not located in the corners). Of course, as mentioned above, the central reference camera can be any suitable camera, e.g. not just a green camera, in accordance with embodiments of the invention. Moreover, many embodiments are similar to those seen in FIGS. 5D and 5E, except that they utilize an arrangement that is the mirror image of those seen in FIGS. 5D and 5E. Similarly, numerous embodiments are similar to those seen in FIGS. 5D and 5E, except that they utilize an arrangement that is rotated with respect to those seen in FIGS. 5D and 5E.
[00104] Any camera module with dimensions at and above 3 x 3 cameras can be patterned with one or more π filter groups, where cameras not within a π filter group are assigned a color that reduces or minimizes the likelihood of occlusion zones within the camera module given color filter assignments of the π filter groups. A 4 x 4 camera module patterned with two π filter groups in accordance with an embodiment of the invention is illustrated in FIG. 6. The camera module 600 includes a first π filter group 602 of nine cameras centered on a reference green camera 604. A second π filter group 610 is diagonally located one camera shift to the lower right of the first π filter group. The second π filter group shares the four center cameras 612 of the camera module 600 with the first π filter group. However, the cameras serve different roles (i.e. different cameras act as reference cameras in the two π filter groups). As illustrated in FIG. 6, the two cameras at the corners 606 and 608 of the camera module are not included in the two π filter groups, 602 and 610. The color filters utilized within these cameras are determined based upon minimization of occlusion zones given the color filter assignments of the cameras that are part of the two π filter groups, 602 and 610. Due to the patterning of the π filter groups, there is an even distribution of blue color cameras around the reference camera, but there is no red color camera above the reference camera. Therefore, selecting the upper right corner camera 606 to be red provides red image data from a viewpoint above the reference camera and the likelihood of occlusion zones above and to the right of the foreground images in a scene for the reference camera 604 and the center camera of the second π filter group is minimized. Similarly, selecting the lower left corner camera 608 to be blue provides blue image data from a viewpoint to the left of the reference camera and the likelihood of occlusion zones below and to the left of the foreground images in a scene for the reference camera 604 and the center camera of the second π filter group is minimized. Thereby, a camera module with dimensions greater than 3 x 3 can be patterned with π filter groups with colors assigned to cameras not included in any π filter group to reduce and/or minimize occlusion zones as discussed above. As a result, the camera array includes at least one row and at least one column that contain a blue color camera, a green color camera, and a red color camera. Although specific π filter groups are discussed above, any of a variety of π filter groups can pattern a camera module in accordance with many different embodiments of the invention.
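One way to make the occlusion-zone reasoning of [00104] concrete is to check, per color channel, how many quadrants around the reference camera contain at least one camera of that color. The following sketch uses a coarse quadrant assignment and hypothetical names; it is a sanity check, not the disclosed assignment procedure.

```python
# Hypothetical sketch: count, for each color channel, the quadrants around a
# reference camera that contain at least one camera of that color.
def quadrant_coverage(grid, ref):
    ry, rx = ref
    cover = {c: set() for c in "RGB"}
    for y, row in enumerate(grid):
        for x, color in enumerate(row):
            if (y, x) == ref:
                continue
            quad = (y <= ry, x <= rx)   # coarse quadrant assignment
            cover[color].add(quad)
    return {c: len(q) for c, q in cover.items()}

grid = [["G", "B", "G", "R"],
        ["R", "G", "R", "G"],
        ["G", "B", "G", "B"],
        ["B", "G", "R", "G"]]
print(quadrant_coverage(grid, ref=(1, 1)))  # quadrants covered per channel (of 4)
```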
Multiple Reference Camera Options with Equivalent Performance
[00105] The use of multiple π filter groups to pattern a camera module in accordance with embodiments of the invention enables multiple cameras to be used as the reference camera with equivalent performance. A 4 x 4 camera module with two π filter groups in accordance with an embodiment of the invention is illustrated in FIG. 7. The camera module 700 includes two π filter groups 702, 706 where the central camera of each π filter group 704, 708 can act as a reference camera. Irrespective of the reference camera that is selected, the distribution of cameras around the reference camera is equivalent due to the use of π filter groups. Thereby, if a camera module 700 detects a defect with reference camera 704, the camera module 700 can switch to using the camera at the center of another π filter group as a reference camera 708 to avoid the defects of the first reference camera 704. Furthermore, patterning with π filter groups does not require that the reference camera or a virtual viewpoint be at the center of a camera module but rather that the reference camera is surrounded by color cameras in a way that reduces occlusion zones for each color. Although a specific camera module is discussed above, camera modules of any number of different dimensions can be utilized to create multiple reference camera options in accordance with embodiments of the invention.
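The failover behavior can be summarized in a few lines. The sketch below, with hypothetical names and coordinates, simply returns the first operable π filter group center as the reference camera.

```python
# Hypothetical sketch of reference-camera failover: if the active reference
# camera is flagged as defective, switch to the center of another pi filter
# group, which has an equivalent distribution of surrounding color cameras.
def select_reference(pi_group_centers, defective):
    for center in pi_group_centers:
        if center not in defective:
            return center
    raise RuntimeError("no operable reference camera available")

centers = [(1, 1), (2, 2)]   # centers of the two pi filter groups (cf. FIG. 7)
print(select_reference(centers, defective={(1, 1)}))  # -> (2, 2)
```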
Manufacturing Yield Improvement
[00106] Manufacturing processes inherently involve variations that can result in defects. In some instances the manufacturing defects may be severe enough to render an entire focal plane within an imager array inoperable. If the failure of the focal plane results in the discarding of the imager array, then the cost to manufacture array cameras is increased. Patterning camera modules with π filter groups can provide high manufacturing yield because the allocation of color filters in the optical channels of the optic array can be used to reduce the impact that a faulty focal plane has with respect to the creation of occlusion zones in the images synthesized using the image data captured by the array camera.
[00107] In many embodiments, the light sensed by the pixels in a focal plane of an imager array is determined by a color filter included in the optical channel that focuses light onto the focal plane. During manufacture, defects in a focal plane can be detected. When a defect is detected, the color filter pattern of the optical channels in the optic array can be determined so that the defective focal plane does not result in an increase in the size of occlusion zones. Typically, this means patterning camera modules with π filter groups in such a way that the presence of the defective focal plane does not reduce the number of red or blue cameras in the camera array (i.e. a filter pattern is used that results in a green channel being assigned to the defective focal plane, which reduces the number of green cameras in the camera array by one camera).
[00108] A process for detecting faulty focal planes before combining an optic array and imager array to create a camera module in accordance with embodiments of the invention is illustrated in FIG. 8A. In the illustrated process, the color filter patterns are patterned on the optic array and not on the pixels of the imager array. By manufacturing different types of optic arrays with different filter patterns, a process can systematically choose a specific optic array to force the faulty focal plane to pair with a color filter of a certain type to ensure that the size of the occlusion zones in a given color channel is reduced and/or minimized. The process 800 includes testing (802) an imager array for faulty focal planes. After testing (802) the imager array, a decision (804) is made as to whether a faulty focal plane is detected on the imager array. If a faulty focal plane is detected, then an optic array is selected based upon the location of the faulty focal plane (806). In many embodiments, an optic array is selected that reduces the effect of the faulty focal plane by assigning color filters to the operational focal planes in a way that minimizes the impact of the faulty focal plane on the creation of occlusion zones within images synthesized using image data captured by the imager array. Further discussion of selecting different optic arrays that reduce occlusion zones when there is a faulty focal plane is provided below with reference to FIGS. 8B and 8C. After selecting (806) an optic array based upon the location of the faulty focal plane, the selected optic array is combined (808) with the imager array to create a camera module. If a faulty focal plane is not detected, then any of a variety of optic arrays including filter patterns based on π filter groups can be combined (808) with the tested imager array to create a camera module. As is discussed further below, a typical process can use a default optic array including a first filter pattern based on π filter groups, and a second filter pattern based on π filter groups can be utilized when specific defects are detected that would result in the faulty focal plane reducing the number of color cameras (or even specific color cameras, such as color cameras around the outside of the camera module) in the camera module when the first filter pattern is used.
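The selection step (806) can be sketched as choosing, from a set of candidate filter patterns (for example a default pattern and its mirror image, cf. FIGS. 8B and 8C), one that places a green filter over the faulty focal plane. The candidate-pattern representation and function name are assumptions for illustration.

```python
# Hypothetical sketch of step (806): pick whichever candidate optic array
# assigns a green filter to the faulty focal plane, so that only the most
# numerous color channel loses a camera.
def select_optic_array(candidate_patterns, faulty):
    """candidate_patterns: list of 2-D color-label grids (e.g. the FIG. 8B
    pattern and its FIG. 8C mirror); faulty: (row, col) or None."""
    if faulty is None:
        return candidate_patterns[0]     # default pattern, no defect detected
    for pattern in candidate_patterns:
        if pattern[faulty[0]][faulty[1]] == "G":
            return pattern
    return candidate_patterns[0]         # no better option available
```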
[00109] The manner in which modifying color filter assignments can reduce the impact of a faulty focal plane is illustrated in FIGS. 8B and 8C. A camera module with a faulty red camera is illustrated in FIG. 8B. The camera module 820 includes a first π filter group 828 with a possible reference camera 822 at the center, a second π filter group 832 with a possible reference camera 830 at the center, and a faulty red camera 824 below both π filter groups 828 and 832. There is a lack of red image data below both the possible reference cameras 822 and 830 due to the faulty red camera. Therefore, red image data is unavailable from below the reference viewpoint irrespective of which of the two cameras at the center of a π filter group is chosen as the reference camera. Accordingly, combining an optic array including the filter pattern illustrated in FIG. 8B with an imager with the indicated faulty focal plane results in a defective red camera that prevents the capture of red color information below any reference camera, increasing the likelihood of occlusion zones below foreground objects. However, an optic array patterned using π filter groups in different locations can result in all of the blue and red color filters being assigned to cameras that are active. In this way, the faulty focal plane only impacts the number of green cameras and does so in a way that reduces the likelihood of occlusion zones in an image synthesized using the image data captured by the resulting camera module. Stated another way, yield can be improved under certain circumstances by combining the imager array that includes the faulty focal plane with an optic array that assigns the color filters of the active cameras based on π filter groups in a way that results in color information being captured around the reference camera in a way that minimizes the likelihood of occlusion zones given the location of the faulty focal plane.
[00110] A camera module with the faulty focal plane of FIG. 8B, but with an optic array patterned with π filter groups in such a way that the faulty focal plane does not reduce the capture of red or blue image data around the reference camera, is illustrated in FIG. 8C. Relative to the pattern of the optic array of FIG. 8B, the optic array of FIG. 8C is flipped along the center vertical bisecting axis 826 of the optic array and includes two π filter groups 828' and 832'. The lens stack associated with the faulty focal plane is green 854, as opposed to red 824 in FIG. 8B. As there are multiple green cameras below all possible reference cameras 852, 856 in FIG. 8C, the loss of a green camera 854 is less impactful than the loss of the red camera 824 in FIG. 8B. Therefore, the impact of faulty focal planes on an imager array can be reduced by combining the faulty imager array with an optic array specifically selected to assign color filters to the focal planes in the imager array in a manner that reduces the likelihood that the faulty focal plane will create an occlusion zone in any of the color channels captured by the resulting camera module. Although the example above discusses reducing red occlusion zones, the impact of a defective focal plane in any of the locations in an imager array can be similarly minimized by appropriate selection of a filter pattern based on π filter groups. Although specific examples of camera modules patterned with π filter groups to minimize yield loss due to faulty focal planes are described above, any of a variety of alternative color filter patterns including π filter groups can be utilized to increase manufacturing yield in accordance with embodiments of the invention.
Capturing Stereoscopic 3D Images
[00111] In many embodiments, Super Resolution processes can be used to synthesize high resolution images using low resolution images captured by an array camera including pairs of stereoscopic 3D images as disclosed in U.S. Patent Application No. 12/967,807 entitled "Systems and Methods for Synthesizing High Resolution Images Using Super-Resolution Processes", filed December 14, 2010, the disclosure of which is incorporated by reference above. Stereoscopic 3D image pairs are two images of a scene from spatially offset viewpoints that can be combined to create a 3D representation of the scene. The use of a filter pattern including π filter groups can enable the synthesis of stereoscopic 3D images in a computationally efficient manner. Image data captured by less than all of the cameras in the array camera can be used to synthesize each of the images that form the stereoscopic 3D image pair.
[00112] Patterning with π filter groups enables an efficient distribution of cameras around a reference camera that reduces occlusion zones and reduces the amount of image data captured by the camera module that is utilized to synthesize each of the images in a stereoscopic 3D image pair. In many embodiments, different subsets of the cameras are used to capture each of the images that form the stereoscopic 3D image pair and each of the subsets includes a π filter group. In many embodiments, the images that form the stereoscopic 3D image pair are captured from a virtual viewpoint that is slightly offset from the camera at the center of the π filter group. The central camera of a π filter group is surrounded by color cameras in a way that minimizes occlusion zones for each color camera when the central camera is used as a reference camera. When the virtual viewpoint is proximate the center of a π filter group, the benefits of the distribution of color cameras around the virtual viewpoint are similar.
[00113] A left virtual viewpoint for a stereoscopic 3D image pair captured using a camera module patterned using π filter groups is illustrated in FIG. 9A. The left virtual viewpoint 904 is taken from image data from the 12 circled cameras G1 - G3, G5 - G7, B1 - B2, B4, and R2 - R3 that form a 3 x 4 array. The virtual viewpoint is offset relative to the green camera G3, which is the center of a π filter group 906. A right virtual viewpoint used to capture the second image in the stereoscopic pair using the camera module shown in FIG. 7 is illustrated in FIG. 9B. The right virtual viewpoint 954 is taken from image data from the 12 circled cameras B1 - B3, G2 - G4, G6 - G8, R1, and R3 - R4 that form a 3 x 4 array. The virtual viewpoint is offset relative to the green camera G6, which is the center of a π filter group 956. Therefore, a single array camera can capture 3D images of a scene using image data from a subset of the cameras to synthesize each of the images that form the stereoscopic pair. By utilizing the image data captured by less than all of the cameras in the camera module, the computational complexity of generating the stereoscopic 3D image pair is reduced. In addition, the location of the viewpoints of each of the images proximate a camera that is the center of a π filter group reduces the likelihood of occlusion zones in the synthesized images.
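The subset selection can be suggested with a short sketch. The exact 3 x 4 subsets circled in FIGS. 9A and 9B are not reproduced here; the code below merely carves two overlapping 12-camera sub-arrays out of a 4 x 4 grid as a hypothetical stand-in.

```python
# Hypothetical sketch: select two overlapping 12-camera sub-arrays of a 4 x 4
# module to provide the image data for the left and right images of a
# stereoscopic 3D pair.
def subarray(grid, rows, cols):
    return [[grid[r][c] for c in cols] for r in rows]

grid = [["G", "B", "G", "R"],
        ["R", "G", "R", "G"],
        ["G", "B", "G", "B"],
        ["B", "G", "R", "G"]]

left_view  = subarray(grid, rows=range(4), cols=(0, 1, 2))  # 12 cameras
right_view = subarray(grid, rows=range(4), cols=(1, 2, 3))  # 12 cameras
```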
[00114] In several embodiments, the viewpoints need not be virtual viewpoints. In many embodiments, array camera modules can be constructed using π filter groups so that the viewpoints from which stereoscopic images are captured are reference viewpoints obtained from reference cameras within the camera array. For example, in some embodiments a 3 x 5 camera module is provided that includes two overlapping π filter groups. A 3 x 5 camera module that includes two overlapping π filter groups centered on each of two reference green color cameras is illustrated in FIG. 9C. In particular, the camera module 960 includes two overlapping π filter groups 962 and 964, each centered on one of two reference green color cameras 966 and 968 respectively. The two reference cameras 966 and 968 are used to provide the two reference viewpoints. In many embodiments, an array camera module is configured to capture stereoscopic images using non-overlapping π filter groups. A 3 x 6 array camera module that includes two non-overlapping π filter groups, which can be used to capture stereoscopic images, is illustrated in FIG. 9D. In particular, the array camera module 970 is similar to that seen in FIG. 9C, except that the two π filter groups 972 and 974 do not overlap. In the illustrated embodiment, as before, the two π filter groups 972 and 974 are each centered on one of two green color cameras 976 and 978 respectively. The two reference cameras 976 and 978 are used to provide the two reference viewpoints. The embodiment illustrated in FIG. 9D demonstrates that π filter groups having different arrangements of cameras within each π filter group can be utilized to pattern an array camera module in accordance with embodiments of the invention. The two π filter groups 972 and 974 use different 3 x 3 camera arrangements. Similarly, π filter groups incorporating different 3 x 3 arrangements of cameras can be utilized to construct any of a variety of camera arrays of different dimensions.
[00115] Although specific viewpoints and subsets of cameras for synthesizing stereoscopic 3D image pairs are illustrated in FIGS. 9A - 9D, stereoscopic image pairs can be generated using subsets of cameras in any of a variety of camera modules in accordance with embodiments of the invention.
Capturing Images Using A Subset of Cameras
[00116] Array cameras with camera modules patterned with π filter groups can utilize less than all of the available cameras in operation in accordance with many embodiments of the invention. In several embodiments, using fewer cameras can minimize the computational complexity of generating an image using an array camera and can reduce the power consumption of the array camera. Reducing the number of cameras used to capture image data can be useful for applications such as video, where frames of video can be synthesized using less than all of the image data that can be captured by a camera module. In a number of embodiments, a single π filter group can be utilized to capture an image. In many embodiments, image data captured by a single π filter group is utilized to capture a preview image prior to capturing image data with a larger number of cameras. In several embodiments, the cameras in a single π filter group capture video image data. Depending upon the requirements of a specific application, image data can be captured using additional cameras to increase resolution and/or provide additional color information and reduce occlusions.

[00117] A π filter group within a camera module that is utilized to capture image data that can be utilized to synthesize an image is illustrated in FIG. 10. In the illustrated embodiment, the reference camera is boxed and utilized cameras are encompassed in a dotted line. The camera module 1000 includes a π filter group of cameras generating image data G1 - G2, G5 - G6, B1 - B2 and R2 - R3 with reference camera G3. FIG. 10 illustrates how the cameras in a π filter group can be utilized to capture images. Image data can be acquired using additional cameras for increased resolution and to provide additional color information in occlusion zones. Accordingly, any number and arrangement of cameras can be utilized to capture image data using a camera module in accordance with many different embodiments of the invention.
Building Color Filter Patterns Including π Filter Groups
[00118] Color filter patterns for any array of cameras having dimensions greater than 3 x 3 can be constructed in accordance with embodiments of the invention. In many embodiments, processes for constructing color filter patterns typically involve assigning color filters to the cameras in a camera module to maximize the number of overlapping π filter groups. In the event that there are cameras that cannot be included in a π filter group, then color filters can be assigned to the cameras based upon minimizing occlusions around the camera that is to be used as the reference camera for the purposes of synthesizing high-resolution images.
[00119] A process for assigning color filters to cameras in a camera module in accordance with an embodiment of the invention is illustrated in FIG. 11. The process 1100 includes selecting (1102) a corner of the array and assigning (1104) a π filter group to the selected corner. The π filter group occupies a 3 x 3 grid. Color filters can be assigned (1106) to the remaining cameras in such a way as to maximize the number of overlapping π filter groups within the array. In the event that there are cameras to which color filters are not assigned, the cameras are assigned (1108) color filters that reduce the likelihood of occlusion zones in images synthesized from the viewpoint of a camera selected as the reference camera for the array. At this point, all of the cameras in the array are assigned color filters. As noted above, the presence of multiple π filter groups provides benefits including (but not limited to) robustness to failures in specific cameras within the array and the ability to synthesize images with fewer than all of the cameras in the camera module utilizing image data captured by at least one π filter group.

[00120] The process of generating a simple filter pattern for a 5 x 5 array using π filter groups is illustrated in FIGS. 12A - 12D. The process starts with the selection of the top left corner of the array. A π filter group is assigned to the 3 x 3 group of cameras in the top left corner (cameras G1 - G5, B1 - B2, and R1 - R2). A second overlapping π filter group is created by adding three green cameras, a blue camera, and a red camera (G6 - G8, B3, and R3). A third overlapping π filter group is created by adding another three green cameras, a blue camera, and a red camera (G9 - G11, B4, and R4). Fourth and fifth π filter groups are created by adding a single green camera, blue camera, and red camera (G12, B5, R5 and G13, B6, R6). In the event that the central camera (G6) fails, a camera at the center of another π filter group can be utilized as the reference camera (e.g. G3).
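One plausible mechanization of this construction, consistent with the camera counts recited for FIG. 12 but not taken from the disclosure, is a periodic tiling in which every other row alternates green/blue and the remaining rows alternate red/green; every interior 3 x 3 window centered on a green camera is then itself a π filter group. All names in the sketch are hypothetical.

```python
# Hypothetical sketch: build an M x N color filter pattern in which every
# interior 3 x 3 window centered on a green camera is a pi filter group.
# Even rows alternate G/B; odd rows alternate R/G. For a 5 x 5 array this
# yields 13 green, 6 blue, and 6 red cameras, matching the counts in FIG. 12.
def pattern_array(m, n):
    def color(r, c):
        if r % 2 == 0:
            return "G" if c % 2 == 0 else "B"
        return "R" if c % 2 == 0 else "G"
    return [[color(r, c) for c in range(n)] for r in range(m)]

for row in pattern_array(5, 5):
    print(" ".join(row))
# G B G B G
# R G R G R
# G B G B G
# R G R G R
# G B G B G
```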
[00121] A similar process for generating a simple filter pattern for a 4 x 5 array using π filter groups is illustrated in FIGS. 13A - 13D. The process is very similar, with the exception that two cameras are not included in π filter groups. Due to the fact that there are no blue cameras below the camera G6 (which is the center of a π filter group), the cameras that do not form part of a π filter group are assigned as blue cameras (B5 and B6). As can readily be appreciated, similar processes can be applied to any array larger than a 3 x 3 array to generate a color filter pattern incorporating π filter groups in accordance with embodiments of the invention. Similarly, the process outlined above can be utilized to construct larger arrays, including the 7 x 7 array of cameras illustrated in FIG. 14. The same process can also be utilized to construct even larger arrays of any dimensions, including square arrays where the number of cameras in each of the dimensions of the array is odd. Accordingly, the processes discussed herein can be utilized to construct a camera module and/or an array camera including a camera array having any dimensions appropriate to the requirements of a specific application in accordance with embodiments of the invention.
[00122] While the above description contains many specific embodiments of the invention, these should not be construed as limitations on the scope of the invention, but rather as an example of one embodiment thereof. It is therefore to be understood that the present invention may be practiced otherwise than specifically described, without departing from the scope and spirit of the present invention. Thus, embodiments of the present invention should be considered in all respects as illustrative and not restrictive.

Claims

WHAT IS CLAIMED IS:
1. An array camera module, comprising:
an M x N imager array comprising a plurality of focal planes, each focal plane comprising an array of light sensitive pixels;
an M x N optic array of lens stacks, where each lens stack corresponds to a focal plane, and where each lens stack forms an image of a scene on its corresponding focal plane;
wherein each pairing of a lens stack and its corresponding focal plane thereby defines a camera;
wherein at least one row in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera; and
wherein at least one column in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera.
2. The array camera module of claim 1:
wherein M and N are each greater than two and at least one of M and N is even;
wherein color filters are implemented within the cameras in the array camera module such that the array camera module is patterned with at least one π filter group comprising:
a 3 x 3 array of cameras comprising:
a reference camera at the center of the 3 x 3 array of cameras;
two red color cameras located on opposite sides of the 3 x 3 array of cameras;
two blue color cameras located on opposite sides of the 3 x 3 array of cameras; and
four green color cameras surrounding the reference camera.
3. The array camera module of claim 2, wherein each of the four green color cameras surrounding the reference camera is disposed at a corner location of the 3 x 3 array of cameras.
4. The array camera module of claim 3, wherein:
M is four;
N is four;
the first row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a blue color camera, a green color camera, and a red color camera;
the second row of cameras of the 4 x 4 array camera module includes, in the following order, a red color camera, a green color camera, a red color camera, and a green color camera;
the third row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a blue color camera, a green color camera, and a blue color camera; and
the fourth row of cameras of the 4 x 4 array camera module includes, in the following order, a blue color camera, a green color camera, a red color camera, and a green color camera.
5. The array camera module of claim 3, wherein:
M is four;
N is four;
the first row of cameras of the 4 x 4 array camera module includes, in the following order, a red color camera, a green color camera, a blue color camera, and a green color camera;
the second row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a red color camera, a green color camera, and a red color camera;
the third row of cameras of the 4 x 4 array camera module includes, in the following order, a blue color camera, a green color camera, a blue color camera, and a green color camera; and
the fourth row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a red color camera, a green color camera, and a blue color camera.
6. The array camera module of claim 2, wherein the reference camera is a green color camera.
7. The array camera module of claim 2, wherein the reference camera is one of: a camera that incorporates a Bayer filter, a camera that is configured to capture infrared light, and a camera that is configured to capture ultraviolet light.
8. The array camera module of claim 2 wherein each of the two red color cameras is located at a corner location of the 3 x 3 array of cameras, and wherein each of the two blue color cameras is located at a corner location of the 3 x 3 array of cameras.
9. The array camera module of claim 2, wherein at least one color filter is implemented on the imager array.
10. The array camera module of claim 2, wherein at least one color filter is implemented on a lens stack.
11. A 3 x 3 array camera module comprising:
a 3 x 3 imager array comprising a 3 x 3 arrangement of focal planes, each focal plane comprising an array of light sensitive pixels;
a 3 x 3 optic array of lens stacks, where each lens stack corresponds to a focal plane, and where each lens stack forms an image of a scene on its corresponding focal plane;
wherein each pairing of a lens stack and its corresponding focal plane thereby defines a camera;
wherein the 3 x 3 array of cameras comprises:
a reference camera at the center of the 3 x 3 array of cameras;
two red color cameras located on opposite sides of the 3 x 3 array of cameras;
two blue color cameras located on opposite sides of the 3 x 3 array of cameras; and
four green color cameras, each located at a corner location of the 3 x 3 array of cameras;
wherein each of the color cameras is achieved using a color filter.
12. The 3 x 3 array camera module of claim 11, wherein at least one color filter is implemented on the imager array to achieve a color camera.
13. The 3 x 3 array camera module of claim 11, wherein at least one color filter is implemented within a lens stack to achieve a color camera.
14. The 3 x 3 array camera module of claim 11, wherein the reference camera is a green color camera.
15. The 3 x 3 array camera module of claim 11, wherein the reference camera is one of: a camera that incorporates a Bayer filter, a camera that is configured to capture infrared light, and a camera that is configured to capture ultraviolet light.
16. A method of patterning an array camera module with at least one π filter group comprising:
evaluating whether an imager array of M x N focal planes, where each focal plane comprises an array of light sensitive pixels, includes any defective focal planes;
assembling an M x N array camera module using:
the imager array of M x N focal planes;
an M x N optic array of lens stacks, where each lens stack corresponds to a focal plane;
wherein the M x N array camera module is assembled so that:
each lens stack and its corresponding focal plane define a camera;
color filters are implemented within the array camera module such that the array camera module is patterned with at least one π filter group comprising:
a 3 x 3 array of cameras comprising:
a reference camera at the center of the 3 x 3 array of cameras;
two red color cameras located on opposite sides of the 3 x 3 array of cameras;
two blue color cameras located on opposite sides of the 3 x 3 array of cameras; and
four green color cameras surrounding the reference camera; and
wherein the array camera module is patterned with the at least one π filter group such that a camera that includes a defective focal plane is a green color camera.
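To make the defect-handling step of claim 16 concrete, here is a hedged sketch of the selection logic. The candidate layouts reuse the patterns recited in claims 4 and 5, but the helper name and the selection-by-candidate approach are our illustrative assumptions, not recited by the patent; the key idea is only that a defective focal plane must end up behind a green filter:

```python
LAYOUT_A = [  # the 4 x 4 pattern recited in claim 4
    ["G", "B", "G", "R"],
    ["R", "G", "R", "G"],
    ["G", "B", "G", "B"],
    ["B", "G", "R", "G"],
]
LAYOUT_B = [  # the 4 x 4 pattern recited in claim 5
    ["R", "G", "B", "G"],
    ["G", "R", "G", "R"],
    ["B", "G", "B", "G"],
    ["G", "R", "G", "B"],
]

def choose_layout(defective_planes, candidates):
    """Return the first candidate layout that assigns green to every
    defective focal plane, so a defect is confined to the most numerous
    (and therefore most redundant) color channel."""
    for layout in candidates:
        if all(layout[r][c] == "G" for r, c in defective_planes):
            return layout
    return None  # no candidate tolerates this defect pattern

# Example: a defect at row 1, column 1 lands on a green camera in LAYOUT_A.
print(choose_layout({(1, 1)}, [LAYOUT_A, LAYOUT_B]))
```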
17. The method of patterning an array camera module with at least one π filter group of claim 16, wherein at least one color filter is implemented on the imager array.
18. The method of patterning an array camera module with at least one π filter group of claim 16, wherein at least one color filter is implemented within a lens stack.
19. The method of patterning an array camera module with at least one π filter group of claim 16, wherein the reference camera is a green color camera.
20. The method of patterning an array camera module with at least one π filter group of claim 16, wherein the reference camera is one of: a camera that incorporates a Bayer filter, a camera that is configured to capture infrared light, and a camera that is configured to capture ultraviolet light.
21. An array camera module, comprising:
an imager array comprising M x N focal planes, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each active focal plane is contained within a region of the imager array that does not contain pixels from another focal plane;
an optic array of M x N lens stacks, where an image is formed on each focal plane by a separate lens stack in the optic array of lens stacks;
wherein the imager array and the optic array of lens stacks form an M x N array of cameras that are configured to independently capture an image of a scene;
wherein at least one row in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera; and
wherein at least one column in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera.
22. The array camera module of claim 21, wherein:
the red color camera is a camera that captures image data including electromagnetic waves having a wavelength within the range of 620 nm to 750 nm;
the green color camera is a camera that captures image data including electromagnetic waves having a wavelength within the range of 495 nm to 570 nm; and
the blue color camera is a camera that captures image data including electromagnetic waves having a wavelength within the range of 450 nm to 495 nm.
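The passbands recited in claim 22 map directly to a lookup table. A small sketch (the names are ours) for classifying a wavelength into the claimed color bands:

```python
# Nominal passbands from claim 22, in nanometres.
COLOR_BANDS_NM = {"R": (620, 750), "G": (495, 570), "B": (450, 495)}

def classify_wavelength(nm):
    """Return the claimed color band containing wavelength nm, or None."""
    return next((color for color, (lo, hi) in COLOR_BANDS_NM.items()
                 if lo <= nm <= hi), None)

assert classify_wavelength(530) == "G"
assert classify_wavelength(400) is None
```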
23. The array camera module of claim 22, wherein the optics of each camera within the array camera module are configured so that each camera's field of view of a scene is shifted with respect to the fields of view of the other cameras, such that each camera captures a unique sub-pixel shifted view of the scene.
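These sub-pixel shifts are what give the array its super-resolution potential. The toy 1-D sketch below is entirely illustrative (not from the patent): views sampled at distinct sub-pixel offsets jointly cover a finer grid than any single view can:

```python
import numpy as np

def view(scene, n_pixels, shift):
    """One camera's samples of a 1-D scene on a unit-pitch grid displaced
    by a sub-pixel 'shift' (in pixel units)."""
    x = np.arange(n_pixels) + shift
    return x, scene(x)

scene = lambda x: np.sin(2 * np.pi * x / 5.0)
shifts = [0.0, 0.25, 0.5, 0.75]          # one unique offset per camera
positions = [view(scene, 8, s)[0] for s in shifts]
dense_grid = np.sort(np.concatenate(positions))
# dense_grid has 4x the sample density of any single camera's grid --
# the complementary information a super-resolution stage can exploit.
```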
24. The array camera module of claim 23, wherein:
M and N are each greater than two and at least one of M and N is even;
color filters are implemented within the cameras in the array camera module such that the array camera module is patterned with at least one π filter group comprising:
a 3 x 3 array of cameras comprising:
a reference camera at the center of the 3 x 3 array of cameras;
two red color cameras located on opposite sides of the 3 x 3 array of cameras;
two blue color cameras located on opposite sides of the 3 x 3 array of cameras; and
four green color cameras surrounding the reference camera.
25. The array camera module of claim 24, wherein each of the four green color cameras surrounding the reference camera is disposed at a corner location of the 3 x 3 array of cameras.
26. The array camera module of claim 25, wherein:
M is four;
N is four;
the first row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a blue color camera, a green color camera, and a red color camera;
the second row of cameras of the 4 x 4 array camera module includes, in the following order, a red color camera, a green color camera, a red color camera, and a green color camera;
the third row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a blue color camera, a green color camera, and a blue color camera; and
the fourth row of cameras of the 4 x 4 array camera module includes, in the following order, a blue color camera, a green color camera, a red color camera, and a green color camera.
27. The array camera module of claim 25, wherein:
M is four;
N is four;
the first row of cameras of the 4 x 4 array camera module includes, in the following order, a red color camera, a green color camera, a blue color camera, and a green color camera;
the second row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a red color camera, a green color camera, and a red color camera;
the third row of cameras of the 4 x 4 array camera module includes, in the following order, a blue color camera, a green color camera, a blue color camera, and a green color camera; and
the fourth row of cameras of the 4 x 4 array camera module includes, in the following order, a green color camera, a red color camera, a green color camera, and a blue color camera.
28. The array camera module of claim 24, wherein the reference camera within the at least one π filter group is a green color camera.
29. The array camera module of claim 24, wherein the reference camera within the at least one π filter group is a camera that incorporates a Bayer filter.
30. The array camera module of claim 24, wherein the reference camera is one of: a camera that incorporates a Bayer filter, a camera that is configured to capture infrared light, and a camera that is configured to capture ultraviolet light.
31. The array camera module of claim 24, wherein each of the two red color cameras is located at a corner location of the 3 x 3 array of cameras, and wherein each of the two blue color cameras is located at a corner location of the 3 x 3 array of cameras.
32. The array camera module of claim 24, wherein at least one color filter is implemented on the imager array.
33. The array camera module of claim 24, wherein at least one color filter is implemented on a lens stack.
34. A 3 x 3 array camera module comprising:
a 3 x 3 imager array comprising a 3 x 3 arrangement of focal planes, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each active focal plane is contained within a region of the imager array that does not contain pixels from another focal plane;
a 3 x 3 optic array of lens stacks, where an image is formed on each focal plane by a separate lens stack in the optic array of lens stacks;
wherein the imager array and the optic array of lens stacks form a 3 x 3 array of cameras that are configured to independently capture an image of a scene;
wherein the 3 x 3 array of cameras comprises:
a reference camera at the center of the 3 x 3 array of cameras;
two red color cameras located on opposite sides of the 3 x 3 array of cameras;
two blue color cameras located on opposite sides of the 3 x 3 array of cameras; and
four green color cameras, each located at a corner location of the 3 x 3 array of cameras;
wherein each of the color cameras is achieved using a color filter.
35. The 3 x 3 array camera module of claim 34, wherein at least one color filter is implemented on the imager array to achieve a color camera.
36. The 3 x 3 array camera module of claim 34, wherein at least one color filter is implemented within a lens stack to achieve a color camera.
37. The 3 x 3 array camera module of claim 34, wherein the reference camera is a green color camera.
38. The 3 x 3 array camera module of claim 34, wherein the reference camera is one of: a camera that incorporates a Bayer filter, a camera that is configured to capture infrared light, and a camera that is configured to capture ultraviolet light.
39. An array camera module, comprising:
an imager array comprising M x N focal planes, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each active focal plane is contained within a region of the imager array that does not contain pixels from another focal plane;
an optic array of M x N lens stacks, where an image is formed on each focal plane by a separate lens stack in the optic array of lens stacks;
wherein the imager array and the optic array of lens stacks form an M x N array of cameras that are configured to independently capture an image of a scene; and
wherein at least one row or at least one column in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera.
40. The array camera module of claim 39, wherein:
M is three;
N is three;
the first row of cameras of the 3 x 3 array camera module includes, in the following order, a blue color camera, a green color camera, and a green color camera;
the second row of cameras of the 3 x 3 array camera module includes, in the following order, a red color camera, a green color camera, and a red color camera; and
the third row of cameras of the 3 x 3 array camera module includes, in the following order, a green color camera, a green color camera, and a blue color camera.
41. The array camera module of claim 39, wherein:
M is three;
N is three;
the first row of cameras of the 3 x 3 array camera module includes, in the following order, a red color camera, a green color camera, and a green color camera;
the second row of cameras of the 3 x 3 array camera module includes, in the following order, a blue color camera, a green color camera, and a blue color camera; and
the third row of cameras of the 3 x 3 array camera module includes, in the following order, a green color camera, a green color camera, and a red color camera.
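Encoding the two 3 x 3 layouts recited in claims 40 and 41 makes the relaxed claim-39 property visible: neither layout has a row containing all of red, green, and blue, but the first and third columns of each do. The helper name is ours:

```python
LAYOUT_CLAIM_40 = [["B", "G", "G"], ["R", "G", "R"], ["G", "G", "B"]]
LAYOUT_CLAIM_41 = [["R", "G", "G"], ["B", "G", "B"], ["G", "G", "R"]]

def rgb_lines(layout):
    """Indices of the rows and columns that contain R, G, and B."""
    rgb = {"R", "G", "B"}
    rows = [i for i, row in enumerate(layout) if rgb <= set(row)]
    cols = [j for j, col in enumerate(zip(*layout)) if rgb <= set(col)]
    return rows, cols

print(rgb_lines(LAYOUT_CLAIM_40))  # ([], [0, 2]) -- columns only
print(rgb_lines(LAYOUT_CLAIM_41))  # ([], [0, 2]) -- columns only
```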
42. An array camera comprising: an array camera module, comprising:
an imager array comprising M x N focal planes, where each focal plane comprises a plurality of rows of pixels that also form a plurality of columns of pixels and each active focal plane is contained within a region of the imager array that does not contain pixels from another focal plane;
an optic array of M x N lens stacks, where an image is formed on each focal plane by a separate lens stack in the optic array of lens stacks;
wherein the imager array and the optic array of lens stacks form an M x N array of cameras that are configured to independently capture an image of a scene;
wherein at least one row in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera; and
wherein at least one column in the M x N array of cameras comprises at least one red color camera, at least one green color camera, and at least one blue color camera; and
a processor that comprises an image processing pipeline, the image processing pipeline comprising:
a parallax detection module; and
a super-resolution module;
wherein the parallax detection module is configured to obtain a reference low resolution image of a scene and at least one alternate view image of the scene from the camera module;
wherein the parallax detection module is configured to compare the reference image and the at least one alternate view image to determine a depth map and an occlusion map for the reference image; and
wherein the super-resolution module is configured to synthesize a high resolution image using at least the reference image, the depth map, the occlusion map and the at least one alternate view image.
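The claim-42 pipeline can be sketched structurally as follows. All naming is ours, and the method bodies are placeholders; the claim recites only the data flow (parallax detection producing a depth map and an occlusion map for the reference image, then super-resolution synthesis from the reference image, the maps, and the alternate views), not these algorithms:

```python
from dataclasses import dataclass
import numpy as np

@dataclass
class ParallaxResult:
    depth_map: np.ndarray       # per-pixel depth for the reference image
    occlusion_map: np.ndarray   # pixels hidden in one or more alternate views

class ImagePipeline:
    def detect_parallax(self, reference, alternates):
        # Placeholder: real systems estimate disparity along epipolar lines
        # between the reference camera and each alternate-view camera.
        h, w = reference.shape[:2]
        return ParallaxResult(np.zeros((h, w)), np.zeros((h, w), dtype=bool))

    def super_resolve(self, reference, alternates, parallax, scale=2):
        # Placeholder: fuse depth-registered alternate views onto a finer
        # grid; here we merely upsample the reference to show the data flow.
        return np.kron(reference, np.ones((scale, scale)))
```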
PCT/US2013/039155 2012-05-01 2013-05-01 CAMERA MODULES PATTERNED WITH pi FILTER GROUPS WO2013166215A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
JP2015510443A JP2015521411A (en) 2012-05-01 2013-05-01 Camera module patterned using π filter group
EP13785220.8A EP2845167A4 (en) 2012-05-01 2013-05-01 CAMERA MODULES PATTERNED WITH pi FILTER GROUPS
CN201380029203.7A CN104335246B (en) 2012-05-01 2013-05-01 The camera model of pattern is formed with pi optical filters group

Applications Claiming Priority (6)

Application Number Priority Date Filing Date Title
US201261641165P 2012-05-01 2012-05-01
US61/641,165 2012-05-01
US201261691666P 2012-08-21 2012-08-21
US61/691,666 2012-08-21
US201361780906P 2013-03-13 2013-03-13
US61/780,906 2013-03-13

Publications (1)

Publication Number Publication Date
WO2013166215A1 true WO2013166215A1 (en) 2013-11-07

Family

ID=49514873

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/US2013/039155 WO2013166215A1 (en) 2012-05-01 2013-05-01 CAMERA MODULES PATTERNED WITH pi FILTER GROUPS

Country Status (4)

Country Link
EP (1) EP2845167A4 (en)
JP (1) JP2015521411A (en)
CN (1) CN104335246B (en)
WO (1) WO2013166215A1 (en)

Families Citing this family (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9762893B2 (en) 2015-12-07 2017-09-12 Google Inc. Systems and methods for multiscopic noise reduction and high-dynamic range
JPWO2017154606A1 (en) * 2016-03-10 2019-01-10 ソニー株式会社 Information processing apparatus and information processing method

Family Cites Families (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP1812968B1 (en) * 2004-08-25 2019-01-16 Callahan Cellular L.L.C. Apparatus for multiple camera devices and method of operating same
CN102037717B * 2008-05-20 2013-11-06 派力肯成像公司 Capturing and processing of images using monolithic camera array with heterogeneous imagers
US8866920B2 (en) * 2008-05-20 2014-10-21 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US7295697B1 (en) * 1999-12-06 2007-11-13 Canon Kabushiki Kaisha Depth information measurement apparatus and mixed reality presentation system
US20110043665A1 (en) * 2009-08-19 2011-02-24 Kabushiki Kaisha Toshiba Image processing device, solid-state imaging device, and camera module
US20110122308A1 (en) * 2009-11-20 2011-05-26 Pelican Imaging Corporation Capturing and processing of images using monolithic camera array with heterogeneous imagers
US20120012748A1 (en) * 2010-05-12 2012-01-19 Pelican Imaging Corporation Architectures for imager arrays and array cameras

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
See also references of EP2845167A4 *

Cited By (58)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US11792538B2 (en) 2008-05-20 2023-10-17 Adeia Imaging Llc Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US11412158B2 (en) 2008-05-20 2022-08-09 Fotonation Limited Capturing and processing of images including occlusions focused on an image sensor by a lens stack array
US10306120B2 (en) 2009-11-20 2019-05-28 Fotonation Limited Capturing and processing of images captured by camera arrays incorporating cameras with telephoto and conventional lenses to generate depth maps
US10366472B2 (en) 2010-12-14 2019-07-30 Fotonation Limited Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers
US11875475B2 (en) 2010-12-14 2024-01-16 Adeia Imaging Llc Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers
US11423513B2 (en) 2010-12-14 2022-08-23 Fotonation Limited Systems and methods for synthesizing high resolution images using images captured by an array of independently controllable imagers
US10984276B2 (en) 2011-09-28 2021-04-20 Fotonation Limited Systems and methods for encoding image files containing depth maps stored as metadata
US10430682B2 (en) 2011-09-28 2019-10-01 Fotonation Limited Systems and methods for decoding image files containing depth maps stored as metadata
US20180197035A1 (en) 2011-09-28 2018-07-12 Fotonation Cayman Limited Systems and Methods for Encoding Image Files Containing Depth Maps Stored as Metadata
US10275676B2 (en) 2011-09-28 2019-04-30 Fotonation Limited Systems and methods for encoding image files containing depth maps stored as metadata
US11729365B2 (en) 2011-09-28 2023-08-15 Adela Imaging LLC Systems and methods for encoding image files containing depth maps stored as metadata
US10311649B2 (en) 2012-02-21 2019-06-04 Fotonation Limited Systems and method for performing depth based image editing
US10334241B2 (en) 2012-06-28 2019-06-25 Fotonation Limited Systems and methods for detecting defective camera arrays and optic arrays
US11022725B2 (en) 2012-06-30 2021-06-01 Fotonation Limited Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors
US10261219B2 (en) 2012-06-30 2019-04-16 Fotonation Limited Systems and methods for manufacturing camera modules using active alignment of lens stack arrays and sensors
US10380752B2 (en) 2012-08-21 2019-08-13 Fotonation Limited Systems and methods for estimating depth and visibility from a reference viewpoint for pixels in a set of images captured from different viewpoints
US10462362B2 (en) 2012-08-23 2019-10-29 Fotonation Limited Feature based high resolution motion estimation from low resolution images captured using an array source
US10390005B2 (en) 2012-09-28 2019-08-20 Fotonation Limited Generating images from light fields utilizing virtual viewpoints
US10225543B2 (en) 2013-03-10 2019-03-05 Fotonation Limited System and methods for calibration of an array camera
US10958892B2 (en) 2013-03-10 2021-03-23 Fotonation Limited System and methods for calibration of an array camera
US11570423B2 (en) 2013-03-10 2023-01-31 Adeia Imaging Llc System and methods for calibration of an array camera
US11272161B2 (en) 2013-03-10 2022-03-08 Fotonation Limited System and methods for calibration of an array camera
US10127682B2 (en) 2013-03-13 2018-11-13 Fotonation Limited System and methods for calibration of an array camera
US10547772B2 (en) 2013-03-14 2020-01-28 Fotonation Limited Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US10091405B2 (en) 2013-03-14 2018-10-02 Fotonation Cayman Limited Systems and methods for reducing motion blur in images or video in ultra low light with array cameras
US10638099B2 (en) 2013-03-15 2020-04-28 Fotonation Limited Extended color processing on pelican array cameras
US10542208B2 (en) 2013-03-15 2020-01-21 Fotonation Limited Systems and methods for synthesizing high resolution images using image deconvolution based on motion and depth information
US10455218B2 (en) 2013-03-15 2019-10-22 Fotonation Limited Systems and methods for estimating depth using stereo array cameras
US10182216B2 (en) 2013-03-15 2019-01-15 Fotonation Limited Extended color processing on pelican array cameras
US10674138B2 (en) 2013-03-15 2020-06-02 Fotonation Limited Autofocus system for a conventional camera that uses depth information from an array camera
US10540806B2 (en) 2013-09-27 2020-01-21 Fotonation Limited Systems and methods for depth-assisted perspective distortion correction
US11486698B2 (en) 2013-11-18 2022-11-01 Fotonation Limited Systems and methods for estimating depth from projected texture using camera arrays
US10767981B2 (en) 2013-11-18 2020-09-08 Fotonation Limited Systems and methods for estimating depth from projected texture using camera arrays
US10119808B2 (en) 2013-11-18 2018-11-06 Fotonation Limited Systems and methods for estimating depth from projected texture using camera arrays
US10708492B2 (en) 2013-11-26 2020-07-07 Fotonation Limited Array camera configurations incorporating constituent array cameras and constituent cameras
CN104735360A (en) * 2013-12-18 2015-06-24 华为技术有限公司 Method and device for optical field image processing
US9807372B2 (en) 2014-02-12 2017-10-31 Htc Corporation Focused image generation single depth information from multiple images from multiple sensors
US10089740B2 (en) 2014-03-07 2018-10-02 Fotonation Limited System and methods for depth regularization and semiautomatic interactive matting using RGB-D images
US10574905B2 (en) 2014-03-07 2020-02-25 Fotonation Limited System and methods for depth regularization and semiautomatic interactive matting using RGB-D images
CN106463002A (en) * 2014-06-03 2017-02-22 株式会社日立制作所 Image processing device and three-dimensional display method
CN106464851A (en) * 2014-06-30 2017-02-22 微软技术许可有限责任公司 Depth estimation using multi-view stereo and a calibrated projector
CN106464851B (en) * 2014-06-30 2018-10-12 微软技术许可有限责任公司 Use the estimation of Depth of multi-viewpoint three-dimensional figure and the calibrated projector
CN106471804A (en) * 2014-07-04 2017-03-01 三星电子株式会社 Method and device for picture catching and depth extraction simultaneously
US9872012B2 (en) 2014-07-04 2018-01-16 Samsung Electronics Co., Ltd. Method and apparatus for image capturing and simultaneous depth extraction
US11546576B2 (en) 2014-09-29 2023-01-03 Adeia Imaging Llc Systems and methods for dynamic calibration of array cameras
US10250871B2 (en) 2014-09-29 2019-04-02 Fotonation Limited Systems and methods for dynamic calibration of array cameras
US11270110B2 (en) 2019-09-17 2022-03-08 Boston Polarimetrics, Inc. Systems and methods for surface modeling using polarization cues
US11699273B2 (en) 2019-09-17 2023-07-11 Intrinsic Innovation Llc Systems and methods for surface modeling using polarization cues
US11525906B2 (en) 2019-10-07 2022-12-13 Intrinsic Innovation Llc Systems and methods for augmentation of sensor systems and imaging systems with polarization
US11302012B2 (en) 2019-11-30 2022-04-12 Boston Polarimetrics, Inc. Systems and methods for transparent object segmentation using polarization cues
US11842495B2 (en) 2019-11-30 2023-12-12 Intrinsic Innovation Llc Systems and methods for transparent object segmentation using polarization cues
US11580667B2 (en) 2020-01-29 2023-02-14 Intrinsic Innovation Llc Systems and methods for characterizing object pose detection and measurement systems
US11797863B2 (en) 2020-01-30 2023-10-24 Intrinsic Innovation Llc Systems and methods for synthesizing data for training statistical models on different imaging modalities including polarized images
US11683594B2 (en) 2021-04-15 2023-06-20 Intrinsic Innovation Llc Systems and methods for camera exposure control
US11290658B1 (en) 2021-04-15 2022-03-29 Boston Polarimetrics, Inc. Systems and methods for camera exposure control
US11954886B2 (en) 2021-04-15 2024-04-09 Intrinsic Innovation Llc Systems and methods for six-degree of freedom pose estimation of deformable objects
US11953700B2 (en) 2021-05-27 2024-04-09 Intrinsic Innovation Llc Multi-aperture polarization optical systems using beam splitters
US11689813B2 (en) 2021-07-01 2023-06-27 Intrinsic Innovation Llc Systems and methods for high dynamic range imaging using crossed polarizers

Also Published As

Publication number Publication date
EP2845167A1 (en) 2015-03-11
CN104335246B (en) 2018-09-04
CN104335246A (en) 2015-02-04
JP2015521411A (en) 2015-07-27
EP2845167A4 (en) 2016-01-13

Similar Documents

Publication Publication Date Title
WO2013166215A1 (en) CAMERA MODULES PATTERNED WITH pi FILTER GROUPS
US9706132B2 (en) Camera modules patterned with pi filter groups
CN204697179U Image sensor having a pixel array
CN105872525B (en) Image processing apparatus and image processing method
JP5589146B2 (en) Imaging device and imaging apparatus
CN105306786B Image processing method for an image sensor with phase-detection pixels
JP5472584B2 (en) Imaging device
EP2133726B1 (en) Multi-image capture system with improved depth image resolution
CN103688536B (en) Image processing apparatus, image processing method
JP6131546B2 (en) Image processing apparatus, imaging apparatus, and image processing program
KR20140094395A (en) photographing device for taking a picture by a plurality of microlenses and method thereof
JP5597777B2 (en) Color imaging device and imaging apparatus
US9118879B2 (en) Camera array system
JP5923754B2 (en) 3D imaging device
CN103597811A (en) Image pickup device imaging three-dimensional moving image and two-dimensional moving image, and image pickup apparatus mounting image pickup device
JP2008011532A (en) Method and apparatus for restoring image
JP5634614B2 (en) Imaging device and imaging apparatus
WO2012153504A1 (en) Imaging device and program for controlling imaging device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13785220

Country of ref document: EP

Kind code of ref document: A1

REEP Request for entry into the european phase

Ref document number: 2013785220

Country of ref document: EP

WWE Wipo information: entry into national phase

Ref document number: 2013785220

Country of ref document: EP

ENP Entry into the national phase

Ref document number: 2015510443

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE