US20150103200A1 - Heterogeneous mix of sensors and calibration thereof - Google Patents

Heterogeneous mix of sensors and calibration thereof

Info

Publication number
US20150103200A1
Authority
US
Grant status
Application
Prior art keywords
image
sensor
attribute
difference
calibration
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Abandoned
Application number
US14065810
Inventor
Gary Lee Vondran, Jr.
Charles Dunlop MacFarlane
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Avago Technologies General IP Singapore Pte Ltd
Original Assignee
Broadcom Corp

Classifications

    • H04N 5/2258: Cameras using two or more image sensors, e.g. a CMOS sensor for video and a CCD for still image
    • G06T 5/002: Denoising; Smoothing
    • G06T 5/50: Image enhancement or restoration by the use of more than one image, e.g. averaging, subtraction
    • G06T 7/13: Edge detection
    • G06T 7/85: Stereo camera calibration
    • H04N 13/133: Equalising the characteristics of different image components, e.g. their average brightness or colour balance
    • H04N 13/246: Calibration of cameras
    • H04N 13/25: Image signal generators using two or more image sensors with different characteristics other than in their location or field of view, e.g. having different resolutions or colour pickup characteristics; using image signals from one sensor to control the characteristics of another sensor
    • H04N 17/002: Diagnosis, testing or measuring for television cameras
    • H04N 5/217: Circuitry for suppressing or minimising disturbance, e.g. moiré or halo, in picture signal generation in cameras comprising an electronic image sensor
    • H04N 5/23229: Camera control comprising further processing of the captured image without influencing the image pickup process
    • H04N 5/23232: Camera control comprising further processing of the captured image by using more than one image in order to influence resolution, frame rate or aspect ratio
    • H04N 5/2355: Compensating for variation in brightness by increasing the dynamic range of the final image compared to the dynamic range of the electronic image sensor, e.g. by adding correctly exposed portions of short and long exposed images
    • H04N 9/09: Picture signal generators with more than one pick-up device
    • H04N 9/735: Colour balance circuits, e.g. white balance or colour temperature control, for picture signal generators
    • G06T 2207/10004: Still image; Photographic image
    • G06T 2207/10012: Stereo images
    • G06T 2207/10016: Video; Image sequence

Abstract

Aspects of calibration of sensors are described. In one embodiment, a characteristic associated with a first or a second sensor is identified. The characteristic may be identified before or after assembly of a device including the first and second sensors. In turn, an operating characteristic of at least one of the first sensor or the second sensor may be adjusted based on the identified characteristic. Further, a first image may be captured with the first sensor and a second image may be captured with the second sensor. An attribute of the second image, for example, may be adjusted to substantially address any difference between the attribute of the second image and a corresponding attribute of the first image using the characteristic for calibration. The adjustments described herein may assist various processing techniques which operate on pairs of images, for example, particularly when a heterogeneous mix of sensors is used.

Description

    CROSS-REFERENCE TO RELATED APPLICATIONS
  • This application claims the benefit of U.S. Provisional Application No. 61/891,648, filed Oct. 16, 2013, and claims the benefit of U.S. Provisional Application No. 61/891,631, filed Oct. 16, 2013, the entire contents of each of which are hereby incorporated herein by reference.
  • This application also makes reference to U.S. patent application Ser. No. ______ (Attorney Docket #50229-5030), titled “Depth Map Generation and Post-Capture Focusing,” filed on even date herewith, the entire contents of which are hereby incorporated herein by reference.
  • BACKGROUND
  • Certain cameras, such as light-field or plenoptic cameras, rely upon a lens array over an image sensor and/or an array of image sensors to capture directional projection of light. Among other drawbacks, these approaches use relatively large and specialized image sensors which are generally unsuitable for other applications (e.g., video capture, video conferencing, etc.), use only a fraction of the information captured, and rely upon high levels of processing to deliver even a viewfinder image, for example. Further, some of these light-field or plenoptic camera devices require a relatively large height for specialized lens and/or sensor arrays and, thus, do not present practical solutions for use in cellular telephones.
  • BRIEF DESCRIPTION OF THE DRAWINGS
  • For a more complete understanding of the embodiments and the advantages thereof, reference is now made to the following description, in conjunction with the accompanying figures briefly described as follows:
  • FIG. 1A illustrates a system including a heterogeneous mix of image sensors according to an example embodiment.
  • FIG. 1B illustrates a device for image capture and calibration using the system of FIG. 1A according to an example embodiment.
  • FIG. 2A illustrates a process flow for calibration of the heterogeneous mix of image sensors in the system of FIG. 1A according to an example embodiment.
  • FIG. 2B illustrates a process flow for depth map generation using the system of FIG. 1A, after calibration of the heterogeneous mix of image sensors, according to an example embodiment.
  • FIG. 3 illustrates an example edge map generated by the edge map generator of FIG. 1A according to an example embodiment.
  • FIG. 4 illustrates an example depth map generated by the depth map generator of FIG. 1A according to an example embodiment.
  • FIG. 5 illustrates an example process of smoothing performed by the smoother of FIG. 1A according to an example embodiment.
  • FIG. 6 illustrates a flow diagram for a process of calibration of a mix of image sensors in the system of FIG. 1A according to an example embodiment.
  • FIG. 7 illustrates an example schematic block diagram of a computing environment which may embody one or more of the system elements of FIG. 1A according to various embodiments.
  • The drawings are provided by way of example and should not be considered limiting of the scope of the embodiments described herein, as other equally effective embodiments are within the scope and spirit of this disclosure. The elements and features shown in the drawings are not necessarily drawn to scale, emphasis instead being placed upon clearly illustrating the principles of the embodiments. Additionally, certain dimensions or positions of elements and features may be exaggerated to help visually convey certain principles. In the drawings, similar reference numerals among the figures generally designate like or corresponding, but not necessarily the same, elements.
  • DETAILED DESCRIPTION
  • In the following paragraphs, the embodiments are described in further detail by way of example with reference to the attached drawings. In the description, well known components, methods, and/or processing techniques are omitted or briefly described so as not to obscure the embodiments.
  • Certain cameras, such as light-field or plenoptic cameras, rely upon a lens array over an image sensor and/or an array of image sensors to capture directional projection of light. Among other drawbacks, these approaches use relatively large and specialized image sensors which are generally unsuitable for other applications (e.g., video capture, video conferencing, etc.), use only a fraction of the information captured, and rely upon high levels of processing to deliver even a viewfinder image, for example. Further, some of these light-field or plenoptic camera devices require a relatively large height for specialized lens and/or sensor arrays and, thus, do not present practical solutions for use in cellular telephones.
  • In this context, the embodiments described herein include a heterogeneous mix of sensors which may be relied upon to achieve, among other processing results, image processing results that are similar, at least in some aspects, to those achieved by light-field or plenoptic imaging devices. In various embodiments, the mix of sensors may be used for focusing and re-focusing images after the images are captured. In other embodiments, the mix of sensors may be used for object extraction, scene understanding, gesture recognition, etc. In other aspects, a mix of image sensors may be used for high dynamic range (HDR) image processing. Further, according to the embodiments described herein, the mix of image sensors may be calibrated for focusing and re-focusing, object extraction, scene understanding, gesture recognition, HDR image processing, etc.
  • In one embodiment, the heterogeneous mix of sensors includes a main color image sensor having a pixel density ranging from 3 to 20 Megapixels, for example, with color pixels arranged in a Bayer pattern, and a secondary luminance image sensor having a relatively lower pixel density. It should be appreciated, however, that the system is generally agnostic to the resolution and format of the main and secondary sensors, which may be embodied as sensors of any suitable type, pixel resolution, process, structure, or arrangement (e.g., infra-red, charge-coupled device (CCD), 3CCD, Foveon X3, complementary metal-oxide-semiconductor (CMOS), red-green-blue-clear (RGBC), etc.).
  • Turning now to the drawings, a description of exemplary embodiments of a system and its components is provided, followed by a discussion of the operation of the same.
  • FIG. 1A illustrates a system 10 including a heterogeneous mix of image sensors according to an example embodiment. The system 10 includes a processing environment 100, a memory 110, and first and second sensors 150 and 152, respectively, which may be embodied as a heterogeneous mix of image sensors. The memory 110 includes memory areas for image data 112 and calibration characteristic data 114. The processing environment 100 may be embodied as one or more processors, processing circuits, and/or combinations thereof. Generally, the processing environment 100 includes embedded (i.e., application-specific) and/or general purpose processing circuitry and/or software structures that process data, such as image data captured by the first and second sensors 150 and 152, for example. Further structural aspects of the processing environment 100 are described below with reference to FIG. 7.
  • In the example illustrated in FIG. 1A, the processing environment 100 generally includes elements for focusing and re-focusing of images captured by the first and second sensors 150 and 152, as further described below. In this context, the processing environment 100 includes a scaler 120, a calibrator 122, a depth map generator 124, an edge map generator 126, a smoother 128, a focuser 130, and an image processor 132. Each of these elements of the processing environment 100, and the respective operation of each, is described in further detail below.
  • Here, it should be appreciated that the elements of the processing environment 100 may vary among embodiments, particularly depending upon the application for use of the heterogeneous mix of image sensors 150 and 152. In other words, depending upon whether the first and second sensors 150 and 152 are directed for use in focusing, re-focusing, object extraction, scene understanding, gesture recognition, HDR image processing, etc., the processing environment 100 may include additional or alternative processing elements or modules. Regardless of the application for use of the first and second sensors 150 and 152, the embodiments described herein are generally directed to calibrating operational aspects of the first and second sensors 150 and 152 and/or the image data captured by the first and second sensors 150 and 152. In this way, the first and second sensors 150 and 152 and the images captured by the sensors 150 and 152 can be used together.
  • The first and second sensors 150 and 152 may be embodied as any suitable types of sensors, depending upon the application for use of the system 10. For example, in image processing applications, the first and second sensors 150 and 152 may be embodied as image sensors having the same or different pixel densities, ranging from a fraction of 1 to 20 Megapixels, for example. The first image sensor 150 may be embodied as a color image sensor having a first pixel density, and the second image sensor 152 may be embodied as a luminance image sensor having a relatively lower pixel density. It should be appreciated, however, that the system 10 is generally agnostic to the resolution and format of the first and second sensors 150 and 152, which may be embodied as sensors of any suitable type, pixel resolution, process, structure, or arrangement (e.g., infra-red, charge-coupled device (CCD), 3CCD, Foveon X3, complementary metal-oxide-semiconductor (CMOS), red-green-blue-clear (RGBC), etc.).
  • The memory 110 may be embodied as any suitable memory that stores data provided by the first and second sensors 150 and 152, among other data, for example. In this context, the memory 110 may store image and image-related data for manipulation and processing by the processing environment 100. As noted above, the memory 110 includes memory areas for image data 112 and calibration characteristic data 114. Various aspects of processing and/or manipulation of the image data 112 by the processing environment 100, based, for example, upon the calibration characteristic data 114, are described in further detail below.
  • FIG. 1B illustrates a device 160 for image capture and calibration using the system 10 of FIG. 1A according to an example embodiment. The device 160 includes the processing environment 100, the memory 110, and the first and second sensors 150 and 152 of FIG. 1A, among other elements. The device 160 may be embodied as a cellular telephone, tablet computing device, laptop computer, desktop computer, television, set-top box, personal media player, appliance, etc., without limitation. In other embodiments, the device 160 may be embodied as a pair of glasses, a watch, wristband, or other device which may be worn or attached to clothing. If embodied as a pair of glasses, then the sensors 150 and 152 of the device 160 may be positioned at opposite corners of rims or end-pieces of the pair of glasses.
  • As illustrated in FIG. 1B, the first and second sensors 150 and 152 are separated by a first distance X in a first dimension and by a second distance Y in a second dimension. The distances X and Y may vary among embodiments, for example, based on aesthetic and/or performance factors, depending upon the application or field of use for the device 160. Further, the relative positions (e.g., right versus left, top versus bottom, etc.) of the first and second sensors 150 and 152 may vary among embodiments. In this context, it is also noted that a relative difference in rotational or angular displacement (i.e., R1-R2) may exist between the first and second sensors 150 and 152. Although not explicitly illustrated, it should be appreciated that the device 160 may include one or more additional elements for image capture, such as lenses, flash devices, focusing mechanisms, etc., although these elements may not be relied upon in certain embodiments and may be omitted.
  • As described herein, in one embodiment, the first and second sensors 150 and 152 may be embodied as sensors of varied operating and structural characteristics (i.e., a heterogeneous mix of sensors). The differences in operating characteristics may be identified during manufacturing and/or assembly of the device 160, for example, based on manufacturing and/or assembly calibration processes. Additionally or alternatively, the differences in operating characteristics may be identified during post-assembly calibration processes by the calibrator 122. These differences may be quantified as calibration data which is representative of the operating characteristics of the first and second sensors 150 and 152, and stored in the memory 110 as the calibration characteristic data 114.
  • Among other operational aspects, the device 160 is configured to capture images using the first and second sensors 150 and 152. Based on the processing techniques described herein, images captured by the first and second sensors 150 and 152 may be focused and re-focused after being captured. Additionally or alternatively, the images may be processed according to one or more HDR image processing techniques, for example, or for object extraction, scene understanding, gesture recognition, etc.
  • FIG. 2A illustrates a process flow for calibration of the heterogeneous mix of image sensors 150 and 152 in the system 10 of FIG. 1A according to an example embodiment. As illustrated in FIG. 2A, the first sensor 150 generates a first image 202, and the second sensor 152 generates a second image 204. The first and second images 202 and 204 may be captured at substantially the same time. Alternatively, the first and second images 202 and 204 may be captured, respectively, by the first and second sensors 150 and 152, at different times. Data associated with the first and second images 202 and 204 may be stored in the memory 110 (FIG. 1).
  • Here, it is noted that, before the first and second sensors 150 and 152 capture the first and second images 202 and 204, the calibrator 122 may adapt at least one operating parameter of the first sensor 150 or the second sensor 152 to accommodate for at least one of noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, without limitation. More particularly, the calibrator 122 may reference the calibration characteristic data 114 in the memory 110, to identify any adjustments to the operating parameters of the first and second sensors 150 and 152, and accommodate for or balance differences in noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, between or among images generated by the first and second sensors 150 and 152.
  • In this context, it should be appreciated that, to the extent that the characteristics of the first and second sensors 150 and 152 vary, such that the first and second images 202 and 204 deviate along a corresponding unit of measure or other qualitative or quantitative aspect, for example, the calibrator 122 may adjust one or more of the operating parameters of the first and second sensors 150 and 152 (e.g., operating voltages, timings, temperatures, exposure timings, etc.) to address the difference or differences. In other words, the calibrator 122 may seek to align or normalize aspects of the operating characteristics of the first and second sensors 150 and 152. In this way, downstream operations performed by other elements in the system 10 may be aligned, as necessary, for suitable performance and results in image processing.
  • As a further example, based on the respective characteristics of the first sensor 150 and the second sensor 152, the first sensor 150 may produce images including relatively more noise than the images produced by the second sensor 152. This difference in the generation of noise may be embodied in values of the calibration characteristic data 114, for example, in one or more variables, coefficients, or other data metrics. The calibrator 122 may refer to the calibration characteristic data 114 and, based on it, adjust operating parameters of the first and second sensors 150 and 152, in an effort to address the difference.
  • Similarly, the first sensor 150 may produce images including a first dark current characteristic, and the second sensor 152 may produce images including a second dark current characteristic. The difference between these dark current characteristics may be embodied in values of the calibration characteristic data 114. The calibrator 122 may seek to adjust operating parameters of the first and second sensors 150 and 152 to address this difference. Although certain examples are provided herein, it should be appreciated that the calibrator 122 may seek to normalize or address other differences in operating characteristics between the first and second sensors 150 and 152, so that a suitable comparison may be made between images produced by the first and second sensors 150 and 152.
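  • The role of the calibration characteristic data 114 in adapting operating parameters can be illustrated with a short sketch. The following Python fragment is a minimal, hypothetical example; the field names, units, and the exposure-scaling rule are assumptions made for illustration and are not taken from the disclosure.

```python
from dataclasses import dataclass

@dataclass
class SensorCharacteristics:
    """Hypothetical per-sensor calibration record (calibration characteristic data 114)."""
    noise_floor: float         # measured read-noise level, arbitrary units
    dark_current: float        # measured dark-current offset, counts/sec
    sensitivity: float         # relative photometric sensitivity (1.0 = reference)
    vignetting_falloff: float  # relative corner falloff, 0..1

def adapt_operating_parameters(primary: SensorCharacteristics,
                               secondary: SensorCharacteristics,
                               base_exposure_ms: float):
    """Return per-sensor exposure times that compensate for a sensitivity mismatch.

    This mirrors the idea of a calibrator adjusting operating parameters
    (e.g., exposure timings) so that images from a heterogeneous pair of
    sensors are comparable downstream. The scaling rule is illustrative only.
    """
    # Expose the less sensitive sensor longer so both reach a similar signal level.
    ratio = primary.sensitivity / secondary.sensitivity
    return base_exposure_ms, base_exposure_ms * ratio

# Example: the secondary (luminance) sensor is twice as sensitive as the primary.
main = SensorCharacteristics(noise_floor=2.0, dark_current=0.10,
                             sensitivity=1.0, vignetting_falloff=0.85)
aux = SensorCharacteristics(noise_floor=1.2, dark_current=0.05,
                            sensitivity=2.0, vignetting_falloff=0.90)
print(adapt_operating_parameters(main, aux, base_exposure_ms=33.0))  # (33.0, 16.5)
```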
  • The differences in operating characteristics between the first and second sensors 150 and 152 may be due to various factors. For example, the differences may be due to different pixel densities of the first and second sensors 150 and 152, different manufacturing processes used to form the first and second sensors 150 and 152, different pixel array patterns or filters (e.g., Bayer, EXR, X-Trans, etc.) of the first and second sensors 150 and 152, different sensitivities of the first and second sensors 150 and 152 to light, temperature, operating frequency, operating voltage, or other factors, without limitation.
  • As noted above, differences in operating characteristics between the first and second sensors 150 and 152 may be identified and characterized during manufacturing and/or assembly of the device 160, for example, based on manufacturing and/or assembly calibration processes. Additionally or alternatively, the differences in operating characteristics may be identified during post-assembly calibration processes. These differences may be quantified as calibration data representative of the operating characteristics of the first and second sensors 150 and 152, and stored in the memory 110 as the calibration characteristic data 114.
  • In addition to adapting one or more of the operating parameters of the first and second sensors 150 and 152, the calibrator 122 may adjust one or more attributes of one or more of the first or second images 202 or 204 to substantially address a difference between attributes of the first or second images 202 or 204. For example, based on a difference in sensitivity between the first sensor 150 and the second sensor 152, the calibrator 122 may adjust the exposure of one or more of the first image 202 and the second image 204, to address the difference in exposure. Similarly, based on a difference in noise, the calibrator 122 may filter one or more of the first image 202 and the second image 204, to address a difference in an amount of noise among the images.
  • In various embodiments, to the extent possible, the calibrator 122 may adjust one or more attributes of the first and/or second images 202 and/or 204 to accommodate for, address, or normalize differences between them due to noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, or any combination thereof, without limitation. Again, a measure of differences among attributes (e.g., noise response, defective pixels, dark current response, vignetting response, white balance response, exposure response, etc.) of the first and second images 202 and 204 may be quantified as the calibration characteristic data 114. This calibration characteristic data 114 may be referenced by the calibrator 122 when adjusting attributes of the first and/or second images 202 and/or 204.
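  • As a concrete illustration of attribute adjustment, the sketch below normalizes a gain (exposure) mismatch and a noise mismatch between two captured frames. It is a simplified assumption of how a calibrator might apply stored calibration characteristics; the gain model and the 3x3 box filter are illustrative stand-ins, not the disclosed method.

```python
import numpy as np

def normalize_attributes(img_a: np.ndarray, img_b: np.ndarray,
                         gain_b_vs_a: float, b_is_noisier: bool) -> np.ndarray:
    """Adjust single-plane image B so its exposure and noise roughly match image A.

    gain_b_vs_a is a calibrated ratio of sensor B's response to sensor A's
    (the kind of value stored as calibration characteristic data); the
    corrections applied here are deliberately simple illustrations.
    """
    out = img_b.astype(np.float32) / gain_b_vs_a   # undo the exposure/gain mismatch
    if b_is_noisier:
        # Light 3x3 box filter to pull B's noise level toward A's.
        padded = np.pad(out, 1, mode="edge")
        out = sum(padded[dy:dy + out.shape[0], dx:dx + out.shape[1]]
                  for dy in range(3) for dx in range(3)) / 9.0
    return np.clip(out, 0, 255).astype(np.uint8)
```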
  • In one embodiment, as further illustrated in FIG. 2A, the first and second images 202 and 204 may be provided to the scaler 120. Generally, the scaler 120 downscales and/or upscales images in pixel density. It is noted that, in certain embodiments, the scaler 120 may be omitted from the process flow of FIG. 2A, for one or more of the first and second images 202 and 204. When included, the scaler 120 is generally relied upon, for example, to reduce the pixel processing loads of other elements in the system 10, to align pixel densities among the first and second images 202 and 204 (e.g., if the first and second sensors 150 and 152 vary in pixel density) and/or to reduce or compact image features. The downscaling and/or upscaling operations of the scaler 120 may be embodied according to nearest-neighbor interpolation, bi-linear interpolation, bi-cubic interpolation, supersampling, and/or other suitable interpolation techniques, or combinations thereof, without limitation.
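  • A minimal sketch of the scaler's downscaling role is shown below, using 2x2 block averaging; this averaging variant is chosen only for brevity and is one of many possible choices alongside the nearest-neighbor, bi-linear, bi-cubic, and supersampling techniques named above.

```python
import numpy as np

def downscale_2x(image: np.ndarray) -> np.ndarray:
    """Halve an image's resolution by averaging non-overlapping 2x2 blocks.

    Stands in for a scaler: it helps align pixel densities between a
    higher-resolution and a lower-resolution sensor and compacts image
    features before edge and depth estimation.
    """
    h, w = image.shape[:2]
    h, w = h - h % 2, w - w % 2          # drop an odd trailing row/column
    img = image[:h, :w].astype(np.float32)
    return ((img[0::2, 0::2] + img[0::2, 1::2] +
             img[1::2, 0::2] + img[1::2, 1::2]) / 4.0)
```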
  • In some embodiments, after the scaler 120 downscales the first and second images 202 and 204 into the first and second downscaled images 212 and 214, respectively, the calibrator 122 may adjust one or more attributes of the first and/or second downscaled images 212 and/or 214 to accommodate for, address, or normalize differences between them due to noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, or any combination thereof, without limitation. In other words, it should be appreciated that the calibrator 122 may make adjustments to the first and/or second downscaled images 212 and/or 214 at various stages. For example, the adjustments may be made before and/or after downscaling, upscaling, or other image processing activities.
  • Generally, the calibrator 122 adapts operating parameters of the first and second sensors 150 and 152 and adjusts attributes of the first and second images 202 and 204 to substantially remove, normalize, or balance differences between images, for other downstream image processing activities of the system 10 and/or the device 160. For example, as described in the examples below, the images captured by the system 10 and/or the device 160 may be relied upon in focusing and re-focusing, object extraction, scene understanding, gesture recognition, HDR image processing, etc. To the extent that these image processing activities rely upon a stereo pair of images, and to the extent that the system 10 and/or the device 160 may benefit from a heterogeneous mix of image sensors (e.g., for cost reduction, processing reduction, parts availability, wider composite sensor range and sensitivity, etc.), the calibrator 122 is configured to adapt and/or adjust certain operating characteristics and attributes into substantial alignment for the benefit of the downstream image processing activities.
  • As one example of a downstream image processing activity that may benefit from the operations of the calibrator 122, aspects of depth map generation and of focusing and re-focusing are described below with reference to FIG. 2B. FIG. 2B illustrates a process flow for depth map generation using the system of FIG. 1A, after calibration of the heterogeneous mix of image sensors, according to an example embodiment.
  • It is noted that, in certain downstream processes, the first image 202 may be compared with the second image 204 according to one or more techniques for image processing. In this context, the first and second images 202 and 204 may be representative of and capture substantially the same field of view. In this case, similar or corresponding image information (e.g., pixel data) is typically shifted in pixel space between the first and second images 202 and 204, due to the relative difference in position (e.g., illustrated as X, Y, R1, and R2 in FIG. 1B) between the first and second sensors 150 and 152 on the device 160. The amount of this shift, per pixel, is representative of depth, because it changes with the relative depths of items within the field of view of the images 202 and 204. Additionally, it is noted that the image information among the first and second images 202 and 204 is typically shifted in other aspects, such as luminance, color, color coding, pixel density, noise, etc., and these differences should be accounted for by the calibrator 122 of the system 10 before or while processing the images 202 and 204.
  • According to various embodiments described herein, the first and second images 202 and 204 may have the same or different pixel densities, depending upon the respective types and characteristics of the first and second image sensors 150 and 152, for example. Further, the first and second images 202 and 204 may be of the same or different image formats. For example, the first image 202 may include several color components of a color image encoded or defined according to a certain color space (e.g., red, green, blue (RGB); cyan, magenta, yellow, key (CMYK); phase alternating line (PAL); YUV or Y′UV; YCbCr; YPbPr, etc.), and the second image 204 may include a single component of another color space.
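  • Because the first image may carry multiple color components while the second may carry only luminance, a common comparison space is useful before matching. The sketch below converts an RGB frame to a luminance plane using BT.601 weights; treating this as the comparison space is an assumption made for illustration, since the disclosure leaves the exact color handling open.

```python
import numpy as np

def rgb_to_luma(rgb: np.ndarray) -> np.ndarray:
    """Convert an RGB image (H, W, 3) to a single luminance plane (H, W).

    BT.601 luma weights; the resulting plane can be compared directly with
    the output of a luminance-only secondary sensor.
    """
    return (0.299 * rgb[..., 0] +
            0.587 * rgb[..., 1] +
            0.114 * rgb[..., 2]).astype(np.float32)
```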
  • Referring again to FIG. 2B, the first downscaled image 212 is provided to the edge map generator 126. The edge map generator 126, generally, generates an edge map by identifying edges in at least one image. In other words, the edge map generator 126 generates an edge map by identifying edges in one or more of the first or second downscaled images 212 and 214. In the embodiment illustrated in FIG. 2B, the edge map generator 126 generates the edge map 222 by identifying edges in the first downscaled image 212, although the edge map 222 may be generated by identifying edges in the second downscaled image 214. It should be appreciated that the performance of the edge map generator 126 may be improved by identifying edges in downscaled, rather than higher pixel density, images. For example, edges in higher density images may span several (e.g., 5, 10, 15, or more) pixels. In contrast, such edges may span relatively fewer pixels in downscaled images. Thus, in certain embodiments, the scaler 120 may be configured to downscale one or more of the first or second images 202 or 204 so as to provide a suitable pixel density for accurate edge detection by the edge map generator 126.
  • FIG. 3 illustrates an example edge map 222 generated by the edge map generator 126 of FIG. 1A according to an example embodiment. As illustrated in FIG. 3, and in the context of FIGS. 2 and 3, the edge map 222 is embodied by data representative of edges in the first image 202. In one embodiment, the edge map generator 126 generates the edge map 222 by identifying pixels or pixel areas in the first image 202 where pixel or pixel area brightness quickly changes or encounters a discontinuity (i.e., at “step changes”). Points at which pixel brightness changes quickly are organized into edge segments in the edge map 222 by the edge map generator 126. The changes may be due to changes in surface or material orientation, changes in surface or material properties, or variations in illumination, for example. Data associated with the edge map 222 may be stored by the edge map generator 126 in the memory 110 (FIG. 1).
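  • Edge map generation of this kind can be sketched as a gradient-magnitude threshold, as below. The central-difference gradients and the fixed threshold are assumptions; the disclosure only requires that points of rapid brightness change be organized into edge segments, and a Sobel or Canny detector could be substituted.

```python
import numpy as np

def edge_map(luma: np.ndarray, threshold: float = 40.0) -> np.ndarray:
    """Return a boolean edge map: True where brightness changes quickly.

    Gradients are taken with central differences; the threshold controls
    how strong a brightness step must be to count as an edge.
    """
    gy, gx = np.gradient(luma.astype(np.float32))
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold
```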
  • Referring again to FIG. 2B, the first and second downscaled images 212 and 214 are also provided to the depth map generator 124. The depth map generator 124, generally, generates a depth map including a mapping among relative depth values in a field of view based on a difference between pixels of a first image and pixels of a second image. In the context of FIG. 2B, the depth map generator 124 generates a depth map 224 including a mapping of relative depth values based on differences between pixels of the first downscaled image 212 and pixels of the second downscaled image 214. In this context, it is noted that, in certain embodiments, the depth map generator 124 (and/or the edge map generator 126) may operate using only the luminance component of images. Thus, in certain embodiments, the first sensor 150 may be embodied as a main color image sensor, and the second sensor 152 may be embodied as a secondary luminance only image sensor. In this case, the secondary luminance image sensor may not need to be at the full resolution of the main color sensor, because no demosaicing interpolation is required for the luminance image sensor (i.e., the luminance image sensor has a higher effective resolution). Thus, as suggested above, downscaling by the scaler 120 may be omitted for the second image 204, for example.
  • FIG. 4 illustrates an example depth map 224 generated by the depth map generator 124 of FIG. 1A according to an example embodiment. As illustrated in FIG. 4, the depth map 224 is embodied by data representative of relative depths in a field of view based on differences between pixels of the first downscaled image 212 and pixels of the second downscaled image 214. In FIG. 4, relatively darker areas are closer in depth and relatively lighter areas are further in depth, from the point of view of the first and second image sensors 150 and 152 and/or the device 160 (FIG. 1B). It should be appreciated that the relatively darker and lighter areas in FIG. 4 are representative of depth values. That is, relatively darker areas are representative of data values (e.g., per pixel data values) associated with less depth, and relatively lighter areas are representative of data values associated with more depth. In the context of FIG. 5, as further described below, the depth map 224 is referred to as a “raw” depth map, because it is representative of unsmoothed or unfiltered depth values. Data associated with the depth map 224 may be stored by the depth map generator 124 in the memory 110 (FIG. 1).
  • The depth map generator 124 may generate the depth map 224, for example, by calculating a sum of absolute differences (SAD) between pixel values in a neighborhood of pixels in the downscaled image 212 and a corresponding neighborhood of pixels in the downscaled image 214, for each pixel in the downscaled images 212 and 214. Each SAD value may be representative of a relative depth value in a field of view of the downscaled images 212 and 214 and, by extension, the first and second images 202 and 204. In alternative embodiments, rather than (or in addition to) calculating relative depth values of the depth map 224 by calculating a sum of absolute differences, other stereo algorithms, processes, or variations thereof may be relied upon by the depth map generator 124. For example, the depth map generator 124 may rely upon squared intensity differences, absolute intensity differences, mean absolute difference measures, or other measures of difference between pixel values, for example, without limitation. Additionally, the depth map generator 124 may rely upon any suitable size, shape, or variation of pixel neighborhoods for comparisons between pixels among images. Among embodiments, any suitable stereo correspondence algorithm may be relied upon by the depth map generator 124 to generate a depth map including a mapping among relative depth values between images.
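  • A minimal sum-of-absolute-differences (SAD) block matcher, searching horizontally as for a rectified stereo pair, is sketched below. The window size, search range, and horizontal-only search are simplifying assumptions; as noted above, any suitable stereo correspondence measure may be used instead.

```python
import numpy as np

def sad_disparity(left: np.ndarray, right: np.ndarray,
                  max_disp: int = 16, win: int = 5) -> np.ndarray:
    """Raw depth (disparity) map from two aligned luminance images via SAD.

    For each pixel, the disparity whose window SAD is smallest is kept.
    Larger disparity corresponds to smaller scene depth.
    """
    h, w = left.shape
    half = win // 2
    best_cost = np.full((h, w), np.inf, dtype=np.float32)
    disparity = np.zeros((h, w), dtype=np.float32)

    for d in range(max_disp + 1):
        shifted = np.zeros_like(right)
        shifted[:, d:] = right[:, :w - d] if d else right   # shift right image by d pixels
        diff = np.abs(left.astype(np.float32) - shifted.astype(np.float32))
        # Box-sum the absolute differences over the matching window.
        padded = np.pad(diff, half, mode="edge")
        cost = sum(padded[dy:dy + h, dx:dx + w]
                   for dy in range(win) for dx in range(win))
        better = cost < best_cost
        best_cost[better] = cost[better]
        disparity[better] = d
    return disparity
```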
  • Referring again to FIG. 2B, after the edge map generator 126 generates the edge map 222 and the depth map generator 124 generates the depth map 224, the smoother 128 smooths the relative depth values of the depth map 224 using the edge map 222. For example, according to one embodiment, the smoother 128 filters columns (i.e., in a first direction) of depth values of the depth map 224 between a first pair of edges in the edge map 222. The smoother 128 further filters rows (i.e., in a second direction) of depth values of the depth map 224 between a second pair of edges in the edge map 222. The filtering may alternate iteratively between columns and rows until a suitable level of smoothing has been achieved.
  • FIG. 5 illustrates an example process of smoothing performed by the smoother 128 of FIG. 1A according to an example embodiment. In FIG. 5, the depth map 500 is smoothed or filtered along columns (i.e., in a first direction Y) of depth values and between pairs of edges, and the depth map 502 is smoothed or filtered along rows (i.e., in a second direction X) of depth values and between pairs of edges. With reference to FIGS. 3 and 4, the depth map 500 is representative, for example, of depth values after a first pass of smoothing depths along columns, using the raw depth map 224 as a basis for depth values and the edge map 222 as a basis for edges. The depth map 502 is representative of smoothed depth values after a second pass of smoothing depths along rows, using the depth map 500 as a starting basis for depth values.
  • More particularly, to generate the depth map 500, the smoother 128 scans along columns of the map, from right to left, for example. The columns may be scanned according to a column-wise pixel-by-pixel shift of depth values in the map. Along each column, edges which intersect the column are identified, and the depth values within or between adjacent pairs of intersecting edges are filtered. For example, as illustrated in FIG. 5, along the column 510 of depth values, a pair of adjacent edges 512 and 514 is identified by the smoother 128. Further, the pair of adjacent edges 516 and 518 is identified by the smoother 128. Once a pair of adjacent edges is identified along a column, the smoother 128 filters the depth values between the pair of edges, to provide a smoothed range of depth values between the pair of edges. As illustrated in FIG. 5, smoothing or filtering depth values between pairs of edges is performed by the smoother 128 along the column 510, on a per edge-pair basis. In this way, raw depth values in the raw depth map 224 (FIG. 4) are smoothed or filtered with reference to the edges in the edge map 222 (FIG. 3). Thus, depth values are generally extended and smoothed with a certain level of consistency among edges.
  • As further illustrated in FIG. 5, starting with the depth map 500 as input, the smoother 128 scans along rows, from top to bottom of the map, for example, to produce the depth map 502. The rows may be scanned according to a row-wise pixel-by-pixel shift of depth values in the map. Along each row, edges which intersect the row are identified, and the depth values within or between adjacent pairs of intersecting edges are filtered. For example, along the row 520 of depth values, a pair of adjacent edges 522 and 524 is identified by the smoother 128. Further, the pair of adjacent edges 526 and 528 is identified by the smoother 128. Once a pair of adjacent edges is identified along a row, the smoother 128 filters the depth values between the pair of edges, to provide a smoothed range of depth values between the pair of edges. As illustrated in FIG. 5, smoothing or filtering depth values between pairs of edges is performed by the smoother 128 along the row 520, on a per edge-pair basis. In this way, depth values are generally extended and smoothed with a certain level of consistency among edges. It should be appreciated here that several pairs of intersecting edges may be identified along each column 510 and row 520 in a depth map, and depth values may be smoothed between each of the pairs of edges.
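  • The column-then-row smoothing described above can be sketched as follows: along each column (and then each row), runs of depth values lying between adjacent edge crossings are replaced by a filtered value. Using the mean as the filter and a single pass in each direction are assumptions made for brevity; the passes may be iterated until a suitable level of smoothing is reached.

```python
import numpy as np

def smooth_between_edges(depth: np.ndarray, edges: np.ndarray,
                         iterations: int = 1) -> np.ndarray:
    """Smooth a raw depth map, constrained by a boolean edge map of the same size.

    Depth values between adjacent pairs of intersecting edges along a column
    are averaged; the result is then smoothed the same way along rows.
    """
    def filter_lines(d: np.ndarray, e: np.ndarray) -> np.ndarray:
        out = d.copy()
        for i in range(d.shape[1]):                 # one column at a time
            col, ecol = out[:, i], e[:, i]
            cuts = np.flatnonzero(ecol)             # rows where an edge crosses
            bounds = np.concatenate(([0], cuts, [len(col)]))
            for a, b in zip(bounds[:-1], bounds[1:]):
                if b - a > 1:
                    col[a:b] = col[a:b].mean()      # smooth the run between edges
        return out

    smoothed = depth.astype(np.float32)
    for _ in range(iterations):
        smoothed = filter_lines(smoothed, edges)              # along columns
        smoothed = filter_lines(smoothed.T, edges.T).T        # along rows
    return smoothed
```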
  • Referring back to FIG. 2B, after the smoother 128 smooths the depth values in the depth map 224, to provide a smoothed depth map 226, the smoother 128 provides the smoothed depth map 226 to the scaler 120. The scaler 120 upscales the smoothed depth map 226, and provides an upscaled depth map 228 to the focuser 130. Generally, the upscaled depth map 228 includes a density of depth values which corresponds to the pixel density of the first and/or second images 202 and 204. Using the upscaled depth map 228, the focuser 130 may focus and/or re-focus one or more pixels in the first image 202, for example, with reference to corresponding values of depth in the depth map 228.
  • As illustrated in FIG. 2B, the focuser 130 receives the upscaled depth map 228, the first image 202, and a point for focus 140. Generally, the focuser 130 selectively focuses the first image 202 according to the point for focus 140, by blending portions of a blurred replica of the first image 202 with the first image 202. The blending is performed by the focuser 130 with reference to the relative depth values of the upscaled depth map 228 as a measure for blending. The focuser 130 provides an output image based on a blend of the first image 202 and the blurred replica of the first image 202.
  • The point for focus 140 may be received by the device 160 (FIG. 1B) using any suitable input means, such as by capacitive touch screen, mouse, keyboard, electronic pen, etc. That is, a user of the device 160 may, after capture of the first and second images 202 and 204 by the device 160, select a point on the first image 202 (or the second image 204) to be selectively focused using a capacitive touch screen, mouse, keyboard, electronic pen, etc. Here, it is noted that the first image 202 may be captured by the first sensor 150 according to a relatively large depth of field. In other words, the first image 202 may be substantially focused throughout its field of view, for example, based on a sufficiently small optical aperture, etc. Thus, after capture of the first image 202, the focuser 130 may selectively focus areas of the first image 202 based on depth, by simulating a focal point and associated in-focus depth of field of the first image 202 along with other depths of field which are out of focus (i.e., blurred).
  • According to one embodiment, for a certain point for focus 140 selected by a user, the focuser 130 identifies a corresponding depth value (i.e., a selected depth value for focus) in the upscaled depth map 228, and evaluates a relative difference in depth between the selected depth value and each other depth value in the upscaled depth map 228. Thus, the focuser 130 evaluates the depth values in the depth map 228 according to relative differences from the point for focus 140. In turn, the focuser 130 blends the first image 202 and the blurred replica of the first image 202 based on relative differences in depth, as compared to the point for focus 140.
  • In one embodiment, the blurred replica of the first image 202 may be generated by the image processor 132 using a Gaussian blur or similar filter, and the focuser 130 blends the first image 202 and the blurred replica according to an alpha blend. For example, at the point for focus 140, the focuser 130 may form a composite of the first image 202 and the blurred replica, where the first image 202 comprises all or substantially all information in the composite and the blurred replica comprises no or nearly no information in the composite. On the other hand, for a point in the first image 202 having a relatively significant difference in depth as compared to the point for focus 140 in the first image 202, the focuser 130 may form another composite of the first image 202 and the blurred replica, where the first image 202 comprises no or nearly no information in the composite and the blurred replica comprises all or substantially all information in the composite.
  • The focuser 130 may evaluate several points among the first image 202 for difference in depth as compared to the point for focus 140, and generate or form a composite image for each point based on relative differences in depth, as compared to the point for focus 140 as described above. The composites for the various points may then be formed or joined together by the focuser 130 into an output image. In one embodiment, the focuser 130 may evaluate individual pixels in the first image 202 for difference in depth as compared to the point for focus 140, and generate or form a composite image for each pixel (or surrounding each pixel) based on relative differences in depth embodied in the depth values of the depth map 228, as compared to the point for focus 140.
  • According to the operation of the focuser 130, the output image of the focuser 130 includes a region of focus identified by the point for focus 140, and a blend of regions of progressively less focus (i.e., more blur) based on increasing difference in depth as compared to the point for focus 140. In this manner, the focuser 130 simulates a focal point and associated in-focus depth of field in the output image 260A, along with other depths of field which are out of focus (i.e., blurred). It should be appreciated that, because the depth map 228 includes several graduated (or nearly continuous) values of depth, the output image 260A also includes several graduated ranges of blur or blurriness. In this way, the focuser 130 simulates the effect of capturing the image 202 using a relatively larger optical aperture, and the point of focus when capturing the image 202 may be altered after the image 202 is captured. Particularly, several points for focus 140 may be received by the focuser 130 over time, and the focuser 130 may generate respective output images 260A for each point for focus 140.
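  • The depth-driven blend described above can be sketched as a per-pixel alpha composite of the sharp capture with a blurred replica, where the alpha grows with the difference between each pixel's depth and the depth at the selected point for focus 140. The box blur (standing in for a Gaussian-blurred replica) and the linear alpha ramp are illustrative assumptions.

```python
import numpy as np

def box_blur(img: np.ndarray, radius: int = 4) -> np.ndarray:
    """Cheap stand-in for a Gaussian-blurred replica of a single-plane image."""
    k = 2 * radius + 1
    padded = np.pad(img.astype(np.float32), radius, mode="edge")
    out = sum(padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
              for dy in range(k) for dx in range(k))
    return out / (k * k)

def refocus(image: np.ndarray, depth: np.ndarray,
            focus_xy: tuple, falloff: float = 0.25) -> np.ndarray:
    """Selectively focus a single-plane `image` at pixel `focus_xy` using `depth`.

    alpha = 0 keeps the sharp image (depths near the selected depth);
    alpha -> 1 uses the blurred replica as the depth difference grows.
    """
    blurred = box_blur(image)
    x, y = focus_xy
    focus_depth = depth[y, x]
    alpha = np.clip(np.abs(depth - focus_depth) * falloff, 0.0, 1.0)
    return (1.0 - alpha) * image + alpha * blurred
```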
  • In another embodiment, rather than relying upon a blurred replica of the first image 202, the focuser 130 selectively focuses regions of the first image 202 without using the blurred replica. In this context, the focuser 130 may determine a point spread per pixel for pixels of the first image 202, to generate an output image. For example, for pixels with little or no difference in depth relative to the point for focus 140, the focuser 130 may form the output image 260 using the pixel values in the first image 202 without (or with little) change to the pixel values. On the other hand, for pixels with larger differences in depth relative to the point for focus 140, the focuser 130 may determine a blend of the value of the pixel and its surrounding pixel values based on a measure of the difference. In this case, rather than relying upon a predetermined blurred replica, the focuser 130 may determine a blend of each pixel, individually, according to values of neighboring pixels.
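  • The per-pixel point-spread alternative can be sketched coarsely as blending each pixel with the mean of its local neighborhood, with the neighborhood weight growing with the depth difference. The neighborhood size and the weighting curve are assumptions, and the loop-based form is written for clarity rather than speed.

```python
import numpy as np

def refocus_per_pixel(image: np.ndarray, depth: np.ndarray,
                      focus_depth: float, radius: int = 3,
                      falloff: float = 0.25) -> np.ndarray:
    """Blend each pixel with its neighborhood mean by an amount set by its depth.

    Pixels near the selected focus depth are left unchanged; pixels far from
    it are pulled toward the local mean, simulating a larger point spread.
    """
    h, w = image.shape
    img = image.astype(np.float32)
    out = img.copy()
    for y in range(h):
        for x in range(w):
            weight = min(abs(depth[y, x] - focus_depth) * falloff, 1.0)
            if weight > 0.0:
                y0, y1 = max(0, y - radius), min(h, y + radius + 1)
                x0, x1 = max(0, x - radius), min(w, x + radius + 1)
                local_mean = img[y0:y1, x0:x1].mean()
                out[y, x] = (1.0 - weight) * img[y, x] + weight * local_mean
    return out
```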
  • While the processes for focusing and re-focusing images described above may benefit from the calibration processes performed by the calibrator 122, other image processing techniques may benefit as well. For example, depth maps may be relied upon for object extraction, scene understanding, or gesture recognition. In this context, to the extent that the calibration processes performed by the calibrator 122 improve the accuracy of depth maps generated by the system 10, the calibrator 122 may also improve object extraction, scene understanding, or gesture recognition image processes.
  • As another example of image processing techniques which may benefit from the calibration processes performed by the calibrator 122, it is noted that additional details may be imparted to regions of an image which would otherwise be saturated (i.e., featureless or beyond the measurable range) using HDR image processing techniques. Generally, HDR images are created by capturing both a short exposure image and a normal or long exposure image of a certain field of view. The short exposure image provides the additional details for regions that would otherwise be saturated in the normal or long exposure. The short and normal exposure images may be captured in various ways. For example, multiple images may be captured for the same field of view, successively, over a short period of time and at different levels of exposure. This approach is commonly used in video capture, for example, especially if a steady and relatively high-rate flow of frames is being captured and any object motion is acceptably low. For still images, however, object motion artifacts are generally unacceptable for a multiple, successive capture approach.
  • An alternative HDR image processing approach alternates the exposure lengths of certain pixels of an image sensor. This minimizes problems associated with object motion, but introduces artifacts from the interpolation needed to reconstruct a full-resolution image for each exposure. Still another approach adds white or clear pixels to the Bayer pattern of an image sensor, and is commonly known as RGBC or RGBW. The white or clear pixels may be embodied as low light pixels, but the approach may also suffer from interpolation artifacts due to the variation in the Bayer pattern required to accommodate the white or clear pixels.
  • In the context of the system 10 and/or the device 160, if the first sensor 150 is embodied as a main color image sensor and the second sensor 152 is embodied as a secondary luminance-only image sensor, for example, the luminance-only data provided from the second sensor 152 may provide additional information for HDR detail enhancement. In certain aspects of the embodiments described herein, the exposure settings and characteristics of the secondary luminance image sensor may be set and determined by the calibrator 122 separately from those of the main color image sensor. In this way, the HDR benefit is achieved without the main color image sensor being adversely affected by the addition of white or clear pixels, for example.
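  • As an illustrative sketch only, saturated highlights of the main color image might be rescaled using calibrated luminance data from the secondary sensor (assumed here to be already aligned and scaled to the same resolution); the Rec. 601 luma weights and the luma_gain parameter are assumptions of this sketch.

```python
import numpy as np

def enhance_highlights(color_img, luma_img, luma_gain=1.0, saturation=0.95):
    """color_img: H x W x 3 RGB in [0, 1] from the main sensor; luma_img: H x W
    luminance from the secondary sensor, aligned and scaled to the same grid."""
    y_main = color_img @ np.array([0.299, 0.587, 0.114])      # luma of the color image
    y_ref = luma_img * luma_gain                               # calibrated secondary luma
    # Rescale only the saturated pixels so that their luminance follows the
    # unsaturated secondary sensor, preserving the main sensor's color ratios.
    scale = np.where(y_main >= saturation, y_ref / np.maximum(y_main, 1e-6), 1.0)
    return np.clip(color_img * scale[..., None], 0.0, None)
```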
  • While various examples are provided above, it should be appreciated that the examples are not to be considered limiting, as other advantages in image processing techniques may be achieved based on the calibration processes performed by the calibrator 122.
  • Before turning to the process flow diagrams of FIG. 6, it is noted that the embodiments described herein may be practiced using an alternative order of the steps illustrated in FIG. 6. That is, the process flows illustrated in FIG. 6 are provided as examples only, and the embodiments may be practiced using process flows that differ from those illustrated. Additionally, it is noted that not all steps are required in every embodiment. In other words, one or more of the steps may be omitted or replaced, without departing from the spirit and scope of the embodiments. Further, steps may be performed in different orders, in parallel with one another, or omitted entirely, and/or certain additional steps may be performed without departing from the scope and spirit of the embodiments. Finally, although the process 600 of FIG. 6 is generally described in connection with the system 10 of FIG. 1A and/or the device 160 of FIG. 1B, the process 600 may be performed by other systems and/or devices.
  • FIG. 6 illustrates a flow diagram for a process 600 of calibration of a mix of image sensors in the system 10 of FIG. 1A according to an example embodiment. At reference numeral 602, the process 600 includes identifying a characteristic for calibration associated with at least one of a first sensor or a second sensor. In one embodiment, a pixel density of the second sensor may be a fraction of the pixel density of the first sensor. With reference to the system 10 of FIG. 1A and/or the device 160 of FIG. 1B, the identifying at reference numeral 602 may be performed during manufacturing and/or assembly of the device 160, for example, based on manufacturing and/or assembly calibration processes. Additionally or alternatively, the characteristics for calibration may be identified during post-assembly calibration processes by the calibrator 122. The characteristic for calibration may be related to operating characteristics of one or more of the first and second sensors 150 and 152.
  • Differences in operating characteristics between the first and second sensors 150 and 152 may be quantified as calibration data and stored in the memory 110 as the calibration characteristic data 114. The differences may be due to different pixel densities of the first and second sensors 150 and 152, different manufacturing processes used to form the first and second sensors 150 and 152, different pixel array patterns or filters (e.g., Bayer, EXR, X-Trans, etc.) of the first and second sensors 150 and 152, different sensitivities of the first and second sensors 150 and 152 to light, temperature, operating frequency, operating voltage, or other factors, without limitation.
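  • The kinds of quantified differences described above might be held in a small record such as the following; the field names and types are purely illustrative placeholders for the calibration characteristic data 114.

```python
from dataclasses import dataclass
from typing import Optional, Tuple

@dataclass
class CalibrationCharacteristics:
    """Illustrative container for quantified differences between two sensors."""
    exposure_gain: float = 1.0                      # relative sensitivity, sensor 2 vs. sensor 1
    noise_sigma: Tuple[float, float] = (0.0, 0.0)   # estimated noise level of each sensor
    dark_current_offset: Tuple[float, float] = (0.0, 0.0)
    defective_pixels: Tuple[tuple, tuple] = ((), ())  # (x, y) coordinates per sensor
    vignetting_map: Optional[object] = None         # per-pixel gain map, e.g. an H x W array
    alignment_offset: Tuple[int, int] = (0, 0)      # translation between the fields of view
```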
  • At reference numeral 604, the process 600 includes adapting an operating characteristic of at least one of the first sensor or the second sensor to accommodate for at least one of noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, using the characteristic for calibration identified at reference numeral 602. For example, the calibrator 122 may adapt operating characteristics or parameters of one or more of the first sensor 150 and/or the second sensor 152, as described herein.
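  • A simplified sketch of the kinds of per-sensor corrections referred to above (dark current subtraction, vignetting compensation, and defective pixel repair); the parameter names are assumptions, and the corrections shown are generic rather than the specific adaptations applied by the calibrator 122.

```python
import numpy as np

def adapt_raw_frame(raw, dark_offset=0.0, vignetting_gain=None, defective_pixels=()):
    """Apply generic per-sensor corrections to a single raw frame."""
    img = raw.astype(np.float64) - dark_offset             # subtract the dark current floor
    if vignetting_gain is not None:
        img = img * vignetting_gain                         # per-pixel flat-field (vignetting) gain
    for (x, y) in defective_pixels:
        r0, r1 = max(0, y - 1), min(img.shape[0], y + 2)
        c0, c1 = max(0, x - 1), min(img.shape[1], x + 2)
        img[y, x] = np.median(img[r0:r1, c0:c1])            # replace defect with local median
    return np.clip(img, 0.0, None)
```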
  • At reference numeral 606, the process 600 includes capturing a first image with the first sensor, and capturing a second image with the second sensor. In the context of the system 10 and/or the device 160 (FIG. 1A and FIG. 1B), the first image may be captured by the first sensor 150, and the second image may be captured by the second sensor 152. The first and second images may be captured at a substantially same time or at different times among embodiments. As noted above, the first sensor may be embodied as a multi-spectral component (e.g., color) sensor and the second sensor may be embodied as a limited-spectral (e.g., luminance only) component sensor. Further, the first and second sensors may be embodied as sensors having similar or different pixel densities or other characteristics.
  • At reference numeral 608, the process 600 includes adjusting an attribute of one or more of the first or second images to substantially address at least one difference between them. For example, reference numeral 608 may include adjusting an attribute of the second image to substantially address a difference between the attribute of the second image and a corresponding attribute of the first image using the characteristic for calibration identified at reference numeral 602. Reference numeral 608 may further include aligning the second image with the first image to substantially address a difference in alignment between the first sensor and the second sensor. Additionally or alternatively, reference numeral 608 may include normalizing values among the first image and the second image to substantially address a difference in sensitivity between the first sensor and the second sensor.
  • In this context, the calibrator 122 may adjust one or more attributes of one or more of the first or second images 202 or 204 (FIG. 1B) to substantially address a difference between attributes of the first or second images 202 or 204. In one embodiment, based on a difference in sensitivity between the first sensor 150 and the second sensor 152, the calibrator 122 may adjust the exposure of one or more of the first image 202 and the second image 204, to address the difference in exposure. Similarly, based on a difference in noise, the calibrator 122 may filter one or more of the first image 202 and the second image 204, to address a difference in an amount of noise among the images.
  • In various embodiments, to the extent possible, the calibrator 122 may adjust one or more attributes of the first and/or second images 202 and/or 204 to accommodate for, address, or normalize differences between them due to noise, defective pixels, dark current, vignetting, demosaicing, or white balancing, for example, or any combination thereof, without limitation. Again, a measure of differences among attributes (e.g., noise response, defective pixels, dark current response, vignetting response, white balance response, exposure response, etc.) of the first and second images 202 and 204 may be quantified as the calibration characteristic data 114. This calibration characteristic data 114 may be referenced by the calibrator 122 when adjusting attributes of the first and/or second images 202 and/or 204 at reference numeral 608.
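  • For example, a minimal sketch of normalizing the image pair using values drawn from the calibration characteristic data 114, assuming a single-channel (luminance) second image, a scalar exposure gain, and a plain box filter standing in for a calibrated denoiser:

```python
import numpy as np

def normalize_pair(img1, img2, exposure_gain=1.0, denoise_second=False):
    """Bring the second image onto the first image's exposure scale and, optionally,
    smooth it if the second sensor is known (from calibration) to be noisier."""
    img2 = img2.astype(np.float64) * exposure_gain        # equalize sensitivity / exposure
    if denoise_second:
        padded = np.pad(img2, 1, mode="edge")
        img2 = sum(padded[dy:dy + img2.shape[0], dx:dx + img2.shape[1]]
                   for dy in range(3) for dx in range(3)) / 9.0   # 3x3 box filter
    return img1, img2
```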
  • At reference numeral 610, the process 600 may include scaling one or more of the first image or the second image to generate scaled image copies. For example, at reference numeral 610, the process 600 may include upscaling the first image to an upscaled first image and/or upscaling the second image to an upscaled second image. Alternatively, at reference numeral 610, the process 600 may include downscaling the first image to a downscaled first image and/or downscaling the second image to a downscaled second image. In certain embodiments, the scaling at reference numeral 610 may be omitted, for example, depending upon the application for use of the first and/or second images and the pixel densities of the sensors used to capture the images.
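  • As one possible sketch of the scaling step, a first image of higher pixel density could be downscaled by block averaging to the pixel density of the second sensor; the integer scale factor and the averaging scheme are assumptions made here for simplicity.

```python
import numpy as np

def downscale_block_average(img, factor):
    """Downscale an H x W (or H x W x C) image by an integer factor using block
    averaging; any remainder rows and columns are cropped."""
    h, w = img.shape[:2]
    img = img[: h - h % factor, : w - w % factor]
    new_h, new_w = img.shape[0] // factor, img.shape[1] // factor
    blocks = img.reshape(new_h, factor, new_w, factor, *img.shape[2:])
    return blocks.mean(axis=(1, 3))
```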
  • At reference numeral 612, the process 600 includes adjusting an attribute of one or more of the scaled (i.e., upscaled or downscaled) first or second images to substantially address at least one difference between them. This process may be similar to that performed at reference numeral 608, although performed on scaled images. Here, it should be appreciated that the process 600 may make adjustments to downscaled or upscaled images at various stages. For example, adjustments may be made before and/or after downscaling, upscaling, or other image processing activities.
  • Here, it is noted that the processes performed at reference numerals 602, 604, 606, 608, 610, and 612 may be relied upon to adapt and/or adjust one or more images or pairs of images, so that other image processes, such as the processes at reference numerals 614, 616, and 618, may be performed with better results. In this context, the processes at reference numerals 614, 616, and 618 are described by way of example (and may be omitted or replaced), as other downstream image processing techniques may follow the image calibration according to the embodiments described herein.
  • At reference numeral 614, the process 600 may include generating one or more edge or depth maps. For example, the generation of edge or depth maps may be performed by the edge map generator 126 and/or the depth map generator 124 as described above with reference to FIG. 2B. In turn, at reference numeral 616, the process 600 may include receiving a point for focus and focusing or re-focusing one or more images. Again, the focusing or re-focusing of images may be performed by the focuser 130 as described above with reference to FIG. 2B.
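  • By way of illustration, an edge map of the kind referred to above could be as simple as a thresholded gradient magnitude; the threshold and the use of np.gradient are choices made for this sketch rather than the method of the edge map generator 126.

```python
import numpy as np

def simple_edge_map(gray, threshold=0.1):
    """Return a boolean edge map from a single-channel image via gradient magnitude."""
    gy, gx = np.gradient(gray.astype(np.float64))   # gradients along rows and columns
    magnitude = np.hypot(gx, gy)
    return magnitude > threshold * magnitude.max()  # keep only the strongest edges
```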
  • Alternatively or additionally, at reference numeral 618, the process 600 may include extracting one or more objects, recognizing one or more gestures, or performing other image processing techniques. These techniques may be performed with reference to the edge or depth maps generated at reference numeral 614, for example. In this context, due to the calibration processes performed at reference numerals 602, 604, 606, 608, 610, and 612, for example, the accuracy of edge or depth maps may be improved, and the image processing techniques at reference numeral 618 (and reference numeral 616) may also be improved.
  • As another alternative, at reference numeral 620, the process 600 may include generating an HDR image. Here, it is noted that the generation of an HDR image may occur before any image scaling occurs at reference numeral 610. The generation of an HDR image may be performed according to the embodiments described herein. For example, the generation of an HDR image may include generating the HDR image by combining luminance values of a second image with full color values of a first image.
  • According to various aspects of the process 600, the process 600 may be relied upon for calibration of images captured from a plurality of image sensors, which may include a heterogeneous mix of image sensors. The calibration may assist with various image processing techniques, such as focusing and re-focusing, object extraction, scene understanding, gesture recognition, HDR image processing, etc.
  • FIG. 7 illustrates an example schematic block diagram of a computing architecture 700 that may be employed as the processing environment 100 of the system 10 of FIG. 1A, according to various embodiments described herein. The computing architecture 700 may be embodied, in part, using one or more elements of a mixed general and/or specific purpose computer. The computing architecture 700 includes a processor 710, a Random Access Memory (RAM) 720, a Read Only Memory (ROM) 730, a memory device 740, and an Input/Output (I/O) interface 750. The elements of the computing architecture 700 are communicatively coupled via one or more local interfaces 702. The elements of the computing architecture 700 are not intended to be limiting in nature, as the architecture may omit elements or include additional or alternative elements.
  • In various embodiments, the processor 710 may include or be embodied as a general purpose arithmetic processor, a state machine, or an ASIC, for example. In various embodiments, the processing environment 100 of FIGS. 1A and 1B may be implemented, at least in part, using a computing architecture 700 including the processor 710. The processor 710 may include one or more circuits, one or more microprocessors, ASICs, dedicated hardware, or any combination thereof. In certain aspects and embodiments, the processor 710 is configured to execute one or more software modules which may be stored, for example, on the memory device 740. The software modules may configure the processor 710 to perform the tasks undertaken by the elements of the processing environment 100 of the system 10 of FIG. 1A, for example. In certain embodiments, the process 600 described in connection with FIG. 6 may be implemented or executed by the processor 710 according to instructions stored on the memory device 740.
  • The RAM 720 and the ROM 730 may include or be embodied as any random access and read-only memory devices that store computer-readable instructions to be executed by the processor 710. The memory device 740 stores computer-readable instructions thereon that, when executed by the processor 710, direct the processor 710 to execute various aspects of the embodiments described herein.
  • As a non-limiting group of examples, the memory device 740 includes one or more non-transitory memory devices, such as an optical disc, a magnetic disc, a semiconductor memory (i.e., a semiconductor, floating gate, or similar flash based memory), a magnetic tape memory, a removable memory, combinations thereof, or any other known non-transitory memory device or means for storing computer-readable instructions. The I/O interface 750 includes device input and output interfaces, such as keyboard, pointing device, display, communication, and/or other interfaces. The one or more local interfaces 702 electrically and communicatively couple the processor 710, the RAM 720, the ROM 730, the memory device 740, and the I/O interface 750, so that data and instructions may be communicated among them.
  • In certain aspects, the processor 710 is configured to retrieve computer-readable instructions and data stored on the memory device 740, the RAM 720, the ROM 730, and/or other storage means, and copy the computer-readable instructions to the RAM 720 or the ROM 730 for execution, for example. The processor 710 is further configured to execute the computer-readable instructions to implement various aspects and features of the embodiments described herein. For example, the processor 710 may be adapted or configured to execute the process 600 described above in connection with FIG. 6. In embodiments where the processor 710 includes a state machine or ASIC, the processor 710 may include internal memory and registers for maintenance of data being processed.
  • The flowchart or process diagram of FIG. 6 is representative of certain processes, functionality, and operations of embodiments described herein. Each block may represent one or a combination of steps or executions in a process. Alternatively or additionally, each block may represent a module, segment, or portion of code that includes program instructions to implement the specified logical function(s). The program instructions may be embodied in the form of source code that includes human-readable statements written in a programming language or machine code that includes numerical instructions recognizable by a suitable execution system such as the processor 710. The machine code may be converted from the source code, for example. Further, each block may represent, or be connected with, a circuit or a number of interconnected circuits to implement a certain logical function or process step.
  • Although embodiments have been described herein in detail, the descriptions are by way of example. The features of the embodiments described herein are representative and, in alternative embodiments, certain features and elements may be added or omitted. Additionally, modifications to aspects of the embodiments described herein may be made by those skilled in the art without departing from the spirit and scope of the present invention defined in the following claims, the scope of which is to be accorded the broadest interpretation so as to encompass modifications and equivalent structures.

Claims (20)

    Therefore, at least the following is claimed:
  1. An image processing method, comprising:
    identifying a characteristic for calibration associated with at least one of a first sensor or a second sensor, a pixel density of the second sensor being a fraction of the pixel density of the first sensor;
    capturing a first image with the first sensor;
    capturing a second image with a second sensor; and
    adjusting, with a processing circuit, an attribute of the second image to substantially address a difference between the attribute of the second image and a corresponding attribute of the first image using the characteristic for calibration.
  2. The method of claim 1, further comprising, before adjusting the attribute of the second image, adapting an operating characteristic of at least one of the first sensor or the second sensor to accommodate for at least one of noise, defective pixels, dark current, vignetting, demosaicing, or white balancing using the characteristic for calibration.
  3. The method of claim 1, further comprising:
    before adjusting the attribute of the second image, downscaling the first image to a downscaled first image, wherein:
    adjusting the attribute of the second image comprises adjusting the attribute of the second image to substantially address a difference between the attribute of the second image and a corresponding attribute of the downscaled first image using the characteristic for calibration.
  4. The method of claim 1, wherein adjusting the attribute of the second image comprises aligning the second image with the first image to substantially address a difference in alignment between the first sensor and the second sensor.
  5. The method of claim 1, wherein:
    the first sensor comprises a multi-spectral component sensor and the second sensor comprises a limited-spectral component sensor; and
    adjusting the attribute of the second image comprises normalizing values among the first image and the second image to substantially address a difference in sensitivity between the first sensor and the second sensor.
  6. The method of claim 1, wherein:
    the first sensor comprises a multi-spectral component sensor and the second sensor comprises a luminance sensor; and
    adjusting the attribute of the second image comprises normalizing luminance values among the first image and the second image to substantially address a difference in luminance sensitivity between the first sensor and the second sensor.
  7. The method of claim 6, further comprising generating a high dynamic range image by combining luminance values of the second image with the first image.
  8. The method of claim 1, further comprising:
    generating a depth map for focusing the first image, the depth map including a mapping among relative depth values in a field of view based on a difference between pixels of the first image and pixels of the second image; and
    generating an edge map by identifying edges in at least one of the first image or the second image.
  9. An image processing device, comprising:
    a first sensor and a second sensor;
    a memory coupled to the first sensor and the second sensor; and
    a processing circuit coupled to the memory and configured to:
    identify a characteristic for calibration associated with at least one of the first sensor or the second sensor;
    capture a first image with the first sensor and capture a second image with the second sensor; and
    adjust an attribute of the second image to substantially address a difference between the attribute of the second image and a corresponding attribute of the first image using the characteristic for calibration.
  10. The image processing device of claim 9, wherein the processing circuit is further configured to, before adjusting the attribute of the second image, adapt an operating characteristic of at least one of the first sensor or the second sensor to accommodate for at least one of noise, defective pixels, dark current, vignetting, demosaicing, or white balancing using the characteristic for calibration.
  11. The image processing device of claim 9, wherein the processing circuit is further configured to:
    before adjusting the attribute of the second image, downscale the first image to a downscaled first image; and
    adjust the attribute of the second image to substantially address a difference between the attribute of the second image and a corresponding attribute of the downscaled first image using the characteristic for calibration.
  12. The image processing device of claim 9, wherein the processing circuit is further configured to align the second image with the first image to substantially address a difference in alignment between the first sensor and the second sensor.
  13. The image processing device of claim 9, wherein:
    the first sensor comprises a multi-spectral component sensor and the second sensor comprises a limited-spectral component sensor; and
    the processing circuit is further configured to normalize values among the first image and the second image to substantially address a difference in sensitivity between the first sensor and the second sensor.
  14. The image processing device of claim 9, wherein:
    the first sensor comprises a multi-spectral component sensor and the second sensor comprises a luminance sensor; and
    the processing circuit is further configured to normalize luminance values among the first image and the second image to substantially address a difference in luminance sensitivity between the first sensor and the second sensor.
  15. The image processing device of claim 9, wherein the processing circuit is further configured to:
    generate a depth map for focusing the first image, the depth map including a mapping among relative depth values in a field of view based on a difference between pixels of the first image and pixels of the second image; and
    generate an edge map by identifying edges in at least one of the first image or the second image.
  16. An image processing method, comprising:
    identifying a characteristic for calibration associated with at least one of a first sensor or a second sensor;
    adapting an operating characteristic of at least one of the first sensor or the second sensor;
    capturing a first image with the first sensor and capturing a second image with the second sensor; and
    adjusting, with a processing circuit, an attribute of the second image to substantially address a difference between the attribute of the second image and a corresponding attribute of the first image using the characteristic for calibration.
  17. The method of claim 16, further comprising:
    before adjusting the attribute of the second image, downscaling the first image to a downscaled first image, wherein:
    adjusting the attribute of the second image comprises adjusting the attribute of the second image to substantially address a difference between the attribute of the second image and a corresponding attribute of the downscaled first image using the characteristic for calibration.
  18. The method of claim 16, wherein adjusting the attribute of the second image comprises aligning the second image with the first image to substantially address a difference in alignment between the first sensor and the second sensor.
  19. The method of claim 16, wherein:
    the first sensor comprises a multi-spectral component sensor and the second sensor comprises a limited-spectral component sensor; and
    adjusting the attribute of the second image comprises normalizing values among the first image and the second image to substantially address a difference in sensitivity between the first sensor and the second sensor.
  20. The method of claim 16, wherein:
    the first sensor comprises a multi-spectral component sensor and the second sensor comprises a luminance sensor; and
    adjusting the attribute of the second image comprises normalizing luminance values among the first image and the second image to substantially address a difference in luminance sensitivity between the first sensor and the second sensor.
US14065810 2013-10-16 2013-10-29 Heterogeneous mix of sensors and calibration thereof Abandoned US20150103200A1 (en)

Priority Applications (3)

Application Number Priority Date Filing Date Title
US201361891631 true 2013-10-16 2013-10-16
US201361891648 true 2013-10-16 2013-10-16
US14065810 US20150103200A1 (en) 2013-10-16 2013-10-29 Heterogeneous mix of sensors and calibration thereof

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
US14065810 US20150103200A1 (en) 2013-10-16 2013-10-29 Heterogeneous mix of sensors and calibration thereof

Publications (1)

Publication Number Publication Date
US20150103200A1 true true US20150103200A1 (en) 2015-04-16

Family

ID=52809342

Family Applications (2)

Application Number Title Priority Date Filing Date
US14065786 Active 2033-11-02 US9294662B2 (en) 2013-10-16 2013-10-29 Depth map generation and post-capture focusing
US14065810 Abandoned US20150103200A1 (en) 2013-10-16 2013-10-29 Heterogeneous mix of sensors and calibration thereof

Family Applications Before (1)

Application Number Title Priority Date Filing Date
US14065786 Active 2033-11-02 US9294662B2 (en) 2013-10-16 2013-10-29 Depth map generation and post-capture focusing

Country Status (1)

Country Link
US (2) US9294662B2 (en)

Cited By (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20170084044A1 (en) * 2015-09-22 2017-03-23 Samsung Electronics Co., Ltd Method for performing image process and electronic device thereof
WO2017113048A1 (en) * 2015-12-28 2017-07-06 华为技术有限公司 Image fusion method and apparatus, and terminal device
EP3328067A1 (en) * 2016-11-29 2018-05-30 Guangdong Oppo Mobile Telecommunications Corp., Ltd. Method and apparatus for shooting image and terminal device
US9998716B2 (en) 2015-08-24 2018-06-12 Samsung Electronics Co., Ltd. Image sensing device and image processing system using heterogeneous image sensor

Families Citing this family (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20140354801A1 (en) * 2013-05-31 2014-12-04 Ecole Polytechnique Federale De Lausanne (Epfl) Method, system and computer program for determining a reflectance distribution function of an object
US9524556B2 (en) * 2014-05-20 2016-12-20 Nokia Technologies Oy Method, apparatus and computer program product for depth estimation
US9749532B1 (en) * 2014-08-12 2017-08-29 Amazon Technologies, Inc. Pixel readout of a charge coupled device having a variable aperture
US9646365B1 (en) 2014-08-12 2017-05-09 Amazon Technologies, Inc. Variable temporal aperture
US9787899B1 (en) 2014-08-12 2017-10-10 Amazon Technologies, Inc. Multiple captures with a variable aperture
KR20160112810A (en) * 2015-03-20 2016-09-28 삼성전자주식회사 Method for processing image and an electronic device thereof
US20160381342A1 (en) * 2015-06-26 2016-12-29 Canon Kabushiki Kaisha Image processing apparatus, imaging apparatus, image processing method, and recording medium
US9858649B2 (en) * 2015-09-30 2018-01-02 Lytro, Inc. Depth-based image blurring

Citations (10)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080030611A1 (en) * 2006-08-01 2008-02-07 Jenkins Michael V Dual Sensor Video Camera
US20080165257A1 (en) * 2007-01-05 2008-07-10 Micron Technology, Inc. Configurable pixel array system and method
US20110169921A1 (en) * 2010-01-12 2011-07-14 Samsung Electronics Co., Ltd. Method for performing out-focus using depth information and camera using the same
US20110205389A1 (en) * 2010-02-22 2011-08-25 Buyue Zhang Methods and Systems for Automatic White Balance
US20120044328A1 (en) * 2010-08-17 2012-02-23 Apple Inc. Image capture using luminance and chrominance sensors
US20120075432A1 (en) * 2010-09-27 2012-03-29 Apple Inc. Image capture using three-dimensional reconstruction
US20120236124A1 (en) * 2011-03-18 2012-09-20 Ricoh Company, Ltd. Stereo camera apparatus and method of obtaining image
US8368803B2 (en) * 2009-09-10 2013-02-05 Seiko Epson Corporation Setting exposure attributes for capturing calibration images
US20130235226A1 (en) * 2012-03-12 2013-09-12 Keith Stoll Karn Digital camera having low power capture mode
US20140347350A1 (en) * 2013-05-23 2014-11-27 Htc Corporation Image Processing Method and Image Processing System for Generating 3D Images

Family Cites Families (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US6393142B1 (en) * 1998-04-22 2002-05-21 At&T Corp. Method and apparatus for adaptive stripe based patch matching for depth estimation
CA2553473A1 (en) * 2005-07-26 2007-01-26 Wa James Tam Generating a depth map from a tw0-dimensional source image for stereoscopic and multiview imaging
US8532425B2 (en) * 2011-01-28 2013-09-10 Sony Corporation Method and apparatus for generating a dense depth map using an adaptive joint bilateral filter
US9007441B2 (en) * 2011-08-04 2015-04-14 Semiconductor Components Industries, Llc Method of depth-based imaging using an automatic trilateral filter for 3D stereo imagers
US20130070049A1 (en) * 2011-09-15 2013-03-21 Broadcom Corporation System and method for converting two dimensional to three dimensional video


Also Published As

Publication number Publication date Type
US20150104074A1 (en) 2015-04-16 application
US9294662B2 (en) 2016-03-22 grant

Similar Documents

Publication Publication Date Title
US6252577B1 (en) Efficient methodology for scaling and transferring images
Bennett et al. Multispectral bilateral video fusion
US20120050563A1 (en) Flexible color space selection for auto-white balance processing
US20140267762A1 (en) Extended color processing on pelican array cameras
US20090091645A1 (en) Multi-exposure pattern for enhancing dynamic range of images
US20100253833A1 (en) Exposing pixel groups in producing digital images
US20070177004A1 (en) Image creating method and imaging device
US20120051730A1 (en) Auto-focus control using image statistics data with coarse and fine auto-focus scores
US20130100314A1 (en) Imaging systems and methods for generating motion-compensated high-dynamic-range images
US20140085502A1 (en) High resolution multispectral image capture
Heide et al. High-quality computational imaging through simple lenses
US20120002890A1 (en) Alignment of digital images and local motion detection for high dynamic range (hdr) imaging
US20080240602A1 (en) Edge mapping incorporating panchromatic pixels
US20090051984A1 (en) Image sensor having checkerboard pattern
US20140347521A1 (en) Simulating High Dynamic Range Imaging with Virtual Long-Exposure Images
US20080226278A1 (en) Auto_focus technique in an image capture device
Tai et al. Nonlinear camera response functions and image deblurring: Theoretical analysis and practice
KR20070004202A (en) Method for correcting lens distortion in digital camera
US20110150357A1 (en) Method for creating high dynamic range image
US20100123802A1 (en) Digital image signal processing method for performing color correction and digital image signal processing apparatus operating according to the digital image signal processing method
WO2009153836A1 (en) Method and apparatus for motion blur and ghosting prevention in imaging system
US20140347532A1 (en) Electronic sensor and method for controlling the same
US20090087087A1 (en) Pattern conversion for interpolation
US20130215319A1 (en) Image capture device, image capture device focus control method, and integrated circuit
US20100014775A1 (en) Image processing apparatus and image processing method

Legal Events

Date Code Title Description
AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:MACFARLANE, CHARLES DUNLOP;REEL/FRAME:031811/0071

Effective date: 20131028

AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: CORRECTIVE ASSIGNMENT TO CORRECT THE THE ASSIGNMENT DOCUMENT PREVIOUSLY RECORDED ON REEL 031811 FRAME 0071. ASSIGNOR(S) HEREBY CONFIRMS THE ASSIGNMENT;ASSIGNORS:VONDRAN, GARY LEE;MACFARLANE, CHARLES DUNLOP;SIGNING DATES FROM 20131025 TO 20131028;REEL/FRAME:032190/0438

AS Assignment

Owner name: BANK OF AMERICA, N.A., AS COLLATERAL AGENT, NORTH

Free format text: PATENT SECURITY AGREEMENT;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:037806/0001

Effective date: 20160201

AS Assignment

Owner name: AVAGO TECHNOLOGIES GENERAL IP (SINGAPORE) PTE. LTD

Free format text: ASSIGNMENT OF ASSIGNORS INTEREST;ASSIGNOR:BROADCOM CORPORATION;REEL/FRAME:041706/0001

Effective date: 20170120

AS Assignment

Owner name: BROADCOM CORPORATION, CALIFORNIA

Free format text: TERMINATION AND RELEASE OF SECURITY INTEREST IN PATENTS;ASSIGNOR:BANK OF AMERICA, N.A., AS COLLATERAL AGENT;REEL/FRAME:041712/0001

Effective date: 20170119