EP2664153B1 - Imaging system using a lens unit with longitudinal chromatic aberrations and method of operating - Google Patents
- Publication number
- EP2664153B1
- Authority
- EP
- European Patent Office
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Active
Classifications
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/61—Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N9/00—Details of colour television systems
- H04N9/64—Circuits for processing colour signals
- H04N9/646—Circuits for processing colour signals for image enhancement, e.g. vertical detail restoration, cross-colour elimination, contour correction, chrominance trapping filters
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- H04N25/61—Noise processing, e.g. detecting, correcting, reducing or removing noise the noise originating only from the lens unit, e.g. flare, shading, vignetting or "cos4"
- H04N25/611—Correction of chromatic aberration
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/10—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths
- H04N23/11—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from different wavelengths for generating image signals from visible and infrared light wavelengths
Definitions
- the present invention relates to the field of post-capture digital image processing techniques.
- An embodiment relates to an imaging system that includes a non-colour-corrected lens unit with longitudinal chromatic aberrations and a processing unit for post-capture digital image processing.
- a further embodiment refers to a method of operating an imaging system that includes a non-colour-corrected lens unit with longitudinal chromatic aberrations.
- US 2010/315541 discloses a solid-state imaging device for a camera module that can increase the depth of field without lowering the resolution signal level, using a timing generation circuit which generates red, green and blue signals from colour signals converted by a sensor unit.
- F. Guichard et al. "Extended Depth-of-Field using Sharpness Transport across Colour Channels", SPIE, Proceedings of Electronic Imaging, 2009 , refers to a method of obtaining images with extended depth-of-field where, for a given object distance, at least one colour plane of an RGB image contains the in-focus scene information.
- the object of the present invention is to provide an enhanced imaging system for obtaining enhanced images at low computational effort.
- the object is achieved with the subject-matter of the independent claims. Further embodiments are defined in the dependent claims, respectively. Details and advantages of the invention will become more apparent from the following description of embodiments in connection with the accompanying drawings. Features of the various embodiments may be combined unless they exclude each other.
- IR: infrared
- Figure 1 shows an imaging system 400 with an imaging unit 100.
- the imaging system 400 may be part of a mobile or stationary camera system, for example a surveillance camera system, a camera for diagnostics or surgical methods, a camera embedded in a manufacture process control system, a digital microscope, a digital telescope, a still camera or video camera for both consumer and professional applications as well as a camera to detect gestures or poses for remote control or gaming applications.
- the imaging system is integrated in a handheld device including a camera system like a cellular phone, a personal digital assistant, or a music player, by way of example.
- the imaging unit 100 includes an aperture unit 110, which is arranged such that radiation passing through the aperture of the aperture unit 110 passes through a lens unit 120 and is incident on an imaging sensor unit 140.
- the aperture unit 110 may also be positioned inside the lens unit 120, in particular at the position of the pupil plane of the lens unit 120.
- the lens unit 120 may be a single lens, an array of micro-lenses or a lens assembly including a plurality of lenses.
- the lens unit 120 features a longitudinal chromatic aberration and the imaging unit 100 does not contain elements compensating for the longitudinal (axial) chromatic aberration to generate colour-corrected images.
- the lens unit 120 is a compound lens formed of a highly dispersive material like glass or plastics, where the index of refraction is a function of the wavelength of the incident light such that the focal length varies as a function of the wavelength.
- the lens unit 120 images infrared radiation in a first focal plane F IR , visible red light in a focal plane F R , green light in a focal plane F G and blue light in a focal plane F B .
- the lens unit 120 may include compensation elements compensating for spherical and/or field dependent aberrations such that the lens unit 120 exhibits no or only negligible spherical and field dependent aberrations.
- the imaging sensor unit 140 includes a plurality of pixel sensors, wherein each pixel sensor contains a photo sensor for converting a photo signal from the incident radiation into an electronic signal.
- the imaging sensor unit 140 may output an image signal containing the pixel values of all pixel sensors of an imaging sensor unit 140 in a digitized form.
- the imaging unit 100 may provide a greyscale image and an infrared image.
- a colour filter unit 130 may be arranged between the lens unit 120 and the imaging sensor unit 140.
- the colour filter unit 130 may comprise a plurality of colour filter sections, wherein each colour filter section has a filter colour, for example blue, red, green, white or IR (infrared).
- Each colour filter section may be assigned to one single pixel sensor such that each pixel sensor receives colour-specific image information.
- the imaging sensor unit 140 outputs two, three, four or more different sub-images, wherein each sub-image contains image information with regard to a specific frequency range of the incoming radiation.
- One of the sub-images may describe the infrared portion of the imaged scene.
- the imaging sensor unit 140 captures a plurality of non-colour-corrected first images of different spectral content or composition, for example a "red" image using the filter colour "red", a "blue" image using the filter colour "blue", and a "green" image using the filter colour "green".
- One of the first images may consist of or contain at least a portion of the infrared range, and the imaging sensor unit 140 outputs respective image signals.
- the images of different spectral content may also include images with overlapping spectral content.
- the imaging sensor unit 140 may include broadband sensitive pixel sensors which are assigned to broadband colour filter sections with the filter colour "white" being approximately transparent for the whole visible spectrum.
- the first images of different spectral content are referred to as colour planes or images and may include a greyscale image containing information over the whole visible spectrum or may refer to spectral content outside the visible range, for example infrared radiation.
- An image processing unit 200 receives the colour planes that contain both luminance information and colour information and computes a modified output image signal.
- the modified output image signal represents an image that may have reduced or enhanced depth-of-field compared to the first images, or may be a re-focused image, or an image featuring a 3D effect, by way of example.
- the modified image may be stored in a non-volatile memory 310 of the imaging system 400, for example as a set of digital values representing a coloured image.
- the non-volatile memory 310 may be a memory card of a camera system.
- the modified image may be displayed on a display device of the imaging system 400 or may be output to another system connected to the imaging system 400 via a wired or wireless communication channel or may be supplied to a processing system or application for processing further the information contained in the modified output image.
- FIG. 1B shows the image processing unit 200 in more detail.
- an intensity processing unit 202 may compute broadband luminance sharpness information (sharp intensity information) on the basis of lens parameters, for example a PSF (point spread function), descriptive for the imaging properties of the lens unit 120.
- the broadband luminance corresponds to an intensity image.
- a chrominance processing unit 201 may compute chrominance information by correcting chromatic aberrations resulting from the use of the non-colour corrected lens unit 120.
- a synthesizer unit 280 synthesizes a modified output image on the basis of the resulting sharpness information obtained from the luminance information and on the basis of the corrected chrominance information derived from the colour information.
- the intensity processing unit 202 includes a luminance pre-processing unit 250 computing broadband luminance information on the basis of the output signals of the imaging unit 100, for example the colour planes.
- a deconvolution unit 260 uses information describing the lens unit 120 for computing broadband luminance sharpness information (sharp intensity information) on the basis of the broadband luminance information.
- the imaging sensor unit 140 of Figure 1A may supply inter alia a "white" colour plane or greyscale image, for example using an RGBW mosaic pattern like a 4x4 or 2x4 RGBW mosaic pattern. Then, at least under daylight conditions, the "white" colour plane may directly represent the broadband luminance information supplied to the deconvolution unit 260 and the intensity processing unit 202 computes the sharp intensity information on the basis of the "white" colour plane.
- the luminance pre-processing unit 250 may compute a greyscale image on the basis of the colour planes and may supply the computed greyscale image as broadband luminance information to the deconvolution unit 260.
- the chrominance processing unit 201 is concerned with the colour information that may include the IR plane. Due to the chromatic aberrations of the lens unit 120, different colours focus at different distances, wherein for the typical dispersive materials used for the lens unit shorter wavelengths of light focus at a nearer distance than longer wavelengths.
- the chrominance processing unit 201 may generate a coarse depth by measuring the focus of each colour plane and comparing the obtained focus measures with each other. Based on this information the chromatic aberrations in the colour planes, which may include the IR plane, may be corrected in order to eliminate a colour bleeding effect resulting from the chromatic aberrations.
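A minimal sketch of this coarse depth estimation, assuming a Laplacian-based focus measure (the patent does not prescribe a specific measure; `local_sharpness`, the window size and the function names are illustrative choices):

```python
import numpy as np

def local_sharpness(plane, k=3):
    """Focus measure: absolute Laplacian response, averaged over a k x k window."""
    lap = np.abs(
        -4.0 * plane
        + np.roll(plane, 1, axis=0) + np.roll(plane, -1, axis=0)
        + np.roll(plane, 1, axis=1) + np.roll(plane, -1, axis=1)
    )
    pad = k // 2
    padded = np.pad(lap, pad, mode="edge")
    out = np.zeros_like(lap)
    for dy in range(k):
        for dx in range(k):
            out += padded[dy:dy + lap.shape[0], dx:dx + lap.shape[1]]
    return out / (k * k)

def coarse_depth_map(planes):
    """planes: dict of name -> 2-D array (colour and IR planes).
    Returns the plane names and, per pixel, the index of the sharpest plane,
    which serves as a coarse depth label (each colour focuses at another depth)."""
    names = list(planes)
    stack = np.stack([local_sharpness(planes[n]) for n in names])
    return names, np.argmax(stack, axis=0)
```

Because blue, green, red and infrared focus from near to far, the index of the sharpest plane at a pixel maps directly to one of the coarse depth layers.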
- the chrominance processing unit 201 may include a chrominance pre-processing unit 210 pre-processing the colour planes, which may include an IR plane, and a depth estimator 220 for generating a depth map associating depth values to the pixel values of the colour and IR planes.
- a correction unit 230 corrects chromatic aberrations mainly originating from the use of the non-colour-corrected lens unit 120. For example, the correction unit 230 analyzes the colour behaviour at edges that do not show chromatic aberration to evaluate a range limitation for colour difference signals, identifies pixel values violating the range limitation as colour fringes caused by chromatic aberrations, and replaces these pixel values with allowed values. According to another embodiment, the correction unit 230 may exchange sharpness information among the colour planes, for example on the basis of the depth maps.
- All elements of the image processing unit 200 of Figure 1 may be embodied by hardware only, for example as integrated circuits, FPGAs (field programmable gate arrays), ASICs (application specific integrated circuits), by software only, which may be implemented, for example in a computer program or a microcontroller memory, or by a combination of hardware and software elements.
- FIG. 2 refers to a schematic cross-sectional view of an imaging unit 100.
- the imaging unit 100 may include an aperture unit 110, wherein, during an exposure period, radiation, e.g. visible light and/or infrared radiation, which is descriptive for an image of a scene or object passes through an aperture 115 of the aperture unit 110 and a lens unit 120 and incidents onto an imaging sensor unit 140.
- the imaging sensor unit 140 comprises a plurality of pixel sensors 145. Each pixel sensor 145 contains a photo sensor that converts a photo signal from the incident light into an electronic signal.
- the pixel sensors 145 may be formed in a semiconductor substrate.
- the pixel sensors 145 may be arranged in one plane or in different planes.
- the imaging unit 100 may comprise a colour filter unit 130 that may be arranged between the lens unit 120 and the imaging sensor unit 140 or between the aperture unit 110 and the lens unit 120.
- the imaging sensor unit 140 may have a vertically integrated photodiode structure with deep photodiodes formed in a substrate section a few microns beneath surface photodiodes formed adjacent to a substrate surface of a semiconductor substrate. Visible light is absorbed in the surface section of the semiconductor substrate, whereas infrared radiation penetrates deeper into the semiconductor substrate. As a result, the deep photodiodes only receive infrared radiation.
- the imaging sensor unit 140 may have a laterally integrated photodiode structure with photodiodes arranged in an array.
- the colour filter unit 130 may be arranged in close contact to the imaging sensor unit 140 and may include a plurality of colour filter sections 135, wherein each colour filter section 135 has a filter colour, for example green, red, blue, magenta, yellow, white or IR (infrared).
- Each colour filter section 135 is assigned to one single pixel sensor 145 such that each pixel sensor 145 receives colour-specific image information.
- the colour filter sections 135 may be arranged matrix-like in columns and rows. Colour filter sections 135 assigned to different filter colours may alternate along the row direction and the column direction in a regular manner.
- four colour filter sections 135 forming a 2 x 2 matrix may be arranged to form a Bayer mosaic pattern, wherein colour filter sections 135 with the filter colour "green" are arranged on a first diagonal of the 2 x 2 matrix, and one colour filter section 135 with the filter colour "red" and one colour filter section 135 with the filter colour "blue" are arranged on the other diagonal of the 2 x 2 matrix.
- the sampling rate for the filter colour "green" is twice that of the filter colours "red" and "blue" to take into account that the colour green carries most of the luminance information for the human eye.
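The 2 x 2 Bayer unit matrix and its tiling across the filter can be sketched as follows (the placement below is one valid permutation of the unit cell):

```python
import numpy as np

# 2x2 Bayer unit: "green" on one diagonal, "red" and "blue" on the other,
# so green is sampled twice as often as red or blue.
BAYER_UNIT = np.array([["G", "R"],
                       ["B", "G"]])

def bayer_mosaic(rows, cols):
    """Tile the unit matrix into a rows x cols layout of filter colours."""
    reps = (rows // 2 + 1, cols // 2 + 1)
    return np.tile(BAYER_UNIT, reps)[:rows, :cols]
```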
- the colour filter sections 135 may be arranged to form an RGBE mosaic pattern with "Emerald" as a fourth filter colour, a CYYM mosaic pattern with one cyan, two yellow and one magenta colour filter section 135, or a CYGM mosaic pattern with one cyan, one yellow, one green and one magenta colour filter section 135 arranged in 2x2 unit matrices which are repeatedly arranged within the colour filter unit 130.
- the colour filter unit 130 includes a mosaic of unit matrices with three colour filter sections of three different filter colours and one transparent filter section without colour filtering properties and transparent for all colours within the visible spectrum.
- the transparent and the colour filter sections 135 may be arranged to form an RGBW mosaic pattern, for example a 4x4 or a 2x4 RGBW mosaic pattern, by way of example.
- the filter range of the colour filter sections 135 is not restricted to the visible part of the spectrum.
- the colour filter 130 contains at least one colour filter section type being transparent for infrared radiation.
- the colour filter 130 is an RGBIR filter where each 2x2 unit matrix contains one red, one green, one blue and one infrared colour filter section 135 and where the unit matrices are regularly arranged to form a mosaic pattern.
- the four colours R, G, B and IR can be arranged by any permutation within the 2x2 unit matrices.
- the infrared radiation may pass the colour filter unit 130 in sections 133 transparent for infrared radiation between the colour filter sections 135.
- the colour filter unit 130 does not include sections assigned to the deep photodiodes, since the colour filter sections 135 may be transparent for a portion of the frequency range of infrared radiation.
- each of the blue, green, red and infrared colour images will focus at a different distance, from near to far respectively, such that by measuring and comparing the sharpness of each of the four image planes, a four-layer depth map can be computed.
- Each lens unit 120 may be realized as micro-lens array including a plurality of segments. Each lens segment of a lens unit 120 may be assigned to one single pixel sensor 145 and one colour filter section 135.
- the lens unit 120 may be realized as an objective, comprising several single lenses, adapted to image objects in the object space to the sensor plane. Due to chromatic aberrations, each colour image blue, green, red and infrared will focus in another focal plane at different distances.
- the distance between the first focal plane for infrared radiation and any second focal plane assigned to visible light typically does not match with the vertical distance between the first and second sensor planes.
- at least one of the first and second images is severely out of focus when both the infrared image and the image for visible light are captured contemporaneously.
- Figure 3A refers to an embodiment of a luminance pre-processing unit 250 which receives colour-filtered first images, for example the colour planes red, green and blue from an imaging sensor unit 140 using an RGB Bayer mosaic colour filter.
- the imaging sensor unit 140 may be a CYGM sensor supplying the colour planes cyan, yellow, magenta and green or a CYYM sensor supplying the colour planes cyan, yellow and magenta.
- the luminance pre-processing unit 250 may include, for each received colour plane, an interpolation unit 252 which interpolates those pixel values for which the colour plane has no pixel values available since the respective pixel is assigned to another filter colour. Interpolation may be performed by estimating the missing pixel values from neighbouring pixel values of the same colour plane and/or from the corresponding pixels of other colour planes. At least embodiments where the imaging unit 100 outputs only colour channels may provide a weighting unit 256 for one, two or all of the colour channels, respectively. Each weighting unit 256 multiplies the pixel values of the colour plane it is assigned to by a specific value.
- a superposition unit 254 may add up the interpolated and weighted colour planes to obtain a greyscale image to be supplied to the deconvolution unit 260 of Figure 1B .
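The interpolate-weight-sum chain of units 252, 256 and 254 can be sketched as below; the 4-neighbour averaging is one simple interpolation choice among those contemplated, and the function names are illustrative:

```python
import numpy as np

def fill_missing(plane, mask):
    """Unit 252: estimate pixel values missing in a colour plane (mask == False)
    from the mean of the available 4-neighbour values of the same plane."""
    filled = plane.astype(float).copy()
    h, w = plane.shape
    for y in range(h):
        for x in range(w):
            if not mask[y, x]:
                vals = [plane[ny, nx]
                        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1))
                        if 0 <= ny < h and 0 <= nx < w and mask[ny, nx]]
                filled[y, x] = sum(vals) / len(vals) if vals else 0.0
    return filled

def broadband_luminance(planes, masks, weights):
    """Units 256 + 254: weighted superposition of the interpolated colour planes,
    normalized by the total weight, yielding a greyscale (intensity) image."""
    total = sum(weights.values())
    return sum(weights[c] * fill_missing(planes[c], masks[c]) for c in planes) / total
```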
- the computed broadband luminance may be obtained by summing up the red, green and blue colour images at equal weights, at least in the case of daylight conditions.
- Each weighting unit 256 may be non-configurable and may be a wired connection for weighting a colour plane with the weight "1".
- Another embodiment provides configurable weighting units 256 and a weighting control unit 258 that configures the weights of the weighting units 256 as a function of the illumination conditions or in response to a user input. At least one of the weights may have the value "0".
- the intensity image is the weighted average of all available spectral components in the sensor plane(s) including white and infrared.
- the imaging system uses information about the type of the light source illuminating the scene to select the weights for obtaining the computed broadband luminance.
- information about suitable colour weights may be stored in the weighting control unit 258, wherein the colour weights are predefined such that the spectral power density of the light source, multiplied with the sum of the colour components obtained by weighting each colour sensitivity with its respective weight, is broad and flat over the visible range of the spectrum in order to achieve a depth-invariant PSF.
- the weighting control unit 258 may be adapted to classify a light source illuminating the imaged scene as daylight or artificial light, for example as incandescent lamp, fluorescent lamp, or LED lamp. In accordance with other embodiments, the weighting control unit 258 may process a user command indicating a light source type.
- the imaging unit 100 is based on a multi-layer image sensor, where pixel sensor layers are stacked within a transparent substrate and each pixel sensor layer is sensitive to another spectral range of the incoming light taking advantage of the fact that red, green, and blue light penetrate the substrate to different depths.
- the interpolation unit 252 may be omitted.
- Figure 3B refers to an embodiment of a luminance pre-processing unit 250 which receives colour-filtered first images, for example the colour planes red, green, blue and infrared from an imaging sensor unit 140 using an RGBIR Bayer mosaic colour filter.
- the imaging sensor unit 140 may be a CYGMIR sensor supplying the colour planes cyan, yellow, magenta, green and infrared or a CYYMIR sensor supplying the colour planes cyan, yellow, magenta and infrared.
- the computed broadband luminance may be obtained by summing up the red, green, blue and infrared colour images at equal weights, at least in the case of daylight conditions.
- Figure 3C refers to an embodiment of the deconvolution unit 260.
- the deconvolution unit 260 receives the broadband luminance information, for example as a greyscale image, and recovers sharpness information using information descriptive for the imaging properties of the lens unit 120.
- the deconvolution unit 260 deconvolves the greyscale image using a PSF descriptive for the imaging properties of the lens unit 120 of Figure 1 .
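A frequency-domain Wiener deconvolution is one standard way to invert a known, approximately depth-invariant blur kernel; the patent does not fix the deconvolution algorithm, and the SNR parameter below is an assumption of this sketch:

```python
import numpy as np

def wiener_deconvolve(image, psf, snr=100.0):
    """Deconvolve a greyscale image with an a-priori known PSF (blur kernel)
    using a Wiener filter: F = H* / (|H|^2 + 1/SNR)."""
    h, w = image.shape
    ph, pw = psf.shape
    # zero-pad the PSF to image size and centre it at the origin
    psf_pad = np.zeros((h, w))
    psf_pad[:ph, :pw] = psf
    psf_pad = np.roll(psf_pad, (-(ph // 2), -(pw // 2)), axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(image)
    F = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)
    return np.real(np.fft.ifft2(F * G))
```

The regularizing `1/snr` term keeps the filter stable where the PSF's frequency response is weak, which a plain inverse filter would amplify into noise.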
- the point spread function may be stored in a memory unit 262.
- a deconvolution sub-unit 264 performs the deconvolution and outputs a greyscale image representing the sharp intensity information.
- the memory unit 262 may store the in-focus PSF.
- the memory unit 262 may store several point spread functions for different depth-of-field ranges and one of them may be selected in response to a corresponding user request.
- the in-focus PSF and one, two or three further PSFs corresponding to user-selectable scene modes like "macro", "portrait" or "landscape" may be provided, for example, where the depth invariance of the PSF is not sufficiently large.
- the resulting sharp intensity information is approximately depth-invariant for larger depth of-field for such objects that have a broadband spectrum. This is the case for most of the real world objects.
- Figures 4A to 4E refer to details and effects of the deconvolution process.
- Figure 4A shows a first diagram with spectral sensitivities of an RGB image sensor.
- Curve 422 is the spectral sensitivity of a pixel assigned to the filter colour "red",
- curve 424 the spectral sensitivity of a pixel assigned to the filter colour "green", and
- curve 426 the spectral sensitivity of a pixel assigned to the filter colour "blue".
- spectral sensitivity is plotted as a function of the wavelength.
- Figure 4A shows a second diagram illustrating spectral sensitivity for "conventional" luminance 428 on the left hand side and a third diagram illustrating spectral sensitivity for broadband luminance 429.
- luminance is obtained from the RGB colour planes by RGB to YUV conversion, wherein in the YUV space Y represents the greyscale image or luminance and U and V represent two differential colour channels or chrominance, and wherein the luminance is defined such that it closely represents the photopic luminosity function defined by the CIE (Commission Internationale de l'Eclairage) 1931 standard.
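For comparison, conventional luminance uses CIE-motivated weights in which green dominates (the Rec. 601 coefficients below are one common example, not mandated by the text), whereas the computed broadband luminance weights the channels equally under daylight:

```python
def conventional_luminance(r, g, b):
    """Conventional Y: green-dominated weights approximating the CIE 1931
    photopic luminosity function (Rec. 601 coefficients as an example)."""
    return 0.299 * r + 0.587 * g + 0.114 * b

def equal_weight_luminance(r, g, b):
    """Computed broadband luminance under daylight: equal channel weights."""
    return (r + g + b) / 3.0
```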
- the conventional luminance spectral sensitivity curve 428 shown in the second diagram results from applying these weights to the spectral sensitivity curves 422, 424, 426 in the first diagram.
- the broadband luminance 429 as computed in accordance with the embodiments mainly corresponds to a white image captured by panchromatic sensor elements, or a greyscale image captured through a monochrome sensor, under daylight conditions where the captured image has a broadband property.
- the computed broadband luminance may be computed similar to the intensity as defined in the HSI colour space.
- the computed broadband luminance is obtained by summing up the red, green and blue colour images at equal weights in the case of daylight conditions.
- the weights of the colour channels may be chosen such that, given the computing resources available in the imaging system, the final spectral sensitivity response is as broad and flat as possible within the visible spectrum in order to secure a depth-invariant PSF, but at least flatter than the photopic luminosity function defined by the CIE 1931 standard.
- the computed broadband luminance 429 is approximately flat over a wavelength range of at least 100 nm or 200 nm, wherein in the flat range the sensitivity does not change by more than 10 % or 20 %.
- an amplitude of the final spectral sensitivity response does not change by more than 50 % over the half of the visible spectrum.
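The flatness criteria above can be expressed as a simple check on sampled sensitivity values (a sketch; the sampling grid and the threshold are parameters of the respective embodiment):

```python
def is_flat(sensitivity, max_change=0.10):
    """True if, over the sampled band, the sensitivity changes by no more
    than max_change (e.g. 10% or 20%) relative to its maximum."""
    s = list(sensitivity)
    peak = max(s)
    return (peak - min(s)) / peak <= max_change
```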
- the broadband luminance may be obtained directly from a suitable white channel, by demosaicing the colour channels of a Bayer sensor, an RGB sensor, an RGBIR sensor, or a CYGM, CYGMIR, CYYM or CYYMIR sensor and summing up corresponding pixel values at equal weights, or by adding up the colour channels, which may include the IR channel of a stacked or multi-layer RGB or RGBIR sensor.
- the weights may be selected such that they take into account the spectral power density of the light source(s) illuminating the scene from which the image is captured, for example an incandescent light, a fluorescent lamp, or a LED lamp, such that a resulting point spread function is less depth variant than obtained by using equal weights.
- the lower half of Figure 4B shows a family of one-dimensional PSFs 402, 404, 406 estimated for a real lens for the spectral sensitivity curve 428 given for conventional luminance in the upper half of Figure 4B.
- Figure 4C shows a family of one-dimensional PSFs 412, 414, 416 estimated for a real lens for the spectral sensitivity curve 429 given for broadband luminance in the upper half of Figure 4C.
- Each single PSF describes the response of the respective lens arrangement to a point source for a given distance between the point source and a lens plane in which the lens arrangement is provided.
- the PSF plots cover object distances from 10 cm to infinity.
- Figures 4B and 4C show that broadband luminance results in a more depth-invariant PSF than conventional luminance. More specifically, the PSFs for near distances vary more strongly in the case of conventional luminance.
- Figures 4B and 4C refer to daylight conditions. Under other lighting conditions, broadband luminance can be obtained by selecting appropriate weights to each colour. According to an embodiment, in incandescent light, blue has more weight than green and green more weight than red, wherein the weights are selected such that the resulting luminance has the above described "broadband luminance" characteristic.
- Figure 4D schematically illustrates a family of one-dimensional PSFs 422, 424, 426 for a conventional, colour-corrected lens arrangement.
- Figure 4E refers to a family of one-dimensional PSFs 432, 434, 436 of a non-colour-corrected lens arrangement.
- Each single PSF describes the response of the respective lens arrangement to a point source for a given distance between the point source and a lens plane in which the lens arrangement is provided. Comparing the PSFs in both Figures shows that for a non-colour-corrected lens arrangement the PSFs for different distances deviate from each other to a significantly lesser degree than those for colour-corrected lens arrangements.
- although a lens arrangement with chromatic aberrations introduces blur, the blur is approximately depth-invariant over a wide range, so that a simple deconvolution process with the a-priori known blur kernel suffices to restore sharp intensity information.
- the PSF that corresponds to the in-focus position is selected for deconvolution. Since the in-focus position is a symmetry position and the PSF of non-colour-corrected lenses changes similarly and only slowly in both the near and far defocus positions, the in-focus PSF gives optimal results.
- the resulting sharp intensity information is approximately depth-invariant over a larger depth-of-field for objects that have a broadband spectrum. This is the case for most real-world objects.
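As a sketch of such a deconvolution with the a-priori known in-focus blur kernel, a simple Wiener filter in the Fourier domain might look as follows; the scalar SNR regularisation term is an illustrative choice, not the patent's method:

```python
import numpy as np

def wiener_deconvolve(blurred, psf, snr=1e6):
    """Restore sharp intensity from a blurred greyscale image using the
    a-priori known blur kernel (simple Wiener filter in the Fourier
    domain). `psf` is zero-padded to the image size with its origin at
    index (0, 0)."""
    H = np.fft.fft2(psf, s=blurred.shape)
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)  # regularised inverse
    return np.real(np.fft.ifft2(W * np.fft.fft2(blurred)))
```

With a kernel free of spectral zeros the restoration is nearly exact; near zeros of the kernel's transfer function the `1/snr` term prevents noise amplification.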
- Figure 4F shows a first diagram with spectral sensitivities of an RGBIR image sensor.
- Curve 421 is the spectral sensitivity of a pixel assigned to an IR spectral sensitivity filter.
- Curve 422 is the spectral sensitivity of a pixel assigned to the filter colour "red".
- Curve 424 is the spectral sensitivity of a pixel assigned to the filter colour "green".
- Curve 426 is the spectral sensitivity of a pixel assigned to the filter colour "blue". In all diagrams, spectral sensitivity is plotted as a function of the wavelength.
- Figure 4F shows a second diagram illustrating spectral sensitivity for "conventional" luminance 428 based on the colour planes red, green and blue on the left hand side and a third diagram illustrating spectral sensitivity for broadband luminance 430 resulting from all colour planes including infrared.
- the conventional luminance spectral sensitivity curve 428 shown in the second diagram results from applying the conventional luminance weights to the spectral sensitivity curves 422, 424, 426 in the first diagram.
- the broadband luminance 430 as computed in accordance with the embodiments mainly corresponds to a white image captured by panchromatic sensor elements or a greyscale image captured through a monochrome sensor under daylight conditions, where the captured image has a broadband property.
- the computed broadband luminance may be computed similarly to the intensity as defined in the HSI colour space.
- the computed broadband luminance is obtained by summing up the red, green and blue colour images at equal weights in the case of day light conditions.
- the weights of the colour channels, for example the red, green, blue and infrared channels, may be chosen such that, given the computing resources available in the imaging system, the final spectral sensitivity response is as broad and flat as possible within the visible spectrum in order to secure a depth-invariant PSF, but is at least flatter than the photopic luminosity function defined by the CIE 1931 standard.
- the computed broadband luminance 429 is approximately flat over a wavelength range of at least 100 nm or 200 nm, wherein in the flat range the sensitivity does not change by more than 10 % or 20 %.
- an amplitude of the final spectral sensitivity response does not change by more than 50 % over half of the visible spectrum.
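The flatness criteria above can be checked numerically. A small helper (hypothetical, for illustration only) that tests whether a sampled sensitivity curve contains a window of at least a given width in which the sensitivity varies by no more than a given tolerance:

```python
import numpy as np

def is_broadband_flat(wavelengths_nm, sensitivity, span_nm=100.0, tol=0.10):
    """True if some window of at least `span_nm` exists in which the
    sensitivity changes by no more than `tol` of the window maximum."""
    wl = np.asarray(wavelengths_nm, dtype=float)
    s = np.asarray(sensitivity, dtype=float)
    for i in range(len(wl)):
        for j in range(i + 1, len(wl)):
            if wl[j] - wl[i] >= span_nm:
                window = s[i:j + 1]
                if window.max() - window.min() <= tol * window.max():
                    return True
                break  # wider windows starting at i are only harder to satisfy
    return False
```

A perfectly flat curve passes the 100 nm / 10 % criterion, while a narrow peaked curve (closer in shape to the photopic luminosity function) fails it.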
- Figure 5A refers to a depth map estimator 220 as an element of the chrominance processing unit 201 of Figure 1B .
- the first non-colour-corrected sub-images or colour planes may be supplied to storage units 222 holding at least two or all of the colour planes for the following operation.
- a depth map unit 224 compares relative sharpness information across the colour planes temporarily stored in the storage units 222, for example using the DCT (Discrete Cosine transform).
- the relative sharpness between colour planes can be measured by computing, on the neighbourhood of each pixel at a given location in the colour plane, the normalized sum of differences between the local gradient and the average gradient.
- the depth map unit 224 may generate a depth map assigning distance information to each pixel or each group of pixels.
- the Hadamard transform is used as a sharpness measure. The results of the Hadamard transform are similar to those of the DCT, but the computational effort is lower since only additions and subtractions are required for performing the Hadamard transform. Alternatively or in addition, other known sharpness measures may be used.
- the gradients are computed in the logarithmic domain where the normalization step may be omitted since the logarithmic domain makes the depth estimation independent of small variations in lighting conditions and small variations of intensity gradients in different colours.
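One possible reading of the gradient-based sharpness measure described above, sketched with plain NumPy; the window size, the gradient operator and the exact form of the log-domain option are assumptions for illustration:

```python
import numpy as np

def _box(img, size):
    """Box filter with edge padding (helper for local sums/averages)."""
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out

def relative_sharpness(plane, size=5, log_domain=True, eps=1e-6):
    """Per-pixel sharpness of one colour plane: the sum, over a small
    neighbourhood, of absolute differences between the local gradient
    magnitude and the neighbourhood-average gradient magnitude."""
    p = np.log(plane + eps) if log_domain else plane
    gy, gx = np.gradient(p)
    g = np.hypot(gx, gy)
    avg = _box(g, size) / size ** 2      # average gradient in the window
    return _box(np.abs(g - avg), size)   # neighbourhood sum of differences
```

Comparing this measure across the colour planes at each pixel indicates which plane is sharpest there, which is the basis of the depth estimate.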
- the depth map may be supplied to the correction unit 230 of Figure 1B .
- the correction unit 230 corrects chromatic aberrations in the colour planes. For example, the correction unit 230 evaluates a range limitation for colour differences on the basis of colour behaviour on edges that do not show chromatic aberration, identifies pixels violating the range limitation and replaces for these pixels the pixel values with allowed values.
- the correction unit 230 may use information included in the depth map to provide corrected colour images.
- the correction unit 230 may perform a sharpness transport by copying high frequencies of the sharpest colour plane for the respective image region to the other colour planes. For example, to each blurred sub-region of a colour plane a high-pass filtered version of the sharpest colour plane for the respective sub-region may be added.
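The sharpness transport just described might be sketched as follows; a box filter stands in here for whatever low-pass split an actual implementation uses:

```python
import numpy as np

def box_blur(img, size=5):
    """Box low-pass filter with edge padding."""
    pad = size // 2
    p = np.pad(img, pad, mode="edge")
    out = np.zeros_like(img, dtype=float)
    for dy in range(size):
        for dx in range(size):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / size ** 2

def sharpness_transport(planes, sharpest):
    """Add the high-frequency part of the sharpest colour plane to the
    other colour planes for the considered image region."""
    high = planes[sharpest] - box_blur(planes[sharpest])
    return {k: v if k == sharpest else v + high for k, v in planes.items()}
```

In a full implementation the sharpest plane would be chosen per sub-region from the depth map rather than globally, as the surrounding text describes.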
- Figure 5B shows a depth map estimator 220 with an additional storage unit 222 for holding the IR plane.
- Figure 5C refers to a synthesizing unit 280 that transfers sharp intensity information into the corrected colour images output by the correction unit 230 of Figure 1B to obtain a modified output image with extended depth of field, by way of example.
- the synthesizing unit 280 transfers the sharp intensity information into the corrected images on the basis of a depth map obtained, for example during chrominance information processing. For example, the sharp intensity information from the sharpest spectral component is transferred to the uncorrected spectral components.
- the synthesizing unit 280 transfers the sharp intensity information on the basis of the depth map and user information indicating a user request concerning picture composition. The user request may concern the implementation of a post-capture focus distance selection or a focus range selection.
- the synthesizing unit 280 may obtain the modified output image by an interleaved imaging approach, where, for example, for mesopic illumination conditions the output image is derived from both the colour images obtained from the chrominance path and the greyscale image obtained from the luminance path.
- each pixel value is updated on the basis of neighbouring pixel values by means of a distance function and a similarity function, wherein the similarity function uses information from both the respective colour image and the greyscale image and determines a weight for the information from the greyscale image.
- the weight may be determined by the fraction of saturated pixels in an image patch around that neighbour pixel whose distance and similarity is currently evaluated.
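A sketch of one such interleaved update for a single colour plane; the Gaussian kernel shapes, the sigmas and the way the saturation fraction blends the two similarity terms are assumptions for illustration, not the patent's exact formulation:

```python
import numpy as np

def interleaved_update(colour, grey, saturated, radius=2,
                       sigma_d=2.0, sigma_s=0.1):
    """One interleaved-imaging update of a single colour plane.

    Each pixel is recomputed as a weighted average of its neighbours,
    combining a spatial distance weight with a similarity weight; the
    similarity uses both the colour plane and the greyscale image, and
    the greyscale term is weighted by the fraction of saturated pixels
    in a patch around the evaluated neighbour."""
    h, w = colour.shape
    # per-pixel fraction of saturated pixels in a (2*radius+1)^2 patch
    pad = np.pad(saturated.astype(float), radius, mode="edge")
    frac = np.zeros((h, w))
    for dy in range(2 * radius + 1):
        for dx in range(2 * radius + 1):
            frac += pad[dy:dy + h, dx:dx + w]
    frac /= (2 * radius + 1) ** 2
    out = np.empty_like(colour, dtype=float)
    for y in range(h):
        for x in range(w):
            acc = norm = 0.0
            for ny in range(max(0, y - radius), min(h, y + radius + 1)):
                for nx in range(max(0, x - radius), min(w, x + radius + 1)):
                    d2 = (ny - y) ** 2 + (nx - x) ** 2
                    wg = frac[ny, nx]  # weight of the greyscale term
                    sim = ((1.0 - wg) * (colour[ny, nx] - colour[y, x]) ** 2
                           + wg * (grey[ny, nx] - grey[y, x]) ** 2)
                    wt = (np.exp(-d2 / (2 * sigma_d ** 2))
                          * np.exp(-sim / (2 * sigma_s ** 2)))
                    acc += wt * colour[ny, nx]
                    norm += wt
            out[y, x] = acc / norm
    return out
```

Since the output is a convex combination of neighbouring colour values, it always stays within the value range of the input plane.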
- the greyscale image to be combined with the colour images is obtained by a process taking into account the imaging properties of the non-colour-corrected lens unit, for example by deconvolving a greyscale image obtained from the imaging units with the lens unit PSF.
- the synthesizing unit 280 obtains the output image by a similar interleaved imaging approach, wherein the greyscale image used in the similarity function may be obtained by deconvolving a light intensity obtained by a transformation of the RGB signals into the HSI (Hue-Saturation-Intensity) space.
- the light intensity signal or the underlying RGB images may be pre-processed to correspond to a broadband luminance signal, for which the PSF is more depth-invariant.
- Figure 6A refers to an embodiment based on RGBW image sensors outputting R, G, B, and W planes (602).
- the W plane may directly give a broadband luminance signal or may be pre-processed to represent a broadband luminance signal (650).
- the W plane is deconvolved using the PSF of the lens unit (660).
- the original R, G, B planes are provided to the chrominance path (610). Chromatic aberrations in the R, G, B planes are compensated for (630).
- a depth map may be derived from the original R, G, B planes (620) and used to compensate for or correct chromatic aberrations in the R, G, B planes.
- the deconvolved W plane is interleaved with the corrected R, G, B planes respectively (680).
- a modified colour image comprising modified RGB planes or equivalent information is output (690).
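A minimal end-to-end sketch of this RGBW flow, with the step numbers from above noted as comments. The depth-map based aberration correction (620/630) is left out, and the interleaving (680) is reduced to a simple high-frequency transfer, so this is illustrative only:

```python
import numpy as np

def process_rgbw(r, g, b, w, psf, snr=1e6):
    """Figure 6A flow: the W plane serves as broadband luminance (650),
    is deconvolved with the lens PSF (660), and its high frequencies
    are merged into the colour planes (680); the modified planes are
    returned (690). `psf` is image-sized with its origin at (0, 0)."""
    # (650) + (660): Wiener-style deconvolution of the W plane
    H = np.fft.fft2(psf, s=w.shape)
    sharp = np.real(np.fft.ifft2(
        np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr) * np.fft.fft2(w)))
    # (680): transfer the high frequencies of the sharp luminance;
    # a 3x3 box filter stands in for the low-pass split
    p = np.pad(sharp, 1, mode="edge")
    low = sum(p[dy:dy + w.shape[0], dx:dx + w.shape[1]]
              for dy in range(3) for dx in range(3)) / 9.0
    high = sharp - low
    # (690): modified colour planes
    return {"R": r + high, "G": g + high, "B": b + high}
```

With an ideal (delta) PSF and a flat W plane the high-frequency term vanishes and the colour planes pass through unchanged, which is a convenient sanity check.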
- Figure 6B refers to an embodiment based on RGBWIR image sensors outputting R, G, B, IR, and W planes (602).
- the W plane may directly give a broadband luminance signal or may be pre-processed to represent a broadband luminance signal (650).
- the W plane is deconvolved using the PSF of the lens unit (660).
- the original R, G, B, IR planes are provided to the chrominance path (610). Chromatic aberrations in the R, G, B, IR planes are compensated for (630).
- a depth map may be derived from the original R, G, B, IR planes (620) and used to compensate for or correct chromatic aberrations in the R, G, B, IR planes.
- the deconvolved W plane is interleaved with the corrected R, G, B, IR planes respectively (680).
- a modified colour image comprising modified R, G, B, IR planes or equivalent information is output (690).
- Figure 6C refers to an embodiment based on RGB image sensors outputting R, G, and B planes (602).
- the R, G, B planes may be pre-processed, for example transformed into the HSI space (650), wherein the hue value, the saturation and the light intensity are obtained (652).
- the light intensity is deconvolved using the lens PSF (660).
- the original R, G, B planes may be provided to the chrominance path (614) or recalculated from the HSI values using the inverse HSI transformation (610). Chromatic aberrations in the R, G, B planes are compensated for (630).
- a depth map may be computed from the original R, G, B planes (620) and used to compensate for or correct chromatic aberrations in the R, G, B planes.
- the deconvolved light intensity is interleaved with the corrected R, G, B planes respectively (680).
- a modified colour image comprising modified RGB planes or equivalent information is output (690).
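For the RGB case, the pre-processing step (650/652) reduces to computing the HSI light intensity, which is then deconvolved (660). A sketch of just the intensity extraction, using the standard HSI definition:

```python
import numpy as np

def hsi_intensity(r, g, b):
    """Light intensity as defined in the HSI colour space:
    I = (R + G + B) / 3. This is the signal that step (660)
    deconvolves with the lens PSF."""
    return (r + g + b) / 3.0
```

Hue and saturation (652) are carried along the chrominance path; after deconvolution, modified R, G, B planes can be recovered via the inverse HSI transform (610), as the step list above indicates.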
- Figure 6D refers to an embodiment based on RGBIR image sensors outputting R, G, B and IR planes (602).
- the R, G, B planes may be pre-processed, for example transformed into the HSI space (650), wherein the hue value, the saturation and the light intensity are obtained (652).
- the light intensity is deconvolved using the lens PSF (660).
- the original R, G, B, IR planes may be provided to the chrominance path (614). Chromatic aberrations in the R, G, B, IR planes are compensated for (630).
- a depth map may be computed from the original R, G, B, IR planes (620) and used to compensate for or correct chromatic aberrations in the R, G, B, IR planes.
- the deconvolved light intensity may be interleaved with the corrected R, G, B, IR planes respectively (680).
- a modified colour image comprising modified R, G, B, and IR planes or equivalent information is output (690).
- Figure 7 refers to a method of operating an imaging system.
- the method provides capturing at least two non-colour-corrected first images of an imaged scene by using a lens unit featuring longitudinal chromatic aberration, wherein the first images have different spectral components (702) and represent, by way of example, different colour planes.
- broadband luminance sharpness information (704) and chrominance information (706) are computed.
- the computed chrominance and broadband luminance sharpness information is combined to provide an output image (708).
- Computing the broadband luminance information may include deconvolving a greyscale image derived from at least one of the non-colour-corrected first images with a point spread function descriptive for the lens unit.
- Computing the chrominance information may include compensating for effects of the chromatic aberrations resulting from the use of the non-colour-corrected lens unit.
- Combining the chrominance and broadband luminance sharpness information may be performed on the basis of an interleaving process using a similarity function based on both the chrominance and the broadband luminance sharpness information.
- the present approach uses the PSF known for the broadband luminance channel such that extended depth-of-field or other effects can be obtained by performing a simple deconvolution.
- the present approach also provides correction of chromatic aberrations in the chrominance channel, wherein remaining small colour artefacts do not degrade the image quality due to the fact that the human eye is less sensitive to chrominance signal aberrations than to luminance aberrations.
Description
- The present invention relates to the field of post-capture digital image processing techniques. An embodiment relates to an imaging system that includes a non-colour-corrected lens unit with longitudinal chromatic aberrations and a processing unit for post-capture digital image processing. A further embodiment refers to a method of operating an imaging system that includes a non-colour-corrected lens unit with longitudinal chromatic aberrations.
-
US 2010/315541 discloses a solid-state imaging device for a camera module, that can increase the depth of field without lowering the resolution signal level by using a timing generation circuit which generates red, green and blue signals from color signals converted by a sensor unit. F. Guichard et al., "Extended Depth-of-Field using Sharpness Transport across Colour Channels", SPIE, Proceedings of Electronic Imaging, 2009, refers to a method of obtaining images with extended depth-of-field where, for a given object distance, at least one colour plane of an RGB image contains the in-focus scene information. Oliver Cossairt et al., "Spectral Focal Sweep: Extended Depth of Field from Chromatic Aberrations", IEEE International Conference on Computational Photography (ICCP), March 2010, provide a spectral focal sweep camera that deconvolves a binary image obtained with a lens with large longitudinal chromatic aberrations to restore an image with extended depth of field. Shen, C.H. and Chen, HH, "Robust focus measure for low-contrast images", International Conference on Consumer Electronics (ICCE), 2006 suggest using a Discrete Cosine transform energy measure for evaluating focus and sharpness information. Manu Parmar and Brian Wandell, "Interleaved Imaging: An Imaging System Design Inspired by Rod-Con Vision", Proceedings of SPIE, 2009, propose an imaging architecture with a high-sensitive monochromatic pixel set to obtain a greyscale image and a low-sensitive trichromatic pixel set to obtain colour images. Under low-light, spatial information of an output image is mainly derived from the greyscale image, whereas under photopic conditions where the high-sensitive pixels are saturated the output image is only derived from the colour images. 
Under mesopic conditions the output image is derived from both the colour and the greyscale images, wherein each pixel value is updated taking into account neighbouring pixel values by using a distance function and a similarity function. The similarity function provides information from both the respective colour and the greyscale image, wherein a weight of the greyscale information is determined by the fraction of saturated pixels in an image patch around the neighbour pixel whose distance and similarity is currently evaluated. - S. Chung et al.; "Removing Chromatic Aberration by Digital Image Processing"; Optical Engineering; Vol. 49(6); June 2010; suggest removing chromatic aberration by analyzing the colour behaviour on such edges that do not show chromatic aberration for evaluating a range limitation for colour difference signals. Pixel values violating the range limitation condition are identified as colour fringes caused by chromatic aberrations and are replaced with allowed values.
- The object of the present invention is to provide an enhanced imaging system for obtaining enhanced images at low computational effort. The object is achieved with the subject-matter of the independent claims. Further embodiments are defined in the dependent claims, respectively. Details and advantages of the invention will become more apparent from the following description of embodiments in connection with the accompanying drawings. Features of the various embodiments may be combined unless they exclude each other.
-
Figure 1A is a schematic block diagram of an imaging system including an image processing unit and a non-colour-corrected lens unit with longitudinal chromatic aberrations in accordance with an embodiment of the invention. -
Figure 1B is a schematic block diagram of the image processing unit ofFigure 1A . -
Figure 2 is a schematic cross-sectional view of an imaging unit of an imaging system in accordance with another embodiment of the invention. -
Figure 3A is a schematic block diagram of a detail of a luminance processing unit concerning interpolation of RGB images in accordance with an embodiment. -
Figure 3B is a schematic block diagram of a detail of a luminance processing unit concerning interpolation of RGBIR images in accordance with an embodiment. -
Figure 3C is a schematic block diagram of a detail of a luminance processing unit concerning deconvolution in accordance with further embodiments of the invention. -
Figure 4A shows diagrams illustrating spectral sensitivities for illustrating effects of the present invention. -
Figure 4B shows diagrams illustrating PSF depth variance for conventional luminance for illustrating effects of the present invention. -
Figure 4C shows diagrams illustrating PSF depth variance for broadband luminance for illustrating effects of the present invention. -
Figure 4D is a diagram showing point spread functions for a colour-corrected lens assembly for illustrating effects underlying the invention. -
Figure 4E is a diagram showing point spread functions for a non-colour-corrected lens assembly for illustrating effects underlying the invention. -
Figure 4F shows diagrams illustrating spectral sensitivities including that of an IR sensor for illustrating effects of the present invention. -
Figure 5A is a schematic block diagram of a detail of a chrominance processing unit concerning generation of a depth map in accordance with an embodiment. -
Figure 5B is a schematic block diagram of a detail of a chrominance processing unit concerning generation of a depth map in accordance with an embodiment referring to an IR sensor. -
Figure 5C is a schematic block diagram of a detail of a synthesizing unit according to a further embodiment. -
Figure 6A is a schematic diagram illustrating a process of deriving an output image from RGBW images in accordance with an embodiment. -
Figure 6B is a schematic diagram illustrating a process of deriving an output image from RGBIR images in accordance with an embodiment. -
Figure 6C is a schematic diagram illustrating a process of deriving an output image from RGB images in accordance with an embodiment. -
Figure 6D is a schematic diagram illustrating a process of deriving an output image from RGBIR images in accordance with an embodiment. -
Figure 7 is a simplified flowchart referring to a method of operating an imaging system in accordance with another embodiment of the invention. - As a convention to simplify reading, in the present text we will refer to any sub-range of the visible and infrared (IR) radiation spectrum as a 'colour'. In particular, we will also call IR a colour, even if this naming is not correct from the perspective of human vision. For example, a filter transmitting only radiation in the IR spectral range will also be named a 'colour filter'.
-
Figure 1 shows an imaging system 400 with an imaging unit 100. The imaging system 400 may be part of a mobile or stationary camera system, for example a surveillance camera system, a camera for diagnostics or surgical methods, a camera embedded in a manufacture process control system, a digital microscope, a digital telescope, a still camera or video camera for both consumer and professional applications as well as a camera to detect gestures or poses for remote control or gaming applications. According to other embodiments, the imaging system is integrated in a handheld device including a camera system like a cellular phone, a personal digital assistant, or a music player, by way of example. - The
imaging unit 100 includes an aperture unit 110, which is arranged such that radiation passing through the aperture of the aperture unit 110 passes through a lens unit 120 and is incident on an imaging sensor unit 140. The aperture unit 110 may also be positioned inside of the lens unit 120, in particular at the position of the pupil plane of the lens unit 120. - The
lens unit 120 may be a single lens, an array of micro-lenses or a lens assembly including a plurality of lenses. The lens unit 120 features a longitudinal chromatic aberration and the imaging unit 100 does not contain elements compensating for the longitudinal (axial) chromatic aberration to generate colour-corrected images. For example, the lens unit 120 is a compound lens formed of a highly dispersive material like glass or plastics, where the index of refraction is a function of the wavelength of the incident light such that the focal length varies as a function of the wavelength. For example, the lens unit 120 images infrared radiation in a first focal plane FIR, visible red light in a focal plane FR, green light in a focal plane FG and blue light in a focal plane FB. - According to an embodiment, the
lens unit 120 may include compensation elements compensating for spherical and/or field dependent aberrations such that the lens unit 120 exhibits no or only negligible spherical and field dependent aberrations. - The
imaging sensor unit 140 includes a plurality of pixel sensors, wherein each pixel sensor contains a photo sensor for converting a photo signal from the incident radiation into an electronic signal. The imaging sensor unit 140 may output an image signal containing the pixel values of all pixel sensors of the imaging sensor unit 140 in a digitized form. - The
imaging unit 100 may provide a greyscale image and an infrared image. According to other embodiments a colour filter unit 130 may be arranged between the lens unit 120 and the imaging sensor unit 140. The colour filter unit 130 may comprise a plurality of colour filter sections, wherein each colour filter section has a filter colour, for example blue, red, green, white or IR (infrared). Each colour filter section may be assigned to one single pixel sensor such that each pixel sensor receives colour-specific image information. The imaging sensor unit 140 outputs two, three, four or more different sub-images, wherein each sub-image contains image information with regard to a specific frequency range of the incoming radiation. One of the sub-images may describe the infrared portion of the imaged scene. - From an imaged scene, the
imaging sensor unit 140 captures a plurality of non-colour-corrected first images of different spectral content or composition, for example a "red" image using the filter colour "red", a "blue" image using the filter colour "blue", and a "green" image using the filter colour "green". One of the first images may consist of or contain at least a portion of the infrared range, and the imaging sensor unit 140 outputs respective image signals. The images of different spectral content may also include images with overlapping spectral content. For example, the imaging sensor unit 140 may include broadband sensitive pixel sensors which are assigned to broadband colour filter sections with the filter colour "white" being approximately transparent for the whole visible spectrum. Hereinafter the first images of different spectral content are referred to as colour planes or images and may include a greyscale image containing information over the whole visible spectrum or may refer to spectral content outside the visible range, for example infrared radiation. - An
image processing unit 200 receives the colour planes that contain both luminance information and colour information and computes a modified output image signal. The modified output image signal represents an image that may have reduced or enhanced depth-of-field compared to the first images, or may be a re-focused image, or an image featuring a 3D effect, by way of example. - The modified image may be stored in a
non-volatile memory 310 of the imaging system 400, for example as a set of digital values representing a coloured image. The non-volatile memory 310 may be a memory card of a camera system. Alternatively or in addition, the modified image may be displayed on a display device of the imaging system 400 or may be output to another system connected to the imaging system 400 via a wired or wireless communication channel or may be supplied to a processing system or application for further processing the information contained in the modified output image. -
Figure 1B shows the image processing unit 200 in more detail. From the luminance information, an intensity processing unit 202 may compute broadband luminance sharpness information (sharp intensity information) on the basis of lens parameters, for example a PSF (point spread function), descriptive for the imaging properties of the lens unit 120. The broadband luminance corresponds to an intensity image. On the basis of the colour information, a chrominance processing unit 201 may compute chrominance information by correcting chromatic aberrations resulting from the use of the non-colour-corrected lens unit 120. A synthesizer unit 280 synthesizes a modified output image on the basis of the resulting sharpness information obtained from the luminance information and on the basis of the corrected chrominance information derived from the colour information. - According to an embodiment, the
intensity processing unit 202 includes a luminance pre-processing unit 250 computing broadband luminance information on the basis of the output signals of the imaging unit 100, for example the colour planes. A deconvolution unit 260 uses information describing the lens unit 120 for computing broadband luminance sharpness information (sharp intensity information) on the basis of the broadband luminance information. - The
imaging sensor unit 140 of Figure 1A may supply inter alia a "white" colour plane or greyscale image, for example using an RGBW mosaic pattern like a 4x4 or 2x4 RGBW mosaic pattern. Then, at least under daylight conditions, the "white" colour plane may directly represent the broadband luminance information supplied to the deconvolution unit 260 and the intensity processing unit 202 computes the sharp intensity information on the basis of the "white" colour plane. - Where the
imaging sensor unit 140 does not output a "white" colour plane, the luminance pre-processing unit 250 may compute a greyscale image on the basis of the colour planes and may supply the computed greyscale image as broadband luminance information to the deconvolution unit 260. - While the
intensity processing unit 202 is concerned with sharp intensity information, the chrominance processing unit 201 is concerned with the colour information that may include the IR plane. Due to the chromatic aberrations of the lens unit 120, different colours focus at different distances, wherein for typical dispersive materials used for the lens unit shorter wavelengths of light focus at nearer depth than longer wavelengths. The chrominance processing unit 201 may generate a coarse depth map by measuring the focus of each colour plane and comparing the obtained focus measures with each other. Based on this information the chromatic aberrations in the colour planes, which may include the IR plane, may be corrected in order to eliminate a colour bleeding effect resulting from the chromatic aberrations. - The
chrominance processing unit 201 may include a chrominance pre-processing unit 210 pre-processing the colour planes, which may include an IR plane, and a depth estimator 220 for generating a depth map associating depth values to the pixel values of the colour and IR planes. A correction unit 230 corrects chromatic aberrations mainly originating from the use of the non-colour-corrected lens unit 120. For example, the correction unit 230 analyzes the colour behaviour on such edges that do not show chromatic aberration for evaluating a range limitation for colour difference signals, identifies pixel values violating the range limitation as colour fringes caused by chromatic aberrations and replaces these pixel values with allowed values. According to another embodiment, the correction unit 230 may exchange sharpness information among the colour planes, for example on the basis of the depth maps. - All elements of the
image processing unit 200 of Figure 1 may be embodied by hardware only, for example as integrated circuits, FPGAs (field programmable gate arrays) or ASICs (application specific integrated circuits), by software only, which may be implemented, for example, in a computer program or a microcontroller memory, or by a combination of hardware and software elements. -
Figure 2 refers to a schematic cross-sectional view of an imaging unit 100. The imaging unit 100 may include an aperture unit 110, wherein, during an exposure period, radiation, e.g. visible light and/or infrared radiation, which is descriptive for an image of a scene or object, passes through an aperture 115 of the aperture unit 110 and a lens unit 120 and is incident onto an imaging sensor unit 140. The imaging sensor unit 140 comprises a plurality of pixel sensors 145. Each pixel sensor 145 contains a photo sensor that converts a photo signal from the incident light into an electronic signal. The pixel sensors 145 may be formed in a semiconductor substrate. The pixel sensors 145 may be arranged in one plane or in different planes. The imaging unit 100 may comprise a colour filter unit 130 that may be arranged between the lens unit 120 and the imaging sensor unit 140 or between the aperture unit 110 and the lens unit 120. - For example, the
imaging sensor unit 140 may have a vertically integrated photodiode structure with deep photodiodes formed in a substrate section a few microns beneath surface photodiodes formed adjacent to a substrate surface of a semiconductor substrate. Visible light is absorbed in the surface section of the semiconductor substrate, whereas infrared radiation penetrates deeper into the semiconductor substrate. As a result, the deep photodiodes only receive infrared radiation. In another example, the imaging sensor unit 140 may have a laterally integrated photodiode structure with photodiodes arranged in an array. - The
colour filter unit 130 may be arranged in close contact with the imaging sensor unit 140 and may include a plurality of colour filter sections 135, wherein each colour filter section 135 has a filter colour, for example green, red, blue, magenta, yellow, white or IR (infrared). Each colour filter section 135 is assigned to one single pixel sensor 145 such that each pixel sensor 145 receives colour-specific image information. For example, the colour filter sections 135 may be arranged matrix-like in columns and rows. Colour filter sections 135 assigned to different filter colours may alternate along the row direction and the column direction in a regular manner. For example, each four colour filter sections 135 forming a 2 x 2 matrix may be arranged to form a Bayer mosaic pattern, wherein colour filter sections 135 with the filter colour "green" are arranged on a first diagonal of the 2 x 2 matrix, and one colour filter section 135 with the filter colour "red" and one colour filter section 135 with the filter colour "blue" are arranged on the other diagonal of the 2 x 2 matrix. With the Bayer mosaic pattern, the sampling rate for the filter colour "green" is twice that of the filter colours "red" and "blue" to take into account that the colour green carries most of the luminance information for the human eye. - According to another embodiment, the
colour filter sections 135 may be arranged to form an RGBE mosaic pattern with "Emerald" as a fourth filter colour, a CYYM mosaic pattern with one cyan, two yellow and one magenta colour filter section 135, or a CYGM mosaic pattern with one cyan, one yellow, one green and one magenta colour filter section 135 arranged in 2x2 unit matrices which are repeatedly arranged within the colour filter unit 130. According to another embodiment, the colour filter unit 130 includes a mosaic of unit matrices with three colour filter sections of three different filter colours and one transparent filter section without colour filtering properties, transparent for all colours within the visible spectrum. The transparent and the colour filter sections 135 may be arranged to form an RGBW mosaic pattern, for example a 4x4 or a 2x4 RGBW mosaic pattern. - The filter range of the
colour filter sections 135 is not restricted to the visible part of the spectrum. In accordance with an embodiment, the colour filter 130 contains at least one colour filter section type that is transparent for infrared radiation. For example, the colour filter 130 is an RGBIR filter where each 2x2 unit matrix contains one red, one green, one blue and one infrared colour filter section 135 and where the unit matrices are regularly arranged to form a mosaic pattern. The four colours R, G, B and IR can be arranged in any permutation within the 2x2 unit matrices. - The infrared radiation may pass the
colour filter unit 130 in sections 133 transparent for infrared radiation between the colour filter sections 135. According to other embodiments the colour filter unit 130 does not include sections assigned to the deep photodiodes, since the colour filter sections 135 may be transparent for a portion of the frequency range of infrared radiation. - Due to chromatic aberrations, the blue, green, red and infrared colour images focus at different distances, from near to far respectively, such that by measuring and comparing the sharpness of each of the four image planes a four-layer depth map can be computed. - Each
lens unit 120 may be realized as a micro-lens array including a plurality of lens segments. Each lens segment of a lens unit 120 may be assigned to one single pixel sensor 145 and one colour filter section 135. - According to other embodiments, the
lens unit 120 may be realized as an objective comprising several single lenses, adapted to image objects in the object space onto the sensor plane. Due to chromatic aberrations, the blue, green, red and infrared colour images focus in different focal planes at different distances. In embodiments referring to imaging sensor units with two or more sensor planes, the distance between the first focal plane for infrared radiation and any second focal plane assigned to visible light typically does not match the vertical distance between the first and second sensor planes. As a consequence, also in this case, at least one of the first and second images is severely out of focus when both the infrared image and the image for visible light are captured contemporaneously. -
Figure 3A refers to an embodiment of a luminance pre-processing unit 250 which receives colour-filtered first images, for example the colour planes red, green and blue from an imaging sensor unit 140 using an RGB Bayer mosaic colour filter. According to other embodiments the imaging sensor unit 140 may be a CYGM sensor supplying the colour planes cyan, yellow, magenta and green or a CYYM sensor supplying the colour planes cyan, yellow and magenta. - The
luminance pre-processing unit 250 may include, for each received colour plane, an interpolation unit 252 which interpolates those pixel values for which the colour plane has no pixel values available since the respective pixel is assigned to another filter colour. Interpolation may be performed by estimating the missing pixel values from neighbouring pixel values of the same colour plane and/or from corresponding pixels of other colour planes. At least embodiments where the imaging unit 100 outputs only colour channels may provide a weighting unit 256 for one, two or all of the colour channels, respectively. Each weighting unit 256 multiplies the pixel values of the colour plane it is assigned to with a specific value. A superposition unit 254 may add up the interpolated and weighted colour planes to obtain a greyscale image to be supplied to the deconvolution unit 260 of Figure 1B. According to an embodiment referring to an RGB sensor, the computed broadband luminance may be obtained by summing up the red, green and blue colour images at equal weights, at least in the case of daylight conditions. - Each
weighting unit 256 may be non-configurable and may be a wired connection weighting a colour plane with the weight "1". Another embodiment provides configurable weighting units 256 and a weighting control unit 258 that configures the weights of the weighting units 256 as a function of the illumination conditions or in response to a user input. At least one of the weights may have the value "0". According to an embodiment, the intensity image is the weighted average of all available spectral components in the sensor plane(s), including white and infrared. - According to an embodiment, the imaging system uses information about the type of the light source illuminating the scene to select the weights for obtaining the computed broadband luminance. For example, for at least one specific type of light source, information about suitable colour weights may be stored in the
weighting control unit 258, wherein the colour weights are predefined such that the spectral power density of the light source, multiplied with the sum of the colour sensitivities weighted with their respective weights, is broad and flat over the visible range of the spectrum in order to achieve a depth-invariant PSF. The weighting control unit 258 may be adapted to classify a light source illuminating the imaged scene as daylight or artificial light, for example as an incandescent lamp, a fluorescent lamp, or an LED lamp. In accordance with other embodiments, the weighting control unit 258 may process a user command indicating a light source type. - According to another embodiment, the
imaging unit 100 is based on a multi-layer image sensor, where pixel sensor layers are stacked within a transparent substrate and each pixel sensor layer is sensitive to another spectral range of the incoming light, taking advantage of the fact that red, green, and blue light penetrate the substrate to different depths. In such cases where the image sensor provides full-scale colour planes, for example where stacked pixel sensors are used, the interpolation unit 252 may be omitted. -
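The weighting and superposition stages described above can be sketched as a weighted plane sum; the function name, the normalisation step and the equal-weight default are illustrative assumptions:

```python
import numpy as np

def broadband_luminance(planes, weights=None):
    """Weighted superposition of full-resolution colour planes into a
    greyscale image; equal weights correspond to the daylight case
    described above, other weights model the weighting units 256."""
    planes = [np.asarray(p, dtype=float) for p in planes]
    if weights is None:
        weights = [1.0] * len(planes)      # equal weights (daylight)
    acc = np.zeros_like(planes[0])
    for plane, w in zip(planes, weights):
        acc += w * plane
    return acc / sum(weights)              # normalise to keep the range

rgb = [np.full((2, 2), v) for v in (30.0, 60.0, 90.0)]
y = broadband_luminance(rgb)
```

With equal weights the greyscale output is the plain average of the supplied planes; passing other weights reproduces the configurable weighting units.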
Figure 3B refers to an embodiment of a luminance pre-processing unit 250 which receives colour-filtered first images, for example the colour planes red, green, blue and infrared from an imaging sensor unit 140 using an RGBIR Bayer mosaic colour filter. According to other embodiments the imaging sensor unit 140 may be a CYGMIR sensor supplying the colour planes cyan, yellow, magenta, green and infrared or a CYYMIR sensor supplying the colour planes cyan, yellow, magenta and infrared. - The computed broadband luminance may be obtained by summing up the red, green, blue and infrared colour images at equal weights, at least in the case of daylight conditions. -
Figure 3C refers to an embodiment of the deconvolution unit 260. The deconvolution unit 260 receives the broadband luminance information, for example as a greyscale image, and recovers sharpness information using information descriptive for the imaging properties of the lens unit 120. According to an embodiment, the deconvolution unit 260 deconvolves the greyscale image using a PSF descriptive for the imaging properties of the lens unit 120 of Figure 1. The point spread function may be stored in a memory unit 262. A deconvolution sub-unit 264 performs the deconvolution and outputs a greyscale image representing the sharp intensity information. - According to an embodiment of the
deconvolution unit 260 of Figure 3C, the memory unit 262 may store the in-focus PSF. According to other embodiments the memory unit 262 may store several point spread functions for different depth-of-field ranges, one of which may be selected in response to a corresponding user request. For example, the in-focus PSF and one, two or three further PSFs corresponding to user-selectable scene modes like "macro", "portrait" or "landscape" may be provided where the depth invariance of the PSF is not sufficiently large. - The resulting sharp intensity information is approximately depth-invariant over a larger depth-of-field for objects that have a broadband spectrum, which is the case for most real-world objects. -
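One common way to realise the deconvolution with a stored PSF is a frequency-domain Wiener filter; the patent does not prescribe a particular algorithm, so the implementation below and its noise-to-signal parameter are assumptions:

```python
import numpy as np

def wiener_deconvolve(img, psf, nsr=1e-3):
    """Deconvolve a greyscale image with a stored PSF via a Wiener
    filter in the frequency domain; `nsr` regularises against noise."""
    img = np.asarray(img, dtype=float)
    psf = np.asarray(psf, dtype=float)
    # zero-pad the PSF to image size and centre it at the origin
    psf_pad = np.zeros_like(img)
    psf_pad[:psf.shape[0], :psf.shape[1]] = psf
    psf_pad = np.roll(psf_pad, (-(psf.shape[0] // 2), -(psf.shape[1] // 2)),
                      axis=(0, 1))
    H = np.fft.fft2(psf_pad)
    G = np.fft.fft2(img)
    F = G * np.conj(H) / (np.abs(H) ** 2 + nsr)   # Wiener filter
    return np.real(np.fft.ifft2(F))

# with an identity (delta) PSF the image is returned unchanged
img = np.arange(16.0).reshape(4, 4)
restored = wiener_deconvolve(img, np.array([[1.0]]), nsr=0.0)
```

Selecting a different stored PSF (e.g. for a "macro" or "landscape" mode) would simply mean passing another kernel to the same routine.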
Figures 4A to 4E refer to details and effects of the deconvolution process. In the upper half, Figure 4A shows a first diagram with spectral sensitivities of an RGB image sensor. Curve 422 is the spectral sensitivity of a pixel assigned to the filter colour "red", curve 424 the spectral sensitivity of a pixel assigned to the filter colour "green" and curve 426 the spectral sensitivity of a pixel assigned to the filter colour "blue". In all diagrams, spectral sensitivity is plotted as a function of the wavelength. - In the lower half,
Figure 4A shows a second diagram illustrating the spectral sensitivity of "conventional" luminance 428 on the left-hand side and a third diagram illustrating the spectral sensitivity of broadband luminance 429. Conventionally, luminance is obtained from the RGB colour planes by RGB to YUV conversion, wherein in the YUV space Y represents the greyscale image or luminance and UV represent two differential colour channels or chrominance, and wherein the luminance is defined such that it closely represents the photopic luminosity function defined by the CIE (Commission Internationale de l'Eclairage) 1931 standard. The traditional luminance Y is obtained by weighting the red signal with WR=0.299, the blue signal with WB=0.114, and the green signal with WG=0.587 and summing up the results of the three weighting operations. The conventional luminance spectral sensitivity curve 428 shown in the second diagram results from applying these weights to the spectral sensitivity curves 422, 424, 426 in the first diagram. - By contrast, the
broadband luminance 429 as computed in accordance with the embodiments mainly corresponds to a white image captured by panchromatic sensor elements or a greyscale image captured through a monochrome sensor under daylight conditions, where the captured image has a broadband property. Referring to image sensors outputting only colour channels, the computed broadband luminance may be computed similarly to the intensity as defined in the HSI colour space. According to an embodiment referring to an RGB sensor, the computed broadband luminance is obtained by summing up the red, green and blue colour images at equal weights in the case of daylight conditions. - In illumination conditions other than daylight, the weights of the colour channels, for example the red, green, and blue channels or the red, green, blue and IR channels, may be chosen such that, given the computing resources available in the imaging system, the final spectral sensitivity response is as broad and flat as possible within the visible spectrum in order to secure a depth-invariant PSF, but at least flatter than the photopic luminosity function defined by the CIE 1931 standard. For example, the
computed broadband luminance 429 is approximately flat over a wavelength range of at least 100 nm or 200 nm, wherein in the flat range the sensitivity does not change by more than 10 % or 20 %. According to another embodiment, the amplitude of the final spectral sensitivity response does not change by more than 50 % over half of the visible spectrum. - The broadband luminance may be obtained directly from a suitable white channel, by demosaicing the colour channels of a Bayer sensor, an RGB sensor, an RGBIR sensor, or a CYGM, CYGMIR, CYYM or CYYMIR sensor and summing up corresponding pixel values at equal weights, or by adding up the colour channels, which may include the IR channel, of a stacked or multi-layer RGB or RGBIR sensor. Under other lighting conditions, the weights may be selected such that they take into account the spectral power density of the light source(s) illuminating the scene from which the image is captured, for example an incandescent light, a fluorescent lamp, or an LED lamp, such that the resulting point spread function is less depth-variant than obtained by using equal weights. - Referring to
Figure 4B, the lower half shows a family of one-dimensional PSFs obtained for the spectral sensitivity curve 428 given for conventional luminance in the upper half of Figure 4B. By contrast, Figure 4C shows a family of one-dimensional PSFs obtained for the spectral sensitivity curve 429 given for broadband luminance in the upper half of Figure 4C. Each single PSF describes the response of the respective lens arrangement to a point source for a given distance between the point source and a lens plane in which the lens arrangement is provided. The PSFs are plotted for distances from 10 cm to infinity. - The
Figures 4B and 4C show that broadband luminance results in a more depth-invariant PSF as compared to conventional luminance. More specifically, the PSFs for near distances vary more in the case of conventional luminance. Figures 4B and 4C refer to daylight conditions. Under other lighting conditions, broadband luminance can be obtained by selecting appropriate weights for each colour. According to an embodiment, in incandescent light blue has more weight than green and green more weight than red, wherein the weights are selected such that the resulting luminance has the above-described "broadband luminance" characteristic. -
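The two weighting schemes can be compared numerically; the CIE-style weights are those quoted above, and the equal-weight average is the daylight broadband case:

```python
# conventional (CIE-style) luminance versus equal-weight broadband
# luminance for a single RGB pixel; weights as quoted above
W_R, W_G, W_B = 0.299, 0.587, 0.114

def conventional_luminance(r, g, b):
    return W_R * r + W_G * g + W_B * b

def broadband_luminance(r, g, b):
    return (r + g + b) / 3.0      # equal weights, daylight case

y_conv = conventional_luminance(100.0, 100.0, 100.0)
y_broad = broadband_luminance(100.0, 100.0, 100.0)
```

Both schemes leave a neutral grey pixel unchanged since the weights sum to one; they differ in how strongly each colour channel, and hence each wavelength band, contributes to the greyscale image.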
Figure 4D schematically illustrates a family of one-dimensional PSFs, and Figure 4E refers to a further family of one-dimensional PSFs. - The resulting sharp intensity information is approximately depth-invariant over a larger depth-of-field for objects that have a broadband spectrum, which is the case for most real-world objects. - In the upper half,
Figure 4F shows a first diagram with spectral sensitivities of an RGBIR image sensor. Curve 421 is the spectral sensitivity of a pixel assigned to an IR filter. Curve 422 is the spectral sensitivity of a pixel assigned to the filter colour "red". Curve 424 is the spectral sensitivity of a pixel assigned to the filter colour "green". Curve 426 is the spectral sensitivity of a pixel assigned to the filter colour "blue". In all diagrams, spectral sensitivity is plotted as a function of the wavelength. - In the lower half,
Figure 4F shows a second diagram illustrating the spectral sensitivity of "conventional" luminance 428 based on the colour planes red, green and blue on the left-hand side and a third diagram illustrating the spectral sensitivity of broadband luminance 430 resulting from all colour planes including infrared. The conventional luminance spectral sensitivity curve 428 shown in the second diagram results from applying the conventional luminance weights to the spectral sensitivity curves 422, 424, 426 in the first diagram. - By contrast, the
broadband luminance 430 as computed in accordance with the embodiments mainly corresponds to a white image captured by panchromatic sensor elements or a greyscale image captured through a monochrome sensor under daylight conditions, where the captured image has a broadband property. Referring to image sensors outputting only colour channels, the computed broadband luminance may be computed similarly to the intensity as defined in the HSI colour space. According to an embodiment referring to an RGBIR sensor, the computed broadband luminance is obtained by summing up the red, green, blue and infrared colour images at equal weights in the case of daylight conditions. - In illumination conditions other than daylight, the weights of the colour channels, for example the red, green, blue and infrared channels, may be chosen such that, given the computing resources available in the imaging system, the final spectral sensitivity response is as broad and flat as possible within the visible spectrum in order to secure a depth-invariant PSF, but at least flatter than the photopic luminosity function defined by the CIE 1931 standard. For example, the
computed broadband luminance 430 is approximately flat over a wavelength range of at least 100 nm or 200 nm, wherein in the flat range the sensitivity does not change by more than 10 % or 20 %. According to another embodiment, the amplitude of the final spectral sensitivity response does not change by more than 50 % over half of the visible spectrum. -
Figure 5A refers to a depth map estimator 220 as an element of the chrominance processing unit 201 of Figure 1B. The first non-colour-corrected sub-images or colour planes may be supplied to storage units 222 holding at least two or all of the colour planes for the following operation. - According to an embodiment, a
depth map unit 224 compares relative sharpness information across the colour planes temporarily stored in the storage units 222, for example using the DCT (discrete cosine transform). For example, the relative sharpness between colour planes can be measured by computing, on the neighbourhood of each pixel at a given location in the colour plane, the normalized sum of differences between the local gradient and the average gradient. By estimating the sharpest colour plane for sub-regions or pixels of the imaged scene, the depth map unit 224 may generate a depth map assigning distance information to each pixel or each group of pixels. According to an embodiment the Hadamard transform is used as the sharpness measure. The results of the Hadamard transform are similar to those of the DCT, but the computational effort is lower since only additions and subtractions are required for performing the Hadamard transformation. Alternatively or in addition, other known sharpness measures may be used. - In accordance with another embodiment, the gradients are computed in the logarithmic domain, where the normalization step may be omitted since the logarithmic domain makes the depth estimation independent of small variations in lighting conditions and small variations of intensity gradients in different colours. - The depth map may be supplied to the
correction unit 230 of Figure 1B. The correction unit 230 corrects chromatic aberrations in the colour planes. For example, the correction unit 230 evaluates a range limitation for colour differences on the basis of the colour behaviour on edges that do not show chromatic aberration, identifies pixels violating the range limitation and replaces the pixel values of these pixels with allowed values. According to another embodiment, the correction unit 230 may use information included in the depth map to provide corrected colour images. For example, the correction unit 230 may perform a sharpness transport by copying high frequencies of the sharpest colour plane for the respective image region to the other colour planes. For example, to each blurred sub-region of a colour plane a high-pass filtered version of the sharpest colour plane for the respective sub-region may be added. -
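The gradient-based sharpness comparison across colour planes can be sketched as follows; the box neighbourhood and the plain gradient magnitude are illustrative simplifications of the measures named above (DCT, Hadamard transform, normalised gradient sums):

```python
import numpy as np

def local_sharpness(plane, k=3):
    """Mean absolute gradient over a k x k neighbourhood, a simple
    stand-in for the sharpness measures discussed above."""
    plane = np.asarray(plane, dtype=float)
    gy, gx = np.gradient(plane)
    mag = np.abs(gx) + np.abs(gy)
    pad = k // 2
    p = np.pad(mag, pad, mode='edge')
    out = np.zeros_like(mag)
    for dy in range(k):          # box filter over the neighbourhood
        for dx in range(k):
            out += p[dy:dy + mag.shape[0], dx:dx + mag.shape[1]]
    return out / (k * k)

def depth_index_map(planes):
    """Assign to each pixel the index of the sharpest colour plane,
    i.e. a coarse per-plane depth layer."""
    sharp = np.stack([local_sharpness(p) for p in planes])
    return np.argmax(sharp, axis=0)

ramp = np.linspace(0.0, 1.0, 64).reshape(8, 8)   # gently varying plane
edge = np.zeros((8, 8))
edge[:, 4:] = 10.0                               # plane with a sharp edge
idx = depth_index_map([ramp, edge])
```

Near the strong edge the second plane wins the sharpness comparison, while in smooth regions the gently varying plane does; with one plane per focal distance, the winning index acts as a coarse depth label.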
Figure 5B shows a depth map estimator 220 with an additional storage unit 222 for holding the IR plane. -
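The sharpness transport described above, adding a high-pass filtered version of the sharpest colour plane to a blurred plane, can be sketched as follows; the box blur used as the low-pass filter is an illustrative choice:

```python
import numpy as np

def box_blur(img, k=3):
    """Simple k x k box low-pass with edge padding."""
    img = np.asarray(img, dtype=float)
    pad = k // 2
    p = np.pad(img, pad, mode='edge')
    out = np.zeros_like(img)
    for dy in range(k):
        for dx in range(k):
            out += p[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    return out / (k * k)

def transport_sharpness(blurred_plane, sharpest_plane, k=3):
    """Add the high-pass component of the sharpest plane to a blurred
    plane - the sharpness transport between colour planes."""
    sharpest_plane = np.asarray(sharpest_plane, dtype=float)
    high = sharpest_plane - box_blur(sharpest_plane, k)
    return np.asarray(blurred_plane, dtype=float) + high

flat = np.ones((4, 4))
transported = transport_sharpness(flat, np.full((4, 4), 5.0))
```

A constant "sharpest" plane carries no high-frequency detail, so the blurred plane passes through unchanged; an edge in the sharpest plane would be copied into the blurred plane.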
Figure 5C refers to a synthesizing unit 280 that transfers sharp intensity information into the corrected colour images output by the correction unit 230 of Figure 1B to obtain, by way of example, a modified output image with extended depth of field. According to an embodiment, the synthesizing unit 280 transfers the sharp intensity information into the corrected images on the basis of a depth map obtained, for example, during chrominance information processing. For example, the sharp intensity information from the sharpest spectral component is transferred to the uncorrected spectral components. According to another embodiment, the synthesizing unit 280 transfers the sharp intensity information on the basis of the depth map and user information indicating a user request concerning picture composition. The user request may concern the implementation of a post-capture focus distance selection or a focus range selection. - According to embodiments referring to RGBW or RGBWIR image sensors, the synthesizing
unit 280 may obtain the modified output image by an interleaved imaging approach, where, for example, for mesopic illumination conditions the output image is derived from both the colour images obtained from the chrominance path and the greyscale image obtained from the luminance path. For each colour plane, each pixel value is updated on the basis of neighbouring pixel values by means of a distance function and a similarity function, wherein the similarity function uses information from both the respective colour image and the greyscale image and determines a weight for the information from the greyscale image. For example, the weight may be determined by the fraction of saturated pixels in an image patch around the neighbour pixel whose distance and similarity are currently evaluated. - Unlike in conventional approaches, the greyscale image to be combined with the colour images is obtained by a process taking into account the imaging properties of the non-colour-corrected lens unit, for example by deconvolving a greyscale image obtained from the imaging units with the lens unit PSF. - According to other embodiments referring to RGB or RGBIR image sensors, the synthesizing
unit 280 obtains the output image by a similar interleaved imaging approach, wherein the greyscale image used in the similarity function may be obtained by deconvolving a light intensity signal obtained by a transformation of the RGB signals into the HSI (Hue-Saturation-Intensity) space. The light intensity signal or the underlying RGB images may be pre-processed to correspond to a broadband luminance signal, for which the PSF is more depth-invariant. -
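The interleaving step, updating each pixel from its neighbours with a distance function and a similarity function evaluated on the deconvolved greyscale image, can be sketched as a joint-bilateral-style filter; the Gaussian kernels and parameter values are illustrative assumptions, and the saturation-based weighting is omitted:

```python
import numpy as np

def interleave(colour, grey, radius=2, sigma_d=2.0, sigma_s=10.0):
    """Update every pixel of a colour plane from its neighbourhood,
    weighting neighbours by a spatial distance function and by a
    similarity function computed on the greyscale image."""
    colour = np.asarray(colour, dtype=float)
    grey = np.asarray(grey, dtype=float)
    h, w = colour.shape
    out = np.zeros_like(colour)
    for y in range(h):
        for x in range(w):
            num = den = 0.0
            for dy in range(-radius, radius + 1):
                for dx in range(-radius, radius + 1):
                    ny = min(max(y + dy, 0), h - 1)   # clamp at borders
                    nx = min(max(x + dx, 0), w - 1)
                    wd = np.exp(-(dy * dy + dx * dx) / (2 * sigma_d ** 2))
                    ws = np.exp(-(grey[ny, nx] - grey[y, x]) ** 2
                                / (2 * sigma_s ** 2))
                    num += wd * ws * colour[ny, nx]
                    den += wd * ws
            out[y, x] = num / den
    return out

flat = np.full((5, 5), 7.0)
res = interleave(flat, flat)
```

Because the greyscale guide carries the recovered sharpness, edges present in the deconvolved luminance steer the averaging of the colour plane and are thereby transferred into it.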
Figure 6A refers to an embodiment based on RGBW image sensors outputting R, G, B, and W planes (602). In a luminance path, the W plane may directly give a broadband luminance signal or may be pre-processed to represent a broadband luminance signal (650). The W plane is deconvolved using the PSF of the lens unit (660). The original R, G, B planes are provided to the chrominance path (610). Chromatic aberrations in the R, G, B planes are compensated for (630). A depth map may be derived from the original R, G, B planes (620) and used to compensate for or correct chromatic aberrations in the R, G, B planes. The deconvolved W plane is interleaved with the corrected R, G, B planes respectively (680). A modified colour image comprising modified RGB planes or equivalent information is output (690). -
Figure 6B refers to an embodiment based on RGBWIR image sensors outputting R, G, B, IR and W planes (602). In a luminance path, the W plane may directly give a broadband luminance signal or may be pre-processed to represent a broadband luminance signal (650). The W plane is deconvolved using the PSF of the lens unit (660). The original R, G, B, IR planes are provided to the chrominance path (610). Chromatic aberrations in the R, G, B, IR planes are compensated for (630). A depth map may be derived from the original R, G, B, IR planes (620) and used to compensate for or correct chromatic aberrations in the R, G, B, IR planes. The deconvolved W plane is interleaved with the corrected R, G, B, IR planes, respectively (680). A modified colour image comprising modified R, G, B, IR planes or equivalent information is output (690). -
Figure 6C refers to an embodiment based on RGB image sensors outputting R, G, and B planes (602). The R, G, B planes may be pre-processed, for example transformed into the HSI space (650), wherein the hue value, the saturation and the light intensity are obtained (652). The light intensity is deconvolved using the lens PSF (660). The original R, G, B planes may be provided to the chrominance path (614) or recalculated from the HSI values using the inverse HSI transformation (610). Chromatic aberrations in the R, G, B planes are compensated for (630). A depth map may be computed from the original R, G, B planes (620) and used to compensate for or correct chromatic aberrations in the R, G, B planes. The deconvolved light intensity is interleaved with the corrected R, G, B planes, respectively (680). A modified colour image comprising modified RGB planes or equivalent information is output (690). -
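The intensity component of the HSI transform used in this path is the plain average of the three RGB signals; a minimal sketch:

```python
import numpy as np

def hsi_intensity(r, g, b):
    """Intensity component of the HSI transform: the average of the
    three RGB signals, the channel that is deconvolved in this path."""
    return (np.asarray(r, dtype=float)
            + np.asarray(g, dtype=float)
            + np.asarray(b, dtype=float)) / 3.0

i = hsi_intensity([30.0], [60.0], [90.0])
```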
Figure 6D refers to an embodiment based on RGBIR image sensors outputting R, G, B and IR planes (602). The R, G, B planes may be pre-processed, for example transformed into the HSI space (650), wherein the hue value, the saturation and the light intensity are obtained (652). The light intensity is deconvolved using the lens PSF (660). The original R, G, B, IR planes may be provided to the chrominance path (614). Chromatic aberrations in the R, G, B, IR planes are compensated for (630). A depth map may be computed from the original R, G, B, IR planes (620) and used to compensate for or correct chromatic aberrations in the R, G, B, IR planes. The deconvolved light intensity may be interleaved with the corrected R, G, B, IR planes respectively (680). A modified colour image comprising modified R, G, B, and IR planes or equivalent information may be output (690). -
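The common structure of the Figure 6A to 6D variants, a luminance path and a chrominance path combined at the end, can be sketched as a skeleton; the three callables stand for the deconvolution, aberration-correction and interleaving stages and are placeholders, not the patented implementations:

```python
import numpy as np

def process(planes, deconvolve, correct, combine):
    """Skeleton of the two-path processing: an equal-weight luminance
    is deconvolved, the colour planes are corrected, and both results
    are combined plane by plane."""
    luma = sum(np.asarray(p, dtype=float) for p in planes) / len(planes)
    sharp = deconvolve(luma)                  # luminance path
    corrected = [correct(p) for p in planes]  # chrominance path
    return [combine(c, sharp) for c in corrected]

out = process([np.ones((2, 2)), 3 * np.ones((2, 2))],
              deconvolve=lambda y: y,
              correct=lambda p: p,
              combine=lambda c, y: 0.5 * (c + y))
```

Substituting a W plane for the computed luminance, or an HSI intensity for the equal-weight average, reproduces the respective variants of Figures 6A to 6D.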
Figure 7 refers to a method of operating an imaging system. The method provides capturing at least two non-colour-corrected first images of an imaged scene by using a lens unit featuring longitudinal chromatic aberration, wherein the first images have different spectral components (702) and represent, by way of example, different colour planes. From the first images, broadband luminance sharpness information (704) and chrominance information (706) are computed. The computed chrominance and broadband luminance sharpness information is combined to provide an output image (708). - Computing the broadband luminance information may include deconvolving a greyscale image derived from at least one of the non-colour-corrected first images with a point spread function descriptive for the lens unit. Computing the chrominance information may include compensating for effects of the chromatic aberrations resulting from the use of the non-colour-corrected lens unit. Combining the chrominance and broadband luminance sharpness information may be performed on the basis of an interleaving process using a similarity function based on both the chrominance and the broadband luminance sharpness information. - Whereas the accuracy of approaches using sharpness transport between colour planes depends to a high degree on the quality of the depth estimation, the present approach uses the PSF known for the broadband luminance channel such that extended depth-of-field or other effects can be obtained by performing a simple deconvolution. The present approach also provides correction of chromatic aberrations in the chrominance channel, wherein remaining small colour artefacts do not degrade the image quality because the human eye is less sensitive to chrominance aberrations than to luminance aberrations. - On the other hand, approaches referring to spectral focal sweep lose colour information contained in the colour planes, such that colour bleeding artefacts appear in the final image. At low computational effort, the present approach produces significantly better results and obtains perceptually more meaningful representations of real-world scenes than the previous approaches by processing, at a first stage, broadband luminance sharpness information and chrominance information separately and independently from each other and combining them at a second stage.
Claims (15)
- An imaging system comprising
an imaging unit (100) that comprises a lens unit (120) showing longitudinal chromatic aberration and an imaging sensor unit (140) configured to generate, from an imaged scene, non-colour-corrected first images of different spectral content,
an intensity processing unit (202) configured to compute broadband luminance sharpness information on the basis of the first images and information descriptive for imaging properties of the lens unit (120) by weighting each of the first images with a specific value to compute a broadband luminance information and by deconvolving the broadband luminance information using a depth-invariant point spread function,
a chrominance processing unit (201) configured to compute chrominance information on the basis of the first images, and
a synthesizing unit (280) configured to combine the chrominance and broadband luminance sharpness information to provide an output image.
- The imaging system of claim 1, wherein
the imaging unit (100) is configured to supply at least two colour-filtered first images,
the intensity processing unit (202) comprises a luminance pre-processing unit (250) configured to generate a greyscale image or intensity function from the at least two colour-filtered first images, and
the intensity processing unit (202) is configured to compute the broadband luminance sharpness information on the basis of the generated greyscale image.
- The imaging system of any of claims 1 or 2, wherein the intensity processing unit (202) comprises
controllable weighting units (256), each weighting unit (256) configured to apply a selectable weight to one of the first images, and
a weighting control unit (258) configured to select the weights for the weighting units (256) in response to a user input or an evaluation of the first images.
- The imaging system of any of claims 1 to 3, wherein
the intensity processing unit (202) comprises a deconvolution unit (260) configured to compute the broadband luminance sharpness information by deconvolving the greyscale image with a point spread function descriptive for the lens unit (120).
- The imaging system of any of claims 1 to 4, wherein
the chrominance processing unit (201) comprises a depth map estimator (220) configured to generate a depth map from the first images.
- The imaging system of claim 5, wherein
the chrominance processing unit (201) further comprises a correction unit (230) configured to correct chromatic aberrations in the first images to provide corrected images from the first images.
- The imaging system of any of claims 1 to 6, wherein
the synthesizing unit (280) is configured to transfer sharpness information from the broadband luminance sharpness information into the corrected images.
- The imaging system of claim 6, wherein
the synthesizing unit (280) is configured to transfer sharpness information from the broadband luminance sharpness information into the corrected images on the basis of the depth map.
- The imaging system of any of claims 1 to 5, wherein
the synthesizing unit (280) is configured to interleave corrected images obtained from the chrominance processing unit (201) and a greyscale image obtained from the intensity processing unit (202), wherein for each corrected image a pixel value is updated on the basis of neighbouring pixel values by means of a distance function and a similarity function, the similarity function using information from both the respective corrected image and the greyscale image and determining a weight for the information from the greyscale image.
- The imaging system of claim 8, wherein
the synthesizing unit (280) is configured to transfer sharpness information further on the basis of a user request, and
the user request concerns one or more items selected from the group including implementation of a 3D-effect, focus distance selection and focus range selection.
- The imaging system of any of claims 1 to 10, wherein
the imaging system (400) is provided in a camera system, a digital telescope, or a digital microscope.
- A method of operating an imaging system, the method comprising
capturing at least two non-colour-corrected first images of an imaged scene using a lens unit (120) featuring longitudinal chromatic aberration, the first images having different spectral composition;
computing broadband luminance sharpness information from the first images on the basis of information descriptive for imaging properties of the lens unit (120) by weighting each of the first images with a specific value to compute a broadband luminance information and by deconvolving the broadband luminance information using a depth-invariant point spread function,
computing chrominance information from the first images to obtain corrected images, and
combining the chrominance and broadband luminance sharpness information to provide an output image using a synthesizing unit (280).
- The method of claim 12, wherein
computing the broadband luminance sharpness information comprises deconvolving a greyscale image derived from at least one of the first images using a deconvolution unit (260) and a point spread function descriptive for the lens unit (120). - The imaging method of any of claims 12 or 13, wherein
computing the broadband luminance sharpness information comprises weighting the first images with weights, the weights being selected in response to an illumination condition such that a resulting point spread function is less depth variant than obtained by using equal weights under illumination conditions other than daylight. - The imaging method of any of claims 12 to 14, wherein
combining the chrominance and luminance information comprises interleaving the corrected images representing the chrominance information and a greyscale image representing the broadband luminance sharpness information, wherein for each corrected image, a pixel value is updated on the basis of neighbouring pixel values by means of a distance function and a similarity function, the similarity function using information from both the respective corrected image and the greyscale image and determining a weight for the information from the greyscale image.
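The claims above describe a concrete processing pipeline: mix the spectral channels into a broadband luminance image, deconvolve it with a depth-invariant point spread function, and transfer the recovered sharpness into the chrominance-corrected images by updating each pixel from its neighbours via a distance function and a similarity function (a joint-bilateral-style update). The following NumPy sketch illustrates these three steps in isolation; the channel weights, the Wiener-filter noise term, the Gaussian kernel parameters and all function names are illustrative assumptions, not the patented implementation.

```python
import numpy as np

def broadband_luminance(images, weights):
    """Weighted mix of the non-colour-corrected spectral images (claim 1).
    The weights would be chosen so the combined PSF is roughly depth-invariant."""
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()
    return sum(wi * img for wi, img in zip(w, images))

def wiener_deconvolve(image, psf, snr=100.0):
    """Deconvolve with a single (depth-invariant) PSF via a Wiener filter.
    `psf` is assumed image-sized and centred."""
    H = np.fft.fft2(np.fft.ifftshift(psf))
    G = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)   # regularised inverse filter
    return np.real(np.fft.ifft2(np.fft.fft2(image) * G))

def transfer_sharpness(corrected, grey, radius=2, sigma_d=1.5, sigma_s=0.1):
    """Interleaving step of claims 9/15: each pixel of a corrected image is
    updated from its neighbours, weighted by a spatial distance function and a
    similarity function evaluated on the sharp greyscale image."""
    h, w = corrected.shape
    out = np.empty_like(corrected)
    pad_c = np.pad(corrected, radius, mode='edge')
    pad_g = np.pad(grey, radius, mode='edge')
    ys, xs = np.mgrid[-radius:radius + 1, -radius:radius + 1]
    dist = np.exp(-(xs ** 2 + ys ** 2) / (2 * sigma_d ** 2))  # distance function
    for y in range(h):
        for x in range(w):
            cn = pad_c[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            gn = pad_g[y:y + 2 * radius + 1, x:x + 2 * radius + 1]
            sim = np.exp(-(gn - grey[y, x]) ** 2 / (2 * sigma_s ** 2))  # similarity
            wgt = dist * sim
            out[y, x] = (wgt * cn).sum() / wgt.sum()
    return out
```

The similarity term is what lets edges of the deconvolved greyscale image steer the smoothing of each colour channel, which is why the update sharpens the corrected images rather than merely blurring them.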
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP12705781.8A EP2664153B1 (en) | 2011-01-14 | 2012-01-13 | Imaging system using a lens unit with longitudinal chromatic aberrations and method of operating |
Applications Claiming Priority (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP11000283 | 2011-01-14 | ||
EP11003573 | 2011-05-02 | ||
PCT/EP2012/000143 WO2012095322A1 (en) | 2011-01-14 | 2012-01-13 | Imaging system using a lens unit with longitudinal chromatic aberrations and method of operating |
EP12705781.8A EP2664153B1 (en) | 2011-01-14 | 2012-01-13 | Imaging system using a lens unit with longitudinal chromatic aberrations and method of operating |
Publications (2)
Publication Number | Publication Date |
---|---|
EP2664153A1 (en) | 2013-11-20 |
EP2664153B1 (en) | 2020-03-04 |
Family
ID=45757364
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP12705781.8A Active EP2664153B1 (en) | 2011-01-14 | 2012-01-13 | Imaging system using a lens unit with longitudinal chromatic aberrations and method of operating |
Country Status (5)
Country | Link |
---|---|
US (1) | US9979941B2 (en) |
EP (1) | EP2664153B1 (en) |
JP (1) | JP5976676B2 (en) |
CN (1) | CN103430551B (en) |
WO (1) | WO2012095322A1 (en) |
Families Citing this family (28)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP2672696B1 (en) * | 2011-01-31 | 2018-06-13 | Panasonic Corporation | Image restoration device, imaging device, and image restoration method |
JP2013219705A (en) * | 2012-04-12 | 2013-10-24 | Sony Corp | Image processor, image processing method and program |
FR3013491B1 (en) * | 2013-11-19 | 2016-01-15 | Commissariat Energie Atomique | DETERMINATION OF THE DEPTH MAP IMAGE OF A SCENE |
US10136107B2 (en) * | 2013-11-21 | 2018-11-20 | Semiconductor Components Industries, Llc | Imaging systems with visible light sensitive pixels and infrared light sensitive pixels |
CN105313782B (en) * | 2014-07-28 | 2018-01-23 | 现代摩比斯株式会社 | Vehicle travel assist system and its method |
US10334216B2 (en) | 2014-11-06 | 2019-06-25 | Sony Corporation | Imaging system including lens with longitudinal chromatic aberration, endoscope and imaging method |
JP2016170122A (en) * | 2015-03-13 | 2016-09-23 | キヤノン株式会社 | Measurement device |
JP6247425B2 (en) | 2015-04-23 | 2017-12-13 | 富士フイルム株式会社 | Image processing apparatus, imaging apparatus, image processing method, and image processing program |
JP6240813B2 (en) * | 2015-04-23 | 2017-11-29 | 富士フイルム株式会社 | Image processing apparatus, imaging apparatus, image processing method, and image processing program |
CN107534733B (en) * | 2015-04-23 | 2019-12-20 | 富士胶片株式会社 | Image pickup apparatus, image processing method of the same, and medium |
CN105070270B (en) * | 2015-09-14 | 2017-10-17 | 深圳市华星光电技术有限公司 | The compensation method of RGBW panel sub-pixels and device |
CN109716176B (en) * | 2016-06-07 | 2021-09-17 | 艾瑞3D 有限公司 | Light field imaging device and method for depth acquisition and three-dimensional imaging |
CN109310278B (en) * | 2016-06-17 | 2022-05-03 | 索尼公司 | Image processing apparatus, image processing method, program, and image processing system |
FR3056332A1 (en) * | 2016-09-21 | 2018-03-23 | Stmicroelectronics (Grenoble 2) Sas | DEVICE COMPRISING A 2D IMAGE SENSOR AND A DEPTH SENSOR |
DE112017006338T5 (en) | 2016-12-16 | 2019-08-29 | Sony Corporation | SHOOTING AN IMAGE OF A SCENE |
US11172172B2 (en) * | 2016-12-30 | 2021-11-09 | Texas Instruments Incorporated | Efficient and flexible color processor |
WO2019048492A1 (en) * | 2017-09-08 | 2019-03-14 | Sony Corporation | An imaging device, method and program for producing images of a scene |
CN107911599B (en) * | 2017-10-30 | 2020-08-21 | 北京航天福道高技术股份有限公司 | Infrared image global automatic focusing method and device |
CN109756713B (en) * | 2017-11-08 | 2021-12-21 | 超威半导体公司 | Image capturing apparatus, method of performing processing, and computer readable medium |
JP7171254B2 (en) * | 2018-06-13 | 2022-11-15 | キヤノン株式会社 | Image processing device, imaging device, and image processing method |
US10948715B2 (en) | 2018-08-31 | 2021-03-16 | Hellman Optics, LLC | Chromatic lens and methods and systems using same |
JP7301601B2 (en) * | 2019-05-24 | 2023-07-03 | キヤノン株式会社 | Image processing device, imaging device, lens device, image processing system, image processing method, and program |
JP2021048464A (en) * | 2019-09-18 | 2021-03-25 | ソニーセミコンダクタソリューションズ株式会社 | Imaging device, imaging system, and imaging method |
CN113395497A (en) * | 2019-10-18 | 2021-09-14 | 华为技术有限公司 | Image processing method, image processing apparatus, and imaging apparatus |
US11601607B2 (en) | 2020-07-27 | 2023-03-07 | Meta Platforms Technologies, Llc | Infrared and non-infrared channel blender for depth mapping using structured light |
CN112465724B (en) * | 2020-12-07 | 2024-03-08 | 清华大学深圳国际研究生院 | Chromatic aberration correction method based on prior information of cross channel of shear wave domain |
WO2023079842A1 (en) * | 2021-11-08 | 2023-05-11 | ソニーセミコンダクタソリューションズ株式会社 | Solid-state imaging device, imaging system, and imaging processing method |
CN116520095B (en) * | 2023-07-03 | 2023-09-12 | 昆明理工大学 | Fault location method, system and computer readable storage medium |
Family Cites Families (19)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US6934408B2 (en) * | 2000-08-25 | 2005-08-23 | Amnis Corporation | Method and apparatus for reading reporter labeled beads |
US8339462B2 (en) * | 2008-01-28 | 2012-12-25 | DigitalOptics Corporation Europe Limited | Methods and apparatuses for addressing chromatic abberations and purple fringing |
EP1662803A4 (en) | 2003-08-13 | 2006-11-15 | Scalar Corp | Camera, image processing apparatus, image data processing method, and program |
WO2006026354A2 (en) * | 2004-08-25 | 2006-03-09 | Newport Imaging Corporation | Apparatus for multiple camera devices and method of operating same |
US7224540B2 (en) | 2005-01-31 | 2007-05-29 | Datalogic Scanning, Inc. | Extended depth of field imaging system using chromatic aberration |
CN102984448B (en) * | 2005-03-07 | 2016-05-25 | 德克索实验室 | Utilize color digital picture to revise the method for controlling to action as acutance |
US7683950B2 (en) * | 2005-04-26 | 2010-03-23 | Eastman Kodak Company | Method and apparatus for correcting a channel dependent color aberration in a digital image |
JP2008033060A (en) * | 2006-07-28 | 2008-02-14 | Kyocera Corp | Imaging apparatus and imaging method, and image processor |
US7792357B2 (en) * | 2007-05-30 | 2010-09-07 | Microsoft Corporation | Chromatic aberration correction |
US7787121B2 (en) * | 2007-07-18 | 2010-08-31 | Fujifilm Corporation | Imaging apparatus |
KR101340518B1 (en) * | 2007-08-23 | 2013-12-11 | 삼성전기주식회사 | Method and apparatus for compensating chromatic aberration of an image |
US8149319B2 (en) * | 2007-12-03 | 2012-04-03 | Ricoh Co., Ltd. | End-to-end design of electro-optic imaging systems for color-correlated objects |
JP5224804B2 (en) * | 2007-12-27 | 2013-07-03 | 三星電子株式会社 | Imaging device |
JP5132401B2 (en) * | 2008-04-16 | 2013-01-30 | キヤノン株式会社 | Image processing apparatus and image processing method |
US8866920B2 (en) * | 2008-05-20 | 2014-10-21 | Pelican Imaging Corporation | Capturing and processing of images using monolithic camera array with heterogeneous imagers |
US8760756B2 (en) * | 2008-10-14 | 2014-06-24 | Burnham Institute For Medical Research | Automated scanning cytometry using chromatic aberration for multiplanar image acquisition |
JP5213688B2 (en) * | 2008-12-19 | 2013-06-19 | 三洋電機株式会社 | Imaging device |
TW201119019A (en) * | 2009-04-30 | 2011-06-01 | Corning Inc | CMOS image sensor on stacked semiconductor-on-insulator substrate and process for making same |
JP2010288150A (en) * | 2009-06-12 | 2010-12-24 | Toshiba Corp | Solid-state imaging device |
- 2012-01-13 CN CN201280005368.6A patent/CN103430551B/en active Active
- 2012-01-13 US US13/977,854 patent/US9979941B2/en active Active
- 2012-01-13 JP JP2013548788A patent/JP5976676B2/en not_active Expired - Fee Related
- 2012-01-13 EP EP12705781.8A patent/EP2664153B1/en active Active
- 2012-01-13 WO PCT/EP2012/000143 patent/WO2012095322A1/en active Application Filing
Non-Patent Citations (1)
Title |
---|
None * |
Also Published As
Publication number | Publication date |
---|---|
US20130278726A1 (en) | 2013-10-24 |
CN103430551B (en) | 2017-06-09 |
US9979941B2 (en) | 2018-05-22 |
EP2664153A1 (en) | 2013-11-20 |
JP2014507856A (en) | 2014-03-27 |
CN103430551A (en) | 2013-12-04 |
WO2012095322A1 (en) | 2012-07-19 |
JP5976676B2 (en) | 2016-08-24 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
EP2664153B1 (en) | Imaging system using a lens unit with longitudinal chromatic aberrations and method of operating | |
US10306158B2 (en) | Infrared imaging system and method of operating | |
US9615030B2 (en) | Luminance source selection in a multi-lens camera | |
JP6404923B2 (en) | Imaging sensor and imaging apparatus | |
KR101490653B1 (en) | Imaging systems with clear filter pixels | |
KR101639382B1 (en) | Apparatus and method for generating HDR image | |
US7400332B2 (en) | Hexagonal color pixel structure with white pixels | |
US8125543B2 (en) | Solid-state imaging device and imaging apparatus with color correction based on light sensitivity detection | |
EP1594321A2 (en) | Extended dynamic range in color imagers | |
US9793306B2 (en) | Imaging systems with stacked photodiodes and chroma-luma de-noising | |
US20070177004A1 (en) | Image creating method and imaging device | |
CN110463197B (en) | Enhancing spatial resolution in stereoscopic camera imaging systems | |
US9787915B2 (en) | Method and apparatus for multi-spectral imaging | |
KR20170074602A (en) | Apparatus for outputting image and method thereof | |
JP5186517B2 (en) | Imaging device | |
JP2007006061A (en) | Color filter and image pickup apparatus having the same | |
US10395347B2 (en) | Image processing device, imaging device, image processing method, and image processing program | |
TWI617198B (en) | Imaging systems with clear filter pixels | |
JP2012253727A (en) | Solid-state image sensor and camera module |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
17P | Request for examination filed |
Effective date: 20130704 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
DAX | Request for extension of the european patent (deleted) | ||
GRAP | Despatch of communication of intention to grant a patent |
Free format text: ORIGINAL CODE: EPIDOSNIGR1 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: GRANT OF PATENT IS INTENDED |
|
RIC1 | Information provided on ipc code assigned before grant |
Ipc: H04N 9/64 20060101ALI20190725BHEP Ipc: H04N 5/357 20110101ALI20190725BHEP Ipc: H04N 9/04 20060101AFI20190725BHEP Ipc: H04N 5/33 20060101ALI20190725BHEP |
|
INTG | Intention to grant announced |
Effective date: 20190828 |
|
GRAS | Grant fee paid |
Free format text: ORIGINAL CODE: EPIDOSNIGR3 |
|
GRAA | (expected) grant |
Free format text: ORIGINAL CODE: 0009210 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE PATENT HAS BEEN GRANTED |
|
AK | Designated contracting states |
Kind code of ref document: B1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: FG4D |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: EP |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: REF Ref document number: 1241782 Country of ref document: AT Kind code of ref document: T Effective date: 20200315 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R096 Ref document number: 602012068217 Country of ref document: DE |
|
REG | Reference to a national code |
Ref country code: IE Ref legal event code: FG4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: FI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 Ref country code: RS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 Ref country code: NO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200604 |
|
REG | Reference to a national code |
Ref country code: NL Ref legal event code: MP Effective date: 20200304 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BG Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200604 Ref country code: GR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200605 Ref country code: HR Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 Ref country code: LV Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 Ref country code: SE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 |
|
REG | Reference to a national code |
Ref country code: LT Ref legal event code: MG4D |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: NL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 Ref country code: LT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 Ref country code: ES Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 Ref country code: SM Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 Ref country code: EE Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 Ref country code: PT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200729 Ref country code: CZ Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 Ref country code: IS Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200704 Ref country code: RO Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 |
|
REG | Reference to a national code |
Ref country code: AT Ref legal event code: MK05 Ref document number: 1241782 Country of ref document: AT Kind code of ref document: T Effective date: 20200304 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R097 Ref document number: 602012068217 Country of ref document: DE |
|
PLBE | No opposition filed within time limit |
Free format text: ORIGINAL CODE: 0009261 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: NO OPPOSITION FILED WITHIN TIME LIMIT |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 Ref country code: DK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 Ref country code: AT Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 |
|
26N | No opposition filed |
Effective date: 20201207 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: SI Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 Ref country code: PL Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MC Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 |
|
REG | Reference to a national code |
Ref country code: CH Ref legal event code: PL |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LU Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210113 |
|
REG | Reference to a national code |
Ref country code: BE Ref legal event code: MM Effective date: 20210131 |
|
REG | Reference to a national code |
Ref country code: GB Ref legal event code: 746 Effective date: 20210920 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: LI Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210131 Ref country code: CH Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210131 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: IE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210113 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: BE Free format text: LAPSE BECAUSE OF NON-PAYMENT OF DUE FEES Effective date: 20210131 |
|
REG | Reference to a national code |
Ref country code: DE Ref legal event code: R079 Ref document number: 602012068217 Country of ref document: DE Free format text: PREVIOUS MAIN CLASS: H04N0009040000 Ipc: H04N0023100000 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: GB Payment date: 20221221 Year of fee payment: 12 Ref country code: FR Payment date: 20221220 Year of fee payment: 12 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: HU Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT; INVALID AB INITIO Effective date: 20120113 Ref country code: CY Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 |
|
PGFP | Annual fee paid to national office [announced via postgrant information from national office to epo] |
Ref country code: DE Payment date: 20221220 Year of fee payment: 12 |
|
P01 | Opt-out of the competence of the unified patent court (upc) registered |
Effective date: 20230527 |
|
PG25 | Lapsed in a contracting state [announced via postgrant information from national office to epo] |
Ref country code: MK Free format text: LAPSE BECAUSE OF FAILURE TO SUBMIT A TRANSLATION OF THE DESCRIPTION OR TO PAY THE FEE WITHIN THE PRESCRIBED TIME-LIMIT Effective date: 20200304 |