CN107623844B - Determination of color values of pixels at intermediate positions - Google Patents

Determination of color values of pixels at intermediate positions

Info

Publication number
CN107623844B
Authority
CN
China
Prior art keywords
color
pixels
pixel
image processing
filter
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201710575233.8A
Other languages
Chinese (zh)
Other versions
CN107623844A (en)
Inventor
Jörg Kunze
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Basler AG
Original Assignee
Basler AG
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Basler AG
Publication of CN107623844A publication Critical patent/CN107623844A/en
Application granted granted Critical
Publication of CN107623844B publication Critical patent/CN107623844B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical


Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T3/00Geometric image transformations in the plane of the image
    • G06T3/40Scaling of whole images or parts thereof, e.g. expanding or contracting
    • G06T3/4015Image demosaicing, e.g. colour filter arrays [CFA] or Bayer patterns
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N1/00Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
    • H04N1/46Colour picture communication systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/84Camera processing pipelines; Components thereof for processing colour signals
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/95Computational photography systems, e.g. light-field imaging systems
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/40Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled
    • H04N25/46Extracting pixel data from image sensors by controlling scanning circuits, e.g. by modifying the number of pixels sampled or to be sampled by combining or binning pixels
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N9/00Details of colour television systems
    • H04N9/64Circuits for processing colour signals
    • H04N9/74Circuits for processing colour signals for obtaining special effects
    • H04N9/76Circuits for processing colour signals for obtaining special effects for mixing of colour signals

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Theoretical Computer Science (AREA)
  • Computing Systems (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Color Television Image Signal Generators (AREA)

Abstract

An image processing device for processing image data of an image sensor having a mosaic filter for at least one first color and a second color on regularly arranged first pixels is designed to determine a color value of the second color for a second pixel at an intermediate position between the first pixels. The determination comprises: determining, for the first color and the second color, low-noise color values of these colors by interpolating pixels of the respective color in the vicinity of the intermediate position by means of respective local filters; determining a luminance value at the intermediate position by interpolating pixels of at least the first color in the vicinity of the intermediate position by means of a local filter whose filter coefficients satisfy a first condition, namely that the center of gravity of the filter coefficients corresponds to the intermediate position; and determining the color value of the second color at the intermediate position based on the sum of the low-noise color value of the second color and a difference formed between the low-noise color value of the first color and the luminance value.

Description

Determination of color values of pixels at intermediate positions
Technical Field
The invention relates to an image processing device for processing image data of an image sensor having a mosaic filter for at least one first color and a second color on regularly arranged first pixels, and to a digital camera comprising an image sensor and an image processing device. The invention further relates to a corresponding image processing method, to a computer device and to a computer program product.
Background
In industrial environments, digital cameras are often used, such as the one described in German patent application DE 102013000301 and shown there in fig. 1.
Fig. 1 schematically and exemplarily shows the configuration of a digital camera 10 having a lens 22. An image scene 30 is imaged via the lens 22 onto an image sensor 31 having regularly arranged light-sensitive elements, so-called pixels. The image sensor 31 transfers the electronic data to a calculation unit 32, typically located in the camera 10, comprising for example a processor, a digital signal processor (DSP) or a so-called field programmable gate array (FPGA). It may be necessary here to convert analog image data into digital image data, for example by means of an analog-to-digital converter (not shown in the figure). In the calculation unit 32, the desired mathematical operations, for example a color correction or a conversion to another image format, are carried out on the image data, if necessary, before the latter are output as electronic signals 34 via an interface 33. Alternatively, the output image can also be calculated outside the digital camera 10, for example by means of a computer.
Not only monochrome black-and-white cameras but also multicolor color cameras are used as digital cameras. The most common method for capturing color images is the use of so-called mosaic filters, for example the so-called Bayer pattern, which is known from US patent document US 3,971,065 (see in particular fig. 6 there). Here, a regular pattern of color filters for red (R), green (G) and blue (B) lies on the pixels, so that each pixel is sensitive only to light of the respective color. In the Bayer pattern, green pixels typically occur twice as often as blue or red pixels.
Users of digital color cameras often desire color images in which a complete color value exists for each pixel location. Such a color value is understood here as a position in a color space. The most commonly used color spaces are three-dimensional, such as the RGB color space, the sRGB color space, the XYZ color space, the YUV color space or the L*a*b* color space. As a position in a three-dimensional color space, a color value generally has three components. Mathematical methods for converting color values of one color space into color values of another color space are known to those skilled in the art, see, for example, A. Koschan and M. Abidi, "Digital Color Image Processing", John Wiley & Sons, Hoboken, NJ, USA, 2008. Furthermore, methods for transmitting color values are known, for example as 8-bit RGB, as 12-bit RGB or as YUV 4:2:2.
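As a concrete example of such a conversion (a standard relation given here for illustration only; the invention does not prescribe a particular color space), the BT.601 conversion from RGB to YUV color values reads:

Y = 0.299 · R + 0.587 · G + 0.114 · B
U = 0.492 · (B − Y)
V = 0.877 · (R − Y)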
Since each pixel is sensitive to light of only one color, i.e. provides only one-dimensional information, the determination of the pixel's color value as a value in a three-dimensional color space is underdetermined. Therefore, the signals of neighboring pixels, in particular the signals of pixels with filters of different colors, are usually drawn upon in order to determine the missing information. Such mathematical methods are known as color reconstruction, demosaicing or color filter array (CFA) interpolation. As a result of the color reconstruction, a multi-dimensional color value then exists for each pixel.
Digital cameras are also known which already produce more than one color value per pixel at the level of the image sensor (see, for example, fig. 4 of US patent document US 6,614,478). Such digital cameras, also referred to herein as multi-sensor cameras, usually have a beam splitter by means of which the image is imaged onto a plurality of image sensors aligned with one another. However, this approach is very expensive because of the large number of required optical, electronic and mechanical components and the required high precision with small production tolerances.
In industrial environments, but also in related fields of use, for example in traffic monitoring or in medical technology, there are many different applications for digital cameras, which often differ in their requirements on the camera. In traffic monitoring, for example, the camera requires a high horizontal resolution so that license plate numbers can be read across a corresponding number of lanes. In the automated inspection of, for example, brake disks, a different resolution is required, namely one that allows specific dimensions to be checked with a predetermined accuracy.
From the camera manufacturer's point of view, the different requirements on digital cameras mentioned above are usually met by a large number of different camera models. Providing such a large number of camera types requires high organizational and capital expenditures, particularly in research and development, production, marketing, sales and logistics. It would therefore be desirable to be able to reduce the variety of camera types or the variety of hardware and software modules used to build the cameras.
If one examines how the various digital camera types come about, it becomes apparent that it is in particular the large number of required image sensor types that makes the range of camera types so diverse. It would therefore be desirable to meet the different requirements on digital cameras with a small number of image sensor types. Particularly important distinguishing features in this context are the pixel size and the number of pixels of the image sensor, from which the physical size of the sensor, or of its image diagonal, follows as a further important distinguishing feature.
Based on the situation described above regarding the variety of digital camera types, the inventors set themselves the following objective: to generate, from an image of a color image sensor having a mosaic filter, for example a Bayer pattern, and a first pixel size, an image having a second pixel size that differs from the first pixel size and is, if desired, freely selectable, without requiring a further image sensor type having the second pixel size for this purpose. In this way, more than one application, with more than one requirement for pixel size and pixel count, can be addressed with the same camera hardware and the same image sensor type, whereby at least some of the costs for maintaining many camera types and/or the corresponding organizational effort can be saved.
A known method of generating an image having a second pixel size from an image sensor having a first pixel size is called "binning"; a general distinction is made between 1) charge-domain binning, 2) voltage-domain binning and 3) digital-domain binning. In the first case, the charge packets of adjacent or spatially close pixels are combined, in the second case their voltage signals, and in the third case their digital signals, so that charge values, voltage values or digital values are obtained which represent the signal of a so-called "super-pixel" having the second pixel size. Such binning methods are described in detail, for example, in European patent document EP 0940029 or US patent 6,462,779.
Fig. 2 shows schematically and exemplarily two different binning schemes. Fig. 2 (a) shows an image sensor having regularly arranged pixels 40 of a first pixel size. Fig. 2 (b) shows the arrangement of super-pixels 41 produced by binning horizontally adjacent pixels; these super-pixels are twice as wide as and equally as high as the pixels 40. This is also referred to in the literature as 2 × 1 binning. Fig. 2 (c) shows the arrangement of super-pixels 42 produced by binning horizontally and vertically adjacent pixels; these are twice as tall and twice as wide as the pixels 40. This is also referred to as 2 × 2 binning. The gaps between the super-pixels visible in fig. 2 (a) to (c) serve only to make their outlines easier to recognize in the view and are not actually present.
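As an illustration of digital-domain binning (a minimal sketch under the assumption of a monochrome image stored as a numpy array; it is not code from the patent), 2 × 2 binning can be implemented as follows:

```python
import numpy as np

def bin_2x2(image: np.ndarray) -> np.ndarray:
    """Digital-domain 2x2 binning: each 2x2 block of pixels is summed into one super-pixel.

    Assumes a monochrome image whose height and width are even.
    """
    h, w = image.shape
    blocks = image.reshape(h // 2, 2, w // 2, 2)
    return blocks.sum(axis=(1, 3))

# Example: a 4x4 image of first pixels becomes a 2x2 image of super-pixels.
img = np.arange(16, dtype=np.float64).reshape(4, 4)
print(bin_2x2(img))
```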
A disadvantage of charge-domain binning is that it can only be implemented in image sensors whose design permits it, e.g. in many CCD sensors, and not in image sensors whose design does not, e.g. in a large number of CMOS sensors.
Furthermore, binning can only produce super-pixels whose side lengths are integer multiples of the side length of the first pixels. However, a second pixel size may also be desired whose side length is a non-integer multiple of the side length of the first pixel, e.g. a rational multiple of it.
An example in which the side length of the second pixel is a rational, non-integer multiple of the side length of the first pixel is shown schematically and exemplarily in fig. 3. Here, the side length of the second pixel 44 is 6/5 of the side length of the first pixel 43. The recognizable gaps between the second pixels 44 are again not actually present and serve only to make their contours easier to recognize in the view.
In the case of mosaic filters, for example with a Bayer pattern, binning becomes more difficult, since preferably only charge packets belonging to the same color should be combined in order to obtain correct color information. A technical solution to this problem is described in US patent document US 7,538,807. There, a Bayer pattern in the output image is generated again from the Bayer pattern in the input image. It is disadvantageous that this method is applicable only to CCD sensors with a special structure and not to arbitrary image sensors with a mosaic filter. It would furthermore be desirable for all three color values R, G and B to be present simultaneously for each pixel in the output image. Moreover, US patent document US 7,538,807 does not provide a solution for achieving a non-integer size ratio of the output pixels.
Different methods are also known for generating, from input images of sensors with mosaic filters, in which one color value is present per pixel, color images that have a plurality of color values for each pixel. An example of such a method is found in German patent application DE 102013000301. However, in such color reconstruction, demosaicing or color filter array (CFA) interpolation methods, the position and size of the output pixels are generally not freely selectable but are determined by the positions of the input pixels.
In addition to the binning techniques described, methods are known in which the resolution of an image is changed by interpolation, as described in detail, for example, in US 7,567,723 and US 7,286,721. Commonly known interpolation methods are, for example, the so-called "nearest neighbor" interpolation, bilinear interpolation and bicubic interpolation (see, for example, fig. 4 of US 7,567,723).
All these interpolation methods are based on the model assumption that the luminance values of the pixels are present at point-like locations. For the interpolation, a function is then used that assumes the luminance values at the point-like locations of the first pixels, and the luminance values of the second pixels are determined as the values of this function at the corresponding point-like locations. Such a procedure is illustrated, for example, in fig. 4 of US 7,567,723 for ease of understanding.
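For reference (a standard formula, not specific to the invention), bilinear interpolation of a luminance value I at a position (x, y) between the four surrounding grid positions, with fractional offsets a = x − i and b = y − j relative to the pixel (i, j), reads:

I(x, y) = (1 − a)(1 − b) · I(i, j) + a(1 − b) · I(i + 1, j) + (1 − a)b · I(i, j + 1) + ab · I(i + 1, j + 1)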
US 7,072,509 B2 is concerned with efficiently performing a color correction of an image captured by means of a Bayer pattern and with reducing the mixing of noise between the color channels during the color correction.
"A versatic demosaicing algorithm for developing image composition", IEEE SITIS 2012, by Hor et al, discloses a demosaicing method that simultaneously implements scaling. In this case, the scaling cannot take integer values either, but known size-changing methods, such as the "bilinear image scaling algorithm", are used.
Disclosure of Invention
The invention is based on the following objective: to provide an image processing device for processing image data of an image sensor having a mosaic filter, for example with a Bayer pattern, for at least one first color and a second color on regularly arranged first pixels, wherein the image processing device makes it possible to determine the color value of the second color for a second pixel at an intermediate position between the first pixels, so that a high-quality image can be produced, preferably having a pixel size different from the size of the first pixels, in which the color value of the second color is present for the second pixel.
According to a first aspect of the present invention, an image processing device for processing image data of an image sensor having a mosaic filter for at least one first color and a second color on regularly arranged first pixels is provided, wherein the image processing device is configured to determine a color value of a second color for a second pixel at an intermediate position between the first pixels, wherein the intermediate position does not coincide with a position of one of the first pixels, wherein the determining comprises:
- determining, for the first color and the second color, a low-noise color value of the respective color by interpolating pixels of that color in the vicinity of the intermediate position by means of a respective local filter,
- determining a luminance value at the intermediate position by interpolating pixels of at least the first color in the vicinity of the intermediate position by means of a local filter, wherein the filter coefficients of the local filter satisfy a first condition, namely that the center of gravity of the filter coefficients corresponds to the intermediate position, and
- determining the color value of the second color at the intermediate position based on a sum of the low-noise color value of the second color and a difference formed between the low-noise color value of the first color and the luminance value.
The invention is based on: in a typical color image, the higher spatial frequencies are substantially determined by the brightness, while the colors usually change only very slowly, that is, with low frequencies, across the image. Based on this fact, the inventors obtained the following recognition: when a higher frequency of the interpolated luminance values is added to the interpolated low-noise color values of the second color, the color values of the second color of the second pixels at intermediate positions between the first pixels (i.e., not on the pixel grid) can be determined with high quality from the image data of the image sensor having the mosaic filter for at least one first color and second color (pixel grid) on the regularly arranged first pixels. Here, it is determined that the luminance values at the intermediate positions are sufficient to avoid geometric distortions that may occur during interpolation, which is achieved as required by: the filter coefficients of the local filter satisfy the following first condition for determining the luminance values: i.e. the center of gravity of the filter coefficients corresponds to the middle position. Lower-frequency colors can also be interpolated, if necessary, in a simpler manner, preferably only by means of interpolation in relation to the phase of the mosaic filter in the vicinity of the intermediate position, without this substantially affecting the quality of the image produced.
The term "local" means here that the filters used for the interpolation have a finite filter size. Only with a filter of finite, i.e. limited, size can the result be computed with limited computation time and/or limited resource consumption. The local filter is applied to the vicinity of the respective intermediate position, which comprises a plurality of the first pixels. Preferably, the local filter is a linear filter.
The mosaic filter is preferably a mosaic filter having a Bayer pattern composed of the colors red (R), green (G) and blue (B), in which green pixels typically occur twice as often as red or blue pixels. It is therefore preferable to use green as the "first color" and red or blue as the "second color". In order to determine color values both for red and for blue, the following can be performed for each of red and blue: the low-noise color value of the second color is determined by interpolating the pixels of the second color in the vicinity of the intermediate position by means of a local filter, and the color value of the second color at the intermediate position is determined based on the sum of the low-noise color value of the second color and the difference formed between the low-noise color value of the first color and the luminance value. Furthermore, the luminance value determined at the intermediate position can be used directly as the color value of the first color (green) at the intermediate position.
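Using the notation of the detailed description below (G1, R1, B1 for the low-noise color values and Y for the luminance value at the intermediate position), this determination can be summarized as:

R2 = R1 + (Y − G1)
B2 = B1 + (Y − G1)
G2 = Y

where R2, G2 and B2 are the color values of the second pixel at the intermediate position.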
According to an advantageous further development, the image processing device is designed to determine a direction estimate relating to a preferential direction of the image data in the vicinity of the intermediate position, and to determine the luminance value based on the direction estimate.
According to an advantageous further development, the image processing device is designed to determine the direction estimate based on differences of color values of pixels along the horizontal direction and the vertical direction in the vicinity of the intermediate position.
According to an advantageous further development, the image processing device is designed to determine the luminance value, as a function of the direction estimate, as a weighted average of a first luminance value, which is determined by means of a local filter particularly suited to describing vertical structures, and a second luminance value, which is determined by means of a local filter particularly suited to describing horizontal structures.
According to one advantageous development, the local filter for determining the luminance value is implemented by two one-dimensional local filters, which are applied in succession along directions orthogonal to one another, wherein the interpolation along the first orthogonal direction is carried out by means of a first one-dimensional filter and the interpolated values thus obtained are then interpolated along the second orthogonal direction by means of a second one-dimensional filter.
According to an advantageous further development, the image processing device is designed such that the formation of the difference value is combined with the application of a non-linear function for influencing the noise and/or the formation of the difference value is combined with a multiplication by a sharpness value for image sharpening.
According to an advantageous further development, the image processing device is designed to determine two further low-noise color values of the first color, each by means of a respective local filter, from only a part of the pixels of the first color in the vicinity of the intermediate position, and to carry out a correction of color aliasing artifacts of the low-noise color value of the second color as a function of the two further low-noise color values of the first color and of the phase of the mosaic filter in the vicinity of the intermediate position.
According to an advantageous further development, the image processing device is designed, in order to perform the correction of color aliasing artifacts for the low-noise color value of the second color, to form the difference of the two further low-noise color values of the first color, to multiply it by a direction estimate, and to add the result, with a sign that depends on the phase of the mosaic filter in the vicinity of the intermediate position, to the low-noise color value of the second color.
According to an advantageous further development, the filter coefficients of the local filter for determining the low-noise color value of the first color and/or the filter coefficients of the local filter for determining the low-noise color value of the second color also satisfy the first condition, namely that the center of gravity of the filter coefficients corresponds to the intermediate position.
According to an advantageous further development, the image processing device is designed to: determining color values of a second color for two or more second pixels of the same size at different intermediate positions between the first pixels, respectively, wherein each of the local filters for determining luminance values comprises a plurality of filter coefficients, wherein for at least one of the local filters more than one filter coefficient is not equal to zero, and:
-the sum of the squares of the filter coefficients of each of the local filters is equal to a constant value, which is the same for all local filters according to a second condition, wherein preferably the constant value corresponds to the square of a noise amplification value according to a third condition, wherein the noise amplification value corresponds to the product of a preset gain of the second pixel with respect to the gain of the first pixel and the square root of a preset relative pixel size, wherein the relative pixel size corresponds to the ratio of the size of the second pixel to the size of the first pixel, and/or
The filter coefficients of each of the local filters additionally satisfy a fourth condition: the sum of the filter coefficients is equal to a constant value, preferably a preset gain.
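Denoting the filter coefficients by f_i, the positions of the associated first pixels by p_i and the intermediate position by p_m, one possible formalization of the above conditions (an interpretation of the wording, not the patent's own notation) is:

first condition (center of gravity):    Σ_i f_i · p_i / Σ_i f_i = p_m
second/third condition (noise):    Σ_i f_i² = R², with R = g · √w
fourth condition (gain):    Σ_i f_i = g

where g is the preset gain and w the preset relative pixel size.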
By selecting filter coefficients which satisfy the second condition, the luminance values of the second pixels can be determined by interpolation in such a way that the behaviour of the second pixels corresponds better to what would be expected from "real" pixels than the results of known interpolation methods, since the transfer of noise then proceeds spatially uniformly. In this way, for example, periodic variations in the noise can be avoided.
Since for at least one of the local filters more than one filter coefficient is not equal to zero, a "true" interpolation is achieved, wherein the luminance value of the second pixel is determined on the basis of more than one first pixel. Depending on the orientation of the different intermediate positions and the size of the second pixel, more than one filter coefficient can also be unequal to zero for more than one local filter, if necessary also for each local filter.
Depending on whether the resolution of the image data of the image sensor should be reduced or increased by means of interpolation according to the invention, one or more of different intermediate positions can exist between directly adjacent first pixels depending on the position in the image data and the desired degree of change.
When the third condition is satisfied, the transmission of noise is not only performed uniformly in space, but also the amplification value of the noise transmission corresponds to the desired size of the output pixel (second pixel). The term "amplification" can be understood in this context to mean also the attenuation of the noise, that is to say the noise changes by a factor less than 1, or the noise remains unchanged (factor equal to 1).
The third condition can be explained as follows: according to the pixel model of EMVA standard 1288, the photon and photoelectron statistics each follow a Poisson distribution. If the size of the pixel is now changed by a factor w (the relative pixel size), the mean values of the received photons μp and photoelectrons μe increase by the factor w, independently of the gain g. Because, for the assumed Poisson distributions, the mean values μp and μe are equal to the variances σ²p and σ²e, these variances also increase by the factor w, and the associated noise accordingly increases by the square root of w. If the gain of the input pixel (first pixel) is now applied and additionally a preset gain g is applied (see also below), the conclusion is reached that the output noise measured in DN should increase by a factor R (the noise amplification value) equal to the product of g and the square root of w. The term "relative pixel size" is to be understood here as describing the change in the area of the second pixel relative to the area of the first pixel. If the second pixel is, for example, 1.5 times larger than the first pixel both in width and in height, a relative pixel size of 1.5 × 1.5 = 2.25 results, i.e. the area of the second pixel is increased by the factor w = 2.25 relative to the area of the corresponding first pixel.
The desired output brightness relative to the input brightness can be set via the gain of the local filter (fourth condition). For example, it can be desired that the second pixel have the same conversion gain K, in the sense of EMVA standard 1288, as the first pixel. In this case, the gain can be selected to correspond to the inverse of the relative pixel size. However, it can also be desired, for example, that the output image (second pixels) have the same brightness, measured in DN, as the input image (first pixels). In this case, the gain can be chosen equal to 1.
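As a numerical illustration with example values (not taken from the patent): for a second pixel that is 1.5 times as wide and 1.5 times as tall as the first pixel, the relative pixel size is w = 1.5 × 1.5 = 2.25. Choosing the gain g = 1/w ≈ 0.444 in order to retain the conversion gain K yields a noise amplification value

R = g · √w ≈ 0.444 · 1.5 ≈ 0.67,

that is, the noise is attenuated; choosing g = 1 in order to retain the brightness measured in DN yields R = √w = 1.5.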
According to an advantageous development, the image processing device comprises a setting element for setting at least the relative pixel size, for example a relative pixel size of 1.5 × 1.5 if the second pixel is to be 1.5 times larger than the first pixel both in width and in height, wherein the image processing device is designed to determine the filter coefficients of the local filters based on the set relative pixel size. This provides great flexibility, since the filter coefficients for determining the luminance values do not have to be determined in advance but can instead be determined, for example, in a digital camera comprising an image sensor for generating image data having regularly arranged first pixels and an image processing device according to the invention for processing the image data of the image sensor, when the digital camera is configured on the basis of the relative pixel size set, for example, by a user of the digital camera (or even during operation). In addition to the relative pixel size, the gain of the second pixels relative to the first pixels can preferably also be set via the setting element, wherein the image processing device is then preferably configured to determine the filter coefficients of the local filters based on the set relative pixel size and the set gain. Furthermore, it can advantageously be provided that the resolution of the output image (second pixels) can be set via the setting element. In this case, the resolution is not automatically derived from the ratio of the size of the second pixels to the size of the first pixels but can be preset individually. This makes it possible to realize applications in which a digital camera transmits a low-resolution image, for example at a high frame rate or at a low data transmission rate, and produces a high-resolution image only when an event is detected in the image, so that more details, for example a license plate number, can be identified in the high-resolution image. The setting element can comprise, for example, a control such as a slider or rotary control, a register, a digital interface, etc.
According to another aspect of the present invention, an image processing apparatus is provided for processing image data of an image sensor having a mosaic filter for at least one first color and a second color on regularly arranged first pixels of a first pixel size, wherein the image processing apparatus is configured to generate image data comprising second pixels of a second pixel size, wherein the second pixel size is a non-integer multiple of the first pixel size, wherein the image processing apparatus is configured to calculate a color value for each second pixel, and wherein the processing is such that the quantum efficiency of the second pixels remains substantially unchanged relative to the quantum efficiency of the first pixels.
In this way, by processing the image data generated by the image sensor comprising pixels of the first pixel size, image data comprising second pixels of a second pixel size can be generated that substantially correspond to the image data expected from "real" pixels of the second pixel size, wherein the second pixel size is a non-integer multiple of the first pixel size. That is, if the generated image data are analysed, assuming the second pixel size, in the manner proposed in EMVA standard 1288, values are obtained that substantially correspond to the values expected from a comparable image sensor having regularly arranged pixels of the second pixel size.
It is particularly preferred here that the processing does not substantially change the quantum efficiency of the second pixels compared to the quantum efficiency of the first pixels. In this case, the quantum efficiency of the second pixels differs from the quantum efficiency of the first pixels by less than plus/minus 10 percentage points, preferably by less than plus/minus 5 percentage points, and more preferably by less than plus/minus 2 percentage points. That is, for example, if the quantum efficiency of the first pixels (measured in accordance with EMVA standard 1288 at a nominal wavelength, preferably at the wavelength at which the quantum efficiency of the image sensor is maximal) is 60%, then the quantum efficiency of the second pixels (measured in accordance with EMVA standard 1288 at the same nominal wavelength) is between 50% and 70%, preferably between 55% and 65%, more preferably between 58% and 62%. It is further preferred that the image processing apparatus is an image processing apparatus according to the present invention.
According to another aspect of the present invention, there is provided a digital camera, wherein the digital camera includes:
-an image sensor having a mosaic filter for at least one first color and a second color on regularly arranged first pixels, the image sensor being configured to generate image data; and
an image processing apparatus according to the invention for processing image data of an image sensor.
According to another aspect of the present invention, an image processing method for processing image data of an image sensor having a mosaic filter for at least one first color and a second color on regularly arranged first pixels is provided, wherein the image processing method determines a color value of the second color for a second pixel at an intermediate position between the first pixels, wherein the intermediate position does not coincide with a position of one of the first pixels, wherein the determining comprises:
- determining, for the first color and the second color, a low-noise color value of the respective color by interpolating pixels of that color in the vicinity of the intermediate position by means of a respective local filter,
- determining a luminance value at the intermediate position by interpolating pixels of at least the first color in the vicinity of the intermediate position by means of a local filter, wherein the filter coefficients of the local filter satisfy a first condition, namely that the center of gravity of the filter coefficients corresponds to the intermediate position, and
- determining the color value of the second color at the intermediate position based on a sum of the low-noise color value of the second color and a difference formed between the low-noise color value of the first color and the luminance value.
According to a further aspect of the invention, a computer device is provided, wherein the computer device comprises a computing unit which is designed to execute the image processing method according to the invention.
According to another aspect of the present invention, a computer program product is provided, wherein the computer program product comprises code for causing a computer device to perform the image processing method according to the present invention, when the computer program product is executed on the computer device.
It is to be understood that the image processing apparatus according to the invention, the digital camera according to the invention, the image processing method according to the invention, the computer device according to the invention and the computer program product according to the invention have similar and/or identical embodiments, in particular as defined herein.
It is to be understood that a preferred embodiment of the invention can also be any combination of the embodiments herein.
Drawings
Preferred embodiments of the present invention are described in detail below with reference to the accompanying drawings, in which
Figure 1 shows schematically and exemplarily the construction of a digital camera,
figure 2 shows schematically and exemplarily two different merging processes,
figure 3 shows schematically and exemplarily an example where the side length of the second pixel is a reasonable non-integer multiple of the side length of the first pixel,
fig. 4 shows schematically and exemplarily: how an output image should be generated from an input image by means of a mosaic filter on regularly arranged first pixels, said output image having second pixels of another pixel size and having more color values for each pixel,
fig. 5 shows schematically and exemplarily a vicinity of an intermediate position comprising a plurality of first pixels of the first pixels, to determine a signal value of the second pixel by interpolation by means of a local filter,
fig. 6 schematically and exemplarily shows the application of two different pixel sizes w and wc, for reconstructing a color image,
figure 7 schematically and exemplarily shows an image processing method according to an embodiment of the present invention,
figure 8 shows schematically and exemplarily the problem of color reconstruction in a two-dimensional frequency map,
figure 9 schematically and exemplarily shows that the luminance values are determined by performing vertical and horizontal interpolation in sequence,
figure 10 shows schematically and exemplarily the determination of luminance values by performing horizontal and vertical interpolation in sequence,
figure 11 shows schematically and exemplarily the determination of a direction estimate relating to a preferential direction of image data in a region near an intermediate position,
fig. 12 shows schematically and exemplarily how blending is performed with the help of the direction estimate,
figure 13 schematically and exemplarily shows an image processing method according to another embodiment of the present invention,
figure 14 shows schematically and exemplarily a graph with a non-linear function N for noise influencing,
figure 15 shows schematically and exemplarily the generation of an output image having a pattern corresponding to a mosaic filter,
fig. 16 schematically and exemplarily shows a case where the first condition is not satisfied when interpolating color values G1, R1, B1, G3, and G4 of a relatively large pixel, and
fig. 17 schematically and exemplarily shows the application of a fixed filter matrix for determining larger pixels.
Detailed Description
In the drawings, identical or corresponding elements or units are provided with identical or corresponding reference numerals. When an element or unit has already been described in the context of one figure, it is not necessarily described again in detail in the context of another figure.
Fig. 4 shows schematically and exemplarily how an output image with second pixels 51 of another pixel size and with a plurality of color values for each pixel is to be generated from an input image with the aid of a mosaic filter, here a Bayer pattern, on regularly arranged first pixels 50. In fig. 4 (a), the Bayer pattern is visible, consisting of the colors red (R), green (G) and blue (B), in which green pixels occur twice as often as red or blue pixels. Fig. 4 (b) shows a superposition of the Bayer pattern and the output pixels (second pixels) shown by dotted lines. It can be seen that the output pixels have not only another position (typically an intermediate position between the first pixels) but also another size compared with the input pixels (first pixels). As in fig. 3, the gaps between the output pixels serve only for better recognizability. Fig. 4 (c) shows that in the output image each pixel has a plurality of color values R, G and B.
When interpolating image data of an image sensor having a mosaic filter, there is a fundamental difficulty: adjacent input pixels (first pixels) 50 usually have different colors, which prevents a straightforward interpolation within a single color. Nevertheless, by means of the method according to the invention, a high-quality image can be produced, which preferably has a pixel size different from the size of the first pixels and in which one or more color values are present for each second pixel. As will be described below, and with reference to DE 102013000301, the color value of a second pixel at an intermediate position between first pixels is preferably determined based on processing the pixels in a vicinity of the intermediate position, that is, by means of a procedure in which all required operations are performed only on that vicinity. In this way, the memory and computation requirements are reduced when a suitable size of the vicinity is selected, as is also explained in DE 102013000301.
Fig. 5 shows schematically and exemplarily a vicinity of an intermediate position 62 comprising a plurality of the first pixels 60, used to determine the signal values (luminance values, color values) of a second pixel 61 by interpolation by means of a local filter. In the example shown here, the vicinity has a size of 4 × 4 first pixels 60, which are used for calculating the signal value of the second pixel 61. The vicinity is selected such that the intermediate position 62, corresponding to the midpoint of the second pixel 61, lies in the central square 65 of the vicinity. In the illustrated case, the input pixels 63 are indexed via two counting variables i and j. The relative position of the second pixel 61 is described, without loss of generality, via two position values x and y with respect to the selected vicinity. The value ranges here are −1 ≤ x ≤ 2 and −1 ≤ y ≤ 2. The input pixel 63 is selected as the reference input pixel with position x = 0 and y = 0.
The size of the vicinity is equal to the size of the local filter. The filter size is selected such that, on the one hand, the second pixel 61 with the desired pixel size w lies completely within the vicinity, or within the local filter, during the interpolation, but, on the other hand, no excessive computational effort is incurred by choosing the vicinity larger than necessary. It is also advantageous to select the width and the height of the vicinity to be equal, so that undesirable anisotropic effects are avoided. Finally, it is advantageous to select the vicinity symmetrically around the central square 65. This symmetry simplifies the calculation, avoids unnecessary computational effort and again helps to avoid undesirable anisotropic effects. For a desired pixel size w of the second pixel (hereinafter also referred to as the "relative pixel size") of up to 1.5 × 1.5 first-pixel sizes, the size of the vicinity is selected as 4 × 4, as in fig. 5, to achieve the advantages described above. For larger values of the relative pixel size, a suitably larger square vicinity is proposed, which has an even number of first pixels for each side length, e.g. 6 × 6, 8 × 8, 10 × 10, etc.
As described above, according to the invention, a color value of a second color, e.g. red or blue, is determined for a second pixel at an intermediate position between the first pixels. However, since in an image sensor with a mosaic filter only a part of the pixels is provided for the second color, for example only every second pixel horizontally and vertically, high spatial frequencies cannot be correctly described by interpolating only the pixels of the second color in the vicinity of the intermediate position. This would result in a degradation of the image quality. The higher frequencies are therefore described, as required, as follows: (i) not only for the first color, for example green, but also for the second color, for example red or blue, a low-noise color value of the respective color is determined by interpolating pixels of that color in the vicinity of the intermediate position by means of a respective local filter, (ii) a luminance value at the intermediate position is determined by interpolating pixels of at least the first color in the vicinity of the intermediate position by means of a local filter whose filter coefficients satisfy the first condition, namely that the center of gravity of the filter coefficients corresponds to the intermediate position, and (iii) the color value of the second color at the intermediate position is determined based on the sum of the low-noise color value of the second color and a difference formed between the low-noise color value of the first color and the luminance value.
By adding the higher frequencies of the interpolated luminance values to the interpolated low-noise color values of the second color in the manner described above, the color values of the second color can be determined with high quality for the second pixels at intermediate positions between the first pixels. In a preferred implementation, the luminance values at the intermediate positions are interpolated such that, with respect to the relationship between signal and noise, they correspond to the luminance values expected from a pixel of the desired pixel size w of the second pixel according to the linear pixel model of the European Machine Vision Association standard 1288, the so-called EMVA standard 1288 (Release 3.0 of November 29, 2010). Correspondingly, the low-noise color values of the first color (e.g. green) and of the second color (e.g. red or blue) are advantageously interpolated such that signal and noise correspond to the signal and noise expected from pixels having the larger pixel size wc. It is also described in detail below how the desired signal and noise behaviour can be achieved by an appropriate choice of the local filters and which effects or advantages this has.
The reconstruction of a color image using two different pixel sizes w and wc is shown schematically and exemplarily in fig. 6. In this view, in the vicinity 90 of the intermediate position 91, luminance values are interpolated corresponding to a pixel 92 of the smaller size w having the intermediate position 91 as its midpoint, and low-noise color values of the first color (e.g. green) and the second color (e.g. red or blue) are interpolated corresponding to a pixel 93 of the larger size wc, likewise having the intermediate position 91 as its midpoint. By choosing the pixel size wc to be the same for the first and second colors, the interpolated color values of these colors advantageously have corresponding, or at least similar, frequency characteristics. The difference formed between the luminance value and the low-noise color value of the first color is therefore very well "matched" in frequency to the low-noise color value of the second color, and the sum of the two provides a high-quality color value of the second color at the intermediate position 91.
It should also be noted that the inventors have established, in experiments carried out with a large number of real images, that determining the luminance value at the intermediate position 91 is sufficient to avoid geometric distortions that may occur during interpolation; this is ensured, as required, by the filter coefficients of the local filter for determining the luminance values satisfying the first condition, namely that the center of gravity of the filter coefficients corresponds to the intermediate position. The lower-frequency colors can, if necessary, be interpolated in a simpler manner, preferably taking into account only the phase of the mosaic filter in the vicinity of the intermediate position, without this substantially impairing the quality of the resulting image. This is also described in more detail below.
Fig. 7 schematically and exemplarily shows an image processing method according to an embodiment of the present invention. For the vicinity 80, the green pixels 81 (first color) as well as the red pixels 82 and blue pixels 83 (each a second color) are considered, and the low-noise color values G1, R1 and B1 are interpolated, in the manner described above, so as to correspond to pixels of the larger size wc. For reasons of presentation, a vicinity 80 having a size of 4 × 4 is shown. Obviously, the vicinity can also have other sizes, for example 6 × 6, 8 × 8, 10 × 10, etc.
Furthermore, from at least the first color (here green), which is present at a higher resolution, the luminance value Y is interpolated for a pixel of the smaller size w (i.e. the pixel size of the desired second pixel) at the intermediate position, and the difference 88 of the signals, Y − G1, is added to the color values G1, R1 and B1 having the larger pixel size wc. In this way, the color values R2, G2 and B2 of the second pixel at the intermediate position are obtained.
Since, according to the relationship mentioned above, the color value G2 results from the term Y − G1 + G1, G2 can simply be set directly equal to Y. The calculation of G2 by addition is therefore not shown in fig. 7; instead, a direct assignment 89 is shown. Reference numerals 87a and 87b denote two possible variants: in the first variant, the luminance value Y is interpolated only from pixels of the first color (here green), while in the second variant the luminance value is interpolated from pixels of the first color as well as from pixels of a second color (here red or blue).
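The combination described in connection with fig. 7 can be sketched as follows (a simplified illustration in Python, assuming a particular Bayer phase and simple normalized filters; it omits the direction-dependent interpolation of Y, the noise shaping and the anti-aliasing correction discussed elsewhere, and it is not the patent's reference implementation):

```python
import numpy as np

# Assumed color layout of the 4x4 vicinity (illustration only; the actual
# phase depends on where the vicinity lies within the Bayer pattern).
COLORS = np.array([["G", "R", "G", "R"],
                   ["B", "G", "B", "G"],
                   ["G", "R", "G", "R"],
                   ["B", "G", "B", "G"]])

def interpolate_color(pixels: np.ndarray, color: str, weights: np.ndarray) -> float:
    """Interpolate a value from the pixels of one color only.

    The local filter `weights` is masked to the pixels of the requested color
    and renormalized to sum to 1 (a simplification; the patent instead ties the
    coefficient sums and sums of squares to the preset gain and the noise
    amplification value).
    """
    mask = (COLORS == color).astype(float)
    w = weights * mask
    return float((w * pixels).sum() / w.sum())

def second_pixel_color_values(pixels: np.ndarray,
                              w_luma: np.ndarray,
                              w_chroma: np.ndarray) -> dict:
    """Combination step for one intermediate position.

    `w_chroma` is the broad filter for the low-noise color values G1, R1, B1
    (pixel size wc); `w_luma` is the filter for the luminance value Y (pixel
    size w), whose center of gravity is assumed to lie at the intermediate
    position.
    """
    G1 = interpolate_color(pixels, "G", w_chroma)   # low-noise first color
    R1 = interpolate_color(pixels, "R", w_chroma)   # low-noise second color (red)
    B1 = interpolate_color(pixels, "B", w_chroma)   # low-noise second color (blue)
    Y = interpolate_color(pixels, "G", w_luma)      # luminance from green only (variant 87a)
    return {"R2": R1 + (Y - G1), "G2": Y, "B2": B1 + (Y - G1)}

# Example: for a uniformly gray vicinity all output color values equal the gray level.
patch = np.full((4, 4), 100.0)
print(second_pixel_color_values(patch, np.ones((4, 4)), np.ones((4, 4))))
```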
Fig. 8 shows schematically and exemplarily the problem of color reconstruction in a two-dimensional frequency diagram. Here, the horizontal frequency fx and the vertical frequency fy are plotted horizontally and vertically, each starting at 0. fN is the Nyquist frequency of the input image data without taking the color filter into account. In the mosaic filter with the Bayer pattern, the red and blue color channels each have only every second pixel in the horizontal and the vertical direction. Their horizontal and vertical Nyquist frequencies are therefore each halved to a value of 1/2 fN. According to the sampling theorem, only the frequency region 170 can therefore be correctly detected for each of the red and blue color channels. The green color channel, by contrast, has a checkerboard arrangement of green pixels, which produces a diagonally extending Nyquist limit 173 for the green pixels. The frequency regions 170, 171 and 172 can therefore be correctly detected for green, whereas the frequency region 174 is not correctly detected for any color. A basic idea of the inventor is to perform the color reconstruction only for the frequency region 170 and to perform a colorless, that is to say achromatic, reconstruction for the frequency regions 171 and 172.
The selection of the larger pixel size wc for the reconstruction of the color channels R1, G1 and B1 shown in fig. 7 has the following advantage: since a pixel integrates the incident light over its pixel surface, it acts as a box filter in its frequency characteristic. This is described, for example, in J. R. Janesick: "Scientific Charge-Coupled Devices", SPIE Press, 2001, chapter 4.2.2. Larger pixels thus essentially act as low-pass filters with lower noise, which yields the advantage of lower color noise set forth in DE 102013000301. According to the invention, it is therefore advantageously provided that the pixel size wc is matched to the Nyquist frequency of the red and blue color channels 82 and 83.
In contrast, a pixel of the smaller size w has a weaker low-pass characteristic than a larger pixel and therefore contains more high spatial frequency components. High frequencies, such as those from the frequency regions 171 and 172, can thus be represented better. The formation 84 of the difference Y − G1 therefore produces a high-pass characteristic, owing to the difference between the low-pass characteristics of the large and small pixels.
As described hereinbefore, the luminance value Y can be interpolated in step 87a based on only the green pixels 81. Alternatively, of course, interpolation can also be performed in step 87b based on the pixels of the vicinity 80 so that pixels of other colors (e.g. red and/or blue) are also included.
In a preferred embodiment, the image processing device is designed to determine color values of a second color for two or more second pixels of the same size w at different intermediate positions between the first pixels, wherein each of the local filters for determining the luminance values comprises a plurality of filter coefficients, wherein for at least one of the local filters more than one filter coefficient is not equal to zero, and the sum of the squares of the filter coefficients of each of the local filters is equal to a constant value which, according to the second condition, is the same for all local filters. Preferably, according to the third condition, the constant value corresponds to the square of a noise amplification value, wherein the noise amplification value corresponds to the product of a preset gain of the second pixels relative to the gain of the first pixels and the square root of a preset relative pixel size, wherein the relative pixel size corresponds to the ratio of the size of the second pixels to the size of the first pixels. Additionally or alternatively, the filter coefficients of each of the local filters can also satisfy the fourth condition, namely that the sum of the filter coefficients is equal to a constant value, preferably the preset gain.
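A small sketch of how the first to fourth conditions could be checked for a given local filter (one possible reading of the conditions, with hypothetical helper names; not code from the patent):

```python
import numpy as np

def check_filter_conditions(coeffs: np.ndarray,
                            positions: np.ndarray,
                            intermediate_pos: np.ndarray,
                            gain: float,
                            rel_pixel_size: float,
                            tol: float = 1e-9) -> dict:
    """Check a local filter against the first to fourth conditions.

    `coeffs` are the filter coefficients, `positions` the (x, y) positions of
    the associated first pixels, `gain` the preset gain g and `rel_pixel_size`
    the preset relative pixel size w.
    """
    coeff_sum = coeffs.sum()
    center_of_gravity = (coeffs[:, None] * positions).sum(axis=0) / coeff_sum
    noise_amplification = gain * np.sqrt(rel_pixel_size)   # R = g * sqrt(w)
    return {
        "first_condition_center_of_gravity":
            bool(np.allclose(center_of_gravity, intermediate_pos, atol=tol)),
        "second_third_condition_sum_of_squares":
            abs((coeffs ** 2).sum() - noise_amplification ** 2) < tol,
        "fourth_condition_sum_equals_gain":
            abs(coeff_sum - gain) < tol,
    }

# Example with values chosen only so that all checks pass: two equal
# coefficients whose center of gravity lies halfway between two pixels.
coeffs = np.array([0.5, 0.5])
positions = np.array([[0.0, 0.0], [1.0, 0.0]])
print(check_filter_conditions(coeffs, positions, np.array([0.5, 0.0]),
                              gain=1.0, rel_pixel_size=0.5))
```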
Fig. 9 schematically and exemplarily shows how the luminance value is determined by performing vertical and horizontal interpolation in sequence. Here, starting from the vicinity 80, the columns a, b, c and d are considered separately in a first step 100. Each column alternately contains green values and values of a further color X, which is the same within each column and is either red or blue. The interpolation along these columns can be performed by multiplying each element of the column by an associated filter coefficient and summing the products thus obtained.
In this way, the column-wise interpolation values Ya, Yb, Yc and Yd can be determined in the vertical interpolation steps 101 to 104. These interpolation values are associated with the same color, and from them the luminance value Y of the pixel of size w at the desired intermediate position can be determined in a horizontal interpolation step 105, if appropriate while additionally satisfying the second, third and/or fourth conditions. If the fourth condition is satisfied, it is furthermore advantageous if the sum of the filter coefficients associated with the green pixels equals a constant value (e.g. the preset gain), whereas the sum of the filter coefficients associated with the non-green pixels is zero. This ensures that the luminance obtained is color-independent and that artifacts at color edges can be avoided.
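A minimal sketch of this separable, vertical-then-horizontal interpolation is given below; the coefficient values are placeholders and not the coefficients of the invention:

```python
import numpy as np

def luminance_vertical_first(neigh, v_filters, h_filter):
    """Two-stage interpolation in the spirit of fig. 9: 'neigh' is a 4x4 Bayer
    neighborhood, 'v_filters' holds one 4-tap column filter per column a..d
    (steps 101-104), 'h_filter' is the 4-tap filter applied to the column
    results Ya..Yd (step 105)."""
    # Step 1: vertical interpolation, one weighted sum per column.
    col_values = np.array([neigh[:, c] @ v_filters[c] for c in range(neigh.shape[1])])
    # Step 2: horizontal interpolation of the column results.
    return col_values @ h_filter

# Example with simple (hypothetical) averaging coefficients:
rng = np.random.default_rng(0)
neigh = rng.random((4, 4))
v_filters = [np.full(4, 0.25)] * 4   # placeholder column filters
h_filter = np.full(4, 0.25)          # placeholder row filter
print(luminance_vertical_first(neigh, v_filters, h_filter))
```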
Since the vertical interpolations 101 to 104 are based on input data with two alternating colors G and X, the reconstruction of high frequencies cannot be achieved as well by the vertical interpolation as by the horizontal interpolation 105. According to the invention, it is therefore proposed to use the interpolation order described above (vertical interpolation first, then horizontal) when the input image contains more horizontal frequencies than vertical frequencies. This is the case when the structure located in the portion of the input image covered by the vicinity 80 has a substantially vertical orientation. For illustration, the background of the symbol 106 is therefore hatched vertically.
In the opposite case (that is to say when horizontally extending structures predominate, so that the structure has a higher proportion of vertical frequencies than horizontal frequencies), it is proposed to reverse the order of the directions during the interpolation. This is schematically and exemplarily shown in fig. 10. Here, a horizontal interpolation of the rows a, b, c and d is performed in the horizontal interpolation steps 111 to 114. It is also advantageously proposed for this purpose that the above-mentioned second and third conditions are also fulfilled. Subsequently, a vertical interpolation is carried out and a luminance value Y 116 of a pixel of size w at the desired intermediate position is obtained, which is shown here with horizontal hatching to indicate the substantially horizontal preferential direction of the structure.
Since the interpolations shown in figs. 9 and 10 are performed as a function of the preferential direction of the input image data in the vicinity of the intermediate position, it is expedient to determine an associated direction estimate. This is schematically and exemplarily shown in fig. 11. Starting from the vicinity 120, the difference of the horizontal neighbors is calculated for each possible pixel. This has the advantage that the two neighbors used each have the same color, so that the difference is not susceptible to hue. This is shown for the pixel G12. The pixel has a left neighbor B11 and a right neighbor B13, both of the same color blue. The difference 121 is formed from B11 and B13 and its absolute value 122 is determined. As a result, a horizontal difference dBh12 in blue is obtained for the position of the pixel G12. All other horizontal differences of the matrix 124 are obtained in the same way. For the pixels 123 of the left-edge and right-edge columns, the associated horizontal difference cannot be determined because their left-hand or right-hand neighbors are no longer contained in the vicinity 120. Their respective values can simply be set to zero; they are shown empty in fig. 11.
The matrix 124 is now combined with a filter matrix 127 by means of a Frobenius inner product 128 to obtain a value DH; that is to say, the entries of the matrix 124 are multiplied element by element with the corresponding filter coefficients of the matrix 127, and the products obtained are summed to give a scalar measure DH of the horizontal difference. It is expedient here to select the filter coefficients such that the row sum or the column sum of the filter coefficients associated with a color remains unchanged for all colors when the vicinity 120 is moved horizontally or vertically in the image. So-called zipper artifacts can thereby be avoided.
In a similar manner, the difference 125 of the vertical neighbors R02 and R22 is determined for the pixel G12 and its absolute value 126 is formed. A vertical difference dRv12 in red is thereby obtained. All other vertical differences of the matrix 124 are obtained in the same way. Owing to the lack of corresponding vertical neighbors, no vertical difference can be determined for the upper row and for the lower row; their values can simply be set to zero and are shown empty in fig. 11. The matrix 124 can be combined with a matrix 129 of filter coefficients by means of a Frobenius inner product 130, thereby obtaining a measure DV of the vertical difference. The coefficients of the matrix 129 are expediently selected according to the same criteria as the coefficients of the matrix 127, so that zipper artifacts are likewise avoided.
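The following sketch illustrates one possible way of computing the two measures DH and DV (the filter matrices fh and fv stand for the matrices 127 and 129; their values are not specified here and would have to be chosen according to the criteria stated above):

```python
import numpy as np

def directional_differences(neigh, fh, fv):
    """For every pixel of the neighborhood, form the absolute difference of its
    left/right neighbors and of its upper/lower neighbors (which in a Bayer
    pattern have the same color); edge pixels without both neighbors are set
    to zero. The difference matrices are then contracted with the filter
    matrices via Frobenius inner products to give scalar measures DH and DV."""
    dh = np.zeros_like(neigh, dtype=float)
    dv = np.zeros_like(neigh, dtype=float)
    dh[:, 1:-1] = np.abs(neigh[:, 2:] - neigh[:, :-2])   # horizontal differences, e.g. |B13 - B11|
    dv[1:-1, :] = np.abs(neigh[2:, :] - neigh[:-2, :])   # vertical differences, e.g. |R22 - R02|
    DH = float(np.sum(dh * fh))   # Frobenius inner product with filter matrix 127
    DV = float(np.sum(dv * fv))   # Frobenius inner product with filter matrix 129
    return DH, DV
```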
The two measures, the horizontal difference DH and the vertical difference DV, are compared with each other, for example by a difference formation 131. The sign of the resulting difference encodes the preferential direction of the image data; if the difference is zero, there is no determinable preferential direction.
If the result of the difference formation is used directly to select one of the two luminance values 106 and 116, then in a temporal sequence of input images, for example in a video sequence, the image noise present at an image position can give rise to unstable differences, that is to say differences whose sign changes randomly within the image sequence. As a result, one of the two luminance values 106 and 116 is likewise selected randomly within the image sequence, which, for unfavorable image content, can be perceived as disturbing pixel flicker in the output image.
Therefore, it is proposed here to achieve a continuous cross-fade between these two values. For this purpose, the result of the difference formation 131 is multiplied 132 by a scaling factor DF, and the direction estimate D is then obtained by applying a non-linear clipping function 133. In the example shown, the clipping function clips to the values -1 and 1. This can be achieved, for example, by the function fc(x) = max(-1, min(1, x)). In the case of a clear preferential direction, D thus takes the value -1 or the value 1, depending on whether the preferential direction is vertical or horizontal. If there is no clear preferential direction, a continuous transition between the two mentioned values is obtained.
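A compact sketch of this computation of the direction estimate (the sign convention of the difference DH - DV and the value of the scaling factor DF are assumptions made here for illustration):

```python
import numpy as np

def direction_estimate(DH, DV, DF=1.0):
    """Scale the difference of the two measures (difference formation 131,
    multiplication 132) and clip it to [-1, 1] with fc(x) = max(-1, min(1, x))
    (clipping function 133)."""
    return float(np.clip((DH - DV) * DF, -1.0, 1.0))
```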
Fig. 12 shows schematically and exemplarily how the mixing is performed by means of the direction estimate D. For this purpose, the two luminance values 140 and 141, which correspond to the values 106 and 116 in figs. 9 and 10, are supplied to a selector 142 that is controlled by the value D. The output luminance value 144 with high image quality is thereby obtained.
The mixing can be performed as a weighted averaging of the two Y values 140 and 141 in accordance with the direction estimate D. For the example shown, in which D covers the range of values from -1 to 1, it is expedient to calculate the average of the two Y values and to add to this average half the difference of the two Y values multiplied by D.
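Expressed as a short sketch (which of the two candidates the positive sign of D selects depends on the sign convention chosen for D):

```python
def blend_luminance(y_a, y_b, D):
    """Weighted mixing of the two luminance candidates (values 140 and 141)
    according to the direction estimate D in [-1, 1]: the mean of the two
    values plus half their difference multiplied by D."""
    return 0.5 * (y_a + y_b) + 0.5 * D * (y_a - y_b)
```

For D = 1 this yields y_a, for D = -1 it yields y_b, and for D = 0 the plain average of the two candidates.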
Fig. 13 schematically and exemplarily shows an image processing method according to an extended embodiment of the invention. In comparison with the embodiment shown in fig. 7, the formation of the difference Y-G1 is combined with the application of a non-linear function N for influencing the noise and/or with an image sharpening by multiplication with a sharpness value S. The former allows the image noise to be influenced in a desired manner; the latter allows the sharpness of the image to be adjusted.
The non-linear function N can advantageously be a continuous function which passes through the origin (0, 0) of the two-dimensional coordinate system spanned by the difference Y-G1 before and after the noise influencing, and which is constant or monotonically increasing throughout, wherein the slope at the origin is smaller than the slope at at least one location remote from the origin. Fig. 14(a) shows such a function by way of example. A scaling by means of the parameter Th can be performed here, for example, by applying the function to the product of the input value and the parameter Th.
When such a non-linear function is piecewise linear, it can be implemented in an FPGA with particularly little resource requirement. The non-linear function N described above can therefore be approximated by a piecewise linear function according to fig. 14(b). This function N is linear below the value -Th (for example with a slope of 1), constant at 0 between the values -Th and Th, and linear above the value Th (for example with a slope of 1).
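One common continuous realization of such a piecewise linear function is the soft threshold sketched below (this concrete choice is an assumption; the description above only requires the piecewise linear shape):

```python
import numpy as np

def noise_coring(d, Th):
    """Piecewise linear non-linear function N according to fig. 14(b): zero
    between -Th and Th, linear with slope 1 outside this range, so that
    high-pass values within the noise amplitude are suppressed."""
    return np.sign(d) * np.maximum(np.abs(d) - Th, 0.0)
```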
The use of the non-linear function N is based on the recognition that the high-pass values obtained by the high-pass filtering are affected by noise with a certain noise amplitude. It is therefore advantageous to process values within the noise amplitude (that is to say between the values -Th and Th) by means of a noise-reducing function N. According to the error propagation law of Carl Friedrich Gauss, the noise, which can be understood as the measurement error of a pixel, is passed on in attenuated form when the first derivative of the applied function is small in absolute value at the level of the noise. It is therefore advantageous for the absolute value of the slope of the function N to be small near the origin, that is to say within the noise amplitude. It is particularly advantageous if, near the origin and within the noise amplitude, both the absolute value of the slope of the function N and the absolute value of the function N itself are zero, since the non-linear function N then suppresses the noise within the noise amplitude. It is accepted here that structures whose absolute value is smaller than the noise amplitude are also removed from the image. Since such structures are barely recognizable in any case, this does not noticeably impair the image quality.
Furthermore, in contrast to the embodiment shown in fig. 7, DE 102013000301 provides that two further low-noise color values G3 and G4 of the first color (in this case green) are determined, by means of one local filter each, from only a part of the pixels of the first color in the vicinity of the intermediate position, and that a correction of color aliasing artifacts of the low-noise color value R1 or B1 of the second color (in this case red or blue) is carried out as a function of the two further low-noise color values G3 and G4 of the first color and of the phase of the mosaic filter in the vicinity 80 of the intermediate position. In particular, for the low-noise color value R1 or B1 of the second color, the difference 156 of the two further low-noise color values G3 and G4 of the first color is formed (which is interpolated in the manner described hereinbefore, corresponding to pixels of the larger size wc), multiplied by the direction estimate D, and added 158 or 159 to the low-noise color value R1 or B1 of the second color with a sign that depends on the phase of the mosaic filter in the vicinity 80 of the intermediate position. Disturbing color aliasing artifacts in orange and sky blue are thereby successfully avoided.
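A sketch of this correction step (the mapping of the mosaic phase to the sign, passed in here as phase_sign = +1 or -1, is left open and is an assumption made for illustration):

```python
def antialias_correct(rb1, g3, g4, D, phase_sign):
    """Color anti-aliasing correction as described above: the difference 156 of
    the two additional low-noise green values G3 and G4 is weighted with the
    direction estimate D and added to the low-noise red or blue value R1 or B1
    with a sign that depends on the phase of the mosaic filter."""
    return rb1 + phase_sign * D * (g3 - g4)
```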
Fig. 15 schematically and exemplarily shows the generation of an output image having a pattern corresponding to a mosaic filter, here a Bayer pattern. To this end, a selection 160 from the input values 163, 164 and 165 is made for each output pixel 162 of the output image 161. This selection is made according to the phase Phi, which is generally two-dimensional and, for the lines of the output image, alternately takes values associated with the respective colors of the mosaic filter of the output image, for example red and green or green and blue for a mosaic filter with a Bayer pattern.
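The following sketch shows one way of performing such a phase-controlled selection for a Bayer output pattern; the RGGB layout and the meaning of the phase offset are assumptions made purely for illustration:

```python
import numpy as np

def to_bayer_output(R2, G2, B2, phase=(0, 0)):
    """Build a mosaic output image by selecting, for each output pixel 162, one
    of the three full-resolution color planes according to the two-dimensional
    phase Phi (here simply a row/column offset of an assumed RGGB layout)."""
    h, w = R2.shape
    out = np.empty((h, w), dtype=R2.dtype)
    yy, xx = np.mgrid[0:h, 0:w]
    py, px = (yy + phase[0]) % 2, (xx + phase[1]) % 2
    out[(py == 0) & (px == 0)] = R2[(py == 0) & (px == 0)]   # red positions
    out[(py == 0) & (px == 1)] = G2[(py == 0) & (px == 1)]   # green positions
    out[(py == 1) & (px == 0)] = G2[(py == 1) & (px == 0)]   # green positions
    out[(py == 1) & (px == 1)] = B2[(py == 1) & (px == 1)]   # blue positions
    return out
```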
As already explained above, in order to reduce the computational effort, it is in principle possible to dispense with satisfying the first condition during the interpolation of the color values G1, R1, B1, G3 and G4 of the larger pixels. This results in the situation illustrated schematically and exemplarily in fig. 16. A larger pixel 183 with a center point 184 and a smaller pixel 181 with a center point 182 are calculated on the basis of the vicinity 180, wherein the center points 184 and 182 can differ from one another here.
This difference has the consequence that, strictly speaking, the pixels 183 and 181 no longer match each other exactly, so that disadvantageous artifacts might be expected. In practical tests, however, it turns out that the resulting artifacts are almost completely imperceptible to a human observer. The inventors attribute this to the fact that, through the difference formation 84 (see fig. 13), the luminance errors caused by the position error enter the value 151 and are fed back, with corrective effect, into the output values G2, R2 and B2 by means of the additions 153, 85 and 86, thereby eliminating the errors in the image. In an advantageous embodiment of the invention, it is therefore provided that the calculations are performed accordingly. It is also possible to use a "nearest neighbor" interpolation to generate the color values of the larger pixels. This is achieved, for example, in that the center point 184 is always located exactly in the center of the vicinity 180. By selecting the vicinity 180 accordingly, it is ensured that the distance between the center points 182 and 184 does not exceed the value 1/2 in either the horizontal or the vertical direction.
Fig. 17 schematically and exemplarily shows the application of a fixed filter matrix for determining the larger pixels. Here, a vicinity 180 of size 4 × 4 is used as the starting point, and the filter coefficients are selected such that their sum is always 1 and their center of gravity lies exactly in the center of the vicinity 180. The sum of the squares of the coefficients is always of the same magnitude.
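One possible (hypothetical) example of such a fixed 4 × 4 filter is a separable 1-3-3-1 weighting, normalized to a sum of 1; its center of gravity lies exactly in the center of the neighborhood:

```python
import numpy as np

w1d = np.array([1.0, 3.0, 3.0, 1.0])
filt = np.outer(w1d, w1d)
filt /= filt.sum()                          # sum of the coefficients is 1

coords = np.arange(4)
cog_x = (filt.sum(axis=0) * coords).sum()   # 1.5 = center of the 4x4 grid
cog_y = (filt.sum(axis=1) * coords).sum()   # 1.5
print(filt.sum(), cog_x, cog_y, (filt ** 2).sum())
```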
The image processing method described above is largely a linearly operating interpolation method based on the application of linear filter coefficients. Other filter coefficients of other linearly operating interpolation methods can therefore also be used in addition to the filter coefficients described above. For example, filter coefficients for "nearest neighbor" interpolation, bilinear interpolation, bicubic interpolation, spline interpolation or sinc interpolation can be used. These interpolations can be used to determine the color values R1, B1, G1, G3 and G4 of the larger pixels and, provided they satisfy the first condition, also to determine the luminance values 106 and 116.
Different interpolation methods can also be used for different steps. This proves particularly useful when, for a vicinity of the same size, fewer pixels are available along one direction for determining the color values 82, 83, 154 and 155 than for the interpolation steps 105 and 115. In the example shown, the values G1, R1, B1, G3 and G4 can be determined, for example, using bilinear methods, which each require 2 × 2 input values, while cubic methods, which provide good results on the basis of 1 × 4 or 4 × 1 input values, are used in steps 105 and 115.
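As an example of such a cubic 1 × 4 interpolation, the Catmull-Rom weights shown below could be used (this particular kernel is only one possible choice and is not prescribed by the description above):

```python
import numpy as np

def catmull_rom_weights(t):
    """Cubic (Catmull-Rom) weights for four samples p0..p3 when interpolating at
    the fractional position t in [0, 1] between p1 and p2."""
    return np.array([
        -0.5 * t**3 + t**2 - 0.5 * t,
         1.5 * t**3 - 2.5 * t**2 + 1.0,
        -1.5 * t**3 + 2.0 * t**2 + 0.5 * t,
         0.5 * t**3 - 0.5 * t**2,
    ])

# Interpolating halfway between the two middle samples:
samples = np.array([10.0, 12.0, 16.0, 15.0])
print(samples @ catmull_rom_weights(0.5))
```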
Advantageously, it can be provided that, in addition to the filter coefficients of the local filters for determining the luminance values, the filter coefficients of the local filters for determining the low-noise color values (for example the values G1, R1 and B1 in fig. 7 or the values G1, R1, B1, G3 and G4 in fig. 13) also satisfy one or more conditions corresponding to the first to fourth conditions described above, the relevant pixel size here being wc (see above). Furthermore, in one variant, it can be provided that in fig. 13 the sum of the squares of the filter coefficients applied to the red pixels 82 or blue pixels 83, plus the sum of the squares of the filter coefficients applied to the green pixels 154 and 155, takes a constant value (e.g. the square of the noise amplification value) when the contributions of the operations 156 and 157 are properly included. This can be understood as follows: the values obtained by the operations 158 and 159 are traced back to the filter coefficients that generate them, and the sum of the squares of these coefficients takes a constant value. A change in the magnitude of the red or blue channel depending on the different signs used in the additions 158 and 159 is thereby avoided in the application.
The advantages mentioned, such as acceptable results in measurements according to the EMVA standard 1288, may in some cases not be achieved when other interpolation methods are used. At the same time, other advantages inherent to those methods, such as lower ringing artifacts, can of course be obtained.
With the image processing method described above, it is possible for the first time, in a particular embodiment of the invention, to combine the steps of color reconstruction, sharpening, denoising, color anti-aliasing and interpolation in a fast and resource-saving manner into a single overall operation that can be performed simply, for example in an FPGA, ASIC, DSP or GPU or in a computer program. A large number of existing linear interpolation methods can thereby be transferred to images acquired with mosaic filters, for example with a Bayer pattern. Furthermore, it is possible for the first time in a particular embodiment of the invention to change the pixel size of a color image acquired with a mosaic filter by interpolation in such a way that this change of pixel size stands up to a measurement according to the EMVA standard 1288.
The image processing method can be performed in the digital camera 10, for example in the described calculation unit 32. The calculation unit 32 can be configured as a processor, a microprocessor (MPU) or microcontroller (MCU), a digital signal processor (DSP), a field programmable gate array (FPGA), a graphics processor (GPU) or an application-specific integrated circuit (ASIC). The calculation unit 32 can also be integrated into another module, for example into the image sensor 31 or the interface 33. In this way, it is also possible, for example, to produce an image sensor 31 with a freely selectable pixel size or a corresponding interface module 33 for a digital camera.
The calculation unit 32 can comprise a setting element for setting at least the relative pixel size (and, if appropriate, the gain of the second pixels relative to the gain of the first pixels), wherein the calculation unit 32 is configured to determine the filter coefficients of the local filters for determining the luminance values on the basis of the set relative pixel size (and, if appropriate, the set gain). As already described above, it can also advantageously be provided that the resolution of the output image (the second pixels) can likewise be set via the setting element.
Furthermore, the image processing method can also be executed after the output as an electronic signal 34, for example in a part of a technical installation, on a computer or on a smartphone.
Other variations to the disclosed embodiments can be understood and effected by those skilled in the art in practicing the claimed invention, from a study of the drawings, the disclosure, and the appended claims.
In this document, the words "having" and "comprising" do not exclude other elements or steps, and the indefinite article "a" or "an" does not exclude a plurality.
A single unit or device can perform the functions of several elements described herein. The fact that certain functions and/or elements are described in different embodiments does not mean that a combination of these functions and/or elements cannot be used to advantage.
The reference signs in the claims should not be construed as limiting the claimed subject matter or the scope of protection.
In summary, an image processing device for processing image data of an image sensor having a mosaic filter for at least a first color and a second color on regularly arranged first pixels has been described, wherein the image processing device is configured to determine a color value of the second color for a second pixel at an intermediate position between the first pixels. The determination comprises: determining, for the first color and the second color, a low-noise color value of the respective color by interpolating pixels of that color in a vicinity of the intermediate position by means of a respective local filter; determining a luminance value at the intermediate position by interpolating pixels of at least the first color in a vicinity of the intermediate position by means of a local filter, wherein the filter coefficients of the local filter satisfy a first condition, namely that the center of gravity of the filter coefficients corresponds to the intermediate position; and determining the color value of the second color at the intermediate position on the basis of a sum of the low-noise color value of the second color and a difference formed between the low-noise color value of the first color and the luminance value.

Claims (18)

1. An image processing device for processing image data of an image sensor (31) having a mosaic filter for at least one first color and a second color on regularly arranged first pixels, wherein the image processing device is configured to: determining a color value of the second color for a second pixel at an intermediate position between the first pixels, wherein the intermediate position does not coincide with a position of one of the first pixels, wherein the determining comprises:
-for the first and second colors, determining a low-noise color value of the color by interpolating pixels of the respective color in a vicinity of the intermediate position by means of respective local filters;
-determining a luminance value at said intermediate position by interpolating at least pixels of said first color in a vicinity of said intermediate position by means of a local filter, wherein the filter coefficients of said local filter satisfy a first condition, namely that the center of gravity of the filter coefficients corresponds to the intermediate position; and
-determining a color value of the second color at the intermediate position based on a sum of the low-noise color value of the second color and a difference formed between the low-noise color value of the first color and the luminance value.
2. The image processing device of claim 1, wherein the image processing device is configured to: determine a direction estimate (D) relating to a preferential direction of the image data in a region in the vicinity of the intermediate position, and determine the luminance value in accordance with the direction estimate (D).
3. The image processing device of claim 2, wherein the image processing device is configured to: determining the direction estimate (D) based on a difference of color values of pixels along a horizontal direction and a vertical direction in a vicinity of the intermediate position.
4. The image processing apparatus according to claim 2 or 3, wherein the image processing apparatus is configured to: determine the luminance value, in dependence on the direction estimate (D), as a weighted average of a first luminance value determined by means of a local filter adapted to describe a vertical structure and a second luminance value determined by means of a local filter adapted to describe a horizontal structure.
5. The image processing apparatus according to claim 4, wherein the local filter for determining the luminance value is implemented by two one-dimensional local filters which are applied sequentially along mutually orthogonal directions, wherein the interpolation along a first orthogonal direction is carried out by means of a first one-dimensional filter and the interpolation values thus obtained are interpolated along a second orthogonal direction by means of a second one-dimensional filter.
6. The image processing apparatus according to any one of claims 1 to 3, wherein the image processing apparatus is configured to: combine the formation of the difference value with the application of a non-linear function for influencing the noise and/or with an image sharpening by multiplication with a sharpness value.
7. The image processing apparatus according to claim 2 or 3, wherein the image processing apparatus is configured to: determining two further low-noise color values of the first color by means of one local filter each from only a part of the pixels of the first color in a vicinity of the intermediate position, and performing a correction of color aliasing artifacts of the low-noise color values of the second color in dependence on the two further low-noise color values of the first color and in dependence on the phase of the mosaic filter in the vicinity of the intermediate position.
8. Image processing apparatus according to claim 7, wherein, in order to perform the correction of color aliasing artifacts of the low-noise color value of the second color, a difference of the two further low-noise color values of the first color is formed, multiplied by the direction estimate (D), and added to the low-noise color value of the second color with a sign which depends on the phase of the mosaic filter in the vicinity of the intermediate position.
9. The image processing apparatus according to any one of claims 1 to 3, wherein, in addition, the filter coefficients of the local filter for determining the low-noise color value of the first color and/or the filter coefficients of the local filter for determining the low-noise color value of the second color satisfy the first condition, namely that the center of gravity of the filter coefficients corresponds to the intermediate position.
10. The image processing apparatus according to any one of claims 1 to 3, wherein the image processing apparatus is configured to: determining color values of the second color for two or more second pixels of the same size at different intermediate positions between the first pixels, respectively, wherein each of the local filters for determining luminance values comprises a plurality of filter coefficients, wherein for at least one of the local filters more than one of the filter coefficients is not equal to zero, and:
-the sum of the squares of the filter coefficients of each of the local filters is equal to a constant value, which is the same for all local filters according to a second condition, wherein the noise amplification value corresponds to the product of a preset gain of the second pixel with respect to the gain of the first pixel and the square root of a preset relative pixel size, wherein the relative pixel size corresponds to the ratio of the size of the second pixel to the size of the first pixel, and/or
-the filter coefficients of each of the local filters additionally satisfy a fourth condition: the sum of the filter coefficients equals a constant value.
11. The image processing device of claim 10, wherein the constant value corresponds to a square of a noise amplification value according to a third condition.
12. The image processing device of claim 10, wherein a sum of the filter coefficients is equal to the preset gain.
13. The image processing device of claim 10, wherein the image processing device comprises a setting element for setting at least the relative pixel size, wherein the image processing device is configured to: determining the filter coefficients of the local filter based on the set relative pixel size.
14. An image processing apparatus for processing image data of an image sensor (31) having a mosaic filter for at least one first color and a second color on regularly arranged first pixels of a first pixel size, in order to generate image data comprising second pixels of a second pixel size, wherein the second pixel size is a non-integer multiple of the first pixel size, wherein the image processing apparatus is configured to: calculate a color value for each second pixel, and wherein the processing is such that the quantum efficiency of the second pixels remains substantially unchanged with respect to the quantum efficiency of the first pixels.
15. A digital camera (10) comprising:
-an image sensor (31) having at least one mosaic filter of a first color and a second color on regularly arranged first pixels to generate image data; and
-image processing means according to any of claims 1 to 14 for processing the image data of the image sensor (31).
16. An image processing method for processing image data of an image sensor having a mosaic filter for at least one first color and a second color on regularly arranged first pixels, wherein the image processing method determines a color value of the second color for a second pixel at an intermediate position between the first pixels, wherein the intermediate position does not coincide with a position of one of the first pixels, wherein the determining comprises:
-for the first color and the second color, determining a low-noise color value of the color by interpolating pixels of the respective color in a vicinity of the intermediate position by means of a respective local filter,
-determining a luminance value at said intermediate position by interpolating at least pixels of said first color in a vicinity of said intermediate position by means of said local filter, wherein the filter coefficients of said local filter satisfy a first condition, namely that the center of gravity of the filter coefficients corresponds to the intermediate position, and
-determining a color value of the second color at the intermediate position based on a sum of the low-noise color value of the second color and a difference formed between the low-noise color value of the first color and the luminance value.
17. A computer device comprising a computing unit configured to execute the image processing method according to claim 16.
18. A computer-readable storage medium comprising code which, when executed on a computer device, causes the computer device to perform the image processing method of claim 16.
CN201710575233.8A 2016-07-14 2017-07-14 Determination of color values of pixels at intermediate positions Active CN107623844B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102016112968.2A DE102016112968B4 (en) 2016-07-14 2016-07-14 Determination of color values for pixels at intermediate positions
DE102016112968.2 2016-07-14

Publications (2)

Publication Number Publication Date
CN107623844A CN107623844A (en) 2018-01-23
CN107623844B (en) 2021-07-16

Family

ID=60782825

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201710575233.8A Active CN107623844B (en) 2016-07-14 2017-07-14 Determination of color values of pixels at intermediate positions

Country Status (2)

Country Link
CN (1) CN107623844B (en)
DE (1) DE102016112968B4 (en)

Family Cites Families (18)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US509A (en) 1837-12-07 Waterproof mail-carriage
US7072A (en) 1850-02-05 Improvement in engines for carding and drawing wool
US3971065A (en) 1975-03-05 1976-07-20 Eastman Kodak Company Color imaging array
WO1997028558A2 (en) 1996-01-22 1997-08-07 California Institute Of Technology Active pixel sensor array with electronic shuttering
US6462779B1 (en) 1998-02-23 2002-10-08 Eastman Kodak Company Constant speed, variable resolution two-phase CCD
US6614478B1 (en) 1999-04-30 2003-09-02 Foveon, Inc. Color separation prisms having solid-state imagers mounted thereon and camera employing same
GB2378077A (en) * 2001-07-27 2003-01-29 Hewlett Packard Co Electronic image colour plane reconstruction
US7286721B2 (en) 2003-09-11 2007-10-23 Leadtek Research Inc. Fast edge-oriented image interpolation algorithm
JP4455364B2 (en) 2004-03-09 2010-04-21 キヤノン株式会社 Resolution conversion method and apparatus
US7502505B2 (en) * 2004-03-15 2009-03-10 Microsoft Corporation High-quality gradient-corrected linear interpolation for demosaicing of color images
ITVA20040038A1 (en) * 2004-10-04 2005-01-04 St Microelectronics Srl METHOD OF INTERPOLATION OF THE COLOR OF AN IMAGE ACQUIRED BY A DIGITAL SENSOR THROUGH DIRECTIONAL FILTERING
US7538807B2 (en) 2004-11-23 2009-05-26 Dalsa Corporation Method and apparatus for in a multi-pixel pick-up element reducing a pixel-based resolution and/or effecting anti-aliasing through selectively combining selective primary pixel outputs to combined secondary pixel outputs
JP4892909B2 (en) * 2005-09-22 2012-03-07 ソニー株式会社 Signal processing method, signal processing circuit, and camera system using the same
KR20080089601A (en) * 2006-01-20 2008-10-07 어큐트로직 가부시키가이샤 Optical low pass filter and imaging device using the same
DE102008063970B4 (en) * 2008-12-19 2012-07-12 Lfk-Lenkflugkörpersysteme Gmbh Method for interpolating color values at picture elements of an image sensor
DE102013000301A1 (en) 2013-01-10 2014-07-10 Basler Ag Method and device for producing an improved color image with a color filter sensor
EP2955691B1 (en) * 2014-06-10 2017-08-09 Baumer Optronic GmbH Device for determining of colour fraction of an image pixel of a BAYER matrix
DE102014115742B3 (en) * 2014-10-29 2015-11-26 Jenoptik Optical Systems Gmbh Method for interpolating missing color information of picture elements

Also Published As

Publication number Publication date
CN107623844A (en) 2018-01-23
DE102016112968B4 (en) 2018-06-14
DE102016112968A1 (en) 2018-01-18

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant