CN108881748B - Image processing apparatus and method, digital camera, computer device, and program product - Google Patents


Info

Publication number
CN108881748B
CN108881748B (application number CN201810443537.3A)
Authority
CN
China
Prior art keywords
image
pixel
image processing
quality value
pixel quality
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201810443537.3A
Other languages
Chinese (zh)
Other versions
CN108881748A (en)
Inventor
Jörg Kunze (约尔格·孔策)
Current Assignee
Basler AG
Original Assignee
Basler AG
Priority date
Filing date
Publication date
Application filed by Basler AG filed Critical Basler AG
Publication of CN108881748A publication Critical patent/CN108881748A/en
Application granted granted Critical
Publication of CN108881748B publication Critical patent/CN108881748B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/50 - Control of the SSIS exposure
    • H04N 25/57 - Control of the dynamic range
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/70 - Denoising; Smoothing
    • G - PHYSICS
    • G06 - COMPUTING; CALCULATING OR COUNTING
    • G06T - IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 5/00 - Image enhancement or restoration
    • G06T 5/20 - Image enhancement or restoration using local operators
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 - Circuitry for compensating brightness variation in the scene
    • H04N 23/71 - Circuitry for evaluating the brightness variation
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 23/00 - Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N 23/70 - Circuitry for compensating brightness variation in the scene
    • H04N 23/76 - Circuitry for compensating brightness variation in the scene by influencing the image signals
    • H - ELECTRICITY
    • H04 - ELECTRIC COMMUNICATION TECHNIQUE
    • H04N - PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/00 - Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N 25/60 - Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N 25/67 - Noise processing applied to fixed-pattern noise, e.g. non-uniformity of response
    • H04N 25/671 - Noise processing applied to fixed-pattern noise, for non-uniformity detection or correction
    • H04N 25/672 - Noise processing applied to fixed-pattern noise, for non-uniformity detection or correction between adjacent sensors or output registers for reading a single image

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Studio Devices (AREA)
  • Image Processing (AREA)

Abstract

The invention relates to an image processing apparatus and method, a digital camera, a computer device, and a program product. The image processing device is used for improving the pixel quality value by processing a first image of an image sensor of a digital camera, the image sensor having a given pixel quality value, wherein the image processing device is designed to calculate a second image from the first image, wherein the pixel quality value determined from the second image is greater than the given pixel quality value of the image sensor, and wherein for a first image recorded by the image sensor at a uniform, constant brightness, the correlation coefficient for neighboring pixels of the second image is greater than the correlation coefficient for neighboring pixels of the first image.

Description

Image processing apparatus and method, digital camera, computer device, and program product
Technical Field
The invention relates to an image processing apparatus for improving a pixel quality value by processing a first image of an image sensor of a digital camera, the image sensor having a given pixel quality value, and to a digital camera comprising an image sensor for generating the first image and such an image processing apparatus. The invention also relates to an application of the image processing apparatus, a corresponding image processing method, a design method for designing a digital camera, a computer device and a computer program product.
Background
Digital cameras are commonly used in the industrial field. This is described, for example, in the German patent application DE 10 2013 000 301 A1. Many different camera models with many different image sensors are in use.
Fig. 1 schematically and exemplarily shows a digital camera 10 having an objective lens 12. The image scene 11 is imaged via the objective 12 onto an image sensor having regularly arranged light-sensitive elements, so-called pixels. The image sensor transmits the first image 13 in the form of electronic data to a calculation unit 14, typically located in the camera 10, comprising for example a processor, a digital signal processor (DSP) or a so-called field-programmable gate array (FPGA). In this case, it may be necessary to convert the analog image data into digital image data, for example by means of an analog-to-digital converter (not shown in the figures). The calculation unit 14 can also perform desired mathematical operations on the image data, for example color correction or conversion to other image formats. Thereby, a second image 15 is obtained, which is subsequently output via an interface 16. Alternatively, this calculation can also be performed outside the digital camera 10, for example by a computer.
The image quality parameters of digital cameras are typically determined according to the European Machine Vision Association standard 1288, the so-called EMVA standard 1288 (release 3.0, published November 29, 2010). This applies in particular to cameras for industrial applications. The standard describes a physical model of the camera, how the measurements are performed, how the measurement data are evaluated, and how the results are presented in the form of an EMVA standard 1288 data sheet. By means of the standard, users can compare different camera models of different manufacturers with each other and thus make a suitable purchase decision.
An important value in the EMVA standard 1288 data sheet is the quantum efficiency (QE, English "quantum efficiency"), which is denoted there as η(λ). This value, which depends on the wavelength λ of the incident light, describes the ratio of the average number of photoelectrons μ_e generated in a pixel during the exposure time to the average number of photons μ_p incident on the pixel during the same exposure time, i.e. η(λ) = μ_e / μ_p. A small QE value indicates that, on statistical average, only a few photons are converted into photoelectrons; the image sensor is almost blind. A large QE value indicates that many photons are converted into photoelectrons; the camera is more light-sensitive. This is preferred by most users.
Fig. 2 shows in simplified form a physical model of a pixel of a digital camera based on the EMVA standard 1288. During the exposure time, a number n_p of photons is incident on the pixel. A part of the photons is converted there into electrons e and stored. The number of electrons is then n_e. The conversion takes place with a statistical probability, referred to as the quantum efficiency. After the end of the exposure time, the number n_e of electrons is converted into a pixel value y, which is expressed in digital numbers (DN, English "digital number"). The conversion is performed with the conversion gain K, which is thus a proportionality constant. Other terms described in the EMVA standard 1288, such as the dark noise n_d or the quantization noise σ_q, are omitted here for reasons of simplicity.
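As an illustration only (not part of the patent), the simplified pixel model described above can be sketched in a few lines of Python; the photon count, QE value and gain below are hypothetical example numbers:

```python
import numpy as np

def simulate_pixel(mu_p, qe, gain, n_frames, rng):
    """Simulate the simplified EMVA 1288 pixel model:
    photons (Poisson) -> photoelectrons (binomial thinning with
    probability QE) -> digital value y = K * n_e.
    Dark noise and quantization noise are omitted, as in the text."""
    photons = rng.poisson(mu_p, size=n_frames)   # n_p per exposure
    electrons = rng.binomial(photons, qe)        # n_e, QE as conversion probability
    return gain * electrons                      # pixel values y in DN

rng = np.random.default_rng(0)
y = simulate_pixel(mu_p=1000.0, qe=0.5, gain=0.1, n_frames=100_000, rng=rng)
# Mean digital value is approximately K * QE * mu_p = 0.1 * 0.5 * 1000 = 50 DN.
print(y.mean())
```

The binomial thinning step is exactly the "conversion with statistical probability" the model describes: each photon independently becomes an electron with probability QE.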
A large number of patent documents focus on the problem of how QE can be improved. Examples are US 4,822,748, US 5,005,063, US 5,055,900, US 6,005,619, US 6,259,085, US 6,825,878, US 7,038,232 and US 8,304,759. The improvement of the quantum efficiency is achieved there by measures taken during the development or manufacture of the image sensor. The development of image sensors is very costly and typically requires investments of at least seven figures in euros. Developing or improving manufacturing methods for image sensors is also quite expensive.
Furthermore, these approaches have the disadvantage that they always improve only a specific image sensor type, a specific production technology such as CCD (charge-coupled device) or CMOS (complementary metal oxide semiconductor), or only a specific image sensor family of one manufacturer. If a large number of different image sensors are used for a large number of camera models, these measures have to be carried out for each image sensor, whereby the costs multiply accordingly.
A further means for improving the quantum efficiency exists in the Kodak Application Note "NIR-Enhanced Mode Operation of Kodak Interline CCDs for use with Kodak KAI-1003, KAI-2000, KAI-2093, KAI-4000, KAI-4010 and KAI-4020 Interline CCD image sensors" (first edition, November 11, 2002). It describes how the QE for infrared light can be increased by a special electrical operation of the CCD image sensors mentioned. The disadvantages of this approach are that it improves QE only in a specific wavelength range, namely the infrared, is available only for specific CCD image sensors of a specific manufacturer, namely Kodak, requires electronic changes in the operation and wiring of the camera, and furthermore suffers from "blooming" artifacts, which become severe in the case of overexposure.
The second important value in the EMVA standard 1288 data sheet is the saturation capacity (C_sat, English "saturation capacity"). This value describes the maximum number of electrons n_e that a pixel can hold. Because each detected electron is the result of a random process, the number of electrons is subject to statistical fluctuations. According to the law of large numbers, the relative error decreases with the square root of n_e. A high saturation capacity allows a large number of electrons and thus small relative errors, and is therefore preferred for a large number of applications.
A number of patent documents, such as for example US 6,515,703 and US 7,115,855, focus on the improvement of the saturation capacity. These documents relate to the development and manufacture of image sensors and have the disadvantages mentioned above in that regard.
The third important value in the EMVA standard 1288 data sheet is the maximum signal-to-noise ratio (maxSNR, for "maximum signal-to-noise ratio"). The maxSNR value allows a good prediction of how weak details in the image may be before they disappear in the noise. The statistical properties of the photons p and of the electrons e generated from them allow at best a maximum SNR value that corresponds to the square root of the number of electrons n_e. For high-quality digital cameras, a high maxSNR value is desirable and is therefore an important selection criterion for the user.
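As a brief illustration (not part of the patent; the full-well capacity used is a hypothetical number), the shot-noise-limited maximum SNR follows directly from the square-root relation stated above:

```python
import math

def max_snr(saturation_capacity_e):
    """Shot-noise-limited maximum SNR of a pixel: sqrt(n_e) at full well."""
    return math.sqrt(saturation_capacity_e)

def snr_db(snr):
    """Express an SNR ratio in decibels."""
    return 20.0 * math.log10(snr)

# A hypothetical pixel with a full-well capacity of 10,000 electrons
# reaches at best maxSNR = sqrt(10000) = 100, i.e. 40 dB.
print(max_snr(10_000.0), snr_db(max_snr(10_000.0)))
```

This also makes the link to the saturation capacity explicit: maxSNR is reached when the pixel is filled to C_sat.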
Furthermore, the US patent documents US 5,250,824, US 6,124,606 and US 6,822,213 focus on the improvement of the SNR. These documents in turn relate to the development and manufacture of image sensors and have the disadvantages mentioned above in that regard.
A fourth important value in the EMVA standard 1288 data sheet is the dynamic range (DR, English "dynamic range"). A high dynamic range helps details to be identified both in light and in dark regions of the image. DR therefore also has great practical significance for users and can be an important criterion when selecting a digital camera.
A large number of patent documents focus on the improvement of the dynamic range. The US patent documents US 6,864,920, US 7,446,812, US 7,518,645, US 7,554,588 and US 7,636,115 are mentioned here by way of example. These patent documents in turn relate to measures in the development, manufacture or operation of image sensors. With regard to the development and manufacture of image sensors, the disadvantages mentioned above apply. With regard to the operation of the image sensor, the following disadvantage arises: the camera must be adapted to it with corresponding effort. The adaptation generally works only for a specific image sensor or image sensor family, and for other image sensors or image sensor families it has to be carried out anew with corresponding effort. Furthermore, the altered operation of the image sensor often causes problems resulting from the differences to the usual ways in which images are recorded. For example, double exposure presupposes that no motion occurs in the recorded scene; if motion does occur, very disturbing motion artifacts arise. Furthermore, a higher number of partial recordings per image reduces the maximum achievable frame rate of the camera, which is also very disadvantageous.
In summary, it can be seen that improving the quantum efficiency, saturation capacity, maxSNR or dynamic range according to the prior art is generally laborious and expensive, in particular because the measures are always carried out only for a particular image sensor, image sensor type or image sensor family.
Disclosure of Invention
The present invention is based on the object of providing an image processing device for improving a pixel quality value by processing a first image of an image sensor of a digital camera, the image sensor having a given pixel quality value, which image processing device enables the pixel quality value, for example quantum efficiency, saturation capacity, maximum signal-to-noise ratio or dynamic range, to be improved in a simple, cost-effective and image sensor-independent manner.
According to a first aspect of the present invention, there is provided an image processing apparatus for improving a pixel quality value by processing a first image of an image sensor of a digital camera, the image sensor having a given pixel quality value, wherein the image processing apparatus is configured to calculate a second image from the first image,
wherein the pixel quality value determined from the second image is greater than the given pixel quality value of the image sensor, an
Wherein for a first image recorded by the image sensor at a uniform, constant brightness, the correlation coefficients of neighboring pixels for the second image are greater than the correlation coefficients of neighboring pixels for the first image.
The invention is based on the inventors' insight that the pixel quality value, e.g. quantum efficiency, saturation capacity, maximum signal-to-noise ratio or dynamic range, of a digital camera does not have to be the same as the pixel quality value of the image sensor of the digital camera. This concept is a pioneering innovation, since the values of, for example, the quantum efficiency of an image sensor and of a digital camera have hitherto been considered to be invariably the same. This also applies to a large extent to the saturation capacity, maximum signal-to-noise ratio and dynamic range, as long as no special image sensors or image recording techniques are used. By calculating the second image from the image of the image sensor (first image) in such a way that the pixel quality value determined from the second image is increased (improved) compared to the given pixel quality value of the image sensor, at the expense of an increased correlation coefficient for neighboring pixels of the second image compared to neighboring pixels of a first image recorded by the image sensor at a uniform, constant brightness, the improvement in pixel quality value can be achieved with inexpensive image sensors that are freely available on the market, without having to change the image sensor in terms of construction or production by means of expensive measures. As the inventors have experimentally demonstrated, the pixel quality values are thereby significantly improved, while the increase in the correlation between neighboring pixels of the second image does not cause a significant deterioration in the visual perceptibility of the second image. The second image calculated from the image of the image sensor (first image) can then be used as the image of the digital camera and, if the image processing device is comprised in the digital camera, output for example via a suitable interface (see below).
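The correlation criterion can be illustrated with a small numeric experiment (a sketch only, not the patented implementation; the 3 × 3 weights below are hypothetical, with a centre-to-neighbours weight ratio of 0.7/0.3 ≈ 2.33):

```python
import numpy as np

def filter3x3(img, k):
    """Apply a 3x3 kernel to an image (borders cropped for simplicity)."""
    out = np.zeros((img.shape[0] - 2, img.shape[1] - 2))
    for i in range(3):
        for j in range(3):
            out += k[i, j] * img[i:i + out.shape[0], j:j + out.shape[1]]
    return out

def neighbor_corr(img):
    """Correlation coefficient of horizontally adjacent pixels."""
    return np.corrcoef(img[:, :-1].ravel(), img[:, 1:].ravel())[0, 1]

rng = np.random.default_rng(1)
# First image: uniform, constant brightness plus uncorrelated noise
# (shot noise approximated as Gaussian for this illustration).
first = 100.0 + rng.normal(0.0, 5.0, size=(200, 200))

# Hypothetical local operator: strong centre weight, small neighbour weights;
# the weights sum to 1, so the mean brightness is preserved.
k = np.full((3, 3), 0.0375)
k[1, 1] = 0.7
second = filter3x3(first, k)

print(neighbor_corr(first), neighbor_corr(second))
```

The neighbour correlation of the first image is close to zero, while that of the second image is clearly positive: exactly the trade-off the claim describes.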
According to an advantageous further development, the image processing device is designed to calculate the values of the pixels of the second image by means of a local image processing operator which is applied to each pixel of the first image and processes the values of the pixels of a predetermined surrounding region of the respective pixel comprising a plurality of pixels. By limiting the processing of the local image processing operator to a predetermined surrounding region of the respective pixel of the first image comprising a plurality of pixels, the computational effort for calculating the values of the pixels of the second image is limited.
If the image sensor is a monochrome image sensor, the values of all pixels of the predetermined surrounding region can preferably be processed. However, this does not apply to color sensors which use so-called mosaic filters, for example the so-called Bayer pattern, which is known, for example, from U.S. Pat. No. 3,971,065 (in particular Fig. 6 there). Here, a regular pattern of color filters for red (R), green (G) and blue (B) lies over the pixels, so that each pixel is sensitive only to light of the respective color. In the Bayer pattern, green occurs twice as frequently as red or blue. In this case it is advantageous to process the values of only those pixels of the predetermined surrounding region whose color (and possibly phase) is equal to the color (and phase) of the respective pixel.
According to a further advantageous development, the predetermined surrounding region has the respective pixel as a first, preferably central pixel, and/or the predetermined surrounding region has the same, in particular an odd, number of rows and columns, in particular a size of 3 × 3 pixels or more. By using a square shape, the image processing operator can be constructed very symmetrically (see below). Furthermore, by selecting an odd number of rows and columns, there is always a middle row and a middle column, so that the predetermined surrounding region is centered on the midpoint of the respective pixel of the first image.
According to a further advantageous development, the local image processing operator is adapted to calculate the value of the pixel of the second image as a weighted sum of the values of the pixels of the predetermined surrounding area.
According to a further advantageous development, the ratio of the weight of the value of the respective pixel to the sum of the weights of the values of the remaining pixels of the predetermined surrounding region lies in the range of 2.2247 to 40.4298, preferably 4.2913 to 40.4298, more preferably 10.3385 to 40.4298, most preferably 20.3562 to 40.4298. With these values, the improvement in, for example, the quantum efficiency lies in the range of 5% to 100%, preferably 5% to 50%, more preferably 5% to 20%, most preferably 5% to 10%. The pixel quality value can thereby be significantly improved without the increase in the correlation between neighboring pixels of the second image causing a significant deterioration in the visual perceptibility of the second image. An improvement of the pixel quality value of less than 5% is not significant, since the determination of, for example, the quantum efficiency according to the EMVA standard 1288 requires the measurement of the irradiation intensity E by means of a calibrated photodiode. (This is necessary because the average number of photons μ_p impinging on the area A of the pixel during the exposure time t_exp is given by the equation μ_p = (A · E · t_exp) / (h · (c/λ)), where h is Planck's constant and c is the speed of light.) The accuracy of the photodiode is typically between 3% and 5%, depending on the wavelength used. According to EMVA 1288, this deviation is then the smallest systematic error in the quantum efficiency.
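The photon-number equation above can be evaluated directly; the pixel area, irradiance, exposure time and wavelength in this sketch are hypothetical example values, not taken from the patent:

```python
import math

H = 6.62607015e-34   # Planck constant in J*s
C = 2.99792458e8     # speed of light in m/s

def mean_photons(area_m2, irradiance_w_m2, t_exp_s, wavelength_m):
    """EMVA 1288 photon number: mu_p = (A * E * t_exp) / (h * c / lambda)."""
    photon_energy_j = H * C / wavelength_m
    return area_m2 * irradiance_w_m2 * t_exp_s / photon_energy_j

# Hypothetical example: 5 um x 5 um pixel, E = 0.1 W/m^2,
# t_exp = 10 ms, green light at 532 nm.
mu_p = mean_photons((5e-6) ** 2, 0.1, 10e-3, 532e-9)
print(mu_p)   # on the order of tens of thousands of photons
```

The 3% to 5% uncertainty of the calibrated photodiode enters this formula through E, which is why it bounds the achievable accuracy of a QE measurement.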
According to a further advantageous development
The pixel quality value is quantum efficiency or saturation capacity or dynamic range in the case of a given wavelength band, wherein the quotient of the square of the sum of the weights divided by the square of the L2 norm of the local image processing operator is equal to the pixel quality improvement factor, or
The pixel quality value is the maximum signal-to-noise ratio, where the quotient of the square of the sum of the weights divided by the L2 norm of the local image processing operator equals the square of the pixel quality improvement factor,
wherein the pixel quality value determined from the second image is substantially greater than the given pixel quality value of the image sensor by a pixel quality improvement factor.
According to a further advantageous development, the sum of the weights is equal to a brightness change factor, wherein the brightness determined from the second image is changed by the brightness change factor compared to the brightness determined from the first image.
According to a further advantageous development, the pixel quality value determined from the second image is substantially greater than the given pixel quality value of the image sensor by a pixel quality improvement factor, wherein the pixel quality improvement factor lies in the pixel quality improvement range of 5% to 100%, preferably 5% to 50%, more preferably 5% to 20%, most preferably 5% to 10%, and/or the brightness determined from the second image is varied by a brightness variation factor relative to the brightness determined from the first image.
According to a further advantageous development, the image processing device comprises an operating means for setting the pixel quality improvement factor and/or the brightness change factor, preferably not more than the pixel quality improvement range.
According to a further advantageous development, the operating means are designed such that a change in the pixel quality improvement factor does not change the brightness determined from the second image and/or a change in the brightness change factor does not change the pixel quality improvement factor determined from the second image.
According to a further advantageous development, the local image processing operator is mirror-symmetrical in the row and column directions and is 90° rotationally symmetrical. This avoids shifts, asymmetries and anisotropies, which could be perceived as disturbing in the second image.
According to another aspect of the present invention, there is provided a digital camera, preferably an industrial camera, wherein the digital camera comprises:
-an image sensor for generating a first image, an
An image processing apparatus according to an embodiment of the invention for processing a first image of an image sensor.
According to an advantageous further development, the digital camera further comprises:
-an interface for the communication of the data,
-wherein the digital camera is adapted to output the second image via the interface.
According to a further aspect of the invention, an application of the image processing device according to the invention for improving a pixel quality value of a digital camera, preferably an industrial camera, having an interface is proposed, wherein a second image is output via the interface.
According to another aspect of the present invention, there is provided an image processing method for improving a pixel quality value by processing a first image of an image sensor of a digital camera, the image sensor having a given pixel quality value, wherein the image processing method is configured to calculate a second image from the first image,
wherein the pixel quality value determined from the second image is greater than the given pixel quality value of the image sensor, an
Wherein for a first image recorded by the image sensor at a uniform, constant brightness, the correlation coefficients for neighboring pixels of the second image are larger than the correlation coefficients for neighboring pixels of the first image.
According to a further aspect of the invention, a design method for designing a digital camera, preferably an industrial camera, for a desired pixel quality value is provided, wherein the digital camera comprises an image sensor for generating a first image, the image sensor having a given pixel quality value, wherein the given pixel quality value of the image sensor is worse than the desired pixel quality value, wherein the design method is configured to calculate a second image from the first image, wherein the pixel quality value determined from the second image is substantially equal to the desired pixel quality value.
According to another aspect of the present invention, a computer device is provided, wherein the computer device comprises a computing unit configured to execute the image processing method.
According to another aspect of the invention, a computer program product is provided, wherein the computer program product comprises code means for causing a computer device to perform the image processing method when the computer program product is executed on the computer device.
It is to be understood that the image processing apparatus, the digital camera, the application of the image processing apparatus, the image processing method, the design method, the computer device and the computer program product have similar and/or identical preferred embodiments, in particular as defined in the examples.
It is to be understood that a preferred embodiment of the invention can also be any combination of features of the respective embodiments.
Drawings
Preferred embodiments of the present invention are described in detail below with reference to the attached drawing figures, wherein
Fig. 1 shows schematically and exemplarily the construction of a digital camera,
Fig. 2 shows in simplified form a physical model of a pixel of a digital camera based on the EMVA standard 1288,
Fig. 3 shows schematically and exemplarily how a first image is converted into a second image by calculation,
Fig. 4 shows schematically and exemplarily a view of a model-based description for elucidating the problem of the limitation of the quantum efficiency,
Fig. 5 shows schematically and exemplarily how the second image is composed of values of virtual pixels whose area slightly exceeds the area of the pixels of the image sensor,
Fig. 6 shows schematically and exemplarily the formation of an overlap region of virtual pixels,
Fig. 7 shows schematically and exemplarily a linear filter with 3 × 3 coefficients,
Fig. 8 shows schematically and exemplarily how, by filtering the digital pixel values y of the first image with a filter, a second image with values z is obtained,
Fig. 9 shows schematically and exemplarily the operating elements BH and BQ, which act on the filter,
Fig. 10 shows schematically and exemplarily how a desired image processing operator is realized by performing a plurality of filtering steps in sequence,
Fig. 11 shows schematically and exemplarily a point filter,
Fig. 12 shows schematically and exemplarily the result of the filtering according to the invention for the quantum efficiency,
Fig. 13 shows schematically and exemplarily the result of the filtering according to the invention for the saturation capacity,
Fig. 14 shows schematically and exemplarily the result of the filtering according to the invention for the maximum signal-to-noise ratio, and
Fig. 15 shows schematically and exemplarily the result of the filtering according to the invention for the dynamic range.
Detailed Description
In the drawings, identical or corresponding elements or units are provided with identical or corresponding reference numerals, respectively. If an element or a unit has been described in connection with a drawing, it may be omitted from the detailed description in connection with other drawings.
As described, according to the invention, a second image with an improved pixel quality value is calculated from a first image of an image sensor of a digital camera, which image sensor has a given pixel quality value, for example quantum efficiency, saturation capacity, maximum signal-to-noise ratio or dynamic range, at the expense of an increase in the correlation coefficient for neighboring pixels of the second image compared to neighboring pixels of the first image recorded by the image sensor with uniform, constant brightness.
How this can advantageously be done is described below using the quantum efficiency as an example, wherein a model-based description of the problem of the limitation of the quantum efficiency is first set out with reference to Fig. 4. The most important reason for the practically occurring limitation of the quantum efficiency of a pixel 20 is considered in this description to be that the effective photosensitive surface 22 of the pixel 20 is smaller than its total surface. In modern image sensors, the photosensitive surface is reduced, for example, by shading from the conductor tracks or electrodes of the electronic components, or by regions in the underlying silicon which do not contribute to the collection of electrons, for example because recombination, diffusion or drift of electrons occurs there.
Ultimately, independently of the particular reason why quantum efficiencies of significantly less than 100% actually occur, this means that the pixel, although modeled here as having a perfect QE, is not photosensitive over its entire surface. For this purpose, the term effective light-sensitive area (ELA, English "effective light sensitive area") is introduced here. The quantum-efficiency deficit is thereby conceptually converted into a deficit of pixel area (German: Pixelfläche).
This leads to the inventive concept of matching the ELA to a desired magnitude in order to correct or improve the quantum efficiency. For this purpose, the second image 41 (see Fig. 3) is advantageously constructed from "virtual" pixels mathematically calculated from the pixels of the first image 40 of the image sensor of the digital camera. The calculation can be carried out in an image processing device, for example the calculation unit 14 of the digital camera 10 (see Fig. 1). However, the calculation can also take place downstream in an external calculation unit.
It is proposed with reference to fig. 5 that the second image 41 consist of the values of virtual pixels 32 whose area A2 slightly exceeds the area A (english "area") of the pixels of the image sensor 30. Here, the side length of a virtual pixel is greater than the pitch of the midpoints of the first pixels. Thereby, the ELA 31 of the virtual pixel is enlarged relative to the ELA 22 of the first pixel 30: since the ELA is defined as a portion of the area A, and the area A2 of the virtual pixel is larger than A, the ELA of the virtual pixel is also larger than the ELA of the first pixel. As a result, for a mean number of photons μ_p incident on the pixel 30, a larger mean number of electrons μ_e is collected in the virtual pixel during the exposure time because of its larger ELA. Since the quantum efficiency is μ_e divided by μ_p, this corresponds to a higher QE. Thus, by using the virtually enlarged pixel area A2, the problem of a too small ELA is addressed causally: more electrons are collected and the quantum efficiency is raised.
For calculating the value of a virtual pixel, the values of the neighboring pixels 35 and 36 are preferably also used. A too small ELA loses photons that consequently do not cause the formation of electrons; the additional electrons required can therefore only be found in the surrounding (adjacent) region. In this way, in practice more electrons contribute to the value of the virtual pixel and the quantum efficiency is increased.
Since the area A2 of the virtual pixel 32 is enlarged relative to the area A of the first pixel 30, the side length w_p of the virtual pixel shown in fig. 6 is generally greater than the distance d_p between the center points 53 and 54 of adjacent virtual pixels, which is the same as the distance between the center points of adjacent first pixels. This applies both in the vertical and in the horizontal direction.
Thereby, an overlap region 51 of the virtual pixels is formed. Electrons conceptually associated with the overlap region 51 contribute to the signal of several virtual pixels. This overlap is deliberately accepted.
If an image is recorded by means of the image sensor at uniform, constant brightness, noise is usually present in the first image, the brightness values of which are usually uncorrelated. But because the overlap region 51 now contributes to the values of both virtual pixels 50 and 52, a correlation arises between the values of the two virtual pixels. As a result, in a corresponding measurement the covariance between adjacent pixels no longer vanishes, but has a value significantly different from zero. The correlation coefficient for neighboring pixels of the second image 41 is therefore larger than the correlation coefficient for neighboring pixels of the first image 40.
The overlap 51 in principle has an adverse effect: it can cause a slight drop in the modulation transfer function (MTF) and thus a lower image sharpness in the second image. According to practical experiments, this disadvantage is acceptable, since for a modest increase in QE it is not visible to the naked eye of most observers.
It is proposed that the calculation of the virtual pixels be performed by means of linear filters, i.e. local image processing operators, which are applied to each pixel of the first image and process the values of the pixels of a surrounding area of the first image that is predetermined relative to the respective pixel and comprises a plurality of pixels. Such a filter is shown by way of example in fig. 7 as a 3 × 3 filter with the coefficients c00, c01, c02, c10, c11, c12, c20, c21 and c22. By means of the filter, the value of a pixel of the second image is calculated as a weighted sum of the values of the pixels of the predetermined surrounding area; the weights are exactly the filter coefficients c00, c01, c02, …. Since the pixel model in EMVA 1288 is linear, applying a linear filter to the linear pixels of the image sensor results in linear virtual pixels, which in turn conform again to the linear pixel model of EMVA 1288.
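The weighted-sum calculation just described can be sketched in a few lines; this is a minimal illustration (the function name, the edge-replication padding and the test image are assumptions for the example, not part of the invention):

```python
import numpy as np

def apply_local_operator(image, coeffs):
    """Apply a square, odd-sized linear filter as a weighted sum over each
    pixel's surrounding area; image edges are replicated for the border."""
    k = coeffs.shape[0] // 2
    padded = np.pad(image, k, mode="edge")
    out = np.zeros_like(image, dtype=float)
    for dy in range(coeffs.shape[0]):
        for dx in range(coeffs.shape[1]):
            # Shifted view of the padded image, weighted by one coefficient.
            out += coeffs[dy, dx] * padded[dy:dy + image.shape[0],
                                           dx:dx + image.shape[1]]
    return out

# First example filter from the description below (central weight c11).
F = np.array([[0.00563, 0.00603, 0.00563],
              [0.00603, 0.95331, 0.00603],
              [0.00563, 0.00603, 0.00563]])

first_image = np.full((4, 4), 100.0)            # uniform brightness
second_image = apply_local_operator(first_image, F)
```

On a uniform image, every output value equals the input value times the coefficient sum, which is the brightness behavior discussed further below.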
The approach is based on the insight of how linear filtering acts on the measured values according to the standard EMVA 1288.
In the EMVA standard 1288, in brief, the mean value μ_y and the noise value σ_y are determined from the digital pixel values y. The noise variance σ_y² is determined from the noise value by squaring. According to the so-called photon transfer method, in which a bright and a dark image are compared, the conversion gain K is calculated from the relationship between the difference of the mean values, referred to here as Δμ_y, and the difference of the variances, referred to here as Δσ_y². The conversion gain allows the mean value μ_y of the digital signal to be converted back into the mean number of electrons μ_e: μ_y is divided by K. For monochrome image sensors, the quantum efficiency is determined for a "single" wavelength with a bandwidth no wider than 50 nm. For a color sensor, a quantum efficiency value is determined for each color channel.
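The photon transfer arithmetic just summarized can be illustrated as follows; all numeric values and variable names are invented example inputs for the sketch, not data or reference code from the standard:

```python
# Photon transfer sketch: bright and dark measurements give mean and
# variance of the digital values y (numbers are assumed examples).
mu_y_bright, var_y_bright = 1200.0, 230.0    # [DN], [DN^2]
mu_y_dark,   var_y_dark   =  100.0,  10.0

delta_mu_y  = mu_y_bright - mu_y_dark        # difference of means
delta_var_y = var_y_bright - var_y_dark      # difference of variances

K = delta_var_y / delta_mu_y                 # conversion gain [DN per electron]
mu_e = delta_mu_y / K                        # mean number of electrons

mu_p = 11000.0                               # assumed mean photons per pixel
QE = mu_e / mu_p                             # quantum efficiency
```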
As shown schematically and by way of example in fig. 8, if the digital pixel values y of the first image 40 are filtered by means of the filter F, a second image 41 with values z results. The second image has a further mean value μ_z and a further standard deviation σ_z with variance σ_z². From the difference of the mean values Δμ_z and the difference of the variances Δσ_z², a virtual conversion gain K2 can now be determined, by means of which a virtual mean number of electrons μ_e,2 and a virtual quantum efficiency QE2 can be obtained. For a better understanding it is noted here that the virtual electrons are by no means merely assumed, but are actually present on the enlarged ELA. Thus, by the filtering F, a new value QE2 for the quantum efficiency can be achieved which, if desired, exceeds the first QE value.
The filter according to the invention is preferably a relatively weak filter, by means of which the pixel quality values are noticeably improved while the increase in the correlation between adjacent pixels of the second image does not lead to a significant deterioration in the visual perception of the second image. This is advantageously achieved by making the filter coefficient for the respective pixel (c11 in fig. 7) significantly larger than the filter coefficients for the remaining pixels (c00, c01, c02, c10, c12, c20, c21, c22 in fig. 7). The ratio of c11 to the sum of the remaining filter coefficients indicates how strongly the filter acts. If the ratio has a high value, the filtering is very weak (because c11 dominates); if the ratio is small, the filtering is stronger, because the values of the remaining pixels are then incorporated more strongly into the result. It is particularly proposed that the ratio between the weight of the value of the respective pixel (filter coefficient) and the sum of the weights of the values of the remaining pixels of the predetermined surrounding area (filter coefficients) lie in a range from 2.2247 to 40.4298, preferably from 4.2913 to 40.4298, more preferably from 10.3385 to 40.4298, most preferably from 20.3562 to 40.4298. With these values, the improvement of the quantum efficiency is in the range of 5% to 100%, preferably 5% to 50%, more preferably 5% to 20%, most preferably 5% to 10%.
The two 3 × 3 filters shown below serve as two examples:
0.00563 0.00603 0.00563
0.00603 0.95331 0.00603
0.00563 0.00603 0.00563
and
0.01875 0.02041 0.01875
0.02041 0.84333 0.02041
0.01875 0.02041 0.01875
the first filter had a ratio of 20.4398, and the quantum efficiency was improved by 10.03% using the filter. The second filter has a ratio of 5.3839, and the quantum efficiency is improved 40.27% by using the filter.
It is furthermore proposed that a brightness change factor H be preset, by which the brightness determined from the second image 41 should change relative to the brightness determined from the first image (i.e. the second image should be brightened or darkened by the filtering). It is then appropriate to select the linear filter F such that its coefficient sum has the value H, since the desired brightness change is then achieved when the filter is applied. The coefficient sum is understood here as the sum of the filter coefficients c00, c01, c02, ….
The brightness change factor H is suitably chosen to be, for example, the value 1. In this case, the image processing operator has no influence on the image brightness; with an appropriate choice of the filter coefficients, the effects on QE2 and K2 are then mutually inverse. Thus an improved second quantum efficiency QE2 can be achieved without the value range of the digital values of the second image 41 changing relative to the first image 40. If the value range of the digital values y is well utilized, the value range of the digital values z is then equally well utilized, and saturation problems caused, for example, by the clipping necessary to prevent value overflow are avoided.
Another good choice for H is obtained by making the middle element of the filter, e.g. element c11 in fig. 7, equal to one. Thereby the second conversion gain K2 remains constant relative to the first conversion gain K, and a change of the second quantum efficiency QE2 relative to the first quantum efficiency QE becomes immediately and correctly visible in the image brightness. This choice of H is particularly intuitive, since the improved photosensitivity of the digital camera resulting from the higher quantum efficiency becomes immediately visible in a way that is intuitively predictable for the user.
An operating element BH for setting the brightness change factor H can be provided. The operating element can be implemented, for example, as a rotary or slider control in hardware or software, as a digital register, or as user-programmable instructions in a programming interface (API). The operating element BH is shown on the left side of fig. 9 together with its effect on the value H, which in turn acts on the filter F.
The operating element BH can be given the following effect: the coefficients c00, c01, c02, … of the filter F are jointly scaled in proportion to the desired value. Thus only the image brightness is changed, via the value K2, while the value of QE2 remains independent of the setting of BH.
It is also proposed to preset a factor Q (pixel quality improvement factor) by which the quantum efficiency is to be increased by the linear filtering or, if desired, also decreased.
An operating element BQ for setting the value Q can be provided. Like the operating element BH, it can likewise be implemented, for example, as a rotary or slider control in hardware or software, as a digital register, or as user-programmable instructions in a programming interface (API). The operating element BQ is shown on the right side of fig. 9 together with its effect on the value Q, which in turn acts on the filter F.
It is proposed that the linear filter F be chosen such that the quotient of the square of the coefficient sum divided by the square of the L2 norm of the coefficients has the value Q. The L2 norm is calculated here as the square root of the sum of the squares of the filter coefficients. The L2 norm corresponds to the Euclidean norm in the case of a one-dimensional filter and to the Frobenius norm in the case of a two-dimensional filter.
This is based on the knowledge of how a linear filter F with elements c00, c01, c02, … acts on the mean value μ_z and the noise value σ_z. Assuming that the mean values of the first pixels are all equal to the value μ_y, it follows that the mean value μ_z of a second pixel results from the mean value μ_y of the first pixels by multiplication with the coefficient sum. Assuming further that the noise of the first pixels with values y00, y01, y02, … is equal on statistical average, with the value σ_y in each case, and that there is no correlation between different pixels, Gaussian error propagation yields: the noise of a second pixel results from the average noise σ_y of the first pixels by multiplication with a factor that is exactly the square root of the sum of the squares of the filter elements. The noise is measured here in each case in digital values DN. The noise factor of the linear filter thus corresponds exactly to the L2 norm of the linear filter F.
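This error propagation can be checked with a small Monte Carlo experiment; the sketch below assumes Gaussian, uncorrelated pixel noise, and the image size, random seed and noise parameters are arbitrary choices for the illustration:

```python
import numpy as np

rng = np.random.default_rng(0)

# First example filter from the description.
F = np.array([[0.00563, 0.00603, 0.00563],
              [0.00603, 0.95331, 0.00603],
              [0.00563, 0.00603, 0.00563]])

# Uncorrelated noise image: equal mean and equal noise everywhere.
mu_y, sigma_y = 100.0, 2.0
y = rng.normal(mu_y, sigma_y, size=(1000, 1000))

# Weighted sum over the valid region (no padding, so every output
# pixel is computed from real data only).
z = sum(F[dy, dx] * y[dy:dy + 998, dx:dx + 998]
        for dy in range(3) for dx in range(3))

coeff_sum = F.sum()
l2_norm = np.sqrt(np.sum(F ** 2))

mean_gain = z.mean() / mu_y       # should approximate the coefficient sum
noise_gain = z.std() / sigma_y    # should approximate the L2 norm of F
```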
From this it can furthermore be deduced that, when the values are determined according to the standard EMVA 1288, K2 results from the value K by multiplication with the square of the L2 norm of the filter divided by the coefficient sum of the filter. If the value K2 is now inserted into the formula for μ_e,2, the value QE2 can thereby be obtained as the quotient of n_e,2 divided by n_p. It follows that QE2 results from QE by multiplication with the square of the coefficient sum of the filter divided by the square of the L2 norm of the filter. Since the quantum efficiency is to be increased by exactly the factor Q, it is advantageous to select the filter such that the quotient of the square of the coefficient sum of the filter divided by the square of the L2 norm of the filter yields exactly the value Q, since the desired effect is then exactly achieved.
It is proposed that the calculation of the virtual pixels be performed with filters that are symmetric in the vertical, horizontal and 45° diagonal directions. In this way, disturbing displacements, asymmetries or anisotropies are avoided.
Furthermore, it is proposed that the linear filter use a square shape, with an odd number of rows and the same odd number of columns, each at least 3. The square shape allows the previously required symmetry properties to be met well. And by choosing odd numbers of rows and columns there is always a middle row and a middle column, so that the filter is centered on the midpoint 37 of the pixel 30. This makes it possible to realize a virtual pixel 32 whose midpoint coincides with the midpoint 21 of the pixel 20, so that spatial errors are avoided. Such a filter with three rows and three columns is shown in fig. 7.
A linear filter with the described properties can be realized, for example, when the coefficients shown in fig. 7 satisfy: c00 = c02 = c20 = c22 and c01 = c10 = c12 = c21.
Alternatively to the mentioned filtering by means of the single filter F, the desired local image processing operator can also be realized by executing a plurality of filtering steps in succession. It is then appropriate to select the filters such that the product over the filters of the quotient of the square of the respective coefficient sum divided by the square of the respective L2 norm has the value Q. Thereby the desired change in quantum efficiency is achieved.
An example of this is shown in fig. 10. The filtering can first be carried out with a one-dimensional vertical filter, for example with the elements a0, a1, a2 as shown in fig. 10(a), and subsequently with a one-dimensional horizontal filter, for example with the elements b0, b1, b2 as in fig. 10(b). Since both filters are linear, they can also be applied in the opposite order with the same result. The result is also the same as filtering with the two-dimensional filter obtained as the outer product of the two illustrated one-dimensional filters.
Here, the above-mentioned vertical or horizontal symmetry property is satisfied when the vertical or horizontal filter is chosen symmetrically. For the example shown in fig. 10 this is the case when a0 = a2 or b0 = b2, respectively.
Furthermore, the above-described 45° diagonal symmetry property is satisfied when the coefficients of the vertical and horizontal filters are equal to one another, i.e. when a0 = b0, a1 = b1, a2 = b2 applies to the filter coefficients in fig. 10.
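The equivalence of the two filtering orders and of the outer-product two-dimensional filter can be verified numerically; the coefficient values below are illustrative choices that satisfy the symmetry conditions above:

```python
import numpy as np

# Symmetric 1-D filters with a0 = a2 and a_i = b_i (illustrative values).
a = np.array([0.023, 0.954, 0.023])
b = a.copy()

# The equivalent 2-D filter is the outer product of the 1-D filters.
F2d = np.outer(a, b)

rng = np.random.default_rng(1)
y = rng.random((64, 64))

def filt_v(img, c):
    """Vertical 1-D filtering (valid region only, no padding)."""
    return c[0] * img[:-2, :] + c[1] * img[1:-1, :] + c[2] * img[2:, :]

def filt_h(img, c):
    """Horizontal 1-D filtering (valid region only, no padding)."""
    return c[0] * img[:, :-2] + c[1] * img[:, 1:-1] + c[2] * img[:, 2:]

# Vertical then horizontal, horizontal then vertical, and direct 2-D
# filtering all give the same result, since the operators are linear.
z_vh = filt_h(filt_v(y, a), b)
z_hv = filt_v(filt_h(y, b), a)
z_2d = sum(F2d[i, j] * y[i:i + 62, j:j + 62]
           for i in range(3) for j in range(3))
```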
When a plurality of successively applied filters is used, whose product of the respective quotients of the square of the respective coefficient sum divided by the square of the respective L2 norm is to have the value Q, it can be expedient to select each filter such that the quotient of the square of its coefficient sum divided by the square of its L2 norm has in each case the value of the n-th root of Q, where n is the number of filters applied in succession. This ensures that, over all filters, the product of the respective quotients has the value Q.
One possibility for determining a filter such that the quotient of the square of the coefficient sum divided by the square of the L2 norm yields exactly the value Q is the following calculation:
A low-pass filter TF with a coefficient sum of one and a high-pass filter HF with a coefficient sum of zero are selected such that adding the high-pass and low-pass filters yields a so-called point filter PF, in which only the central element has the value one and all other values are equal to zero. Such a point filter PF is shown by way of example in fig. 11.
In this case, for example, a so-called box filter can be selected as the low-pass filter TF, in which all coefficients have the value one divided by the number of coefficients.
Another good possibility for selecting the low-pass filter TF consists in setting each coefficient equal to the area of the respective intersection 33 or 34 of the area A of the first pixel 35 or 36 associated with that coefficient with the virtual pixel 32, divided by the area of the virtual pixel 32. With this selection, the low-pass filter corresponds exactly to the area fractions of the virtual pixel.
A good possibility for selecting the high-pass filter HF is to form the difference of the point filter PF minus the low-pass filter TF. With this selection, the requirement that TF plus HF yield the point filter PF is automatically fulfilled.
Now a linear combination factor α is determined such that the filter calculated as TF plus α times HF exactly satisfies the following property: the quotient of the square of the coefficient sum of the filter F divided by the square of the L2 norm of the filter F yields exactly the value Q.
Since, as described above, the filter TF has a coefficient sum of one and the filter HF a coefficient sum of zero, the filter F calculated as TF plus α times HF automatically always has a coefficient sum of 1. The selection of α is thereby simplified: the square of the L2 norm of F now only has to have the value 1/Q, since with a coefficient sum of one the quotient defined above then yields exactly Q.
The value α can be determined from a quadratic equation, which is obtained by writing out the formula for the squared L2 norm of the filter TF plus α times HF, setting it equal to the value 1/Q, and solving for α. Two solutions are usually obtained for α, of which the physically meaningful one is that with the higher value for the central coefficient.
If the value of the desired brightness change factor H is not equal to 1, the filter F obtained so far is multiplied by the value H, by multiplying all coefficients by H. This yields a new filter that changes the brightness by the value H and the quantum efficiency by the value Q (pixel quality value improvement factor).
The described method for determining a filter for which the quotient of the square of the coefficient sum divided by the square of the L2 norm yields exactly the value Q can of course also be transferred to determining filters for which this quotient yields another desired value, for example the n-th root of Q, by replacing the value Q with the correspondingly desired value.
Fig. 12 shows the results of applying the filtering according to the invention. A camera was selected and analyzed several times according to EMVA 1288. For the light wavelength used here, the camera has a QE of 53.59%. In each analysis, filtering with a filter F according to the invention was performed for a different value of Q, with Q varied equidistantly from 1.0 to 2.0 (corresponding to 0% to 100%) in thirty-two steps. The brightness was not changed by the filtering, i.e. the brightness change factor H had the value one in each case. It can be seen that the measured quantum efficiency scales with Q as expected. From this it can be concluded that the invention functions as intended.
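The scaling of the measured quantum efficiency with Q can also be reproduced in simulation. The following is a simplified sketch (unit conversion gain, no dark noise, pure Poisson statistics, assumed photon count), not the actual experiment of fig. 12:

```python
import numpy as np

rng = np.random.default_rng(42)

# Simulated EMVA-1288-style bright measurement (illustrative numbers).
QE_true = 0.5359            # QE of the camera in the experiment above
mu_p = 10000.0              # assumed mean photons per pixel
mu_e = QE_true * mu_p

# Bright first image: Poisson-distributed electron counts, gain K = 1
# (a dark image would be identically zero in this idealized model).
y = rng.poisson(mu_e, size=(800, 800)).astype(float)

# Example filter from the description, Q ~ 1.10 (+10% quantum efficiency).
F = np.array([[0.00563, 0.00603, 0.00563],
              [0.00603, 0.95331, 0.00603],
              [0.00563, 0.00603, 0.00563]])
Q = F.sum() ** 2 / np.sum(F ** 2)

z = sum(F[i, j] * y[i:i + 798, j:j + 798]
        for i in range(3) for j in range(3))

# Photon transfer on each image: K = variance / mean, mu_e = mean / K.
K1 = y.var() / y.mean()
QE1 = (y.mean() / K1) / mu_p
K2 = z.var() / z.mean()
QE2 = (z.mean() / K2) / mu_p

# QE2 / QE1 reproduces the factor Q, as the measurements in fig. 12 show.
```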
With the filtering according to the invention, the quantum efficiency can be increased for all cameras whose image sensors conform in their basic function to the model used in the EMVA 1288 standard. This covers the vast majority of cameras and image sensors. Most digital cameras today have a computing unit 14. Since the computational complexity of the filtering according to the invention is low, it can be integrated without problems and without further components into an already existing computing unit. Since the filtering according to the invention is independent of the exact structure of the image sensor, it can be used in the same way throughout a plurality of different camera models with a plurality of different image sensors. The development cost is therefore small.
Alternatively to integration into a camera, the filtering according to the invention can also be carried out in an external computing unit. Thereby, the invention can also be used for cameras which do not have their own computing unit. In this case, it can be provided, for example, that the filtering takes place in a so-called driver which receives the image data of the camera and supplies it to other applications on an external computing unit.
The filtering according to the invention can be performed in real time. It is of course also possible to carry out the filtering at a significantly later point in time than the recording of the first image. The invention can thus also be applied to stored images.
The invention can be applied to a single image or to a sequence of images, i.e. to a so-called video data stream. In this case, each first image is extracted individually from the video data stream and a second image is calculated from it; the second images in turn form a video data stream.
The present invention can be used for monochrome images or color images. In the case of color images, it can be provided that for the calculation of a virtual pixel of a particular color only first pixels of the same color are used.
The same procedure as for improving the quantum efficiency can also be used to increase the saturation capacity. This is shown in fig. 13 for exactly the same experiment whose QE values are shown in fig. 12. Since the area of the virtual pixel is enlarged in proportion to Q, more electrons can also be stored in the virtual pixel than in a first pixel. By applying the same filter F, the saturation capacity C_sat therefore also rises by the factor Q.
Of course, in the case of an increased saturation capacity it should be noted that at increased brightness, i.e. for values of H greater than 1, digital saturation effects may occur which counteract the increase in saturation capacity. This problem can be avoided by providing a sufficiently large value range for the digital data, or alternatively by choosing H small enough. If the same value range is used for the second image as for the first image, the problem can be avoided, for example, by choosing H equal to one.
Accordingly, similarly to the operating element BQ, an operating element BS can also be provided, by means of which a factor S can be set in order to increase the saturation capacity. From the value S, the filter F can be determined in exactly the same way as from the value Q; Q is simply replaced by S in the calculation.
Fig. 14 also shows the results of the same experiment as in figs. 12 and 13, this time depicting the maximum SNR value. It is clear that the filtering according to the invention is suitable for raising the maximum SNR value to a desired extent. The formula for exactly how the maxSNR value changes can be derived from the standard EMVA 1288.
Accordingly, similarly to the operating element BQ, an operating element BM can also be provided, by means of which a factor M can be set in order to increase the maximum SNR value. From the value M, the filter F can be determined in exactly the same way as from the value Q. The relevant relationship between the maximum SNR and QE can be derived from the standard EMVA 1288: to a good approximation, the maximum SNR is the square root of the saturation capacity measured in electrons. It is therefore a good choice to set the value Q equal to the square of M for the calculation of F.
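The choice Q = M² can be illustrated with the stated approximation maxSNR ≈ square root of the saturation capacity in electrons; the numbers below are arbitrary examples:

```python
import math

# Raising maxSNR by a factor M requires raising the saturation capacity
# (and hence the factor Q) by M squared, since maxSNR ~ sqrt(capacity).
M = 1.05                    # desired +5% in maximum SNR (illustrative)
Q = M ** 2                  # factor used to determine the filter F

sat_e = 10000.0             # assumed saturation capacity in electrons
max_snr_before = math.sqrt(sat_e)
max_snr_after = math.sqrt(sat_e * Q)
```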
It must also be noted that in the case of an increase in the maximum SNR, digital saturation effects may occur at increased brightness, i.e. for values of H greater than 1, which counteract the increase in the maximum SNR. This problem can be avoided by providing a sufficiently large value range for the digital data, or alternatively by choosing H small enough. If the same value range is used for the second image as for the first image, the problem can be avoided, for example, by choosing H equal to one.
Finally, fig. 15 shows for the same experiment how the dynamic range DR changes when Q is varied. It can be seen that the dynamic range can also be increased by the filtering according to the invention. The relationship between DR and Q is mathematically less simple, since quantization effects additionally play a role. (In principle the relationship is linear, as with saturation capacity and quantum efficiency, but the additional influence of quantization effects is superimposed: in the left-hand region of fig. 15 the influence of quantization effects is visible, while in the right-hand region of fig. 15 approximately linear behavior occurs.)
Similarly to the operating element BQ, an operating element BD can also be provided, by means of which a value D can be set that has the following effect: an increase in D causes an increase in the dynamic range, and a decrease in D causes a decrease in the dynamic range. This can be achieved, for example, by setting Q equal to D.
The image processing device according to the invention can preferably be used to improve pixel quality values, such as quantum efficiency, saturation capacity, maximum signal-to-noise ratio or dynamic range, of a digital camera, in particular an industrial camera, which has an interface via which the second image is output.
Furthermore, the described invention can be used in a design method for designing a digital camera, preferably an industrial camera, for a desired pixel quality value, for example quantum efficiency, saturation capacity, maximum signal-to-noise ratio or dynamic range, wherein the digital camera comprises an image sensor for generating a first image, the image sensor having a given pixel quality value that is worse than the desired pixel quality value, and wherein the design method provides for calculating a second image from the first image such that the pixel quality value determined from the second image is substantially equal to the desired pixel quality value.
The term "substantially" is understood within the scope of the above disclosure as follows: deviations, for example due to digital saturation effects, quantization effects, etc., are not excluded. As described, these effects play a smaller role for quantum efficiency than for the saturation capacity and the maximum signal-to-noise ratio, where digital saturation effects can become significant, and for the dynamic range, where quantization effects can additionally occur.

Claims (19)

1. An image processing device for improving a pixel quality value by processing a first image (13) of an image sensor of a digital camera, the image sensor having a given pixel quality value, wherein the image processing device is configured to calculate a second image (15) from the first image (13), wherein the pixel quality value is a quantum efficiency or a saturation capacity,
wherein the pixel quality value determined from the second image (15) is greater than the given pixel quality value of the image sensor, and
wherein for the first image (13) recorded by the image sensor at a uniform, constant brightness, the correlation coefficients for neighboring pixels of the second image (15) are larger than the correlation coefficients for neighboring pixels of the first image (13),
wherein the image processing device is configured to calculate the values of the pixels of the second image (15) by means of local image processing operators which are each applied to a respective pixel of the first image (13) and which process the values of pixels of a surrounding area of the first image (13) which is predetermined with respect to the respective pixel and comprises a plurality of pixels,
wherein the local image processing operator is adapted to calculate the value of a pixel of the second image (15) as a weighted sum of values of pixels of the predetermined surrounding area.
2. The image processing apparatus according to claim 1,
wherein the predetermined surrounding area comprises the respective pixel as a first pixel, and/or wherein the predetermined surrounding area is a surrounding area having the same number of rows and columns.
3. The image processing apparatus according to claim 1,
wherein a ratio between the weight of the value of the respective pixel and the sum of the weights of the values of the remaining pixels of the predetermined surrounding area is in a ratio range of 2.2247 to 40.4298.
4. The image processing apparatus according to claim 1,
wherein the pixel quality value is quantum efficiency or saturation capacity at a given wavelength band, wherein the quotient of the square of the sum of weights divided by the square of the L2 norm of the local image processing operator is equal to a pixel quality improvement factor,
wherein the pixel quality value determined from the second image (15) is greater than the given pixel quality value of the image sensor by the pixel quality improvement factor.
5. The image processing apparatus according to claim 1,
wherein the sum of the weights is equal to a brightness change factor by which the brightness determined from the second image (15) changes relative to the brightness determined from the first image (13).
6. The image processing apparatus according to claim 1,
wherein the pixel quality value determined from the second image (15) is greater than the given pixel quality value of the image sensor by a pixel quality improvement factor, wherein the pixel quality improvement factor is in the range of 5% to 100% pixel quality improvement, and/or the luminance determined from the second image (15) varies by a luminance variation factor compared to the luminance determined from the first image (13).
7. The image processing apparatus according to claim 4,
the image processing apparatus includes an operation mechanism for setting the pixel quality improvement factor.
8. The image processing apparatus according to claim 5,
the image processing apparatus includes an operation mechanism for setting the luminance change factor.
9. The image processing apparatus according to claim 4,
wherein the sum of the weights is equal to a brightness change factor by which the brightness determined from the second image (15) changes relative to the brightness determined from the first image (13).
10. The image processing apparatus according to claim 9,
the image processing apparatus includes an operation mechanism for setting the pixel quality improvement factor and the luminance change factor.
11. The image processing apparatus according to claim 10,
wherein the operating mechanism is configured such that a change in the pixel quality improvement factor does not change the brightness determined from the second image (15), and/or a change in the brightness change factor does not change the pixel quality improvement factor determined from the second image (15).
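The decoupling in claim 11 has a simple algebraic basis: multiplying every weight by a constant scales the brightness change factor (the weight sum) by that constant, but the constant cancels out of the quotient (Σw)²/Σw², so the pixel quality improvement factor is unchanged. A sketch with an assumed, illustrative kernel:

```python
import numpy as np

def brightness_change_factor(weights):
    """Sum of the weights (claim 5)."""
    return weights.sum()

def pixel_quality_improvement_factor(weights):
    """(sum of weights)^2 / squared L2 norm (claim 4)."""
    return weights.sum() ** 2 / np.sum(weights ** 2)

w = np.array([[0.05, 0.10, 0.05],
              [0.10, 0.40, 0.10],
              [0.05, 0.10, 0.05]])

# Uniform rescaling: brightness scales by 1.5, improvement factor
# is unchanged because the scale cancels in the quotient.
w2 = 1.5 * w
```

Conversely, redistributing weight between the center and the neighbors at a fixed weight sum changes the improvement factor without changing the brightness, which is the other half of the claimed independence.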
12. The image processing apparatus according to any one of claims 1 to 6,
wherein the local image processing operators are mirror symmetric in row and column directions and the local image processing operators are 90 ° rotationally symmetric.
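The symmetry conditions of claim 12 are straightforward to verify for a candidate kernel. A sketch, with an assumed center-weighted example kernel:

```python
import numpy as np

def is_symmetric_operator(weights):
    """Check mirror symmetry in row and column directions and
    90-degree rotational symmetry (claim 12)."""
    return (np.allclose(weights, weights[::-1, :]) and   # row mirror
            np.allclose(weights, weights[:, ::-1]) and   # column mirror
            np.allclose(weights, np.rot90(weights)))     # 90-degree rotation

# This center-weighted kernel satisfies all three symmetries.
w = np.array([[0.05, 0.10, 0.05],
              [0.10, 0.40, 0.10],
              [0.05, 0.10, 0.05]])
```

Such symmetry ensures the operator treats all directions alike, so the processing introduces no directional bias into the second image.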
13. A digital camera (10), the digital camera comprising:
-an image sensor for generating a first image (13); and
-image processing means according to any of claims 1 to 12 for processing the first image (13) of the image sensor.
14. The digital camera (10) of claim 13, further comprising:
-an interface (16),
wherein the digital camera (10) is adapted to output the second image (15) via the interface (16).
15. A method of using an image processing apparatus to improve a pixel quality value of a digital camera (10), the image processing apparatus being an image processing apparatus according to any one of claims 1 to 12, the digital camera having an interface (16), wherein the second image (15) is output via the interface (16), and wherein the pixel quality value is a quantum efficiency or a saturation capacity.
16. An image processing method for improving a pixel quality value by processing a first image (13) of an image sensor of a digital camera, the image sensor having a given pixel quality value, wherein the image processing method is configured to calculate a second image (15) from the first image (13), wherein the pixel quality value is a quantum efficiency or a saturation capacity,
wherein the pixel quality value determined from the second image (15) is greater than the given pixel quality value of the image sensor, and
wherein for the first image (13) recorded by the image sensor at a uniform, constant brightness, the correlation coefficients for neighboring pixels of the second image (15) are larger than the correlation coefficients for neighboring pixels of the first image (13),
wherein the image processing method is configured to calculate the values of the pixels of the second image (15) by means of local image processing operators which are each applied to a respective pixel of the first image (13) and which process the values of pixels of a surrounding area of the first image (13) which is predetermined with respect to the respective pixel and comprises a plurality of pixels,
wherein the local image processing operator is adapted to calculate the value of a pixel of the second image (15) as a weighted sum of values of pixels of the predetermined surrounding area.
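The correlation property in claim 16 can be illustrated numerically: for a first image recorded at uniform, constant brightness, the pixel noise is essentially independent, and any weighted-sum operator that mixes neighboring values raises the correlation between adjacent pixels of the second image. A sketch with an assumed two-tap averaging operator (wrap-around borders via `np.roll`, for brevity):

```python
import numpy as np

def neighbor_correlation(image):
    """Correlation coefficient between horizontally adjacent pixels."""
    a = image[:, :-1].ravel()
    b = image[:, 1:].ravel()
    return np.corrcoef(a, b)[0, 1]

rng = np.random.default_rng(0)
# Flat field plus independent noise: neighbors nearly uncorrelated.
first = rng.normal(100.0, 5.0, size=(128, 128))
# A minimal weighted-sum operator: average each pixel with its
# right-hand neighbor. Adjacent outputs share one input pixel,
# so they become positively correlated.
second = 0.5 * (first + np.roll(first, -1, axis=1))
c1 = neighbor_correlation(first)
c2 = neighbor_correlation(second)
```

For this two-tap average the adjacent-pixel correlation of the second image is about 0.5, clearly exceeding the near-zero correlation of the first image, as the claim requires.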
17. An image processing method for designing a digital camera (10) for a desired pixel quality value, wherein the pixel quality value is a quantum efficiency or a saturation capacity, wherein the digital camera (10) comprises an image sensor for generating a first image (13), the image sensor having a given pixel quality value, wherein the given pixel quality value of the image sensor is worse than the desired pixel quality value, wherein the image processing method is configured to calculate a second image (15) from the first image (13), wherein a pixel quality value determined from the second image is equal to the desired pixel quality value,
wherein the image processing method is configured to calculate the values of the pixels of the second image (15) by means of local image processing operators which are each applied to a respective pixel of the first image (13) and which process the values of pixels of a surrounding area of the first image (13) which is predetermined with respect to the respective pixel and comprises a plurality of pixels,
wherein the local image processing operator is adapted to calculate the value of a pixel of the second image (15) as a weighted sum of values of pixels of the predetermined surrounding area.
18. A computer apparatus comprising a computing unit configured to execute the image processing method according to claim 16.
19. A computer program medium comprising code which, when executed on a computer device, causes the computer device to perform the image processing method of claim 16.
CN201810443537.3A 2017-05-10 2018-05-10 Image processing apparatus and method, digital camera, computer device, and program product Active CN108881748B (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102017110129.2A DE102017110129B4 (en) 2017-05-10 2017-05-10 Improvement of a pixel quality value
DE102017110129.2 2017-05-10

Publications (2)

Publication Number Publication Date
CN108881748A CN108881748A (en) 2018-11-23
CN108881748B true CN108881748B (en) 2022-07-01

Family

ID=63962270

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201810443537.3A Active CN108881748B (en) 2017-05-10 2018-05-10 Image processing apparatus and method, digital camera, computer device, and program product

Country Status (2)

Country Link
CN (1) CN108881748B (en)
DE (1) DE102017110129B4 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101437166A (en) * 2007-11-13 2009-05-20 索尼株式会社 Image pickup apparatus, method of correcting captured image data, and program
CN101442608A (en) * 2008-12-31 2009-05-27 中国资源卫星应用中心 Method for improving relative radiation correction of CCD camera
CN101754029A (en) * 2008-11-28 2010-06-23 佳能株式会社 Signal processing apparatus, image sensing apparatus, image sensing system, and signal processing method
US7907209B2 (en) * 2005-05-13 2011-03-15 The Hong Kong University Of Science And Technology Content adaptive de-interlacing algorithm
CN102299160A (en) * 2010-06-23 2011-12-28 英属开曼群岛商恒景科技股份有限公司 Imaging sensing component and manufacturing method thereof
CN104125421A (en) * 2014-07-02 2014-10-29 江苏思特威电子科技有限公司 CMOS image sensor and line noise correction method thereof

Family Cites Families (23)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US3971065A (en) 1975-03-05 1976-07-20 Eastman Kodak Company Color imaging array
US4822748A (en) 1984-08-20 1989-04-18 California Institute Of Technology Photosensor with enhanced quantum efficiency
US5005063A (en) 1986-03-03 1991-04-02 California Institute Of Technology CCD imaging sensor with flashed backside metal film
US5055900A (en) 1989-10-11 1991-10-08 The Trustees Of Columbia University In The City Of New York Trench-defined charge-coupled device
US5250824A (en) 1990-08-29 1993-10-05 California Institute Of Technology Ultra low-noise charge coupled device
US5786852A (en) 1994-06-20 1998-07-28 Canon Kabushiki Kaisha Image pick-up apparatus having an image sensing device including a photoelectric conversion part and a vertical transfer part
US6124606A (en) 1995-06-06 2000-09-26 Ois Optical Imaging Systems, Inc. Method of making a large area imager with improved signal-to-noise ratio
US6259085B1 (en) 1996-11-01 2001-07-10 The Regents Of The University Of California Fully depleted back illuminated CCD
US6005619A (en) 1997-10-06 1999-12-21 Photobit Corporation Quantum efficiency improvements in active pixel sensors
US6825878B1 (en) 1998-12-08 2004-11-30 Micron Technology, Inc. Twin P-well CMOS imager
EP1353791A4 (en) 2000-11-27 2006-11-15 Vision Sciences Inc Noise floor reduction in image sensors
US6864920B1 (en) 2001-08-24 2005-03-08 Eastman Kodak Company High voltage reset method for increasing the dynamic range of a CMOS image sensor
US7432985B2 (en) * 2003-03-26 2008-10-07 Canon Kabushiki Kaisha Image processing method
US7115855B2 (en) 2003-09-05 2006-10-03 Micron Technology, Inc. Image sensor having pinned floating diffusion diode
US7038232B2 (en) 2003-09-24 2006-05-02 Taiwan Semiconductor Manufacturing Co., Ltd. Quantum efficiency enhancement for CMOS imaging sensor with borderless contact
US7446812B2 (en) 2004-01-13 2008-11-04 Micron Technology, Inc. Wide dynamic range operations for imaging
US7518645B2 (en) 2005-01-06 2009-04-14 Goodrich Corp. CMOS active pixel sensor with improved dynamic range and method of operation
US7554588B2 (en) 2005-02-01 2009-06-30 TransChip Israel, Ltd. Dual exposure for image sensor
US7636115B2 (en) 2005-08-11 2009-12-22 Aptina Imaging Corporation High dynamic range imaging device using multiple pixel cells
US8304759B2 (en) 2009-06-22 2012-11-06 Banpil Photonics, Inc. Integrated image sensor system on common substrate
JP5270642B2 (en) * 2010-03-24 2013-08-21 富士フイルム株式会社 Photoelectric conversion element and imaging element
DE102013000301A1 (en) 2013-01-10 2014-07-10 Basler Ag Method and device for producing an improved color image with a color filter sensor
KR102171387B1 (en) * 2014-03-06 2020-10-28 삼성전자주식회사 Methods of correcting saturated pixel data and processing image data using the same

Also Published As

Publication number Publication date
DE102017110129A1 (en) 2018-11-15
DE102017110129B4 (en) 2020-07-09
CN108881748A (en) 2018-11-23

Similar Documents

Publication Publication Date Title
US8237831B2 (en) Four-channel color filter array interpolation
EP2436187B1 (en) Four-channel color filter array pattern
JP5527448B2 (en) Image input device
EP3093819B1 (en) Imaging apparatus, imaging system, and signal processing method
US8224082B2 (en) CFA image with synthetic panchromatic image
EP2420051B1 (en) Producing full-color image with reduced motion blur
US8125546B2 (en) Color filter array pattern having four-channels
US8253832B2 (en) Interpolation for four-channel color filter array
US10397465B2 (en) Extended or full-density phase-detection autofocus control
TWI504257B (en) Exposing pixel groups in producing digital images
WO2009120928A2 (en) Generalized assorted pixel camera systems and methods
Catrysse et al. Roadmap for CMOS image sensors: Moore meets Planck and Sommerfeld
US8508618B2 (en) Image pickup apparatus and restoration gain data generation method
JP2015228641A (en) Imaging apparatus, exposure adjustment method and program
JP2006222961A (en) Method for reducing aliasing of electronic image
JP5524133B2 (en) Image processing device
WO2016143134A1 (en) Image processing device, image processing method, and program
CN108881748B (en) Image processing apparatus and method, digital camera, computer device, and program product
US11997384B2 (en) Focus setting determination
US8704925B2 (en) Image sensing apparatus including a single-plate image sensor having five or more brands
US20080055455A1 (en) Imbalance determination and correction in image sensing
JP2012156882A (en) Solid state imaging device
US20180077338A1 (en) Image pickup device and image pickup apparatus
JP2010278950A (en) Imaging device with chromatic aberration correction function, chromatic aberration correction method, program, and integrated circuit
Florin et al. Resolution analysis of an image acquisition system

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant