CN100366052C - Image processing device and method - Google Patents

Image processing device and method

Publication number: CN100366052C
Application number: CNB2003801004014A
Authority: CN (China)
Other versions: CN1692629A (Chinese)
Inventor: 光永知生
Assignee (original and current): Sony Corp
Applicant: Application filed by Sony Corp
Legal status: Expired - Fee Related
Prior art keywords: image, luminance, section, contrast, logl

Abstract

An image processing device and method that can preferably be used when converting a wide dynamic range image, having a wider dynamic range of pixel values than usual, into a narrow dynamic range image having a narrower dynamic range of pixel values, while emphasizing contrast. In step S1, the wide DR luminance image of the current input frame is converted into a narrow DR luminance image according to intermediate information calculated for the wide DR luminance image of the preceding frame, and intermediate information for the wide DR luminance image of the current frame is calculated. In step S2, the held intermediate information of the preceding frame is updated using the newly calculated intermediate information. In step S3, the presence of a subsequent frame is judged; if one is present, control returns to step S1 and the processing is repeated. The present invention can be applied to a digital video camera and the like.
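The per-frame loop of steps S1 to S3 can be sketched as follows. This is a minimal Python illustration, not the patent's implementation: `compute_intermediate` and `convert_frame` are hypothetical placeholders standing in for the intermediate-information calculation and the gray scale conversion, and the one-element "mean" dictionary is an illustrative assumption.

```python
def compute_intermediate(frame):
    # Hypothetical placeholder: the patent's intermediate information is,
    # e.g., a reduced image and tone-curve statistics; here just a mean.
    return {"mean": sum(frame) / len(frame)}

def convert_frame(frame, info):
    # Hypothetical placeholder gray scale conversion driven by held info.
    scale = 1.0 / max(info["mean"], 1e-6)
    return [min(v * scale, 1.0) for v in frame]

def process_sequence(frames):
    info = {"mean": 1.0}  # initial intermediate information
    out = []
    for frame in frames:
        # Step S1: convert the current frame using the PREVIOUS frame's
        # intermediate information, then compute this frame's own.
        out.append(convert_frame(frame, info))
        # Step S2: update the held intermediate information.
        info = compute_intermediate(frame)
        # Step S3: the loop continues while subsequent frames exist.
    return out
```

The point of the one-frame delay is that each frame is processed in a single pass, which is what makes the scheme attractive for hardware with a fixed frame rate.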

Description

Image processing apparatus and method
Technical Field
The present invention relates to an image processing apparatus and method, and more particularly, to an image processing apparatus and method particularly suitable for converting a wide dynamic range image, having a wider dynamic range of pixel values than a conventional image, into a narrow dynamic range image having a narrower dynamic range of pixel values, and for enhancing contrast.
Background
Conventionally, solid-state imaging elements such as CCDs (charge coupled devices) and CMOS (complementary metal oxide semiconductor) sensors have been widely used in imaging instruments such as video cameras and still cameras, and in light measuring devices such as element inspection devices used in FA (factory automation) and electronic endoscopes used in ME (medical electronics).
In recent years, a large number of techniques have been proposed for obtaining an image having a wide dynamic range of pixel values (hereinafter referred to as "wide DR image") as compared with an optical film photograph using these solid-state imaging elements.
On the other hand, display devices for displaying moving images and still images, such as CRTs (cathode ray tubes) and LCDs (liquid crystal displays), projection devices such as projectors, and various printing devices have not widened the dynamic range of pixel values that they can support, and offer only a limited range of supportable luminance gray scales. Thus, even though a wide DR image can be successfully captured, there is at present no device that can display, project, or print the image as it is obtained.
Therefore, there is a demand for a technique (hereinafter referred to as "gray scale compression technique") that: with this technique, the dynamic range of the pixel values of the wide DR image is narrowed, or in other words, the luminance grayscale is compressed, so that an image (hereinafter referred to as "narrow DR image") suitable for the dynamic range of a display device or the like is produced.
Commonly proposed gray scale compression techniques are explained in the following paragraphs. In its simplest form, gray scale compression can be implemented by reallocating the gray scales of the pixel values of the wide DR image so as to fit the gray scales of the narrower dynamic range that can be supported by the display device or the like.
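As a hedged illustration of this simplest form, the following sketch rescales a 16-bit luminance range linearly onto 8 bits; the bit depths are assumptions for illustration, not taken from the patent.

```python
def linear_compress(pixels, in_max=65535, out_max=255):
    # Naive gray scale reallocation: map the wide pixel-value range
    # linearly onto the narrow range supported by the display.
    return [round(p * out_max / in_max) for p in pixels]
```

Note that luminance values 100 and 200, clearly distinct in the wide DR image, map to 0 and 1 here, which is exactly the loss of contrast discussed next.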
However, simply reassigning the gray scales of the pixel values of the wide DR image uniformly to the narrow dynamic range only reduces the luminance variation of the image as a whole, producing a poor-looking image with reduced contrast. Accordingly, gray scale compression techniques capable of suppressing this loss of contrast have been proposed. Three such techniques are explained below.
A technique that can be exemplified as the first gray scale compression technique involves adaptively determining a redistribution rule of gray scales based on a histogram of luminance of an input wide DR image (more specifically, calculating a gray scale conversion curve based on the histogram of the input image). The first gray scale compression technique presupposes that a main subject in an image has a large ratio of an occupied area, and is used to determine a gray scale conversion curve such that as many gray scales as possible are assigned to luminance values around a peak in a histogram, thereby suppressing a contrast reduction of at least the main subject.
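One concrete (and simplified) instance of such a histogram-driven redistribution is histogram equalization, sketched below under the assumption of integer luminances in [0, 255]; the patent does not specify this particular curve-calculation method, so it stands in as an illustration only.

```python
def histogram_tone_curve(pixels, bins=256, out_max=255):
    # Build the luminance histogram, then use its cumulative sum as the
    # gray scale conversion curve: luminance values near histogram peaks
    # receive proportionally more output gray scales.
    hist = [0] * bins
    for p in pixels:
        hist[p] += 1
    cdf, acc = [], 0
    for h in hist:
        acc += h
        cdf.append(acc)
    total = cdf[-1]
    return [round(cdf[p] / total * out_max) for p in pixels]
```

Because the curve depends on the input histogram, the dominant (main-subject) luminances are spread over a wide output range, at the cost of the sparsely populated luminances.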
However, it is difficult to obtain satisfactory results in every situation by efforts based on gray scale assignment alone. For example, when an image contains several main subjects together with a background of uniform luminance occupying a relatively wide area (e.g., a blue sky), enough gray scales are often not allocated to the subjects.
A technique that can be exemplified as the second gray scale compression technique involves emphasizing the high-frequency component of the image before or after gray scale conversion. The second gray scale compression technique estimates the fraction of contrast lost (or considered lost) through the gray scale conversion and compensates for it using, for example, a high-frequency-enhancing filter such as an unsharp mask.
The second gray scale compression technique has the advantage that it does not suffer from dependence on the composition of the image as the first technique does. However, the high-frequency filter causes overshoot at contour portions and noise emphasis at flat portions of the subject, and thus cannot always guarantee a desirable image.
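The overshoot drawback can be demonstrated with a one-dimensional unsharp-mask sketch; a simple box blur stands in for the low-pass stage, and the function names and the `amount` parameter are illustrative, not from the patent.

```python
def box_blur(signal, radius=1):
    # Simple 1-D box filter used as the low-pass for unsharp masking.
    n = len(signal)
    out = []
    for i in range(n):
        lo, hi = max(0, i - radius), min(n, i + radius + 1)
        out.append(sum(signal[lo:hi]) / (hi - lo))
    return out

def unsharp_mask(signal, amount=1.5):
    # Emphasize the high-frequency component (signal minus its blur).
    blur = box_blur(signal)
    return [s + amount * (s - b) for s, b in zip(signal, blur)]
```

Applied to the step edge `[0, 0, 0, 10, 10, 10]`, the output dips below 0 on the dark side and exceeds 10 on the bright side of the edge — the overshoot described above.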
A technique that can be exemplified as a third gray scale compression technique involves splitting the wide DR image into a low frequency component image and a high frequency component image, where only the low frequency component image is subjected to appropriate gray scale conversion processing while leaving the high frequency component image unchanged, and the two are added to produce a composite image.
Since the high-frequency component image is left unchanged in the third gray scale compression technique, the contrast reduction due to gray scale conversion is avoided. However, like the second technique, the third technique still suffers from overshoot at contour portions of the subject and noise aggravation at flat portions; a method of mitigating this problem by using a nonlinear filter (e.g., a median filter) in the separation into low-frequency and high-frequency component images has also been proposed.
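The low/high split can be sketched in one dimension as follows. A box filter stands in for the spatial filter, and the "appropriate gray scale conversion" of the low-frequency component is reduced to a simple gain — both illustrative assumptions, not the patent's choices.

```python
def smooth(signal, radius=2):
    # Box-filter low-pass: the low-frequency component of the signal.
    n = len(signal)
    return [sum(signal[max(0, i - radius):min(n, i + radius + 1)])
            / (min(n, i + radius + 1) - max(0, i - radius))
            for i in range(n)]

def split_compress(signal, low_gain=0.5):
    # Compress only the low-frequency component; the high-frequency
    # component (local detail/contrast) is added back unchanged.
    low = smooth(signal)
    high = [s - l for s, l in zip(signal, low)]
    return [low_gain * l + h for l, h in zip(low, high)]
```

For a flat region the output is simply the compressed base level, while local deviations from the base survive at full amplitude.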
The first to third gray scale compression techniques described above can be classified into techniques that realize gray scale compression through relatively local processing of adjacent pixels (the first and second techniques) and a technique that uses the whole image or a relatively large area of it (the third technique). The former tends to produce an unnatural image in which only the high-frequency component is enhanced, so a fully effective gray scale compression result is not obtained. The latter yields a more natural image, because it can adjust relatively low-frequency components at the same time as emphasizing high-frequency components, and its gray scale compression can be said to be more effective.
However, the latter requires a large amount of memory, mainly for delay lines or frame memory, which makes it unsuitable for hardware construction. For example, the third gray scale compression technique needs a spatial filter to separate luminance into multiple frequency components, and natural, effective gray scale compression is obtained only when the spatial filter is large relative to the image; mounting such a large spatial filter requires incorporating a large number of delay lines into the circuit.
Meanwhile, when it is intended to mount a function for subjecting a wide DR image to gray scale compression on the output section of an imaging apparatus such as a digital video camera or a digital still camera, there is a strong demand to implement the gray scale compression in hardware, for example because high-speed signal processing is necessary to output the image signal at a guaranteed frame rate. Even a digital still camera for shooting still images demands high-speed gray scale compression, because a monitoring image must be output to the viewfinder so that the composition of the image can be determined.
As described above, there is a strong demand for such a gray scale compression technique: it requires only a small memory capacity to be consumed and a light calculation workload, allows easy hardware construction, and ensures a large gray scale compression effect. However, such a gray scale compression technique has not been proposed yet.
There are other problems in common in the first to third gray scale compression techniques described above, as described below.
The first problem involves generating overshoot in luminance at a contour portion of an object simultaneously with emphasizing a high-frequency component.
In order to suppress this overshoot, a two-dimensional nonlinear filter of relatively large size (e.g., 20 × 20 pixels) is necessary. Implementing a filter of this size in software makes the calculation cost extremely high, while implementing it in hardware enlarges the circuit scale because a large number of delay lines is required.
The second problem relates to control of the contrast enhancement amount of the high-frequency component in the high-luminance region and the low-luminance region. The above-described second and third gray scale compression techniques have in common that luminance is divided into a low frequency component and a high frequency component, and gray scale compression is realized by enhancing the high frequency component while keeping the low frequency component relatively suppressed.
However, emphasizing the high-frequency component causes luminance clipping near the maximum and minimum luminances acceptable to a display device or the like, resulting in a loss of image detail and inappropriate gray scale conversion; countermeasures that avoid such luminance clipping are therefore needed.
Another problem is that, even without luminance clipping, excessive contrast enhancement yields an image whose subject contours are unnaturally emphasized.
Disclosure of Invention
The present invention has been conceived in view of the foregoing situation, and has as its object to realize a gray scale compression technique that requires only a small memory capacity and a light calculation workload, allows easy hardware construction, and ensures a large gray scale compression effect.
Another object is to make it possible to appropriately enhance the contrast of an image with a smaller memory capacity, a smaller amount of calculation, and a simple hardware configuration.
The image processing apparatus of the present invention is characterized in that: it includes a reduced image generating means for generating a reduced image from an input image; correction information acquisition means for acquiring correction information of the input image based on the reduced image; and a gray scale converting means for converting a gray scale of the input image; wherein the gray-scale converting means corrects the contrast of the input image using the correction information as processing to be performed before and/or after converting the gray-scale.
The image processing apparatus may further comprise smoothing means for generating a smoothed image by smoothing the luminance L_c of the pixels constituting the input image based on interpolation calculation using pixels constituting the reduced image, wherein the gray scale conversion means may be configured to generate a contrast-corrected image based on the luminance L_c of the pixels constituting the input image, the luminance L_1 of the pixels constituting the smoothed image, and a predetermined gain value g.
The gray scale conversion means may be configured such that the luminance L_u of the pixels constituting the contrast-corrected image is calculated according to the following equation:
L_u = g·(L_c − L_1) + L_1
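In scalar form, this contrast correction is a one-line computation; the sketch below simply transcribes the equation above, with symbol names matching it.

```python
def contrast_correct(Lc, L1, g):
    # L_u = g*(L_c - L_1) + L_1: the deviation of the pixel luminance L_c
    # from the smoothed luminance L_1 is amplified by the gain g, while
    # the smoothed (low-frequency) base L_1 itself is left unchanged.
    return g * (Lc - L1) + L1
```

With g > 1 local contrast is enhanced; with g = 1 the input luminance is returned unchanged.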
The reduced image generating means may be configured such that the input image is divided into a plurality of blocks, the average luminance of the pixels belonging to each block is calculated, and a reduced image is generated that consists of the same number of pixels as there are blocks, each pixel having the corresponding average value as its luminance.
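A minimal sketch of this block averaging, assuming a row-major list-of-lists image; the square block size is an illustrative parameter, not specified here.

```python
def reduced_image(image, block):
    # Divide the image into block x block tiles and take the mean
    # luminance of each tile as one pixel of the reduced image.
    h, w = len(image), len(image[0])
    out = []
    for by in range(0, h, block):
        row = []
        for bx in range(0, w, block):
            vals = [image[y][x]
                    for y in range(by, min(by + block, h))
                    for x in range(bx, min(bx + block, w))]
            row.append(sum(vals) / len(vals))
        out.append(row)
    return out
```

The reduced image is what makes the scheme memory-friendly: all later smoothing works on this small image instead of requiring a large spatial filter over the full frame.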
The smoothing means may be configured so as to determine the position on the reduced image corresponding to an interpolation position (the position of the pixel to be interpolated), and to use pixels in the vicinity of the determined position to calculate the luminance L_1 of the pixels of the smoothed image.
The smoothing means may also be configured such that the position on the reduced image corresponding to the interpolation position (the position of the pixel to be interpolated) is determined, and the 4 × 4 pixels in the vicinity of the determined position are used to calculate the luminance L_1 of the pixels of the smoothed image based on bicubic interpolation.
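For illustration, bicubic interpolation over a 4 × 4 neighbourhood can be sketched with the Keys cubic convolution kernel; the choice a = −0.5 (Catmull-Rom) is an assumption, since the patent text does not specify kernel coefficients.

```python
def cubic_kernel(t, a=-0.5):
    # Keys cubic convolution kernel; a = -0.5 gives Catmull-Rom weights.
    t = abs(t)
    if t < 1:
        return (a + 2) * t**3 - (a + 3) * t**2 + 1
    if t < 2:
        return a * t**3 - 5 * a * t**2 + 8 * a * t - 4 * a
    return 0.0

def bicubic(patch, fx, fy):
    # Interpolate inside a 4x4 patch at fractional offsets (fx, fy)
    # measured from the patch's second row/column: exactly the 4x4
    # neighbourhood around the interpolation position is used.
    wx = [cubic_kernel(fx - (i - 1)) for i in range(4)]
    wy = [cubic_kernel(fy - (j - 1)) for j in range(4)]
    return sum(wy[j] * sum(wx[i] * patch[j][i] for i in range(4))
               for j in range(4))
```

At zero fractional offset the interpolation reproduces the central sample, and on a constant patch it returns the constant, since the weights sum to 1.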
The image processing apparatus of the present invention may further include: logarithmic conversion means for subjecting the luminance L_c of the pixels constituting the image to logarithmic conversion before input to the smoothing means; and inverse logarithmic conversion means for subjecting the luminance of the pixels constituting the contrast-corrected image to inverse logarithmic conversion.
The image processing apparatus of the present invention may further include: smoothing means for generating a smoothed image by smoothing, based on interpolation calculation using pixels constituting a reduced image, the luminance L_c of the pixels constituting an input image; and gain value setting means for setting a gain value g for correcting the contrast; wherein the gray scale conversion means may be configured to generate a contrast-corrected image based on the luminance L_c of the pixels constituting the input image, the luminance L_1 of the pixels constituting the smoothed image, and the gain value g; and the gain value setting means may be configured to set the gain value g based on an input initial gain value g_0, a reference gain value of 1, and an attenuation value attn(Th_1, Th_2, L_c) calculated using a first luminance threshold Th_1, a second luminance threshold Th_2, and the luminance L_c of the pixels constituting the input image.
The image processing apparatus of the present invention may further include: conversion means for generating a tone-converted image by converting the luminance of pixels constituting the input image based on a conversion function; smoothing means for smoothing the luminance L_c of the pixels constituting the tone-converted image to generate a smoothed image; and gain value setting means for setting an initial gain value g_0 based on the reciprocal 1/γ of a slope γ representing the conversion function, and for setting a gain value g for correcting the contrast; wherein the contrast correction means may be configured to generate a contrast-corrected image based on the luminance L_c of the pixels constituting the tone-converted image, the luminance L_1 of the pixels constituting the smoothed image, and the gain value g; and the gain value setting means may be configured to set the gain value g based on the input initial gain value g_0, a reference gain value of 1, and an attenuation value attn(Th_1, Th_2, L_c) calculated using a first luminance threshold Th_1, a second luminance threshold Th_2, and the luminance L_c of the pixels constituting the tone-converted image.
The gain value setting means may be configured such that the gain value g is set according to:
g = 1 + (g_0 − 1)·attn(Th_1, Th_2, L_c)
The gain value setting means may be configured so that the attenuation value attn(Th_1, Th_2, L_c) is calculated according to:
attn(Th_1, Th_2, L_c) = |(L_c − Th_1)/(Th_2 − Th_1)|  (2Th_1 − Th_2 ≤ L_c ≤ Th_2)
attn(Th_1, Th_2, L_c) = 1  (L_c < 2Th_1 − Th_2, or Th_2 < L_c)
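The attn formulas, together with the gain formula g = 1 + (g_0 − 1)·attn, can be transcribed directly into code; the function and variable names below are illustrative, and the comments describe the behavior of the formulas exactly as stated.

```python
def attn(Th1, Th2, Lc):
    # attn is 0 at Lc == Th1 and grows linearly to 1 at the edges of
    # the band [2*Th1 - Th2, Th2]; outside that band it is fixed at 1.
    if 2 * Th1 - Th2 <= Lc <= Th2:
        return abs((Lc - Th1) / (Th2 - Th1))
    return 1.0

def gain(g0, Th1, Th2, Lc):
    # g = 1 + (g0 - 1) * attn(...): g equals the reference value 1
    # where attn is 0 (at Lc == Th1) and approaches the initial gain
    # g0 where attn reaches 1.
    return 1 + (g0 - 1) * attn(Th1, Th2, Lc)
```

With Th_1 at a moderate gray level and Th_2 at the maximum white level (as defined below), the gain thus varies smoothly with the pixel luminance rather than being applied uniformly.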
The gray scale conversion means may be configured so that the luminance L_u of the pixels constituting the contrast-corrected image is calculated according to:
L_u = g·(L_c − L_1) + L_1
The first luminance threshold Th_1 may be defined as a moderate gray level, and the second luminance threshold Th_2 may be defined as the maximum white level.
The reduced image generating means may be configured so that the reduced image is generated by converting the input image into a tone-converted image based on the conversion function and then reducing the size of the tone-converted image, and the correction information acquiring means may be configured so as to acquire the correction information including the slope of the conversion function, and the gray scale converting means may be configured so that the contrast of the tone-converted image is corrected based on the reduced image and the slope of the conversion function.
The image processing apparatus of the present invention may further comprise holding means for holding a reduced image corresponding to the image of the previous frame and a slope of a conversion function applied to the image of the previous frame.
The reduced image generating means may be configured such that the pixel values of the image of the current frame are converted stepwise using one or more conversion functions, and the gray scale converting means may be configured such that the contrast-corrected image is generated by correcting the contrast of the tone-converted image based on the product of the reduced image held by the holding means and the slopes individually corresponding to the one or more conversion functions.
Of the one or more transfer functions, at least one transfer function may be configured as a monotonically increasing function.
The image processing apparatus of the present invention may further include average value calculation means for calculating an average value of pixel values of the image after the tone conversion, and, among the one or more conversion functions, at least one conversion function may be configured so as to have a slope proportional to a reciprocal of the average value calculated by the average value calculation means.
The average value calculating means may be configured such that the tone-converted image is divided into a plurality of blocks, and the average value is calculated by weighted addition of the averages of the pixel values of the individual blocks.
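Sketched under the assumption that the per-block averages have already been computed (for instance by the block averaging shown earlier) and that the weights are normalized to sum to 1:

```python
def weighted_mean(block_means, weights):
    # Overall average luminance as a weighted sum of per-block means,
    # e.g. weighting central blocks more heavily; the weighting scheme
    # here is an assumption for illustration.
    return sum(m * w for m, w in zip(block_means, weights))
```

Weighting the blocks lets the average track the region that matters for exposure (typically the image center) rather than a plain global mean.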
The reduced image generating means may be configured such that a first reduced image is generated by reducing the size of the tone-converted image, and a second reduced image is generated by multiplying a single pixel value of the first reduced image by a value proportional to the reciprocal of the average value of the pixel values of the first reduced image.
The image processing apparatus of the present invention may further include logarithmic conversion means for subjecting the pixel values of the image in the current frame to logarithmic conversion, and logarithmic reverse conversion means for subjecting the pixel values of the contrast-corrected image to logarithmic reverse conversion.
The image processing apparatus of the present invention may further include: gamma conversion means for subjecting pixel values of the contrast-corrected image to gamma conversion; luminance range information calculation means for calculating luminance range information indicating a distribution range of luminance components of the contrast-corrected image after the gamma conversion by the gamma conversion means; and normalization means for normalizing the distribution of pixel values of the contrast-corrected image after gamma conversion by the gamma conversion means to a predetermined range based on the luminance range information calculated by the luminance range information calculation means.
The luminance range information calculation means may be configured such that upper and lower limit values of the luminance component of the contrast-corrected image after gamma conversion by the gamma conversion means are calculated as the luminance range information, and the normalization means may be configured such that the pixel values of the contrast-corrected image are converted so that the calculated upper and lower limit values coincide, respectively, with the upper and lower limits of the range of luminance components reproducible by the assumed reproducing apparatus.
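This normalization amounts to a linear remapping of the measured luminance range onto the device's reproducible range; a sketch, where the output range defaults are illustrative:

```python
def normalize(pixels, lo, hi, out_lo=0.0, out_hi=1.0):
    # Map the measured luminance range [lo, hi] of the corrected image
    # onto the range [out_lo, out_hi] reproducible by the target device.
    scale = (out_hi - out_lo) / (hi - lo)
    return [out_lo + (p - lo) * scale for p in pixels]
```

In practice `lo` and `hi` would come from the luminance range information calculation means, e.g. low and high percentiles of the luminance distribution rather than strict minima and maxima.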
The holding means may be configured so as to hold the luminance range information of the previous frame calculated by the luminance range information calculation means.
The image may be a monochrome image made up of pixels having a luminance component.
The image may be a color image composed of pixels having a plurality of color components.
The reduced image generating means may be configured such that a first luminance image composed of pixels having a luminance component is generated based on a color image, the first luminance image is converted into a luminance image after tone conversion, and a color tone-converted image composed of pixels having a plurality of color components is generated based on the luminance image after tone conversion.
The reduced image generating means may be configured such that the individual color component of the tone-converted image is calculated by calculating a difference between a value of the individual color component and a value of the luminance component of the color image, then calculating a product of the difference and a slope of the conversion function, and adding the product to the value of the individual color component of the tone-converted luminance image.
The reduced image generating means may be configured such that the individual color component of the tone-converted image is calculated by calculating an average value of the luminance components of the first luminance image, then calculating a coefficient proportional to the reciprocal of the average value, and multiplying the value of the individual color component of the color image by the coefficient.
The gray scale converting means may be configured so that the color contrast-corrected image is generated by generating a second luminance image composed of pixels having luminance components based on the color tone-converted image, and then correcting the contrast of the color tone-converted image generated by the conversion means based on the second luminance image, the reduced image held by the holding means, and the slope of the conversion function.
The image processing apparatus of the present invention may further include gamma conversion means for subjecting pixel values of the color contrast-corrected image to gamma conversion; luminance range information calculation means for generating a third luminance image composed of pixels having luminance components based on the contrast-corrected image of the color subjected to the γ conversion by the γ conversion means, and for calculating luminance range information indicating a distribution range of the luminance components of the third luminance image; and normalizing means for normalizing the distribution of pixel values of the contrast-corrected image of colors after gamma conversion by the gamma converting means to a predetermined range based on the luminance range information calculated by the luminance range information calculating means.
The image processing method of the present invention comprises the steps of: a reduced image generating step of generating a reduced image from the input image; a correction information acquisition step of acquiring correction information of the input image based on the reduced image; and a gray scale conversion step of converting a gray scale of the input image; wherein the gray scale conversion step corrects the contrast of the input image using the correction information, as processing performed before and/or after the gray scale conversion.
According to the image processing apparatus and method of the present invention, a reduced image is generated from an input image, correction information is acquired based on the generated reduced image, and the gray scale of the input image is converted. In the gray scale conversion, the correction information is used to correct the contrast of the input image as processing performed before and/or after the conversion.
Drawings
Fig. 1 is a block diagram showing an exemplary structure of a digital video camera according to an embodiment of the present invention;
fig. 2 is a block diagram showing a first exemplary structure of the DSP shown in fig. 1;
FIG. 3 is a block diagram showing a first exemplary structure of the tone curve correcting section shown in FIG. 2;
FIG. 4 is a graph illustrating an example tone curve;
FIG. 5 is a block diagram showing a second exemplary structure of the tone curve correcting section shown in FIG. 2;
FIG. 6 is a block diagram showing a third exemplary structure of the tone curve correcting section shown in FIG. 2;
fig. 7 is a block diagram showing an exemplary structure of the reduced image generating section shown in fig. 2;
fig. 8 is a block diagram showing an example structure of the average value calculation section shown in fig. 7;
fig. 9 is a block diagram showing an example structure of the contrast correction section shown in fig. 2;
FIG. 10 is a block diagram showing an exemplary structure of the interpolation component shown in FIG. 9;
fig. 11 is a diagram for explaining the process of the interpolation section shown in fig. 9;
fig. 12 is a block diagram showing an example structure of the gain value setting section shown in fig. 9;
fig. 13 is a block diagram showing an example structure of the contrast enhancing member shown in fig. 9;
fig. 14 is a diagram for explaining processing in the luminance range normalization section shown in fig. 2;
fig. 15 is a block diagram showing an example structure of the luminance range information calculating section shown in fig. 2;
fig. 16 is a block diagram showing an example structure of the luminance range normalization section shown in fig. 2;
fig. 17 is a block diagram showing an example structure of a composite member that can replace the portion ranging from the tone curve correction member to the contrast correction member shown in fig. 2;
fig. 18 is a flowchart for explaining a gray scale compression process by the first exemplary structure of the DSP;
fig. 19 is a flowchart for explaining details of the processing in step S1 shown in fig. 18;
fig. 20 is a flowchart for explaining details of the processing in step S2 shown in fig. 18;
fig. 21 is a block diagram showing a second exemplary structure of the DSP shown in fig. 1;
FIG. 22 is a block diagram showing a first example structure of the tone curve correction section shown in FIG. 21;
FIG. 23 is a block diagram showing a second exemplary structure of the tone curve correcting section shown in FIG. 21;
FIG. 24 is a block diagram showing a third exemplary structure of the tone curve correcting section shown in FIG. 21;
fig. 25 is a block diagram showing an exemplary structure of the reduced image generating section shown in fig. 21;
fig. 26 is a block diagram showing an example structure of the contrast correction section shown in fig. 21;
fig. 27 is a block diagram showing an example structure of a composite member that can replace the portion ranging from the tone curve correction member to the contrast correction member shown in fig. 21;
fig. 28 is a block diagram showing an example structure of the luminance range information calculating section shown in fig. 21;
fig. 29 is a flowchart for explaining a gray scale compression process performed by the second exemplary structure of the DSP;
fig. 30 is a flowchart for explaining details of the processing in step S43 shown in fig. 29;
fig. 31 is a flowchart for explaining details of the processing in step S44 shown in fig. 29;
FIG. 32 is a block diagram showing an exemplary configuration of an image processing system to which the present invention is applied;
fig. 33 is a flowchart for explaining the operation of the image processing system shown in fig. 32;
fig. 34 is a block diagram showing a first exemplary structure of the image processing apparatus shown in fig. 32;
FIG. 35 is a block diagram showing an exemplary configuration of the tone curve correcting section shown in FIG. 34;
fig. 36 is a diagram showing an example tone curve used in the first example structure of the image processing apparatus;
fig. 37 is a block diagram showing an exemplary structure of the smoothed luminance generating section shown in fig. 34;
fig. 38 is a block diagram showing an exemplary structure of the reduced image generating section shown in fig. 37;
fig. 39 is a block diagram showing an example structure of the average value calculation section shown in fig. 38;
FIG. 40 is a block diagram showing an exemplary structure of the interpolation section shown in FIG. 37;
fig. 41 is a block diagram showing an example structure of the gain value setting section shown in fig. 34;
fig. 42 is a block diagram showing an example structure of the contrast correction section shown in fig. 34;
fig. 43 is a flowchart for explaining a gray-scale compressed image generation process by the first exemplary configuration of the image processing apparatus;
fig. 44 is a block diagram showing a second exemplary structure of the image processing apparatus shown in fig. 32;
fig. 45 is a flowchart for explaining a gray-scale-compressed-image generating process by the second exemplary configuration of the image processing apparatus; and
fig. 46 is a block diagram showing an example structure of a general-purpose personal computer.
Detailed Description
A digital video camera as one embodiment of the present invention will be explained with reference to the drawings.
FIG. 1 shows an exemplary configuration of a digital video camera as an embodiment of the present invention. The digital video camera 1 captures an image of a subject, generates a wide DR image having a dynamic range of pixel values wider than usual, and stores the image in a predetermined storage medium; it also converts the wide DR image into a narrow DR image having the usual, narrower dynamic range of pixel values and outputs the result to a built-in display, which serves as a viewfinder for composition determination or as an image monitor, or to an external device.
The digital camera 1 is roughly constituted by an optical system, a signal processing system, a recording system, a display system, and a control system.
The optical system is constituted by a lens 2 for converging a light image of a subject, a diaphragm 3 for adjusting the amount of light of the light image, and a CCD image sensor 4 for generating a wide DR image at a predetermined frame rate by photoelectric conversion of the converged light image. It should be noted that the following description covers both the case where the wide DR image generated by the CCD image sensor 4 is a monochrome image composed of a single luminance signal and the case where it is a color image composed of a plurality of (e.g., three) signals.
The signal processing system is composed of the following components: a correlated double sampling Circuit (CDS) 5 for reducing noise by sampling the wide DR image output from the CCD image sensor 4; an a/D converter 6 for performing AD conversion on the wide DR image from which noise has been removed by the correlated double sampling circuit 5, thereby obtaining a value having a bit width of, for example, about 14 to 16 bits; and a DSP (digital signal processor) 7 for carrying out a gray scale compression process on the wide DR image output by the a/D converter 6.
An image signal having a large number of gray scales, such as the wide DR image output from the A/D converter 6 with a bit width of 14 to 16 bits, cannot be completely reproduced by a general video signal composed of the luminance Y and the color difference signals Cr, Cb. The gray scale compression processing by the DSP 7 therefore compresses its gray scale into a range that allows reproduction by such a general video signal. The DSP 7 will be described in detail with reference to FIG. 2 and the subsequent drawings.
The recording system of the digital video camera 1 is constituted by: a CODEC (compression/decompression) 12, which encodes the wide or narrow DR image received from the DSP 7 and records it in the memory 13, and which reads and decodes the coded data stored in the memory 13 and supplies it to the DSP 7; and a memory 13 for storing the encoded wide or narrow DR image, constituted by a magnetic disk, an optical disk, a magneto-optical disk, a semiconductor memory, or the like.
The display system is composed of the following components: a D/A converter 9 for performing DA conversion on the narrow DR image supplied from the DSP 7; a video encoder 10 for converting the analog narrow DR image output from the D/A converter 9 into a general video signal including the luminance Y and the color difference signals Cr, Cb and outputting it to the display 11; and a display 11, generally constituted by an LCD (liquid crystal display) or the like, which serves as a viewfinder or a video monitor by displaying the image corresponding to the video signal.
The control system is constituted by a Timing Generator (TG) 8 for controlling the operation timing of elements from the CCD image sensor 4 to the DSP7, an input device 15 for accepting various operations by a user, and a CPU (central processing unit) 14 for controlling the entirety of the digital camera 1.
Next, an outline of the operation of the digital video camera will be explained. An optical image (incident light) of the subject reaches the CCD image sensor 4 through the lens 2 and the diaphragm 3, is subjected to photoelectric conversion by the CCD image sensor 4, and the obtained electric signals representing the pixels of the wide DR image are noise-removed by the correlated double sampling circuit 5, digitized by the a/D converter 6, and supplied to the DSP7.
The DSP 7 applies the gray scale compression processing to the wide DR image received from the A/D converter 6, thereby generating a narrow DR image, and outputs it to the D/A converter 9, the CODEC 12, or both. The narrow DR image supplied to the D/A converter 9 is subjected to DA conversion, then converted into a normal video signal by the video encoder 10, and the resulting image is displayed on the display 11. On the other hand, the narrow DR image supplied to the CODEC 12 is encoded and recorded in the memory 13.
This concludes the overview of the overall operation of the digital video camera 1.
Next, the DSP7, which is the key of the present invention, will be described.
FIG. 2 shows a first exemplary structure of the DSP 7 suitable for a wide DR image which is a monochrome image. The monochrome wide DR image input to the DSP 7 is hereinafter referred to as a wide DR luminance image L. The pixel value (i.e., luminance value) of the wide DR luminance image is represented as L(p). Here, p is a vector or coordinate representing the pixel position on the image, e.g., p = (x, y). The notation L(p), which carries information on both the pixel position and the luminance value, is thus used separately from L, which denotes the wide DR luminance image as a whole. The same convention applies to the other images and their pixel values described later.
The DSP 7 is designed so that the luminances L(p) of the wide DR luminance image L are input to it in raster order.
In the first example structure of the DSP 7, the logarithmic conversion section 21 subjects the input luminance L(p) to logarithmic conversion, and outputs the obtained logarithmic luminance logL(p) to the tone curve correction section 22. The tone curve correction section 22 applies a tone curve obtained in advance to the input logarithmic luminance logL(p), converting it in the direction of compressing the gray scale, and outputs the obtained logarithmic luminance logLc(p) to the reduced image generation section 23 and the contrast correction section 25. The tone curve correction section 22 also outputs a representative value γ representing the slope of the applied tone curve to the contrast correction section 25. The representative value γ representing the slope of the applied tone curve is hereinafter simply referred to as the representative value γ.
The reduced image generation section 23 generates a reduced image logLc1 from the logarithmic luminances logLc(p) of a single frame received from the tone curve correction section 22, and causes the reduced image memory 24 to store it.
The contrast correction section 25 corrects the logarithmic luminance logLc(p) of the current frame received from the tone curve correction section 22, whose contrast has been reduced by the tone curve correction, based on the representative value γ and the reduced image logLc1 of the previous frame held in the reduced image memory 24, and outputs the obtained logarithmic luminance logLu(p) to the logarithmic inverse conversion section 26. The logarithmic inverse conversion section 26 subjects the contrast-corrected logarithmic luminance logLu(p) to inverse logarithmic conversion, and outputs the obtained luminance Lu(p), expressed on the ordinary axis, to the γ correction section 27.
The γ correction section 27 subjects the luminance Lu(p) received from the logarithmic inverse conversion section 26 to γ correction in consideration of the γ characteristic of the reproduction apparatus (e.g., the display 11), and outputs the γ-corrected luminance Y(p) to the luminance information calculation section 28 and the luminance range normalization section 30. The luminance information calculation section 28 calculates luminance range information indicating the luminance distribution from the luminances Y(p) of a single frame received from the γ correction section 27, and causes the luminance range information memory 29 to hold it. Here, the luminance range information refers to information indicating the distribution range of luminance within one frame; the luminance Yd closest to black and the luminance Yb closest to white are calculated as the luminance range information [Yd, Yb].
The luminance range normalization section 30 converts the luminance Y(p) of the current frame received from the γ correction section 27, based on the luminance range information [Yd, Yb] of the previous frame held by the luminance range information memory 29, so that its distribution range coincides with the range representable by the reproduction apparatus (e.g., the display 11), and outputs the obtained luminance Yn(p) to the subsequent stage as pixel values of the narrow DR image.
As described above, in the course of the gray scale compression processing by the first exemplary configuration of the DSP 7, the reduced image logLc1 is generated by the reduced image generation section 23, and the luminance range information [Yd, Yb] is calculated by the luminance range information calculation section 28. The reduced image logLc1 and the luminance range information [Yd, Yb] are hereinafter referred to as intermediate information.
With the DSP 7, intermediate information is calculated for each frame of the input wide DR image, and the calculated intermediate information is used to process the wide DR image of the following frame.
Although effective gray scale compression generally requires information computed from the entire image, or from a wide range of its luminance values, computing that information first would introduce a time delay, which poses an implementation problem. The DSP 7 therefore selects, as intermediate information, quantities that are unlikely to vary much over time, and uses the intermediate information of the previous frame for the gray scale compression of the current frame. This structure makes it possible to avoid increased memory consumption and circuit scale in an actual implementation.
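The frame-delayed use of intermediate information can be sketched as follows (an illustrative loop only; `compute_intermediate` and `compress` are hypothetical stand-ins for the corresponding sections of the DSP 7):

```python
def process_stream(frames, compute_intermediate, compress):
    # Gray scale compression loop: each frame is converted using the
    # intermediate information held from the previous frame, after
    # which the held information is updated from the current frame.
    held = None
    outputs = []
    for frame in frames:
        if held is None:
            # First frame: no previous-frame statistics exist yet, so
            # compute them from the current frame itself.
            held = compute_intermediate(frame)
        outputs.append(compress(frame, held))
        held = compute_intermediate(frame)  # update for the next frame
    return outputs
```

Because the held statistics change slowly from frame to frame, applying the previous frame's values to the current frame introduces no visible error while removing the need to buffer a whole frame before conversion.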
Next, details of a first example structure of the DSP7 will be described with reference to the drawings.
FIG. 3 shows a first example structure of the tone curve correction section 22. In the first example structure, the LUT memory 41 holds in advance a lookup table (hereinafter referred to as LUT) corresponding to a monotonically increasing tone curve as shown in FIG. 4, together with a representative value γ representing the slope of the tone curve. It is also permissible to hold a function equivalent to the tone curve in place of the LUT. The table referencing section 42 converts the logarithmic luminance logL(p) into the logarithmic luminance logLc(p) based on the LUT held in the LUT memory 41.
FIG. 4 shows an example of a tone curve, in which the input luminance L(p) is plotted on the abscissa and the tone-curve-corrected luminance Lc(p) on the ordinate, both on logarithmic axes normalized to the range [0, 1]. Applying a monotonically increasing, gentle inverse-S-shaped curve as in this example produces no strong gray scale compression effect in the high-luminance and low-luminance regions, so that a desirable tone with little whiteout or blackout can be obtained even after gray scale compression. Conversely, the gray scale compression acts strongly on the intermediate-luminance region, which means that the contrast correction described later can be applied fully to the intermediate-luminance region, yielding a narrow DR image with the desired brightness and little contrast degradation in the intermediate-luminance range.
It should be noted that the representative value γ representing the slope of the tone curve may be determined by finding slope values over the entire luminance range and taking the average of these values as the representative value γ. The tone curve shown in FIG. 4 has a representative value γ of 0.67.
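The averaging of slope values just described can be sketched numerically as follows (an illustrative sketch; the sampling count and finite-difference scheme are implementation choices, not taken from the patent, and `tone_curve` is assumed to map normalized logarithmic luminance to its corrected value):

```python
def representative_slope(tone_curve, n=1024):
    # Sample the monotonic tone curve on the normalized [0, 1] axis,
    # take finite-difference slopes between adjacent samples, and
    # average them to obtain the representative value gamma.
    xs = [i / (n - 1) for i in range(n)]
    ys = [tone_curve(x) for x in xs]
    slopes = [(ys[i + 1] - ys[i]) / (xs[i + 1] - xs[i])
              for i in range(n - 1)]
    return sum(slopes) / len(slopes)
```

On a uniform grid this average telescopes to the total rise over the total run, so a straight-line tone curve of slope 0.67 yields γ = 0.67, matching the example of FIG. 4.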
FIG. 5 shows a second example structure of the tone curve correction section 22. Unlike the first exemplary structure, the second exemplary structure does not use an LUT obtained in advance, but calculates the representative value γ for each frame and corrects the logarithmic luminance logL(p) to the logarithmic luminance logLc(p). In the second example structure, the average luminance calculation section 51 calculates the average value μ of the logarithmic luminances logL(p) of one frame. The divider 52 divides a predetermined constant logLT by the average value μ to calculate the representative value γ. The γ memory 53 holds the representative value γ received from the divider 52. The multiplier 54 multiplies the logarithmic luminance logL(p) of the current frame by the representative value γ of the previous frame held by the γ memory 53, thereby calculating the tone-curve-corrected logarithmic luminance logLc(p).
Assuming the predetermined constant logLT is defined as a medium level of logarithmic luminance, the average of the logarithmic luminances logL(p) of one frame is converted into a tone-curve-corrected logarithmic luminance logLc(p) whose average equals logLT.
Although the representative value γ is calculated for each frame, its value should not differ much between consecutive frames, because it is calculated from the average value μ of the logarithmic luminances logL(p). Therefore, just as with the reduced image logLc1 and the luminance range information [Yd, Yb], the representative value γ of the previous frame is used for the tone curve correction of the current frame. Accordingly, the representative value γ is also included in the intermediate information.
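The per-frame calculation in this second example structure amounts to the following arithmetic (a single-frame sketch; in the actual DSP 7 the γ of the previous frame is applied to the current frame, which is omitted here for brevity):

```python
def tone_correct(log_lum, logLT):
    # The representative value gamma is the preset mid level logLT
    # divided by the frame's mean logarithmic luminance; multiplying
    # every pixel by gamma moves the frame's mean to logLT.
    mu = sum(log_lum) / len(log_lum)
    gamma = logLT / mu
    return gamma, [gamma * v for v in log_lum]
```

For a frame whose mean log luminance already equals logLT, γ = 1 and the frame passes through unchanged, which is consistent with the role of γ as a tone curve slope.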
FIG. 6 shows a third exemplary structure of the tone curve correction section 22, which can be regarded as a combination of the first and second exemplary structures. In the third example structure, the LUT memory 61 holds in advance an LUT corresponding to the tone curve shown in FIG. 4, together with a representative value γ1 representing the slope of the tone curve. The table referencing section 62 corrects the logarithmic luminance logL(p) to the logarithmic luminance logLc'(p) based on the LUT held in the LUT memory 61, and outputs it to the average luminance calculation section 63 and the multiplier 66.
The average luminance calculation section 63 calculates the average value μ of the logarithmic luminances logLc'(p) of one frame and outputs it to the divider 64. The divider 64 divides the predetermined constant logLT by the average value μ to calculate a representative value γ2, and causes the γ2 memory 65 to store it. The multiplier 66 multiplies the logarithmic luminance logLc'(p) of the current frame by the representative value γ2 of the previous frame held by the γ2 memory 65, thereby calculating the tone-curve-corrected logarithmic luminance logLc(p). The multiplier 67 outputs the product of the representative values γ1 and γ2 as the representative value γ (= γ1 · γ2) to the contrast correction section 25 in the subsequent stage.
Next, FIG. 7 shows an exemplary structure of the reduced image generation section 23. The classification section 71 of the reduced image generation section 23 classifies the logarithmic luminances logLc(p) of one frame received from the tone curve correction section 22 in the preceding stage according to the block to which each luminance belongs when the entire image is divided into m × n blocks, and supplies them to the average value calculation sections 72-1 to 72-N (N = m × n). For example, the luminances classified into the first block are supplied to the average value calculation section 72-1, and those classified into the second block are supplied to the average value calculation section 72-2. The same applies to the subsequent blocks, and those classified into the Nth block are supplied to the average value calculation section 72-N. In the following description, the average value calculation sections 72-1 to 72-N are simply denoted as the average value calculation section 72 when they need not be distinguished individually.
The average value calculation section 72-i (i = 1, 2, ..., N) calculates the average of the logarithmic luminances logLc(p) classified into the ith block among the logarithmic luminances logLc(p) of one frame, and outputs it to the compositing section 73. The compositing section 73 generates a reduced image logLc1 of m × n pixels whose pixel values are the averages of the logarithmic luminances logLc(p) received from the respective average value calculation sections 72-i, and causes the reduced image memory 24 in the subsequent stage to store it.
FIG. 8 shows an example structure of the average value calculation section 72. The adder 81 of the average value calculation section 72 adds the logarithmic luminance logLc(p) received from the classification section 71 in the preceding stage to the value held by the register (r) 82, thereby updating the value held by the register 82. The divider 83 divides the value finally held by the register 82 by the number Q of pixels constituting one block, thereby calculating the average of the Q logarithmic luminances logLc(p) classified into the block.
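The block classification and averaging performed by the sections 71 to 73 can be sketched as follows (assuming the image dimensions divide evenly by m and n; `img` is a hypothetical list-of-rows representation of the log-luminance image):

```python
def reduced_image(img, m, n):
    # Split the image into m x n blocks (m across, n down) and take
    # the mean of each block, giving the m x n reduced image logLc1.
    h, w = len(img), len(img[0])
    bh, bw = h // n, w // m  # block height and width in pixels
    return [[sum(img[y][x]
                 for y in range(by * bh, (by + 1) * bh)
                 for x in range(bx * bw, (bx + 1) * bw)) / (bh * bw)
             for bx in range(m)]
            for by in range(n)]
```

Each output pixel is exactly the running-sum-then-divide computation of FIG. 8, applied once per block.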
Next, FIG. 9 shows an example structure of the contrast correction section 25. The interpolation position specifying section 91 acquires the pixel position p of the logarithmic luminance logLc(p) received from the tone curve correction section 22 in the preceding stage (hereinafter also referred to as the interpolation position p) and outputs it to the interpolation section 92. The interpolation section 92 calculates, by interpolation using the reduced image logLc1 of the previous frame held by the reduced image memory 24, the value logLc1(p) corresponding to the interpolation position p, and outputs it to the contrast enhancement section 94.
The gain value setting section 93 calculates a gain value g(p), which determines the amount of contrast enhancement of the logarithmic luminance logLc(p) of the current frame, based on the representative value γ of the previous frame received from the tone curve correction section 22 and on the logarithmic luminance logLc(p) of the current frame. The contrast enhancement section 94 calculates the logarithmic luminance logLu(p), in which the contrast of the frequency components other than the low-frequency components is enhanced, based on the logarithmic luminance logLc(p) of the current frame, the gain value g(p), and the interpolated value logLc1(p) of the reduced image.
FIG. 10 shows an exemplary structure of the interpolation section 92. The interpolation section 92 interpolates the value logLc1(p) corresponding to the interpolation position p by bicubic interpolation using the 4 × 4 pixels of the previous frame's reduced image logLc1 in the vicinity of the interpolation position p.
Upon receiving the interpolation position p, the vicinity selection section 101 acquires the pixel values a[4][4] of the 4 × 4 pixels in the vicinity of the interpolation position p from the reduced image logLc1 of m × n pixels of the previous frame held by the reduced image memory 24, and outputs them to the product-sum section 104. Here, the notation a[i][j] means that the pixel values a are two-dimensional array data of size i × j. The vicinity selection section 101 also outputs the horizontal displacement dx and the vertical displacement dy between the acquired pixel values a[4][4] and the interpolation position p to the horizontal coefficient calculation section 102 and the vertical coefficient calculation section 103, respectively.
The relationship among the interpolation position p, the neighboring pixel values a[4][4], and the displacements dx, dy will now be described with reference to FIG. 11.
The m × n grid shown in FIG. 11 represents the reduced image logLc1 of m × n pixels. Given an interpolation position p = (px, py), the position q on the reduced image logLc1 corresponding to the interpolation position p is given as q = (qx, qy) = (px/bx - 0.5, py/by - 0.5), where (bx, by) = (number of horizontal pixels of the image logLc / m, number of vertical pixels of the image logLc / n).
To acquire the neighboring pixels around the position q on the reduced image corresponding to the interpolation position p, the pixels of the reduced image logLc1 falling within the ranges qx - 2 < x < qx + 2 and qy - 2 < y < qy + 2, shown by hatching in FIG. 11, are acquired. Within the hatched area, the 4 × 4 positions marked with "+" are the positions of the pixels to be acquired. The displacement (dx, dy) between the neighboring pixels and the interpolation position p is defined as the difference with respect to the nearest pixel at the lower left; that is, the displacement is given as (dx, dy) = (fractional part of qx, fractional part of qy).
Referring back to FIG. 10, the horizontal coefficient calculation section 102 calculates horizontal cubic interpolation coefficients kx[4] based on the horizontal displacement dx received from the vicinity selection section 101. Similarly, the vertical coefficient calculation section 103 calculates vertical cubic interpolation coefficients ky[4] based on the vertical displacement dy received from the vicinity selection section 101.
For example, the horizontal cubic interpolation coefficients kx[4] can be calculated using the following formula (1), written here as a standard cubic convolution kernel:

z = |dx - i + 2|  (i = 1, ..., 4)
kx[i] = 1.5z^3 - 2.5z^2 + 1  (0 <= z <= 1)
kx[i] = -0.5z^3 + 2.5z^2 - 4z + 2  (1 < z <= 2)
kx[i] = 0  (2 < z)  ...(1)

In addition, the vertical cubic interpolation coefficients ky[4] can be calculated using the following formula (2):

z = |dy - j + 2|  (j = 1, ..., 4)
ky[j] = 1.5z^3 - 2.5z^2 + 1  (0 <= z <= 1)
ky[j] = -0.5z^3 + 2.5z^2 - 4z + 2  (1 < z <= 2)
ky[j] = 0  (2 < z)  ...(2)
it should be noted that interpolation other than that shown above may be used as long as sufficiently smooth interpolation can be obtainedCalculating cubic interpolation coefficient k by any arbitrary calculation formula other than formulas (1) and (2) x [4]And k y [4]。
The product-sum section 104 calculates the interpolated value logLc1(p) of the reduced image logLc1 at the interpolation position p by the product-sum of the neighboring pixel values a[4][4], the horizontal interpolation coefficients kx[4], and the vertical interpolation coefficients ky[4], using the following formula (3):

logLc1(p) = Σj Σi ky[j] · kx[i] · a[i][j]  ...(3)
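Taken together, the coefficient calculation and the product-sum can be sketched as follows (the Catmull-Rom kernel used here is one common "sufficiently smooth" choice, not necessarily the patent's exact formulas):

```python
def cubic_weight(z):
    # Cubic convolution kernel (Catmull-Rom, a = -0.5); any kernel of
    # this smoothness would serve for the interpolation coefficients.
    z = abs(z)
    if z <= 1.0:
        return 1.5 * z**3 - 2.5 * z**2 + 1.0
    if z <= 2.0:
        return -0.5 * z**3 + 2.5 * z**2 - 4.0 * z + 2.0
    return 0.0

def bicubic(a, dx, dy):
    # Product-sum over the 4 x 4 neighborhood a, with displacements
    # (dx, dy) in [0, 1) measured from the second sample in each axis.
    kx = [cubic_weight(dx - i + 1) for i in range(4)]
    ky = [cubic_weight(dy - j + 1) for j in range(4)]
    return sum(ky[j] * kx[i] * a[j][i]
               for j in range(4) for i in range(4))
```

The kernel reproduces constants (its four weights sum to 1 for any displacement), so a flat region of the reduced image interpolates to its own value, and at zero displacement the result is exactly the nearest stored pixel.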
The gain value setting section 93 will be explained next. As described above, the gain value setting section 93 sets the gain value g(p) that adjusts the degree to which the contrast enhancement section 94 in the subsequent stage enhances the components other than the low-frequency components. For a gain value g(p) = 1, the contrast enhancement section 94 neither enhances nor suppresses the contrast. For a gain value g(p) > 1, the contrast is enhanced according to the value; for a gain value g(p) < 1, the contrast is suppressed according to the value.
The setting of the gain value will be described. The contrast of the image has been suppressed by gray scale compression, where the amount of suppression depends on the slope of the tone curve. For example, applying a tone curve with a small slope in consideration of achieving strong gray scale compression means that the contrast is strongly suppressed. On the other hand, applying a straight line having a slope of 1 as the tone curve means that the image does not change or the contrast is not suppressed.
Therefore, for the case where the representative value γ of the tone curve is less than 1, the gain value setting section 93 takes the reciprocal 1/γ of the representative value γ of the tone curve so that the gain value exceeds 1.
On the other hand, when the input logarithmic luminance logLc(p) is near the white level or the black level, contrast enhancement similar to that applied to the intermediate-luminance region may undesirably cause loss of image detail due to clipping. Therefore, the gain value is adjusted closer to 1 as the input logarithmic luminance logLc(p) approaches the white level or the black level.
That is, assuming the reciprocal of the representative value γ to be 1/γ = g0, the gain value g(p) is calculated using the following formula (4):
g(p) = 1 + (g0 - 1) × attn(p)  ...(4)
where attn(p) is an attenuation coefficient calculated by the following formula (5):

attn(p) = 1 - min(1, |(logLc(p) - logLgray) / (logLwhite - logLgray)|)  ...(5)

It should be noted that in formula (5), logLgray represents the logarithmic luminance of a medium gray level, and logLwhite represents the logarithmic luminance of the white clipping level (maximum white level), both of which are constants set in advance.
FIG. 12 shows an example structure of the gain value setting section 93. The divider 111 calculates the reciprocal g0 = 1/γ of the representative value γ received from the preceding stage and outputs it to the subtractor 112. The subtractor 112 calculates (g0 - 1) and outputs it to the multiplier 118.
The subtractor 113 calculates the difference (logLc(p) - logLgray) between the logarithmic luminance logLc(p) and the logarithmic luminance logLgray of the medium gray level, and outputs it to the divider 115. The subtractor 114 calculates the difference (logLwhite - logLgray) between the logarithmic luminance logLwhite of the white clipping level and the logarithmic luminance logLgray, and outputs it to the divider 115. The divider 115 divides the output (logLc(p) - logLgray) of the subtractor 113 by the output (logLwhite - logLgray) of the subtractor 114, and outputs the quotient to the absolute value calculator 116. The absolute value calculator 116 calculates the absolute value of the output of the divider 115 and outputs it to the limiter (clipper) 117. The limiter 117 clips the output of the absolute value calculator 116 so that it is set to 1 when it exceeds 1 and left unchanged otherwise; the attenuation coefficient attn(p), obtained from this clipped value in accordance with formula (5), is output to the multiplier 118.
The multiplier 118 multiplies the output (g0 - 1) of the subtractor 112 by attn(p), and outputs the product to the adder 119. The adder 119 adds 1 to the output of the multiplier 118, and outputs the result to the subsequent stage as the gain value g(p).
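The computation performed by the blocks of FIG. 12 can be condensed into a few lines (a sketch; the exact shape of attn(p) is inferred from the described behavior of the gain near the clipping levels, not quoted from the patent):

```python
def gain(gamma, logLc, logL_gray, logL_white):
    # Full gain g0 = 1/gamma at the medium gray level, falling back
    # to 1 as logLc approaches the white level (or, by symmetry of
    # the absolute value, the black side at the same distance).
    g0 = 1.0 / gamma
    ratio = abs((logLc - logL_gray) / (logL_white - logL_gray))
    attn = 1.0 - min(1.0, ratio)
    return 1.0 + (g0 - 1.0) * attn
```

With γ < 1 (the usual case after gray scale compression), g(p) > 1 in the intermediate-luminance region and decays to exactly 1 at the clipping levels, so clipped highlights and shadows are not pushed further.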
Next, FIG. 13 shows an example structure of the contrast enhancement section 94. The subtractor 121 calculates the difference between the logarithmic luminance logLc(p) and the interpolated value logLc1(p) of the reduced image, and outputs it to the multiplier 122. The multiplier 122 calculates the product of the output of the subtractor 121 and the gain value g(p), and outputs it to the adder 123. The adder 123 adds the interpolated value logLc1(p) of the reduced image to the output of the multiplier 122, and outputs the thus contrast-corrected logarithmic luminance logLu(p) to the subsequent stage.
It should be noted that the interpolated value logLc1(p) of the reduced image is a value interpolated from a reduced image of m × n pixels, and therefore carries only the very low frequency components of the image logLc before reduction.
That is, the output (logLc(p) - logLc1(p)) of the subtractor 121 is equivalent to the difference obtained by subtracting only the very low frequency components from the logarithmic luminance logLc(p). As described above, the contrast-corrected logarithmic luminance logLu(p) is obtained by dividing the luminance signal into two kinds of components, the very low frequency components and the remaining components, enhancing the latter by multiplying them by the gain value g(p), and recombining the two using the adder 123.
As is clear from the above, the contrast enhancement section 94 is designed to enhance the components from the low-to-intermediate frequency region up to the high frequency region with the same gain value g(p), while excluding the very low frequency region. Therefore, the contrast-corrected logarithmic luminance logLu(p) is free from the local overshoot at edge portions that becomes conspicuous when only the high frequency region is enhanced, and an image whose contrast is enhanced naturally to the eye is obtained.
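The signal path of FIG. 13 reduces to one line of arithmetic per pixel (a sketch with scalar inputs):

```python
def enhance_contrast(logLc, logLc1, g):
    # Keep the very-low-frequency component logLc1(p) unchanged and
    # scale everything else by the gain g(p):
    #   logLu(p) = (logLc(p) - logLc1(p)) * g(p) + logLc1(p)
    return (logLc - logLc1) * g + logLc1
```

Setting g = 1 returns the input unchanged, while g > 1 widens the deviation of each pixel from its local (very-low-frequency) baseline, which is exactly the enhancement behavior described above.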
Next, the luminance range information calculating section 28 and the luminance range normalizing section 30 will be explained.
First, an outline of the luminance range normalization process will be explained. The purpose of the gray scale compression by the DSP7 is to convert a wide DR luminance image into a narrow DR image suitable for the dynamic range of a reproducing apparatus such as the display 11, and for this purpose, a tone curve suitable for the dynamic range of the reproducing apparatus is prepared in advance in the tone curve correction section 22. This makes it possible to subject most of the captured wide DR luminance images to gray scale compression appropriately.
However, depending on the subject being photographed, the dynamic range of the incident light may not be particularly wide, and the gray scale compression processing of such an image may result in excessive gray scale compression, whereby the luminance is confined to a range narrower than the dynamic range reproducible by the reproduction apparatus.
To avoid this, the luminance range normalizing section 30 normalizes the γ -corrected luminance signal Y (p) as processing in the final stage of the gray-scale compression processing so that the dynamic range of the γ -corrected luminance signal Y (p) conforms to the dynamic range reproducible by the reproducing apparatus.
FIG. 14 illustrates the luminance range normalization process performed by the luminance range normalization section 30. In the line graph of this figure, the abscissa plots the γ-corrected luminance Y before luminance range normalization, the ordinate plots the luminance Yn after luminance range normalization, and the gray scale conversion curve α represents the conversion table for converting the luminance Y into the luminance Yn.
A method of determining the gray scale conversion curve α will be described below. The hatched pattern 131 shown in the graph is an example histogram of the luminance image Y before luminance range normalization. In this example, at the stage after γ correction but before luminance range normalization, a luminance image is obtained whose gray scale has been compressed into a dynamic range narrower than the range from the minimum luminance Ymin to the maximum luminance Ymax obtainable from the digital video camera 1.
Since outputting this luminance image to the reproducing apparatus with its dynamic range left unchanged would use only part of the dynamic range reproducible by the reproducing apparatus, normalization is performed so that the luminance distribution of the luminance image Y before the luminance range normalization is expanded to the entire dynamic range of the reproducing apparatus.
For this purpose, first, the distribution range [Y_d, Y_b] of the histogram 131 of the luminance image Y before the luminance range normalization is calculated as luminance range information of the luminance image Y. Then, luminance values Y_na and Y_ns are set slightly inward of the top and bottom ends of the luminance range [Y_nb, Y_nc] reproducible by the reproducing apparatus, and the gray scale conversion curve α is determined so that the luminances {Y_min, Y_d, Y_b, Y_max} on the abscissa correspond to the luminance values {Y_nb, Y_na, Y_ns, Y_nc} on the ordinate, respectively.
Gray scale conversion using this gray scale conversion curve α yields the luminance image Y_n, whose histogram takes a form such as the shaded pattern 132 shown on the left-hand side of the figure.
The gray scale conversion curve α is determined so as to map the luminance range [Y_d, Y_b] before the luminance range normalization to the luminance range [Y_na, Y_ns], slightly narrower than the luminance range [Y_nb, Y_nc] of the reproducing apparatus, in order to prevent sharp luminance clipping from appearing on the image around the luminances Y_nb and Y_nc.
It should be noted here that the luminance values Y_na and Y_ns are preset at appropriate values based on the luminance values Y_nb and Y_nc.
The luminance range [Y_d, Y_b] before the luminance range normalization is obtained by the luminance range information calculating section 28, and the gray scale conversion curve α and the luminance Y_n(p) are calculated by the luminance range normalization section 30.
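As a rough illustration of how such a piecewise-linear curve can be built and applied, the following Python sketch (the function name and argument order are hypothetical; the patent itself specifies only the control points) constructs α from the four abscissa control points {Y_min, Y_d, Y_b, Y_max} and the four ordinate values {Y_nb, Y_na, Y_ns, Y_nc}:

```python
def make_gray_scale_curve(y_min, y_d, y_b, y_max, y_nb, y_na, y_ns, y_nc):
    """Build the piecewise-linear conversion curve alpha that maps the
    measured luminance range [y_d, y_b] onto the slightly-inset output
    range [y_na, y_ns] (sketch; names are illustrative)."""
    xs = [y_min, y_d, y_b, y_max]   # control points on the input axis
    ys = [y_nb, y_na, y_ns, y_nc]   # corresponding output luminances
    def alpha(y):
        # clip values outside the representable input range
        if y <= xs[0]:
            return ys[0]
        if y >= xs[-1]:
            return ys[-1]
        # find the segment containing y and interpolate linearly on it
        for i in range(3):
            if y <= xs[i + 1]:
                t = (y - xs[i]) / (xs[i + 1] - xs[i])
                return ys[i] + t * (ys[i + 1] - ys[i])
    return alpha
```

For instance, with a measured range [10, 90] inside a camera range [0, 100] and output endpoints {0, 5, 250, 255}, the curve maps Y_d = 10 to Y_na = 5 and Y_b = 90 to Y_ns = 250, with linear stretching in between.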
Fig. 15 shows an example structure of the luminance range information calculating section 28. In the luminance range information calculating section 28, the decimation section 141 decimates the luminance Y(p) received from the γ correction section 27 based on the pixel position p. That is, only luminance values at preset pixel positions are supplied to the MIN classification section 142 and the MAX classification section 145 in the subsequent stage.
The MIN classification section 142 is configured so that k combinations of a comparison section 143 and a register 144 are arranged in series, and so that the input luminance Y(p) values are held by the registers 144-1 to 144-k in ascending order.
For example, the comparison section 143-1 compares the luminance Y(p) from the decimation section 141 with the value in the register 144-1, and updates the value in the register 144-1 using the luminance Y(p) from the decimation section 141 when the luminance Y(p) from the decimation section 141 is smaller than the value in the register 144-1. On the contrary, when the luminance Y(p) from the decimation section 141 is not less than the value in the register 144-1, the luminance Y(p) from the decimation section 141 is supplied to the comparison section 143-2 in the subsequent stage.

The comparison section 143-2 compares the luminance Y(p) from the comparison section 143-1 with the value in the register 144-2, and updates the value in the register 144-2 using the luminance Y(p) from the comparison section 143-1 when the luminance Y(p) from the comparison section 143-1 is smaller than the value in the register 144-2. On the contrary, when the luminance Y(p) from the comparison section 143-1 is not less than the value in the register 144-2, the luminance Y(p) from the comparison section 143-1 is supplied to the comparison section 143-3 in the subsequent stage.

The same applies to the comparison section 143-3 and onward. After the luminance Y(p) input for one frame is completed, the register 144-1 holds the minimum value Y_min of the luminance Y(p), the registers 144-2 to 144-k hold the luminance Y(p) values in ascending order, and the luminance Y(p) held in the register 144-k is output to the subsequent stage as the luminance Y_d of the luminance range information.
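The cascade of comparison sections and registers described above behaves like a streaming sorted insertion. The sketch below is a software model of one frame's worth of MIN classification (two assumptions not spelled out in the hardware description: registers start at +infinity, and a displaced register value ripples on to the next stage):

```python
def min_cascade(values, k):
    """Software model of the MIN classification section: k comparator/register
    stages hold the k smallest values seen so far, in ascending order."""
    regs = [float("inf")] * k   # assumed initialisation before each frame
    for y in values:
        for i in range(k):
            if y < regs[i]:
                # this stage captures y; the displaced value ripples onward
                regs[i], y = y, regs[i]
        # values not smaller than any register fall off the end of the cascade
    return regs  # regs[0] == minimum, regs[k-1] == k-th smallest (luminance Y_d)
```

After one frame, `regs[k-1]` plays the role of Y_d; the MAX classification section is the mirror image with the comparison reversed.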
The MAX classification section 145 is configured so that k combinations of a comparison section 146 and a register 147 are arranged in series, and so that the input luminance Y(p) values are held by the registers 147-1 to 147-k in descending order.
For example, the comparison section 146-1 compares the luminance Y(p) from the decimation section 141 with the value in the register 147-1, and updates the value in the register 147-1 using the luminance Y(p) from the decimation section 141 when the luminance Y(p) from the decimation section 141 is larger than the value in the register 147-1. On the contrary, when the luminance Y(p) from the decimation section 141 is not more than the value in the register 147-1, the luminance Y(p) from the decimation section 141 is supplied to the comparison section 146-2 in the subsequent stage.

The comparison section 146-2 compares the luminance Y(p) from the comparison section 146-1 with the value in the register 147-2, and updates the value in the register 147-2 using the luminance Y(p) from the comparison section 146-1 when the luminance Y(p) from the comparison section 146-1 is larger than the value in the register 147-2. On the contrary, when the luminance Y(p) from the comparison section 146-1 is not more than the value in the register 147-2, the luminance Y(p) from the comparison section 146-1 is supplied to the comparison section 146-3 in the subsequent stage.

The same applies to the comparison section 146-3 and onward. After the luminance Y(p) input for one frame is completed, the register 147-1 holds the maximum value Y_max of the luminance Y(p), the registers 147-2 to 147-k hold the luminance Y(p) values in descending order, and the luminance Y(p) held in the register 147-k is output to the subsequent stage as the luminance Y_b of the luminance range information.
Since the luminance Y(p) input to the MIN classification section 142 and the MAX classification section 145 has been decimated by the decimation section 141, appropriately adjusting the decimation interval and the number of stages k of the MIN classification section 142 and the MAX classification section 145 makes it possible to obtain, for example, the luminance values Y_d and Y_b corresponding to the lower 1% and upper 1% of all the pixels in one frame, respectively.
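Combining decimation with the two cascades amounts to taking percentile-like values from each end of the sampled luminances. A minimal software equivalent (using a sort instead of the hardware cascades; the helper name and the exposed fraction parameter are illustrative):

```python
def luminance_range_info(luminances, step, frac=0.01):
    """Sketch of the luminance range information calculation: decimate the
    pixel stream by 'step', then take the values 'frac' of the way in from
    each end of the sorted samples as Y_d and Y_b (the Fig. 15 cascades do
    this in hardware with k register stages)."""
    samples = sorted(luminances[::step])
    k = max(1, int(len(samples) * frac))  # k register stages ~ frac of samples
    y_d = samples[k - 1]                  # k-th smallest sample -> Y_d
    y_b = samples[-k]                     # k-th largest sample  -> Y_b
    return y_d, y_b
```

Choosing `step` and `frac` together reproduces the adjustment of the decimation interval and stage count k described above.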
Fig. 16 shows an example structure of the luminance range normalization section 30. As described above, the luminance range normalization section 30 determines the gray scale conversion curve α, and converts the γ-corrected luminance Y(p) into the luminance Y_n(p) after the luminance range normalization using the gray scale conversion curve α.
Since the gray scale conversion curve α shown in fig. 14 is composed of 5 line segments, the luminance range normalization section 30 discriminates to which line segment the input luminance Y(p) belongs, and applies the corresponding one of the 5 line segments constituting the gray scale conversion curve α to the input luminance Y(p) to convert it into the luminance Y_n(p) after the luminance range normalization.
Based on the luminance Y(p) input to the input terminal i, the selector 151 of the luminance range normalization section 30 outputs, from the output terminals j to m, four of the luminance values Y_max, Y_b, Y_d, Y_min, Y_nc, Y_ns, Y_na, and Y_nb input to the input terminals a to h, respectively. The correspondence is represented by the following expression (6):
(j, k, l, m) = (Y_nb, Y_na, Y_d, Y_min)   for Y_min ≤ Y(p) < Y_d
               (Y_na, Y_ns, Y_b, Y_d)     for Y_d ≤ Y(p) < Y_b
               (Y_ns, Y_nc, Y_max, Y_b)   for Y_b ≤ Y(p) ≤ Y_max   ... (6)
The subtractor 152 calculates the difference between the output of the output terminal k and the output of the output terminal j, and outputs the result to the divider 155. The subtractor 153 calculates the difference between the output of the output terminal l and the output of the output terminal m, and outputs the result to the divider 155. The subtractor 154 calculates the difference between the luminance Y(p) and the output of the output terminal m, and outputs the result to the multiplier 156. The divider 155 calculates the ratio of the output of the subtractor 152 to the output of the subtractor 153, and outputs the result to the multiplier 156. The multiplier 156 calculates the product of the output of the divider 155 and the output of the subtractor 154, and outputs the result to the adder 157. The adder 157 adds the output of the output terminal j and the output of the multiplier 156, and outputs the result.
The output Y_n(p) of the adder 157 is represented by the following expression (7), applied to the segment of the gray scale conversion curve α discriminated based on the γ-corrected luminance Y(p):
Y_n(p) = j + ((k − j) / (l − m)) × (Y(p) − m)   ... (7)
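The datapath of fig. 16 can be modelled in a few lines. The sketch below (the segment table and the clipping behaviour outside [Y_min, Y_max] are inferred from the description, not the patent's literal circuit) selects the segment endpoints (m, l) → (j, k) that bracket y and evaluates expression (7):

```python
def normalize_luminance(y, y_min, y_d, y_b, y_max, y_nb, y_na, y_ns, y_nc):
    """Software model of the Fig. 16 datapath: select the line segment of the
    conversion curve alpha containing y, then apply
    y_n = j + (k - j) / (l - m) * (y - m)."""
    # segment table: (m = input low, l = input high, j = output low, k = output high)
    segments = [
        (y_min, y_d,  y_nb, y_na),
        (y_d,   y_b,  y_na, y_ns),
        (y_b,   y_max, y_ns, y_nc),
    ]
    if y <= y_min:        # assumed flat clipping segments at both ends
        return y_nb
    if y >= y_max:
        return y_nc
    for m, l, j, k in segments:
        if y <= l:
            return j + (k - j) / (l - m) * (y - m)
```

The three interior tuples correspond to the selector rows of expression (6); the two clip branches account for the remaining segments of the 5-segment curve.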
This is the end of the description about the individual parts that make up the DSP7 shown in fig. 2.
Meanwhile, it is noted that the average luminance calculating section 63 of the tone curve correcting section 22 and the average luminance calculating section 72 of the reduced image generating section 23 shown in fig. 6 perform similar calculations, so the amount of calculation can be reduced by a simpler circuit configuration. More specifically, the tone curve correcting section 22, the reduced image generating section 23, the reduced image memory 24, and the contrast correcting section 25 shown in fig. 2 may be combined into a composite section as shown in fig. 17.
The composite section 160 may replace the tone curve correction section 22, the reduced image generation section 23, the reduced image memory 24, and the contrast correction section 25 shown in fig. 2.
The LUT memory 161 of the composite section 160 holds in advance an LUT corresponding to the tone curve shown in fig. 4 and a representative value γ_1 representing the slope of the tone curve. The table referencing section 162 corrects the logarithmic luminance logL(p) received from the preceding stage based on the LUT held by the LUT memory 161 to give the logarithmic luminance logL_c'(p), and outputs it to the reduced image generating section 163 and the multiplier 170.
The reduced image generating section 163 divides the logarithmic luminance image logL_c' into m × n blocks, calculates the average of the logarithmic luminances logL_c'(p) of the pixels belonging to each block to generate a first reduced image of m × n pixels, and causes the first reduced image memory 164 to store it.
The average luminance calculating section 165 calculates the average value μ of the pixel values of the first reduced image of the previous frame held by the first reduced image memory 164, and outputs it to the divider 166. The divider 166 divides a predetermined constant logL_T by the average value μ to calculate a representative value γ_2, and causes the γ_2 memory 167 to store it. The multiplier 168 multiplies each pixel of the first reduced image held by the first reduced image memory 164 by the representative value γ_2 held in the γ_2 memory 167, thereby generating a second reduced image logL_c1, and causes the second reduced image memory 169 to store it.
The multiplier 170 multiplies the logarithmic luminance logL_c'(p) of the current frame received from the table referencing section 162 by the representative value γ_2 of the previous frame held in the γ_2 memory 167, thereby calculating the tone-curve-corrected logarithmic luminance logL_c(p). The multiplier 171 outputs the product of the representative values γ_1 and γ_2 as a representative value γ (= γ_1 · γ_2) to the gain value setting section 172.
The gain value setting section 172 calculates, based on the representative value γ of the previous frame received from the multiplier 171 and the logarithmic luminance logL_c(p) of the current frame received from the multiplier 170, a gain value g(p) that determines the amount of contrast enhancement of the logarithmic luminance logL_c(p) of the current frame.
The interpolation position specifying section 173 acquires the pixel position p of the logarithmic luminance logL_c(p) of the current frame received from the multiplier 170 (hereinafter also referred to as interpolation position p) and outputs it to the interpolation section 174. The interpolation section 174 calculates, by interpolation using the second reduced image logL_c1 of the previous frame held by the second reduced image memory 169, the pixel value logL_c1(p) corresponding to the interpolation position p, and outputs it to the contrast enhancement section 175.
Based on the gain value g(p) and the interpolated value logL_c1(p) of the reduced image, the contrast enhancement section 175 calculates, from the logarithmic luminance logL_c(p) of the current frame received from the multiplier 170, the logarithmic luminance logL_u(p) whose contrast has been enhanced for components other than the low-frequency components.
The use of the composite section 160 allows the average luminance calculating section 165 to calculate the average value of the first reduced image of m × n pixels, which reduces the amount of calculation compared to the average luminance calculating section 63 shown in fig. 6, with which the average value of the pixels of the logarithmic luminance image of the original size is calculated. It is therefore possible to reduce the delay time due to the calculation.
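The composite section's per-frame intermediate data (first reduced image, γ_2, second reduced image) can be sketched as follows; the row-major pixel layout and the assumption that the width and height divide evenly by m and n are illustrative simplifications:

```python
def composite_intermediate(loglum_corr, width, height, m, n, logl_t):
    """Sketch of the composite section's intermediate data for one frame:
    a first reduced image of m x n block means of logL_c'(p),
    gamma_2 = logL_T / mean(first reduced image), and the second reduced
    image (first reduced image scaled by gamma_2)."""
    bw, bh = width // m, height // n
    first = []
    for by in range(n):
        for bx in range(m):
            block = [loglum_corr[y * width + x]
                     for y in range(by * bh, (by + 1) * bh)
                     for x in range(bx * bw, (bx + 1) * bw)]
            first.append(sum(block) / len(block))      # block mean
    mu = sum(first) / len(first)                       # section 165
    gamma2 = logl_t / mu                               # divider 166
    second = [gamma2 * v for v in first]               # multiplier 168
    return first, gamma2, second
```

Computing μ over the m × n block means rather than the full-size image is exactly the saving the text attributes to the composite section.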
Next, the overall gray scale compression process using the first example structure of the DSP7, to which the composite section 160 shown in fig. 17 is applied, will be described with reference to the flowchart of fig. 18.
In step S1, based on the intermediate information already calculated and held for the wide DR luminance image of the previous frame (the second reduced image logL_c1, the representative value γ, and the luminance range information [Y_d, Y_b]), the DSP7 converts the input wide DR luminance image L of the current frame into a narrow DR luminance image Y_n. The DSP7 also calculates the intermediate information for the wide DR luminance image L of the current frame.
In step S2, the DSP7 updates the held intermediate information on the wide DR luminance image of the previous frame using the calculated intermediate information on the wide DR luminance image L of the current frame.
In step S3, the DSP7 discriminates whether or not a subsequent frame exists after the input wide DR luminance image of the current frame. When it is discriminated that a subsequent frame exists, the process returns to step S1 and the subsequent processing is repeated. On the contrary, when it is discriminated that there is no subsequent frame, the gray scale compression process ends.
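The loop of fig. 18 uses intermediate information that is always one frame old, which keeps the pipeline single-pass. A schematic model (the callable arguments are hypothetical placeholders standing in for steps S1 and S2):

```python
def process_sequence(frames, convert, compute_info, initial_info):
    """Sketch of the Fig. 18 loop: each frame is converted using the
    intermediate information computed from the PREVIOUS frame (step S1),
    then the held information is updated from the current frame (step S2)."""
    info = initial_info
    out = []
    for frame in frames:
        out.append(convert(frame, info))   # step S1: narrow-DR output
        info = compute_info(frame)         # step S2: update held info
    return out
```

Because scene statistics change slowly between adjacent video frames, using the previous frame's reduced image and luminance range introduces no visible error while avoiding a second pass over the current frame.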
Details of the processing on a pixel basis in step S1 will be explained with reference to the flowchart of fig. 19. The processing of each step described below is performed on the target pixel (pixel position p) input in raster order.
In step S11, the luminance L(p) of the target pixel (pixel position p) is input to the DSP7. In step S12, the logarithmic conversion section 21 subjects the input luminance L(p) to logarithmic conversion, and outputs the obtained logarithmic luminance logL(p) to the composite section 160. In step S13, the table referencing section 162 of the composite section 160 corrects the logarithmic luminance logL(p) received from the logarithmic conversion section 21 based on the LUT held by the LUT memory 161, thereby obtaining the logarithmic luminance logL_c'(p), and outputs it to the reduced image generating section 163 and the multiplier 170. At the same time, the LUT memory 161 outputs the representative value γ_1 of the tone curve to the multiplier 171. The multiplier 171 outputs, as the representative value γ, the product of the representative value γ_1 and the representative value γ_2 held by the γ_2 memory 167 and calculated from the first reduced image of the previous frame, to the gain value setting section 172.
In step S14, the reduced image generating section 163 generates a first reduced image based on the tone-curve-corrected logarithmic luminance logL_c'(p) of one frame. Based on the first reduced image generated here, the representative value γ_2 is calculated. The generated first reduced image is also multiplied by the calculated representative value γ_2, thereby generating a second reduced image logL_c1.
In step S15, the multiplier 170 multiplies the logarithmic luminance logL_c'(p) of the current frame received from the table referencing section 162 by the representative value γ_2 of the previous frame held in the γ_2 memory 167, thereby calculating the tone-curve-corrected logarithmic luminance logL_c(p).
In step S16, the gain value setting section 172 calculates, based on the representative value γ of the previous frame received from the multiplier 171 and the logarithmic luminance logL_c(p) of the current frame received from the multiplier 170, the gain value g(p) that determines the amount of contrast enhancement of the logarithmic luminance logL_c(p) of the current frame.
In step S17, the interpolation section 174 calculates, by interpolation using the second reduced image logL_c1 of the previous frame held by the second reduced image memory 169, the pixel value logL_c1(p) corresponding to the interpolation position p, and outputs it to the contrast enhancement section 175. In step S18, based on the interpolated value logL_c1(p) of the second reduced image and the gain value g(p), the contrast enhancement section 175 enhances the components other than the low-frequency components of the tone-curve-corrected logarithmic luminance logL_c(p), and outputs the obtained contrast-corrected logarithmic luminance logL_u(p) to the logarithmic inverse conversion section 26 in the subsequent stage.
In step S19, the logarithmic inverse conversion section 26 converts the contrast-corrected logarithmic luminance logL_u(p) into a luminance L_u(p) represented on the conventional axis, and outputs it to the γ correction section 27. In step S20, the γ correction section 27 performs predetermined γ correction, and outputs the obtained luminance Y(p) to the luminance range information calculating section 28 and the luminance range normalization section 30.
In step S21, the luminance range information calculating section 28 generates the luminance range information [Y_d, Y_b] based on the luminance Y(p) of one frame. In step S22, the luminance range normalization section 30 normalizes the luminance Y(p) received from the γ correction section 27 based on the luminance range information [Y_d, Y_b] of the previous frame held by the luminance range information memory 29, thereby calculating the luminance Y_n(p). In step S23, the luminance range normalization section 30 outputs the luminance Y_n(p) as a pixel value of the narrow DR luminance image after gray scale compression. This is the end of the detailed explanation of the processing in step S1 shown in fig. 18.
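Putting steps S11 to S23 together for a single pixel gives the following sketch. The contrast-enhancement form logL_u = logL_c1 + g·(logL_c − logL_c1) is an assumed reading of step S18, and the LUT, gain function, interpolated reduced-image value, and normalization are supplied by the caller rather than modelled:

```python
import math

def gray_scale_compress_pixel(lum, lut, gamma2_prev, gain_fn, interp_prev,
                              gamma=1 / 2.2, normalize=lambda y: y):
    """End-to-end sketch of steps S11-S23 for one pixel. All callable
    parameters are hypothetical stand-ins for the sections of Fig. 17."""
    logl = math.log10(lum)                     # S12: logarithmic conversion
    logl_cp = lut(logl)                        # S13: tone curve LUT -> logL_c'(p)
    logl_c = gamma2_prev * logl_cp             # S15: scale by previous gamma_2
    g = gain_fn(logl_c)                        # S16: gain value g(p)
    logl_c1 = interp_prev                      # S17: interpolated low-frequency value
    logl_u = logl_c1 + g * (logl_c - logl_c1)  # S18: assumed enhancement form
    l_u = 10.0 ** logl_u                       # S19: inverse log conversion
    y = l_u ** gamma                           # S20: gamma correction
    return normalize(y)                        # S22: luminance range normalization
```

With g(p) = 1 the pixel passes through unchanged; g(p) > 1 amplifies only the deviation of logL_c(p) from the smoothed value logL_c1(p), which is what "enhancing components other than the low-frequency components" amounts to.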
Next, details of the processing in step S2 in fig. 18 will be explained with reference to the flowchart in fig. 20.
In step S31, the reduced image generating section 163 updates the first reduced image held in the first reduced image memory 164 with the first reduced image generated using the tone-curve-corrected logarithmic luminance logL_c'(p) of one frame.
In step S32, the divider 166 divides the predetermined constant logL_T by the average value μ received from the average luminance calculating section 165 to calculate the representative value γ_2, and updates the representative value γ_2 held by the γ_2 memory 167 using the calculated representative value γ_2.
In step S33, the multiplier 168 multiplies each pixel of the first reduced image updated by the processing in step S31 and held by the first reduced image memory 164 by the representative value γ_2 held in the γ_2 memory 167 and updated by the processing in step S32, thereby generating a second reduced image logL_c1, and updates the second reduced image logL_c1 held by the second reduced image memory 169.
In step S34, the luminance range information calculating section 28 updates the luminance range information [Y_d, Y_b] of the previous frame held by the luminance range information memory 29 using the luminance range information [Y_d, Y_b] calculated based on the luminance Y(p) of one frame. This is the end of the detailed explanation of the processing in step S2 shown in fig. 18.
Next, fig. 21 shows an example structure of the DSP7 suitable for a wide DR image that is a color image. It should be noted that the wide DR image input to the DSP7 in raster order is not configured so that every pixel has all of the R, G, and B components; instead, each pixel has only one of the R, G, and B components. Hereinafter, the wide DR image, being a color image, input to the second example structure of the DSP7 is referred to as a wide DR color mosaic image. Which of the R, G, and B components a given pixel of the wide DR color mosaic image has is determined by the position of the pixel.
The pixel value of the wide DR color mosaic image input to the DSP7 in raster order is represented as L (p) below.
In the second example structure of the DSP7, the demosaicing section 201 demosaics the pixel values L(p) of one frame, in which each pixel has a different color, so that all pixels have all of the R, G, and B components, thereby generating color signals [R(p), G(p), B(p)] and outputting them to the color balance adjusting section 202. Hereinafter, an image composed of the color signals output from the demosaicing section 201 is referred to as a wide DR color image.
The color balance adjusting section 202 adjusts each of the R, G, and B components so as to make the color balance of the entire image appropriate, thereby generating color signals [R_b(p), G_b(p), B_b(p)]. It should be noted that the demosaicing section 201 and the color balance adjusting section 202 are the same as those mounted on a general digital video camera equipped with a single-chip CCD image sensor.
The logarithmic conversion section 203 subjects the color signals [R_b(p), G_b(p), B_b(p)] received from the color balance adjusting section 202 to logarithmic conversion, and outputs the obtained logarithmic color signals [logR_b(p), logG_b(p), logB_b(p)] to the tone curve correcting section 204. The tone curve correcting section 204 applies a tone curve obtained in advance to the input logarithmic color signals [logR_b(p), logG_b(p), logB_b(p)] to convert them in the direction of compressing the gray scale, and outputs the obtained logarithmic color signals [logR_c(p), logG_c(p), logB_c(p)] to the reduced image generating section 205 and the contrast correcting section 207. The tone curve correcting section 204 also outputs a representative value γ representing the slope of the applied tone curve to the contrast correcting section 207.
The reduced image generating section 205 generates a reduced image logL_c1 based on the logarithmic color signals [logR_c(p), logG_c(p), logB_c(p)] for one frame received from the tone curve correcting section 204, and causes the reduced image memory 206 to store it.
The contrast correcting section 207 corrects, based on the representative value γ and the reduced image logL_c1 of the previous frame held by the reduced image memory 206, the contrast, weakened by the tone curve correction, of the logarithmic color signals [logR_c(p), logG_c(p), logB_c(p)] of the current frame received from the tone curve correcting section 204, and outputs the obtained logarithmic color signals [logR_u(p), logG_u(p), logB_u(p)] to the logarithmic inverse conversion section 208. The logarithmic inverse conversion section 208 subjects the contrast-corrected logarithmic color signals [logR_u(p), logG_u(p), logB_u(p)] to inverse logarithmic conversion, and outputs the obtained color signals [R_u(p), G_u(p), B_u(p)], represented on the conventional axis, to the γ correction section 209.
The γ correction section 209 subjects the color signals [R_u(p), G_u(p), B_u(p)] received from the logarithmic inverse conversion section 208 to γ correction in consideration of the γ characteristics of a reproducing apparatus (e.g., the display 11), and outputs the obtained γ-corrected color signals [R_g(p), G_g(p), B_g(p)] to the luminance information calculating section 210 and the luminance range normalization section 212. The luminance information calculating section 210 converts the [R_g(p), G_g(p), B_g(p)] for one frame received from the γ correction section 209 into the luminance Y(p), calculates luminance range information indicating the distribution of the luminance Y(p), and causes the luminance range information memory 211 to hold it. The luminance range information described here refers to information indicating the distribution range of the luminance Y(p) of one frame; in general, a luminance Y_d closest to the dark end and a luminance Y_b closest to the bright end are calculated as the luminance range information [Y_d, Y_b].
The luminance range normalization section 212 converts the color signals [R_g(p), G_g(p), B_g(p)] of the current frame received from the γ correction section 209 based on the luminance range information [Y_d, Y_b] of the previous frame held by the luminance range information memory 211, so that their distribution range conforms to the range representable by the reproducing apparatus (e.g., the display 11), and outputs the obtained color signals [R_n(p), G_n(p), B_n(p)] to the subsequent stage as a narrow DR image, which is a color image.
As described below, the second example structure of the DSP7, suited to color images, is almost identical to the first example structure suited to monochrome images shown in fig. 2 except that the demosaicing section 201 and the color balance adjusting section 202 are added, although the internal structures of the individual sections are slightly modified to suit color images.
Fig. 22 shows a first example structure of the tone curve correcting section 204. In the first example structure, the luminance generating section 221 generates a logarithmic luminance logL_b(p) from the input logarithmic color signals [logR_b(p), logG_b(p), logB_b(p)], and outputs it to the subtractors 222-R to 222-B and the table referencing section 224.
The subtractor 222-R subtracts the logarithmic luminance logL_b(p) from the logarithmic color signal logR_b(p), and outputs the result to the multiplier 225-R. The LUT memory 223 holds in advance an LUT corresponding to the tone curve shown in fig. 4 and a representative value γ indicating the slope of the tone curve. The table referencing section 224 corrects the logarithmic luminance logL_b(p) to the logarithmic luminance logL_c(p) using the LUT held by the LUT memory 223, and outputs it to the adders 226-R to 226-B.
The multiplier 225-R multiplies the output of the subtractor 222-R by the representative value γ received from the LUT memory 223 and outputs the result to the adder 226-R. The adder 226-R calculates the sum of the output of the multiplier 225-R and the logarithmic luminance logL_c(p), and outputs the result to the subsequent stage as the tone-curve-corrected logarithmic color signal logR_c(p).
It should be noted that the components for processing the G and B components are similar to those for processing the R component described above, and their explanation is therefore omitted.
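In this structure the tone curve is applied only to the luminance, while each colour's offset from the luminance is scaled by the curve's representative slope γ, so logR_c(p) = logL_c(p) + γ·(logR_b(p) − logL_b(p)). A sketch (the luminance weights are an assumption; the patent does not specify how logL_b(p) is generated from the three components):

```python
def tone_curve_correct_color(logr_b, logg_b, logb_b, lut, gamma):
    """Sketch of the Fig. 22 structure: apply the tone curve (LUT) to the
    luminance only, and scale each colour difference from the luminance by
    the representative slope gamma, preserving colour balance."""
    # assumed luminance weights; the patent leaves the generation unspecified
    logl_b = 0.3 * logr_b + 0.59 * logg_b + 0.11 * logb_b
    logl_c = lut(logl_b)                      # table referencing section 224
    return (logl_c + gamma * (logr_b - logl_b),   # adder 226-R path
            logl_c + gamma * (logg_b - logl_b),   # adder 226-G path
            logl_c + gamma * (logb_b - logl_b))   # adder 226-B path
```

Because the colour differences are all scaled by the same γ, the ratios between the R, G, and B components are compressed consistently, avoiding hue shifts from per-channel curves.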
Fig. 23 shows a second example structure of the tone curve correcting section 204. In the second example structure, the luminance generating section 231 generates a logarithmic luminance logL_b(p) from the input logarithmic color signals [logR_b(p), logG_b(p), logB_b(p)], and outputs it to the average luminance calculating section 232. The average luminance calculating section 232 calculates the average value μ of the logarithmic luminance logL_b(p) of one frame and outputs it to the divider 233. The divider 233 divides a predetermined constant by the average value μ to calculate a representative value γ, and causes the γ memory 234 to store it.
The multiplier 235-R multiplies the logarithmic color signal logR_b(p) of the current frame by the representative value γ of the previous frame held by the γ memory 234 to calculate the tone-curve-corrected logarithmic color signal logR_c(p).
It should be noted that the components for processing the G and B components are similar to those for processing the R component described above, and their explanation is therefore omitted.
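The second structure reduces the tone curve to a single multiplicative slope γ = logL_T / μ derived from the previous frame's mean log luminance, applied directly to every log colour component. A sketch (the helper name is hypothetical; μ is taken as given rather than recomputed here):

```python
def tone_curve_correct_simple(log_colors, mu_prev, logl_t):
    """Sketch of the Fig. 23 structure: one representative slope
    gamma = logL_T / mu (mu = previous frame's mean log luminance)
    scales every logarithmic colour component directly."""
    gamma = logl_t / mu_prev                     # divider 233
    return [tuple(gamma * c for c in px) for px in log_colors]  # multipliers 235
```

This trades the shaped LUT of the first structure for a straight line through the origin in log space, i.e. a pure exponent applied to the linear pixel values.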
Fig. 24 shows a third example structure of the tone curve correcting section 204, which may be said to be a combination of the first and second example structures. In the third example structure, the luminance generating section 241 generates a logarithmic luminance logL_b(p) from the logarithmic color signals [logR_b(p), logG_b(p), logB_b(p)], and outputs it to the subtractors 242-R to 242-B and the table referencing section 244.
The subtractor 242-R subtracts the logarithmic luminance logL_b(p) from the logarithmic color signal logR_b(p), and outputs the result to the multiplier 250-R. The LUT memory 243 holds in advance an LUT corresponding to the tone curve shown in fig. 4 and a representative value γ_1 indicating the slope of the tone curve. The table referencing section 244 corrects the logarithmic luminance logL_b(p) to the logarithmic luminance logL_c'(p) using the LUT held by the LUT memory 243, and outputs it to the average luminance calculating section 245 and the multiplier 249.
The average luminance calculating section 245 calculates the average value μ of the logarithmic luminance logL_c'(p) of one frame and outputs it to the divider 246. The divider 246 divides the predetermined constant logL_T by the average value μ to calculate a representative value γ_2, and causes the γ_2 memory 247 to store it. The multiplier 248 outputs the product of the representative values γ_1 and γ_2 as a representative value γ (= γ_1 · γ_2) to the contrast correcting section 207 in the subsequent stage.
The multiplier 249 multiplies the logarithmic luminance logL_c'(p) of the current frame by the representative value γ_2 of the previous frame held in the γ_2 memory 247, thereby calculating the tone-curve-corrected logarithmic luminance logL_c(p), and outputs it to the adders 251-R to 251-B.
The multiplier 250-R multiplies the output of the subtractor 242-R by the representative value γ received from the multiplier 248, and outputs the result to the adder 251-R. The adder 251-R calculates the sum of the output of the multiplier 250-R and the output of the multiplier 249, and outputs the result to the subsequent stage as the tone-curve-corrected logarithmic color signal logR_c(p).
It should be noted that the components for processing the G and B components are similar to those for processing the R component described above, and their explanation is therefore omitted.
Next, fig. 25 shows an example structure of the reduced image generating section 205. The luminance generating section 261 of the reduced image generating section 205 generates a logarithmic luminance logL_c(p) from the input tone-curve-corrected logarithmic color signals [logR_c(p), logG_c(p), logB_c(p)], and outputs it to the classification section 262.
With the entire image divided into m × n blocks, the classification section 262 classifies the logarithmic luminance logL_c(p) values according to the block to which each belongs, and then supplies them to the average value calculating sections 263-1 to 263-N (N = m × n). For example, the values classified into the first block are supplied to the average value calculating section 263-1, and those classified into the second block are supplied to the average value calculating section 263-2. The same applies to the subsequent logarithmic luminance logL_c(p) values; those classified into the Nth block are supplied to the average value calculating section 263-N.
The average value calculating section 263-i (i = 1, 2, ..., N) calculates, from the logarithmic luminances logL_c(p) of one frame, the average value of those classified into the ith block, and outputs it to the composition section 264. The composition section 264 generates a reduced image logL_c1 of m × n pixels which has, as its pixel values, the average values of the logarithmic luminances logL_c(p) received from the respective average value calculating sections 263-i, and causes the reduced image memory 206 in the subsequent stage to hold it.
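The classification and averaging just described amount to block-wise mean pooling of the logarithmic luminance image. A sketch, assuming the image divides evenly into the m × n block grid (the block geometry conventions here are ours):

```python
def make_reduced_image(logL, m, n):
    """Block-average a logarithmic luminance image (list of rows) into an
    m x n grid, as the classification section 262 and average value
    calculating sections 263-i do. Assumes height % n == 0 and width % m == 0."""
    H, W = len(logL), len(logL[0])
    bh, bw = H // n, W // m  # block height/width: n block rows, m block columns
    reduced = [[0.0] * m for _ in range(n)]
    for by in range(n):
        for bx in range(m):
            s = 0.0
            for y in range(by * bh, (by + 1) * bh):
                for x in range(bx * bw, (bx + 1) * bw):
                    s += logL[y][x]
            reduced[by][bx] = s / (bh * bw)  # average value of the block
    return reduced
```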
Next, fig. 26 shows an example structure of the contrast correction section 207. The luminance generating section 270 of the contrast correction section 207 calculates the logarithmic luminance logL_c(p) from the input tone-curve-corrected logarithmic color signals [logR_c(p), logG_c(p), logB_c(p)] and outputs it to the interpolation position designation section 271 and the gain value setting section 273.
The interpolation position designation section 271 acquires the pixel position p of the logarithmic luminance logL_c(p) (hereinafter also referred to as the interpolation position p) and outputs it to the interpolation section 272. The interpolation section 272 uses the reduced image logL_c1 of the previous frame held by the reduced image memory 206 to calculate, by interpolation, the pixel value logL_c1(p) corresponding to the interpolation position p, and outputs it to the subtractors 274-R to 274-B and the adders 276-R to 276-B.
The gain value setting section 273 calculates, based on the representative value γ for the previous frame received from the tone curve correction section 204 and the logarithmic luminance logL_c(p) of the current frame, the gain value g(p) determining the amount of contrast enhancement of logL_c(p) of the current frame, and outputs it to the multipliers 275-R to 275-B.
The subtractor 274-R subtracts the interpolated value logL_c1(p) from the logarithmic color signal logR_c(p) and outputs the result to the multiplier 275-R. The multiplier 275-R multiplies the output of the subtractor 274-R by the gain value g(p) and outputs the result to the adder 276-R. The adder 276-R adds the interpolated value logL_c1(p) to the output of the multiplier 275-R and outputs the resulting contrast-corrected logarithmic color signal logR_u(p) to the subsequent stage.
It should be noted that the components for processing the G and B components are similar to those for processing the R component described above, and their explanation will therefore be omitted.
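The subtractor/multiplier/adder chain of fig. 26 scales each pixel's deviation from the smoothed luminance logL_c1(p) by the gain g(p): logR_u(p) = g(p)·(logR_c(p) − logL_c1(p)) + logL_c1(p). A one-line sketch (names are ours):

```python
def contrast_correct_pixel(logR_c, logL_c1_p, g):
    """Fig. 26 data path for one pixel: subtractor 274-R, multiplier 275-R,
    adder 276-R. Deviations of the tone-curve-corrected signal from the
    interpolated (smoothed) luminance are scaled by the gain value g(p)."""
    return g * (logR_c - logL_c1_p) + logL_c1_p
```

g(p) = 1 leaves the signal unchanged, while g(p) > 1 pushes it further from the local average, i.e. enhances local contrast.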
Next, fig. 27 shows an example structure of a composite section 300 which can replace the tone curve correction section 204, the reduced image generation section 205, the reduced image memory 206, and the contrast correction section 207 shown in fig. 21.
The luminance generating section 301 of the composite section 300 calculates the logarithmic luminance logL_b(p) from the input logarithmic color signals [logR_b(p), logG_b(p), logB_b(p)] and outputs it to the subtractors 302-R to 302-B and the table referencing section 304. The subtractor 302-R subtracts the logarithmic luminance logL_b(p) from the logarithmic color signal logR_b(p) and outputs the result to the multiplier 316-R.
The LUT memory 303 of the composite section 300 holds in advance an LUT corresponding to the tone curve shown in fig. 4 and a representative value γ1 representing the slope of the tone curve. The table referencing section 304 corrects the logarithmic luminance logL_b(p) received from the luminance generating section 301 based on the LUT held by the LUT memory 303 to give the logarithmic luminance logL_c'(p), and outputs it to the multiplier 305 and the reduced image generation section 306.
The multiplier 305 multiplies the logarithmic luminance logL_c'(p) of the current frame received from the table referencing section 304 by the representative value γ2 of the previous frame held in the γ2 memory 310, thereby calculating the tone-curve-corrected logarithmic luminance logL_c(p), and outputs it to the adders 317-R to 317-B.
The reduced image generation section 306 divides the logarithmic luminance image logL_c' into m × n blocks and calculates the average value of the logarithmic luminances logL_c'(p) of the pixels belonging to each block, thereby generating a first reduced image of m × n pixels, and causes the first reduced image memory 307 to hold it.
The average luminance calculating section 308 calculates the average value μ of the pixel values of the first reduced image of the previous frame held by the first reduced image memory 307, and outputs it to the divider 309. The divider 309 divides the predetermined constant logL_T by the average value μ to calculate the representative value γ2, and causes the γ2 memory 310 to hold it. The multiplier 311 calculates the product of the representative values γ1 and γ2 as the representative value γ (= γ1·γ2) and outputs it to the gain value setting section 315 and the multipliers 316-R to 316-B.
The multiplier 312 multiplies each pixel of the first reduced image held by the first reduced image memory 307 by the representative value γ2 held in the γ2 memory 310, thereby generating a second reduced image logL_c1, and causes the second reduced image memory 313 to hold it.
The interpolation section 314 uses the second reduced image logL_c1 of the previous frame held by the second reduced image memory 313 to calculate, by interpolation, the pixel value logL_c1(p) at the pixel position p (hereinafter also referred to as the interpolation position p) of the logarithmic luminance logL_c(p) of the current frame received from the multiplier 305, and outputs it to the subtractors 318-R to 318-B and the adders 320-R to 320-B.
The gain value setting section 315 calculates, based on the representative value γ for the previous frame received from the multiplier 311 and the logarithmic luminance logL_c(p) of the current frame received from the multiplier 305, the gain value g(p) determining the amount of contrast enhancement of logL_c(p) of the current frame, and outputs it to the multipliers 319-R to 319-B.
The multiplier 316-R calculates the product of the output of the subtractor 302-R and the representative value γ, and outputs it to the adder 317-R. The adder 317-R calculates the sum of the output of the multiplier 316-R and the output of the multiplier 305 and outputs it to the subtractor 318-R. The subtractor 318-R subtracts the interpolated value logL_c1(p) from the output of the adder 317-R and outputs the result to the multiplier 319-R. The multiplier 319-R multiplies the output of the subtractor 318-R by the gain value g(p) and outputs the result to the adder 320-R. The adder 320-R calculates the sum of the output of the multiplier 319-R and the interpolated value logL_c1(p), and outputs the obtained contrast-corrected logarithmic color signal logR_u(p) to the subsequent stage.
It should be noted that the components for processing the G and B components are similar to those for processing the R component described above, and their explanation will therefore be omitted.
The use of the composite section 300 allows the average luminance calculating section 308 to calculate the average of the first reduced image of m × n pixels, in contrast to the average luminance calculating section 245 shown in fig. 24, which calculates the average over the pixels of the original-size logarithmic luminance image logL_c'. This successfully reduces the amount of calculation, and it is therefore possible to reduce the delay time caused by the calculation.
Next, fig. 28 shows an example structure of the luminance range information calculating section 210. In the luminance range information calculating section 210, the luminance generating section 331 calculates the luminance Y(p) from the γ-corrected color signals [R_g(p), G_g(p), B_g(p)] and outputs it to the decimation section 332. The decimation section 332 selects the luminance Y(p) received from the luminance generating section 331 based on the pixel position p. That is, only the luminance values of pixels at preset pixel positions are supplied to the MIN classification section 333 and the MAX classification section 336 in the subsequent stage.
The MIN classification section 333 is configured such that k combinations of a comparison section 334 and a register 335 are arranged in series, and such that the input luminance Y(p) values are held by the registers 335-1 to 335-k in ascending order.
For example, the comparison section 334-1 compares the luminance Y(p) from the decimation section 332 with the value in the register 335-1, and updates the value in the register 335-1 with the luminance Y(p) when the luminance Y(p) is smaller than the value in the register 335-1. Conversely, when the luminance Y(p) is not smaller than the value in the register 335-1, the luminance Y(p) is supplied to the comparison section 334-2 in the subsequent stage.
The same applies to the comparison section 334-2 and onward, so that after the input of the luminance Y(p) values for one frame is completed, the register 335-1 will hold the minimum value of the luminance Y(p), the registers 335-2 to 335-k will hold the following luminance Y(p) values in ascending order, and the luminance Y(p) held in the register 335-k will be output to the subsequent stage as the luminance Y_d of the luminance range information.
The MAX classification section 336 is configured such that k combinations of a comparison section 337 and a register 338 are arranged in series, and such that the input luminance Y(p) values are held by the registers 338-1 to 338-k in descending order.
For example, the comparison section 337-1 compares the luminance Y(p) from the decimation section 332 with the value in the register 338-1, and updates the value in the register 338-1 with the luminance Y(p) when the luminance Y(p) is larger than the value in the register 338-1. Conversely, when the luminance Y(p) is not larger than the value in the register 338-1, the luminance Y(p) is supplied to the comparison section 337-2 in the subsequent stage.
The same applies to the comparison section 337-2 and onward, so that after the input of the luminance Y(p) values for one frame is completed, the register 338-1 will hold the maximum value of the luminance Y(p), the registers 338-2 to 338-k will hold the following luminance Y(p) values in descending order, and the luminance Y(p) held in the register 338-k will be output to the subsequent stage as the luminance Y_b of the luminance range information.
Since the luminance Y(p) values input to the MIN classification section 333 and the MAX classification section 336 are decimated by the decimation section 332, appropriate adjustment of the decimation interval and of the number of stages k of the MIN classification section 333 and the MAX classification section 336 makes it possible to obtain luminance values Y_d and Y_b corresponding to, for example, the lower and upper 1% or 0.1% of all pixels in one frame.
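The comparator/register chains keep the k smallest (or largest) luminances of the decimated stream, so register k effectively yields a near-extreme percentile. A sketch of both chains; note that we pass the displaced register value onward to the next stage, which the patent text leaves implicit but which is how such insertion chains maintain sorted order:

```python
def k_smallest_stream(values, k):
    """Sketch of the MIN classification section 333: k comparator/register
    stages hold the k smallest luminances in ascending order; registers
    start 'empty' (modeled here as +infinity). regs[k-1] is output as Y_d."""
    regs = [float("inf")] * k
    for v in values:
        for i in range(k):                # comparison sections 334-1 .. 334-k
            if v < regs[i]:
                regs[i], v = v, regs[i]   # insert; displaced value moves on
    return regs

def k_largest_stream(values, k):
    """Symmetric sketch of the MAX classification section 336; regs[k-1]
    is output as Y_b."""
    regs = [float("-inf")] * k
    for v in values:
        for i in range(k):                # comparison sections 337-1 .. 337-k
            if v > regs[i]:
                regs[i], v = v, regs[i]
    return regs
```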
Next, the overall gray scale compression process using the second exemplary structure of the DSP7, to which the composite section 300 shown in fig. 27 is applied, will be described with reference to the flowchart of fig. 29.
In step S41, the DSP7 (demosaicing section 201) demosaics the wide DR color mosaic image to generate a wide DR color image, and outputs its pixel values, i.e., the color signals [R(p), G(p), B(p)], to the color balance adjustment section 202 in raster order. In step S42, the DSP7 (color balance adjustment section 202) adjusts the R, G, and B components, respectively, so that the color balance of the entire image becomes appropriate, thereby generating the color signals [R_b(p), G_b(p), B_b(p)].
In step S43, the DSP7 converts the input color signals of the wide DR color image L of the current frame into a narrow DR color image, based on the held intermediate information (second reduced image logL_c1, representative value γ, and luminance range information [Y_d, Y_b]) already calculated for the wide DR color image of the previous frame. The DSP7 also calculates the intermediate information for the wide DR color image L of the current frame.
In step S44, the DSP7 updates the held intermediate information on the wide DR color image of the previous frame, using the intermediate information calculated for the wide DR color image L of the current frame.
In step S45, the DSP7 determines whether or not a subsequent frame exists after the currently input wide DR color image. When it determines that a subsequent frame exists, the process returns to step S41 and the subsequent processing is repeated. Conversely, when it determines that no subsequent frame exists, the gray scale compression process ends.
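The frame loop of steps S41 to S45 has a characteristic one-frame delay: every frame is converted using the intermediate information of the previous frame, and only then is the held information refreshed. A control-flow sketch, with `convert` and `compute_info` standing in for the DSP blocks (names are ours):

```python
def grayscale_compress_stream(frames, init_info, convert, compute_info):
    """Sketch of the steps S41-S45 frame loop: each frame is converted using
    the intermediate information (reduced image, representative value, and
    luminance range) held from the PREVIOUS frame (step S43), after which
    the held information is updated from the current frame (step S44)."""
    info = init_info
    out = []
    for frame in frames:
        out.append(convert(frame, info))  # step S43: use previous-frame info
        info = compute_info(frame)        # step S44: update held info
    return out
```

This is what lets the process run in raster order without buffering a whole frame before output, at the cost of the intermediate information lagging one frame behind.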
Details of the pixel-by-pixel processing in step S43 will be explained with reference to the flowchart in fig. 30. Each of the steps described below is performed on the target pixel (pixel position p) input in raster order.
In step S51, the color balance adjustment section 202 outputs the color signals [R_b(p), G_b(p), B_b(p)] to the logarithmic conversion section 203. In step S52, the logarithmic conversion section 203 subjects the input color signals [R_b(p), G_b(p), B_b(p)] to logarithmic conversion, and outputs the obtained logarithmic color signals [logR_b(p), logG_b(p), logB_b(p)] to the composite section 300.
In step S53, the luminance generating section 301 of the composite section 300 calculates the logarithmic luminance logL_b(p) from the input logarithmic color signals [logR_b(p), logG_b(p), logB_b(p)] and outputs it to the subtractors 302-R to 302-B and the table referencing section 304. In step S54, the table referencing section 304 corrects the input logarithmic luminance logL_b(p) to the logarithmic luminance logL_c'(p) based on the LUT held by the LUT memory 303, and outputs it to the multiplier 305 and the reduced image generation section 306.
In step S55, the reduced image generation section 306 generates the first reduced image based on the tone-curve-corrected logarithmic luminance logL_c'(p) of one frame. Here, the representative value γ2 is calculated based on the generated first reduced image, and the second reduced image logL_c1 is also generated by multiplying the generated first reduced image by the thus-calculated representative value γ2.
In step S56, the multiplier 305 multiplies the logarithmic luminance logL_c'(p) of the current frame received from the table referencing section 304 by the representative value γ2 of the previous frame held in the γ2 memory 310, thereby calculating the tone-curve-corrected logarithmic luminance logL_c(p).
In step S57, the subtractor 302-R, the multiplier 316-R, and the adder 317-R perform the calculation for the R component to generate the tone-curve-corrected logarithmic color signal logR_c(p). For the G component, the calculation is performed by the subtractor 302-G, the multiplier 316-G, and the adder 317-G to generate the tone-curve-corrected logarithmic color signal logG_c(p). For the B component, the calculation is performed by the subtractor 302-B, the multiplier 316-B, and the adder 317-B to generate the tone-curve-corrected logarithmic color signal logB_c(p).
In step S58, the gain value setting section 315 calculates, based on the representative value γ for the previous frame received from the multiplier 311 and the logarithmic luminance logL_c(p) of the current frame received from the multiplier 305, the gain value g(p) for contrast enhancement of the logarithmic luminance logL_c(p) of the current frame. In step S59, the interpolation section 314 uses the second reduced image logL_c1 of the previous frame held by the second reduced image memory 313 to calculate, by interpolation, the pixel value logL_c1(p) corresponding to the interpolation position p.
In step S60, the calculation for the R component is performed by the subtractor 318-R, the multiplier 319-R, and the adder 320-R to generate the contrast-corrected logarithmic color signal logR_u(p). For the G component, the calculation is performed by the subtractor 318-G, the multiplier 319-G, and the adder 320-G to generate the contrast-corrected logarithmic color signal logG_u(p). For the B component, the calculation is performed by the subtractor 318-B, the multiplier 319-B, and the adder 320-B to generate the contrast-corrected logarithmic color signal logB_u(p).
In step S61, the logarithmic inverse conversion section 208 subjects the contrast-corrected logarithmic color signals [logR_u(p), logG_u(p), logB_u(p)] to logarithmic inverse conversion to generate the color signals [R_u(p), G_u(p), B_u(p)] represented on the normal axis, and outputs them to the γ correction section 209. In step S62, the γ correction section 209 performs predetermined γ correction, and outputs the obtained γ-corrected color signals [R_g(p), G_g(p), B_g(p)] to the luminance range information calculating section 210 and the luminance range normalization section 212.
In step S63, the luminance generating section 331 of the luminance range information calculating section 210 generates the luminance Y(p) based on the γ-corrected color signals [R_g(p), G_g(p), B_g(p)]. In step S64, the MIN classification section 333 and the MAX classification section 336 of the luminance range information calculating section 210 calculate the luminance range information [Y_d, Y_b] based on the luminance Y(p) of one frame.
In step S65, the luminance range normalization section 212 normalizes the color signals [R_g(p), G_g(p), B_g(p)] received from the γ correction section 209, based on the luminance range information [Y_d, Y_b] of the previous frame held by the luminance range information memory 211, to thereby calculate [R_n(p), G_n(p), B_n(p)]. In step S66, the luminance range normalization section 212 outputs the thus-calculated color signals [R_n(p), G_n(p), B_n(p)] as the pixel values of the narrow DR color image after gray scale compression. This ends the detailed explanation of the processing of step S43 in fig. 29.
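The exact normalization formula of step S65 is not reproduced in this excerpt. One plausible reading, shown purely as an assumption, is a linear map that stretches the previous frame's measured range [Y_d, Y_b] onto the displayable range, with clipping outside it:

```python
def normalize_luminance_range(c, Yd, Yb, out_min=0.0, out_max=1.0):
    """Hedged sketch of the luminance range normalization section 212.
    ASSUMPTION: a linear map of [Yd, Yb] (from the previous frame's
    luminance range information) onto [out_min, out_max], clipped; the
    patent may specify a different formula."""
    t = (c - Yd) / (Yb - Yd)
    return min(max(out_min + t * (out_max - out_min), out_min), out_max)
```

Because Y_d and Y_b were chosen near the 1% or 0.1% extremes rather than the absolute minimum and maximum, a small fraction of pixels clips at each end, which makes the mapping robust to outlier luminances.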
Next, details of the processing in step S44 in fig. 29 will be explained with reference to the flowchart in fig. 31. In step S71, the reduced image generation section 306 updates the first reduced image held in the first reduced image memory 307 with the first reduced image generated from the tone-curve-corrected logarithmic luminance logL_c'(p) of one frame.
In step S72, the divider 309 divides the predetermined constant logL_T by the average value μ received from the average luminance calculating section 308 to calculate the representative value γ2, and updates the representative value γ2 held in the γ2 memory 310 with the thus-calculated value.
In step S73, the multiplier 312 multiplies each pixel of the first reduced image held by the first reduced image memory 307, updated by the processing of step S71, by the representative value γ2 held in the γ2 memory 310, updated by the processing of step S72, thereby generating the second reduced image logL_c1, and updates the second reduced image logL_c1 held by the second reduced image memory 313.
In step S74, the luminance range information calculating section 210 updates the luminance range information [Y_d, Y_b] of the previous frame held by the luminance range information memory 211 with the luminance range information [Y_d, Y_b] generated based on [R_g(p), G_g(p), B_g(p)] of one frame. This ends the detailed explanation of the processing of step S44 in fig. 29.
This is the end of the detailed explanation of the second exemplary architecture of the DSP7.
It should be noted that each of the average luminance calculating section 51 shown in fig. 5, the average luminance calculating section 63 shown in fig. 6, the average luminance calculating section 165 shown in fig. 17, the average luminance calculating section 232 shown in fig. 23, and the average luminance calculating section 245 shown in fig. 24 is configured to calculate an average of luminance values, where the calculation for finding the average may employ a weighted average. For example, by giving a larger weight to the central portion of the image than to the peripheral portion, luminance correction can be performed with emphasis placed on the brightness of the subject existing in the central portion of the image.
The composite section 160 shown in fig. 17 and the composite section 300 shown in fig. 27 each have a memory for holding the generated first reduced image and a memory for holding the second reduced image generated by multiplying the first reduced image by the representative value γ2, where the two memories may be combined into one, because it is no longer necessary to hold the first reduced image once the second reduced image has been generated.
If the present invention is applied, as in the present embodiment, to a digital camera which takes a wide DR image, compresses its gray scale, and outputs it as an image displayable on a display with a narrow dynamic range, it becomes possible to realize the gray scale compression process with a structure having only a greatly reduced amount of the memory capacity (frame memories or delay lines for pixel-serial data) indispensable to conventional gray scale compression techniques, and also to obtain an output image by no means inferior to one obtained by a gray scale compression process realized using large-scale filtering.
This makes it possible to realize a high-quality yet inexpensive digital camera of a kind not realized before.
While the wide DR image in the present embodiment is subjected to the gray scale compression process assuming the display 11 as the reproduction apparatus, it is also possible to perform the gray scale compression process so as to suit, for example, the dynamic range representable by a monitor or printer externally attached to the digital camera 1.
Fig. 32 below shows an exemplary configuration of an image processing system to which the present invention is applied. The image processing system 501 is constituted by: a camera 502 for taking a picture of a subject and generating a wide DR image L composed of pixels having pixel values (luminances) with a dynamic range wider than usual; an image processing apparatus 510 for compressing the gray scale of the wide DR image L generated by the camera 502 to a gray scale range displayable by the display 511; and a display 511 for displaying the gray-scale-compressed image L_u generated by the image processing apparatus 510.
The camera 502 is constituted by a lens 503 for condensing a light image of the subject, an aperture for adjusting the amount of light energy of the light image, a CCD image sensor for generating a luminance signal by photoelectric conversion of the condensed light image, a preamplifier (Pre-amp.) 506 for removing a noise component from the generated luminance signal, an AD converter (A/D) 507 for converting the noise-removed luminance signal into digital data generally having a bit width of about 14 to 16 bits, and an I/O interface (I/O) 508 for outputting the wide DR image constituted by pixels having the digitized luminances to the image processing apparatus 510.
Fig. 33 shows the overall operation of the image processing system 501. In step S101, the camera 502 takes a picture of the subject, generates the corresponding wide DR image L, and outputs it to the image processing apparatus 510. In step S102, the image processing apparatus 510 subjects the wide DR image L to the gray scale compression process, thereby generating the gray-scale-compressed image L_u, and outputs it to the display 511. In step S103, the display 511 displays the gray-scale-compressed image L_u.
Fig. 34 below shows a first exemplary structure of the image processing apparatus 510. The tone curve correction section 521 of the image processing apparatus 510 corrects the wide DR image L received from the camera 502 in the direction of compressing its gray scale, based on a tone curve obtained in advance, and outputs the resulting tone-curve-corrected image L_c to the smoothed luminance generating section 522, the gain value setting section 523, and the contrast correction section 524. It should be noted that the tone-curve-corrected image L_c has a compressed gray scale, and its contrast is reduced because of the compressed gray scale. The tone curve correction section 521 also outputs the representative value γ representing the slope of the tone curve used for the correction to the gain value setting section 523.
Fig. 35 shows an example structure of the tone curve correction section 521. The LUT memory 531 of the tone curve correction section 521 holds in advance a lookup table (hereinafter referred to as LUT) corresponding to a monotonically increasing tone curve as shown in fig. 36, and the representative value γ representing the slope of the tone curve. Instead of an LUT, it is also permissible to hold a function corresponding to the tone curve. The table referencing section 532 corrects the wide DR image L based on the LUT held by the LUT memory 531, thereby obtaining the tone-curve-corrected image L_c.
Fig. 36 shows an example of the tone curve, where, on logarithmic axes normalized to the range [0, 1], the luminance of the wide DR image L is plotted on the abscissa and that of the tone-curve-corrected image L_c on the ordinate. The tone curve shown in fig. 36 does not correct a value when the luminance value of the normalized wide DR image exceeds 0.5, but corrects the value when it is less than 0.5, such that a smaller value receives a larger correction amount. That is, the correction is made so that the darkest areas of the image are kept from being crushed when shown on the display 511. The representative value γ representing the slope of the tone curve may be defined by the average of the slopes obtained over the entire luminance range. For example, the representative value of the tone curve shown in fig. 36 is γ = 0.94.
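A curve of this kind, together with the average-slope definition of γ, can be sketched as follows; the particular curve shape below is our own illustration in the spirit of fig. 36, not the patent's actual LUT, so the resulting γ differs from the 0.94 of the figure:

```python
def tone_curve(x):
    """Illustrative monotone tone curve on the normalized [0, 1] log axis:
    values at or above 0.5 pass through unchanged; values below 0.5 are
    lifted, with smaller values lifted more (our own shape, not fig. 36)."""
    return x if x >= 0.5 else x + (0.5 - x) ** 2

def representative_gamma(curve, n=1000):
    """Representative value gamma as the average slope of the curve over
    [0, 1], approximated on a uniform grid of n intervals."""
    return sum((curve((i + 1) / n) - curve(i / n)) * n for i in range(n)) / n
```

For this illustrative curve the darkest input 0 maps to 0.25 and the average slope comes out to 0.75, i.e. γ < 1, which is exactly the contrast loss that the gain value g(p) later compensates.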
Referring back to fig. 34, the smoothed luminance generating section 522 smooths the tone-curve-corrected image L_c and outputs the luminance L_c1(p) of the obtained smoothed image to the contrast correction section 524. Fig. 37 shows an exemplary structure of the smoothed luminance generating section 522.
The reduced image generation section 541 of the smoothed luminance generating section 522 classifies the pixels of the tone-curve-corrected image L_c received from the tone curve correction section 521 into m × n blocks according to pixel position, and generates a reduced image L_c1 which has, as its pixel values, the averages of the pixel luminances classified into the individual blocks. The reduced image memory 542 holds the thus-generated reduced image L_c1 of m × n pixels. The interpolation section 543 calculates the luminance at sequentially designated pixel positions by interpolation using the pixels of the reduced image held by the reduced image memory 542, and outputs the obtained interpolated value L_c1(p) as the pixel luminance of the smoothed image to the contrast correction section 524. It should be noted here that p = (x, y) is a coordinate or vector representing a pixel position. The size of the smoothed image output from the interpolation section 543 is equal to that of the tone-curve-corrected image L_c.
That is, in the smoothed luminance generating section 522, the tone-curve-corrected image L_c is reduced to generate the reduced image L_c1, and the luminance of the smoothed image is calculated pixel by pixel by an interpolation operation using the held reduced image L_c1.
Although relatively large-scale filtering is generally necessary for an effective gray scale compression process, the smoothed luminance generating section 522 requires only the reduced image memory 542 for holding a reduced image of m × n pixels.
Fig. 38 shows an example structure of the reduced image generation section 541 shown in fig. 37. The classification section 551 of the reduced image generation section 541 classifies the pixels of the tone-curve-corrected image L_c received from the tone curve correction section 521 into m × n blocks according to pixel position, and then supplies them to the average value calculating sections 552-1 to 552-N (N = m × n). For example, the pixels classified into the first block are supplied to the average value calculating section 552-1, and those classified into the second block are supplied to the average value calculating section 552-2. In the following description, the simple notation "average value calculating section 552" is used when it is not necessary to distinguish the individual average value calculating sections 552-1 to 552-N.
The average value calculating section 552-i (i = 1, 2, ..., N) calculates the average luminance of the pixels of the tone-curve-corrected image L_c classified into the ith block, and outputs it to the composition section 553. The composition section 553 generates a reduced image L_c1 of m × n pixels which has, as its pixel values, the average values of the luminances received from the respective average value calculating sections 552-i.
Fig. 39 shows an example structure of the average value calculating section 552 shown in fig. 38. The adder 561 of the average value calculating section 552 adds the luminance of the tone-curve-corrected image L_c received from the classification section 551 in the preceding stage to the value held by the register (r) 562, thereby updating the value held by the register (r) 562. The divider 563 divides the value finally held by the register 562 by the number Q of pixels constituting one block, thereby calculating the average luminance of the Q pixels classified into the block.
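The adder/register/divider arrangement of fig. 39 is a running-sum accumulator. A small sketch (the class and its pixel counter are our modeling choices; in the hardware, Q is a fixed constant per block):

```python
class AverageRegister:
    """Sketch of fig. 39: the adder 561 accumulates incoming luminances into
    the register 562, and the divider 563 divides the final value by the
    number of accumulated pixels (the block size Q)."""

    def __init__(self):
        self.register = 0.0  # register (r) 562
        self.count = 0       # stands in for the fixed block size Q

    def add(self, luminance):
        """Adder 561: fold one luminance into the running sum."""
        self.register += luminance
        self.count += 1

    def average(self):
        """Divider 563: final sum divided by the pixel count."""
        return self.register / self.count
```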
Fig. 40 shows an example structure of the interpolation section 543 shown in fig. 37. Upon receiving the interpolation position p, the vicinity selection section 571 of the interpolation section 543 acquires the luminances a[4][4] of the 4 × 4 pixels in the vicinity of the interpolation position p from the reduced image L_c1 of m × n pixels held by the reduced image memory 542, and outputs them to the product-sum section 574.
Here, the symbol a[i][j] denotes two-dimensional array data of i × j pixel values a. The vicinity selection section 571 also outputs the horizontal displacement dx and the vertical displacement dy between the acquired luminances a[4][4] and the interpolation position p to the horizontal coefficient calculation section 572 and the vertical coefficient calculation section 573, respectively.
It should be noted that the relationships between the interpolation position p, the adjacent luminances a [4] [4], and the amounts of displacements dx, dy are similar to those described above with reference to fig. 11, and therefore the explanation will be omitted.
The horizontal coefficient calculation section 572 calculates the cubic interpolation coefficients kx[4] in the horizontal direction based on the horizontal displacement dx received from the vicinity selection section 571. Similarly, the vertical coefficient calculation section 573 calculates the cubic interpolation coefficients ky[4] in the vertical direction based on the vertical displacement dy received from the vicinity selection section 571.
The cubic interpolation coefficients kx[4] in the horizontal direction are typically calculated using formula (1) above.
The cubic interpolation coefficients ky[4] in the vertical direction are typically calculated using formula (2) above.
It should be noted that the cubic interpolation coefficients kx[4] and ky[4] may be calculated using any calculation formula other than formulas (1) and (2) shown above, as long as sufficiently smooth interpolation is obtained.
Using equation (3) described above, the product sum section 574 calculates the interpolated value Lc1(p) of the reduced image Lc1 at the interpolation position p as the product sum of the adjacent pixel values a[4][4], the horizontal interpolation coefficients kx[4], and the vertical interpolation coefficients ky[4].
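For illustration, the neighbourhood product sum can be sketched in Python as follows. Since formulas (1) to (3) are not reproduced in this passage, the sketch substitutes the common Catmull-Rom cubic kernel, which is one choice permitted by the note above that any sufficiently smooth kernel may be used.

```python
import numpy as np

def cubic_coeffs(d: float) -> np.ndarray:
    """Cubic interpolation coefficients for the 4 neighbours at offsets
    -1, 0, 1, 2 relative to the interpolation position (displacement d).
    Catmull-Rom kernel as a stand-in for the patent's formulas (1)/(2)."""
    def k(t):
        t = abs(t)
        if t < 1:
            return 1.5 * t**3 - 2.5 * t**2 + 1
        if t < 2:
            return -0.5 * t**3 + 2.5 * t**2 - 4 * t + 2
        return 0.0
    return np.array([k(d + 1), k(d), k(d - 1), k(d - 2)])

def interpolate(a: np.ndarray, dx: float, dy: float) -> float:
    """Product sum of section 574: a is the 4x4 neighbourhood a[4][4],
    dx/dy the horizontal/vertical displacements of position p."""
    kx = cubic_coeffs(dx)   # horizontal coefficients kx[4]
    ky = cubic_coeffs(dy)   # vertical coefficients ky[4]
    return float(ky @ a @ kx)
```

Because the coefficients sum to 1 for any displacement, interpolating a flat neighbourhood reproduces its value exactly, which is a quick sanity check on the kernel.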
Referring back to fig. 34, the gain value setting section 523 calculates, for each pixel position, the gain value g(p) for adjusting the amount of contrast correction applied in the contrast correction section 524, based on the representative value γ received from the tone curve correction section 521, and outputs it to the contrast correction section 524.
The gain value g(p) will be explained below. For a gain value g(p) = 1, the contrast correction section 524 neither enhances nor suppresses the contrast. For a gain value g(p) > 1, the contrast is enhanced according to the value. Conversely, for a gain value g(p) < 1, the contrast is suppressed according to the value.
The outline of the gain value setting by the gain value setting section 523 is similar to that by the gain value setting section 93 described above, and the explanation will therefore be omitted.
Fig. 41 shows an example structure of the gain value setting section 523. The divider 581 calculates the reciprocal g0 = 1/γ of the representative value γ received from the preceding stage, and outputs it to the subtractor 582. The subtractor 582 calculates (g0 - 1) and outputs it to the multiplier 588.
The subtractor 583 calculates the difference (Lc - Lgray) between the luminance Lc of the tone curve corrected image and the luminance Lgray of a medium gray level, and outputs it to the divider 585. The subtractor 584 calculates the difference (Lwhite - Lgray) between the luminance Lwhite of the white clipping level and the luminance Lgray, and outputs it to the divider 585. The divider 585 divides the output (Lc - Lgray) of the subtractor 583 by the output (Lwhite - Lgray) of the subtractor 584, and outputs the quotient to the absolute value calculator 586. The absolute value calculator 586 calculates the absolute value of the output of the divider 585, and outputs it to the limiter 587. The limiter 587 clips the output of the absolute value calculator 586 to 1 when it exceeds 1, leaves it unchanged otherwise, and outputs the result as attn(p) to the multiplier 588.
The multiplier 588 multiplies the output of the subtractor 582 by the output of the limiter 587, and outputs the product to the adder 589. The adder 589 adds 1 to the output of the multiplier 588, and outputs the result to the subsequent stage as the gain value g(p).
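Taken together, the circuit of fig. 41 computes g(p) = 1 + (1/γ - 1)·attn(p). A minimal Python sketch follows; the concrete gray and white levels Lgray and Lwhite are left as parameters, since their values are not fixed in this passage.

```python
def gain_value(l_c: float, gamma: float, l_gray: float, l_white: float) -> float:
    """Gain value setting section 523 (fig. 41), as a sketch:
    g(p) = 1 + (1/gamma - 1) * attn(p), where attn(p) vanishes at the
    medium gray level and saturates at 1 beyond the white clipping level."""
    g0 = 1.0 / gamma                              # divider 581
    ratio = (l_c - l_gray) / (l_white - l_gray)   # subtractors 583/584, divider 585
    attn = min(abs(ratio), 1.0)                   # absolute value 586, limiter 587
    return 1.0 + (g0 - 1.0) * attn                # subtractor 582, multiplier 588, adder 589

g = gain_value(l_c=0.5, gamma=0.67, l_gray=0.5, l_white=1.0)
# At the medium gray level attn = 0, so g == 1 (no enhancement).
```

With γ < 1 (gray scale compression), g0 > 1, so the contrast weakened by the tone curve is re-enhanced most strongly away from the medium gray level.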
Referring back to fig. 34, the contrast correction section 524 corrects the tone curve corrected image Lc to enhance the contrast weakened by the tone curve correction, based on the gain value g(p) for each pixel position p received from the gain value setting section 523 and the luminance Lc1(p) of the smoothed image received from the smoothed luminance generating section 522, thereby generating a gray-scale compressed image Lu.
Fig. 42 shows an example structure of the contrast correction section 524. The subtractor 591 of the contrast correction section 524 calculates the difference (Lc(p) - Lc1(p)) between the luminance Lc(p) of each pixel of the tone curve corrected image Lc and the luminance of the corresponding pixel of the smoothed image (i.e., the interpolated value Lc1(p) of the reduced image), and outputs it to the multiplier 592. The multiplier 592 calculates the product of the output of the subtractor 591 and the gain value g(p) received from the gain value setting section 523, and outputs the result to the adder 593. The adder 593 adds the luminance of the smoothed image (the interpolated value Lc1(p) of the reduced image) to the output of the multiplier 592, and outputs the resulting luminance Lu(p) to the subsequent stage as the luminance of a pixel constituting the contrast-corrected, gray-scale compressed image Lu.
It should be noted that the pixel luminance of the smoothed image (the interpolated value Lc1(p) of the reduced image) is an interpolated value based on m × n pixels, and thus contains only the very low frequency components of the image Lc before reduction.
Thus, the output (Lc(p) - Lc1(p)) of the subtractor 591 equals the luminance obtained by subtracting only the very low frequency component from the original tone curve corrected image Lc. As described above, the luminance Lu(p) of the contrast-corrected, gray-scale compressed image is obtained by dividing the luminance signal into its very low frequency component and the remaining components, multiplying the remaining components by the gain value g(p) to enhance the contrast, and synthesizing the two again using the adder 593.
As described above, the contrast correction section 524 is configured so that the same gain value g(p) enhances the components in the low-to-intermediate frequency region and the high frequency region, i.e., everything other than the very low frequency region. This makes it possible to obtain an image whose contrast is enhanced very naturally to the eye, without generating the local overshoots at edge portions that are noticeable when only the high frequency component is enhanced.
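The arithmetic of fig. 42 is compact enough to sketch directly. This is an illustration, not the patent's implementation; Lc1 is assumed to be already interpolated to a per-pixel value.

```python
import numpy as np

def contrast_correct(l_c: np.ndarray, l_c1: np.ndarray, g: np.ndarray) -> np.ndarray:
    """Contrast correction section 524 (fig. 42):
    Lu(p) = g(p) * (Lc(p) - Lc1(p)) + Lc1(p).
    The very-low-frequency part Lc1 passes through unchanged; all
    components above it are scaled by g(p)."""
    return g * (l_c - l_c1) + l_c1

l_c = np.array([0.2, 0.5, 0.8])
l_c1 = np.array([0.5, 0.5, 0.5])   # smoothed (very low frequency) luminance
g = np.array([1.5, 1.5, 1.5])
l_u = contrast_correct(l_c, l_c1, g)
# Detail around the local mean is amplified 1.5x: [0.05, 0.5, 0.95]
```

Note that with g = 1 the input passes through unchanged, matching the behavior described for g(p) = 1 above.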
Next, details of the gray-scale compressed image generation processing (i.e., the processing in step S102 described above with reference to the flowchart in fig. 33) by the image processing apparatus 510 according to the first exemplary structure will be explained with reference to the flowchart in fig. 43.
In step S111, the tone curve correction section 521 corrects the luminance of the wide DR image L received from the camera 502 based on the LUT obtained in advance, and outputs the obtained tone curve corrected image Lc to the smoothed luminance generating section 522, the gain value setting section 523, and the contrast correction section 524. The tone curve correction section 521 also outputs the representative value γ, representing the slope of the tone curve used for the correction, to the gain value setting section 523.
In step S112, the smoothed luminance generating section 522 reduces the tone curve corrected image Lc, thereby generating a reduced image Lc1, and further calculates the pixel luminance Lc1(p) of the smoothed image by an interpolation operation using the reduced image Lc1, and outputs the result to the contrast correction section 524.
In step S113, the gain value setting section 523 calculates, for each pixel position, the gain value g(p) for adjusting the amount of contrast correction applied in the contrast correction section 524, based on the representative value γ received from the tone curve correction section 521, and outputs it to the contrast correction section 524.
It should be noted that the processing in step S112 and step S113 may be implemented in parallel.
In step S114, the contrast correction section 524 corrects the contrast of the tone curve corrected image Lc based on the gain value g(p) for each pixel position p received from the gain value setting section 523 and the luminance Lc1(p) of the smoothed image received from the smoothed luminance generating section 522, thereby calculating the pixel luminance Lu(p) of the gray-scale compressed image Lu. The contrast-corrected, gray-scale compressed image Lu obtained in this way has a contrast enhanced very naturally to the eye, without local overshoots at edge portions, which may be noticeable when only the high frequency components are enhanced. This concludes the explanation of the gray-scale compressed image generation processing by the first exemplary structure of the image processing apparatus 510.
Next, fig. 44 shows a second example structure of the image processing apparatus 510. The second exemplary structure adds, to the first exemplary structure shown in fig. 34, a logarithmic conversion section 601 in front of the tone curve correction section 521 for logarithmically converting the luminance of the wide DR image L received from the camera 502, and a logarithmic inverse conversion section 602 behind the contrast correction section 524 for inversely converting the output of the contrast correction section 524 from the logarithmic domain.
Any components constituting the second exemplary structure of the image processing apparatus 510 other than the logarithmic conversion section 601 and the logarithmic reverse conversion section 602 are equivalent to those of the first exemplary structure shown in fig. 34 and are given the same reference numerals, and thus the explanation will be omitted. It should be noted here that, in the second example structure, the components from the tone curve correcting section 521 to the contrast correcting section 524 separately process the logarithmically converted luminance.
For example, the tone curve correction section 521 in the second example structure employs a tone curve as shown in fig. 4. Applying a monotonically increasing, gentle inverse-S-shaped tone curve such as that in fig. 4 does not cause a strong gray scale compression effect in the high luminance and low luminance regions, so that a desired tone with a lesser degree of clipped whites or crushed blacks can be obtained even after the gray scale compression. Conversely, the gray scale compression strongly affects the intermediate luminance region, which means that the contrast correction can be applied sufficiently there, yielding a contrast-corrected, gray-scale compressed image Lu with the desired contrast also in the intermediate luminance region. The representative value γ of this tone curve is 0.67.
Next, details of the gray-scale compressed image generation processing according to the second exemplary structure of the image processing apparatus 510 will be explained with reference to a flowchart in fig. 45.
In step S121, the logarithmic conversion section 601 subjects the luminance of the wide DR image L received from the camera 502 to logarithmic conversion, and outputs the obtained logarithmic wide DR image logL to the tone curve correction section 521.
In step S122, the tone curve correction section 521 corrects the luminance of the logarithmic wide DR image logL, typically based on a LUT obtained in advance corresponding to the tone curve shown in fig. 4, and outputs the obtained logarithmic tone curve corrected image logLc to the smoothed luminance generating section 522, the gain value setting section 523, and the contrast correction section 524. The tone curve correction section 521 also outputs the representative value γ, representing the slope of the tone curve used for the correction, to the gain value setting section 523.
In step S123, the smoothed luminance generating section 522 reduces the logarithmic tone curve corrected image logLc, thereby generating a logarithmic reduced image logLc1, and further calculates the pixel luminance logLc1(p) of the logarithmic smoothed image by an interpolation operation using the logarithmic reduced image logLc1, and outputs the result to the contrast correction section 524.
In step S124, the gain value setting section 523 calculates, for each pixel position, the gain value g(p) for adjusting the amount of contrast correction applied in the contrast correction section 524, based on the representative value γ received from the tone curve correction section 521, and outputs it to the contrast correction section 524.
It should be noted that the processing in step S123 and step S124 may be implemented in parallel.
In step S125, the contrast correction section 524 corrects the contrast of the logarithmic tone curve corrected image logLc based on the gain value g(p) for each pixel position p received from the gain value setting section 523 and the luminance logLc1(p) of the logarithmic smoothed image received from the smoothed luminance generating section 522, thereby calculating the pixel luminance logLu(p) of the logarithmic gray-scale compressed image logLu, and outputs it to the logarithmic inverse conversion section 602.
In step S126, the logarithmic inverse conversion section 602 subjects the pixel luminance logLu(p) of the logarithmic gray-scale compressed image logLu to inverse logarithmic conversion, and outputs the obtained Lu(p) as the pixel luminance of the gray-scale compressed image Lu.
The contrast-corrected, gray-scale compressed image Lu obtained in this way is free of a strong gray scale compression effect in the high luminance and low luminance regions, so that a desired tone with a lesser degree of clipped whites or crushed blacks can be obtained even after the gray scale compression. Conversely, the gray scale compression strongly affects the intermediate luminance region, which means that the contrast correction can be fully applied there, yielding a gray-scale compressed image Lu with the desired contrast also in the intermediate luminance region. This concludes the explanation of the gray-scale compressed image generation processing according to the second exemplary structure of the image processing apparatus 510.
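For orientation, the whole second structure can be condensed into a few lines of Python. This is a deliberately simplified sketch, not the patent's implementation: the gentle inverse-S tone curve is replaced by a plain power of slope γ in the log domain, the reduce-and-interpolate smoothing by a 1-D box blur, and the attenuation attn(p) is fixed at 1.

```python
import numpy as np

def compress_gray_scale(l: np.ndarray, gamma: float = 0.67) -> np.ndarray:
    """End-to-end sketch of the second structure (fig. 44), with the
    simplifications noted in the lead-in paragraph."""
    log_l = np.log(l)                          # logarithmic conversion 601
    log_lc = gamma * log_l                     # tone curve correction 521 (simplified)
    # smoothed luminance generation 522 (simplified to a 1-D box blur)
    kernel = np.ones(5) / 5
    log_lc1 = np.convolve(log_lc, kernel, mode="same")
    g = 1.0 / gamma                            # gain value 523 with attn = 1
    log_lu = g * (log_lc - log_lc1) + log_lc1  # contrast correction 524
    return np.exp(log_lu)                      # inverse logarithmic conversion 602
```

For a flat input the smoothed and corrected luminances coincide away from the borders, so the pipeline reduces to the tone curve alone, which is a convenient sanity check.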
As has been described above, the image processing apparatus 510 according to one embodiment of the present invention makes it possible to convert a wide DR image, having a dynamic range of luminance wider than usual, into a gray-scale compressed image displayable on the display 511, whose displayable dynamic range of luminance is narrow, without impairing its good appearance, using a structure that greatly reduces the large memory capacity (serving as frame memories and data delay lines) indispensable to conventional gray-scale compression techniques. It also makes it possible to obtain a gray-scale compressed image by no means inferior to one obtained by gray-scale compression processing conventionally realized using large-scale filtering.
Of course, the image processing apparatus 510 can also convert a wide DR image into a gray-scale compressed image suited to the dynamic range expressible on devices other than the display 511, such as printers and projectors.
The present invention is generally applicable to image signal processing circuits built not only into shooting devices such as digital video cameras and digital still cameras but also into presentation devices such as displays, printers, and projectors.
The series of processes described above may be executed by hardware or by software. When the series of processes is executed by software, a program constituting the software is installed from a recording medium onto a computer built into dedicated hardware or, for example, onto a general-purpose personal computer capable of executing various functions once various programs are installed.
Fig. 46 shows an example structure of a general-purpose personal computer. The personal computer 620 has a CPU (central processing unit) 621 built therein. The CPU 621 is connected to an input/output interface 625 via a bus 624. The bus 624 is connected to a ROM (read only memory) 622 and a RAM (random access memory) 623.
The input/output interface 625 is connected to the following components: an input section 626 constituted by input devices such as a keyboard and a mouse, through which a user inputs operation commands; an output section 627 for outputting a processing operation screen or a resulting image of the processing to a display device; a storage section 628 including a hard disk drive or the like for storing programs and various data; and an I/O interface 629 for exchanging image data with the camera 502 and the like. It is also connected to a drive 630 for writing data to or reading data from a recording medium such as a magnetic disk 631 (including a flexible disk), an optical disk 632 (including a CD-ROM (compact disc read only memory) and a DVD (digital versatile disc)), a magneto-optical disk 633 (including an MD (mini disc)), or a semiconductor memory 634.
The CPU 621 executes various processes in accordance with a program stored in the ROM 622, or a program read out from any of the magnetic disk 631 to the semiconductor memory 634, installed on the storage section 628, and loaded from the storage section 628 into the RAM 623. The RAM 623 also stores, as appropriate, data necessary for the CPU 621 to execute the various processes.
It should be understood that, in this patent specification, the steps describing the program recorded in the recording medium include not only processes performed in the described order on a time-series basis, but also processes executed in parallel or individually rather than necessarily on a time-series basis.
It should be understood that, in this specification, the entire contents of the claims, descriptions, drawings, and abstracts of Japanese patent applications Nos. 2003-003134 (filed January 9, 2003) and 2003-003135 (filed January 9, 2003) are incorporated herein by reference.
Industrial Applicability
As already described above, the present invention makes it possible to implement a gray scale compression technique that requires only a small memory capacity and light computational effort, allows easy hardware implementation, and guarantees a large gray scale compression effect.
It also makes it possible to appropriately enhance the contrast of an image with a smaller memory capacity, a smaller amount of calculation, and a simple hardware configuration.

Claims (7)

1. An image processing apparatus characterized by comprising:
reduced image generating means for generating a reduced image from an input image;
correction information acquisition means for acquiring correction information of the input image based on the reduced image; and
a gray scale converting means for converting a gray scale of the input image;
wherein the gray-scale converting means corrects the contrast of the input image using the correction information as processing to be performed before and/or after converting the gray-scale.
2. The image processing apparatus according to claim 1, characterized by further comprising:
smoothing means for generating a smoothed image in which the luminances Lc of the pixels constituting the input image are smoothed based on an interpolation calculation using the pixels constituting the reduced image,
wherein the gray scale conversion means generates a contrast-corrected image based on the luminance Lc of the pixels constituting the input image, the luminance L1 of the pixels constituting the smoothed image, and a predetermined gain value g.
3. The image processing apparatus according to claim 1, characterized by further comprising:
smoothing means for generating a smoothed image in which the luminances Lc of the pixels constituting the input image are smoothed based on an interpolation calculation using the pixels constituting the reduced image; and
gain value setting means for setting a gain value g for correcting the contrast;
wherein the gray scale conversion means generates a contrast-corrected image based on the luminance Lc of the pixels constituting the input image, the luminance L1 of the pixels constituting the smoothed image, and the gain value g; and
the gain value setting means sets the gain value g based on an input initial gain value g0, a reference gain value 1, and an attenuation value attn(Th1, Th2, Lc) calculated using a first luminance threshold Th1, a second luminance threshold Th2, and the luminance Lc of the pixels constituting the input image.
4. The image processing apparatus according to claim 1, characterized by further comprising:
conversion means for generating a tone-converted image by converting luminances L of pixels constituting an input image based on a conversion function;
smoothing means for smoothing the luminances Lc of the pixels constituting the tone-converted image to generate a smoothed image; and
gain value setting means for setting a gain value g for correcting the contrast based on an initial gain value g0 given as the reciprocal 1/γ of the slope γ representing the conversion function;
wherein the contrast correction means generates a contrast-corrected image based on the luminance Lc of the pixels constituting the tone-converted image, the luminance L1 of the pixels constituting the smoothed image, and the gain value g; and
the gain value setting means sets the gain value g based on the input initial gain value g0, the reference gain value 1, and an attenuation value attn(Th1, Th2, Lc) calculated using a first luminance threshold Th1, a second luminance threshold Th2, and the luminance Lc of the pixels constituting the tone-converted image.
5. The image processing apparatus according to claim 1, characterized in that:
the reduced image generating means generates a reduced image by converting an input image into a tone-converted image based on a conversion function and reducing the size of the tone-converted image;
the correction information acquisition means acquires correction information including a slope of the conversion function; and
the gray scale conversion means corrects the contrast of the tone-converted image based on the reduced image and the slope of the conversion function.
6. The image processing apparatus according to claim 5, characterized by further comprising:
holding means for holding the reduced image generated by the reduced image generating means and the correction information acquired by the correction information acquisition means;
wherein the holding means holds a reduced image corresponding to the image of the previous frame and a slope of a conversion function applied to the image of the previous frame, and
the gray scale conversion means corrects the contrast of the tone-converted image based on the reduced image of the previous frame and the slope of the conversion function both held in the holding means.
7. An image processing method characterized by comprising:
a reduced image generating step of generating a reduced image from an input image;
a correction information acquisition step of acquiring correction information of the input image based on the reduced image; and
a gray scale conversion step of converting a gray scale of an input image;
wherein the gray scale conversion step corrects the contrast of the input image using the correction information as processing to be performed before and/or after converting the gray scale.
CNB2003801004014A 2003-01-09 2003-12-10 Image processing device and method Expired - Fee Related CN100366052C (en)

Applications Claiming Priority (3)

Application Number Priority Date Filing Date Title
JP2003003134A JP4214457B2 (en) 2003-01-09 2003-01-09 Image processing apparatus and method, recording medium, and program
JP3134/2003 2003-01-09
JP3135/2003 2003-01-09

Publications (2)

Publication Number Publication Date
CN1692629A CN1692629A (en) 2005-11-02
CN100366052C true CN100366052C (en) 2008-01-30

Family

ID=32894487

Family Applications (1)

Application Number Title Priority Date Filing Date
CNB2003801004014A Expired - Fee Related CN100366052C (en) 2003-01-09 2003-12-10 Image processing device and method

Country Status (2)

Country Link
JP (1) JP4214457B2 (en)
CN (1) CN100366052C (en)

Families Citing this family (26)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2006098356A1 (en) * 2005-03-15 2006-09-21 Omron Corporation Image processor, image processing method, program and recording medium
EP1871093A4 (en) * 2005-03-15 2009-09-02 Omron Tateisi Electronics Co Image processor, image processing method, image processing system, program and recording medium
JP4831067B2 (en) 2005-06-20 2011-12-07 株式会社ニコン Image processing apparatus, image processing method, image processing program product, and imaging apparatus
US7426312B2 (en) * 2005-07-05 2008-09-16 Xerox Corporation Contrast enhancement of images
JP4687320B2 (en) 2005-08-11 2011-05-25 ソニー株式会社 Image processing apparatus and method, recording medium, and program
JP4992379B2 (en) * 2005-10-24 2012-08-08 株式会社ニコン Image gradation conversion apparatus, program, electronic camera, and method thereof
JP2007295373A (en) * 2006-04-26 2007-11-08 Matsushita Electric Ind Co Ltd Image processing device, method, and program
JP5003196B2 (en) 2007-02-19 2012-08-15 ソニー株式会社 Image processing apparatus and method, and program
JP4950784B2 (en) * 2007-07-06 2012-06-13 キヤノン株式会社 Image forming apparatus and control method thereof
EP2216988B1 (en) 2007-12-04 2013-02-13 Sony Corporation Image processing device and method, program, and recording medium
TWI385638B (en) * 2007-12-21 2013-02-11 Wintek Corp Method for processing image, method and device for converting data of image
CN101546423B (en) * 2008-03-24 2011-05-04 鸿富锦精密工业(深圳)有限公司 Device and method for image interception
CN101588436B (en) * 2008-05-20 2013-03-27 株式会社理光 Method, device and digital camera for compressing dynamic range of original image
CN102090054B (en) * 2008-07-17 2013-09-11 株式会社尼康 Imaging device, image processing program, image processing device, and image processing method
JP5493717B2 (en) 2009-10-30 2014-05-14 大日本印刷株式会社 Image processing apparatus, image processing method, and image processing program
JP5569042B2 (en) * 2010-03-02 2014-08-13 株式会社リコー Image processing apparatus, imaging apparatus, and image processing method
JP5991486B2 (en) * 2010-08-04 2016-09-14 日本電気株式会社 Image processing method, image processing apparatus, and image processing program
JP5966603B2 (en) 2011-06-28 2016-08-10 大日本印刷株式会社 Image processing apparatus, image processing method, image processing program, and recording medium
JP5820213B2 (en) * 2011-09-26 2015-11-24 キヤノン株式会社 Image processing apparatus and method, and imaging apparatus
JP5852385B2 (en) * 2011-09-26 2016-02-03 キヤノン株式会社 Imaging device
JP5815386B2 (en) * 2011-12-02 2015-11-17 富士フイルム株式会社 Image processing apparatus, image processing method, and program
JP2015139082A (en) 2014-01-22 2015-07-30 ソニー株式会社 Image processor, image processing method, program and electronic apparatus
JP6309777B2 (en) * 2014-02-10 2018-04-11 シナプティクス・ジャパン合同会社 Display device, display panel driver, and display panel driving method
EP3054418A1 (en) * 2015-02-06 2016-08-10 Thomson Licensing Method and apparatus for processing high dynamic range images
CN110168604B (en) * 2017-02-16 2023-11-28 奥林巴斯株式会社 Image processing apparatus, image processing method, and storage medium
JP2019028537A (en) 2017-07-26 2019-02-21 キヤノン株式会社 Image processing apparatus and image processing method

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000137805A (en) * 1998-10-29 2000-05-16 Canon Inc Processor and method for image processing
US20010041018A1 (en) * 2000-01-31 2001-11-15 Fumihiro Sonoda Image processing method
US20020008762A1 (en) * 2000-04-28 2002-01-24 Fumito Takemoto Method, apparatus and recording medium for image processing
JP2002238016A (en) * 2001-02-07 2002-08-23 Minolta Co Ltd Image processor, image processing system, image processing method, image processing program, and computer- readable recording medium recorded with the image processing program
JP2002269582A (en) * 2001-03-07 2002-09-20 Namco Ltd Game information, information storage medium, and game device


Also Published As

Publication number Publication date
JP2004221644A (en) 2004-08-05
CN1692629A (en) 2005-11-02
JP4214457B2 (en) 2009-01-28

Similar Documents

Publication Publication Date Title
CN100366052C (en) Image processing device and method
KR101051604B1 (en) Image processing apparatus and method
JP4687320B2 (en) Image processing apparatus and method, recording medium, and program
JP4595330B2 (en) Image processing apparatus and method, recording medium, and program
JP4894595B2 (en) Image processing apparatus and method, and program
JP5003196B2 (en) Image processing apparatus and method, and program
US6965416B2 (en) Image processing circuit and method for processing image
US7023580B2 (en) System and method for digital image tone mapping using an adaptive sigmoidal function based on perceptual preference guidelines
US7755670B2 (en) Tone-conversion device for image, program, electronic camera, and tone-conversion method
KR20080035981A (en) Image processing apparatus, imaging apparatus, image processing method, and computer program
US20090073278A1 (en) Image processing device and digital camera
US6137541A (en) Image processing method and image processing apparatus
JP2011040088A (en) Method and system for luminance change-adaptive noise filtering
JP4479527B2 (en) Image processing method, image processing apparatus, image processing program, and electronic camera
JP4161719B2 (en) Image processing apparatus and method, recording medium, and program
US8139856B2 (en) Image processing apparatus, imaging apparatus, and computer readable medium irregularizing pixel signals continuous in level near a saturation level
JP2007180851A (en) Gray scale transformation device, program and method for raw image, and electronic camera
JP2011100204A (en) Image processor, image processing method, image processing program, imaging apparatus, and electronic device
US10387999B2 (en) Image processing apparatus, non-transitory computer-readable medium storing computer program, and image processing method
JP3201049B2 (en) Gradation correction circuit and imaging device
JP4632100B2 (en) Image processing apparatus, image processing method, recording medium, and program
JP2000149014A (en) Image processor and image processing method
JP7332325B2 (en) IMAGE PROCESSING DEVICE, IMAGING DEVICE, IMAGE PROCESSING METHOD, AND PROGRAM
JP2005004767A (en) Method and system for filtering noise adaptively to luminance change

Legal Events

Date Code Title Description
C06 Publication
PB01 Publication
C10 Entry into substantive examination
SE01 Entry into force of request for substantive examination
C14 Grant of patent or utility model
GR01 Patent grant
CF01 Termination of patent right due to non-payment of annual fee

Granted publication date: 20080130

Termination date: 20151210

EXPY Termination of patent right or utility model