WO2007142048A1 - Video signal processing device, video signal processing method, and video signal processing program - Google Patents
- Publication number
- WO2007142048A1 (PCT/JP2007/060758, JP2007060758W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- video signal
- distance information
- gradation conversion
- correction coefficient
- histogram
- Prior art date
Links
- 238000012545 processing Methods 0.000 title claims abstract description 114
- 238000003672 processing method Methods 0.000 title claims description 17
- 238000006243 chemical reaction Methods 0.000 claims abstract description 169
- 238000012937 correction Methods 0.000 claims abstract description 73
- 238000000034 method Methods 0.000 claims abstract description 71
- 238000004364 calculation method Methods 0.000 claims abstract description 43
- 238000003384 imaging method Methods 0.000 claims abstract description 43
- 238000012546 transfer Methods 0.000 abstract description 6
- 238000005259 measurement Methods 0.000 abstract 1
- 238000010586 diagram Methods 0.000 description 21
- 238000011156 evaluation Methods 0.000 description 14
- 238000007906 compression Methods 0.000 description 12
- 230000006835 compression Effects 0.000 description 11
- 238000000605 extraction Methods 0.000 description 11
- 238000010606 normalization Methods 0.000 description 8
- 238000005375 photometry Methods 0.000 description 8
- 230000001186 cumulative effect Effects 0.000 description 7
- 230000000694 effects Effects 0.000 description 7
- 230000003044 adaptive effect Effects 0.000 description 5
- 230000010354 integration Effects 0.000 description 5
- 239000000284 extract Substances 0.000 description 3
- 238000009825 accumulation Methods 0.000 description 2
- 230000003247 decreasing effect Effects 0.000 description 2
- 230000015556 catabolic process Effects 0.000 description 1
- 230000007423 decrease Effects 0.000 description 1
- 238000006731 degradation reaction Methods 0.000 description 1
- 230000006866 deterioration Effects 0.000 description 1
- 230000002349 favourable effect Effects 0.000 description 1
- 230000000704 physical effect Effects 0.000 description 1
- 238000012805 post-processing Methods 0.000 description 1
- 238000003825 pressing Methods 0.000 description 1
- 230000035945 sensitivity Effects 0.000 description 1
- 230000007704 transition Effects 0.000 description 1
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/90—Dynamic range modification of images or parts thereof
- G06T5/92—Dynamic range modification of images or parts thereof based on global image properties
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T5/00—Image enhancement or restoration
- G06T5/40—Image enhancement or restoration using histogram techniques
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N1/00—Scanning, transmission or reproduction of documents or the like, e.g. facsimile transmission; Details thereof
- H04N1/40—Picture signal circuits
- H04N1/407—Control or modification of tonal gradation or of extreme levels, e.g. background level
- H04N1/4072—Control or modification of tonal gradation or of extreme levels, e.g. background level dependent on the contents of the original
- H04N1/4074—Control or modification of tonal gradation or of extreme levels, e.g. background level dependent on the contents of the original using histograms
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/70—Circuitry for compensating brightness variation in the scene
- H04N23/71—Circuitry for evaluating the brightness variation
-
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/82—Camera processing pipelines; Components thereof for controlling camera response irrespective of the scene brightness, e.g. gamma correction
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10016—Video; Image sequence
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20004—Adaptive image processing
Definitions
- Video signal processing device, video signal processing method, and video signal processing program
- The present invention relates to a video signal processing device, a video signal processing method, and a video signal processing program for performing signal processing on a video signal, and in particular to a video signal processing device, a video signal processing method, and a video signal processing program for performing gradation conversion processing on a video signal.
- In electronic cameras (video signal processing devices) such as general digital cameras and video cameras, in order to prevent image quality degradation due to rounding errors such as gradation skips in digital signal processing, the gradation bit width of the signal in the input and processing systems may be set wider than the gradation bit width of the output signal. In this case, it is necessary to perform gradation conversion on the input signal so as to match the gradation bit width of the output system.
- Japanese Patent Laid-Open No. 2001-118062 discloses an example in which a video signal is divided into a plurality of regions based on texture information and gradation conversion is performed adaptively on each divided region.
- Japanese Patent Laid-Open No. 5-68205 discloses an example in which a main subject region is detected using distance information to the subject, and the exposure amount at the time of shooting is controlled with emphasis on the detected region.
- In the method of Japanese Patent Laid-Open No. 5-68205, the main subject region is extracted based on distance measurement information and the exposure amount is controlled with emphasis on the extracted region, so that exposure appropriate to the main subject can be obtained.
- In the prior art, gradation compression is applied to the video signal as a whole on average, or fixed gradation conversion such as γ conversion is performed. In this case, when the scene differs from a standard scene, for example in backlight, there is a problem that the main subject does not receive an appropriate gradation and the image is not subjectively preferable.
- In the method of Japanese Patent Laid-Open No. 2001-118062, gradation conversion is performed independently for each region, so a preferable image can be obtained even in a scene with a large contrast ratio such as backlight; however, no restrictions are placed on the individual gradation conversions, so extreme gradation conversion may occur and side effects such as an increase in noise components, color reproduction failure, and image quality deterioration due to gradation skipping become new problems.
- Furthermore, the same gradation conversion is applied when regions in the foreground and background of the image have similar histogram distributions, resulting in a flat image with no sense of contrast.
- In view of the above problems, an object of the present invention is to provide a video signal processing device, a video signal processing method, and a video signal processing program that perform high-quality gradation conversion appropriate to the subject for a video signal.
- To achieve the above object, the video signal processing apparatus of the present invention is a video signal processing apparatus that performs gradation conversion processing on a video signal of an imaging target obtained from an imaging means, and comprises:
- distance information acquisition means for acquiring distance information between the imaging target and the imaging means; and
- gradation conversion means for performing gradation conversion processing on a target pixel of the video signal using the distance information.
- Embodiments relating to the present invention correspond to the first embodiment shown in FIGS. 1 to 7 and the second embodiment shown in FIGS. 8 to 13.
- The distance information acquisition means of the present invention corresponds to the distance information calculation unit 108 shown in FIGS. 1 and 8, and the gradation conversion means corresponds to the conversion unit 111 shown in FIGS. 1, 2, 5, and 7.
- A preferred application of the present invention is a video signal processing device in which the distance information calculation unit 108 shown in FIGS. 1 and 8 obtains the distance to the subject and the conversion unit 111 performs gradation conversion processing on each pixel of interest using the distance information.
- the present invention performs gradation conversion processing independently based on distance information for each target pixel of an input video signal. With such a configuration, a subjectively preferable video signal can be obtained even for a scene with a small contrast ratio.
- To achieve the above object, the video signal processing method of the present invention comprises a step of acquiring a video signal of an imaging target by an imaging means, a step of acquiring distance information between the imaging target and the imaging means, and a step of performing gradation conversion processing on a target pixel of the video signal using the distance information.
- (A) An application example of the video signal processing method of the present invention will be described together with its configuration.
- (A) corresponds to the first embodiment shown in FIGS. 1 to 7 and the second embodiment shown in FIGS. 8 to 13.
- The step of acquiring a video signal of an imaging target by an imaging means, which is part of the configuration of (A), corresponds to imaging by the CCD 102 shown in FIGS. 1 and 8.
- The “step of acquiring distance information between the imaging target and the imaging means” corresponds to the processing of the distance information calculation unit 108 in FIGS. 1 and 8.
- The “step of performing gradation conversion processing on the target pixel of the video signal using the distance information” corresponds to the processing of the conversion unit 111 in FIGS. 1 and 8.
- a preferred application example of (A) is the image processing method in the imaging device (video signal processing device) shown in FIGS. 1 and 8 as described above.
- However, (A) is not limited to the video signal processing method in the imaging apparatus having the configuration shown in FIGS. 1 and 8; it can be applied to any video signal processing device having a configuration for performing the processing of each of the above steps.
- In (A), the acquisition of the video signal of the imaging target by the imaging means, the acquisition of the distance information between the imaging target and the imaging means, and the gradation conversion processing of the target pixel of the video signal using the distance information are performed as a series of steps, so that a high-quality image suited to the subject can be generated with appropriate gradation and few side effects such as an increase in noise or color reproduction failure.
- (B) Another video signal processing method of the present invention comprises a step of acquiring a video signal of an imaging target by an imaging means, a step of acquiring distance information between the imaging target and the imaging means, a step of calculating a correction coefficient for a target pixel of the video signal using the distance information, and a step of adaptively performing gradation conversion processing on the target pixel using the correction coefficient.
- (B) corresponds to the first embodiment shown in FIGS. 1 to 7 and the second embodiment shown in FIGS. 8 to 13.
- The step of acquiring a video signal of an imaging target by an imaging means, which is part of the configuration of (B), corresponds to imaging by the CCD 102 shown in FIGS. 1 and 8.
- The “step of acquiring distance information between the imaging target and the imaging means” corresponds to the processing of the distance information calculation unit 108 in FIGS. 1 and 8.
- The step of calculating a correction coefficient for the target pixel of the video signal using the distance information corresponds to the processing of the correction coefficient calculation unit 112 in FIGS. 1 and 8.
- The step of adaptively performing gradation conversion processing on the target pixel using the correction coefficient corresponds to the processing of the conversion unit 111 in FIGS. 1 and 8.
- a preferable application example of (B) is the image processing method in the imaging apparatus (video signal processing apparatus) shown in FIGS. 1 and 8 as described above.
- However, (B) is not limited to the video signal processing method in the imaging device having the configuration shown in FIGS. 1 and 8; it can be applied to any video signal processing device having a configuration for performing the processing of each of the above steps.
- In (B), the acquisition of the video signal of the imaging target by the imaging means, the acquisition of the distance information between the imaging target and the imaging means, the calculation of the correction coefficient for the target pixel of the video signal using the distance information, and the adaptive gradation conversion processing of the target pixel using the correction coefficient are performed as a series of steps, so that a high-quality image suited to the subject can be generated with appropriate gradation and few side effects such as an increase in noise or color reproduction failure.
- (C) To achieve the above object, the video signal processing program of the present invention causes a computer to execute a procedure for reading an unprocessed video signal obtained at the time of shooting, a procedure for acquiring distance information between the shooting target and the shooting means, and a procedure for performing gradation conversion processing on a target pixel of the video signal using the distance information.
- (C) corresponds to the first embodiment shown in FIGS. 1 to 7 and the second embodiment shown in FIGS. 8 to 13.
- The “procedure for reading an unprocessed video signal at the time of shooting” corresponds to the computer reading the signal output from the CCD 102 shown in FIGS. 1 and 8 as unprocessed raw data, together with the shooting information output from the control unit 115 as header information.
- The “procedure for acquiring distance information between the shooting target and the shooting means” corresponds to the computer performing the processing of the distance information calculation unit 108 shown in FIGS. 1 and 8.
- The “procedure for performing gradation conversion processing on the target pixel of the video signal using the distance information” corresponds to the computer performing the processing of the conversion unit 111 in FIGS. 1 and 8.
- (D) Another video signal processing program of the present invention causes a computer to execute: a procedure for reading an unprocessed video signal obtained at the time of shooting; a procedure for calculating distance information from the signal-processed video signal; a procedure for calculating a correction coefficient based on the distance information; a procedure for sequentially extracting a local region centered on a pixel of interest from the signal-processed video signal; a procedure for creating a histogram of the extracted region; a procedure for performing clipping processing on the histogram based on the correction coefficient; a procedure for generating a gradation conversion curve by accumulating and normalizing the clipped histogram; and a procedure for performing gradation conversion processing on the pixel of interest based on the gradation conversion curve.
- (D) corresponds to the first embodiment shown in FIG. 6 and the second embodiment shown in FIG. 13.
- The configuration of (D) corresponds to the steps in FIGS. 6 and 13.
- The “procedure for reading the unprocessed video signal at the time of shooting” corresponds to Step 1,
- the “procedure for calculating distance information from the signal-processed video signal” to Step 5,
- the “procedure for calculating the correction coefficient based on the distance information” to Step 6,
- the “procedure for sequentially extracting a local region centered on the pixel of interest from the signal-processed video signal” to Step 3,
- the “procedure for creating a histogram of the extracted region” to Step 4,
- the “procedure for performing clipping processing on the histogram based on the correction coefficient” to Step 7, the “procedure for generating a gradation conversion curve by accumulating and normalizing the clipped histogram” to Step 8, and the “procedure for performing gradation conversion processing on the target pixel based on the gradation conversion curve” to Step 9.
- FIG. 1 is a block diagram of the first embodiment.
- FIG. 2 is a configuration diagram of the conversion unit 111 in the first embodiment.
- FIG. 3 is an explanatory diagram of the clip processing.
- FIG. 4 is an explanatory diagram of the linear interpolation.
- FIG. 5 is a second configuration diagram of the conversion unit 111 in the first embodiment.
- FIG. 6 is a flowchart of the gradation conversion processing in the first embodiment.
- FIG. 7 is a third configuration diagram of the conversion unit 111.
- FIG. 8 is a configuration diagram of the second embodiment.
- FIG. 9 is a configuration diagram of the shooting situation estimation unit 117.
- FIG. 10 is an explanatory diagram of the division pattern for evaluation photometry.
- FIG. 11 is an explanatory diagram of the classification patterns of shooting scenes.
- FIG. 12 is a characteristic diagram of gradation curve settings for the classification patterns.
- FIG. 13 is a flowchart of the gradation conversion processing in the second embodiment.
- FIGS. 1 to 7 show the first embodiment of the present invention. FIG. 1 is a configuration diagram of the first embodiment, FIG. 2 is a configuration diagram of the conversion unit 111, FIG. 3 is an explanatory diagram of the clip processing, FIG. 4 is an explanatory diagram of the linear interpolation, FIG. 5 is a second configuration diagram of the conversion unit 111, FIG. 6 is a flowchart of the gradation conversion processing in the first embodiment, and FIG. 7 is a third configuration diagram of the conversion unit 111.
- The configuration of the first embodiment will be described with reference to FIG. 1.
- The image captured through the lens system 100, the aperture 101, and the CCD 102 is converted into a digital signal by the A/D converter 104 (hereinafter simply referred to as the A/D 104 in the description and drawings).
- The video signal from the A/D 104 is transferred to the photometric evaluation unit 106, the distance information calculation unit 108, and the signal processing unit 110 via the buffer 105.
- The photometric evaluation unit 106 is connected to the aperture 101 and the CCD 102, and the distance information calculation unit 108 is connected to the lens control unit 107 and the correction coefficient calculation unit 112.
- the lens control unit 107 is connected to the AF motor 103, and the infrared sensor 109 is connected to the distance information calculation unit 108.
- The correction coefficient calculation unit 112 is connected to the conversion unit 111, and the signal processing unit 110 is connected to the conversion unit 111.
- The conversion unit 111 is connected to the compression unit 113, and the compression unit 113 is connected to the output unit 114.
- The microcomputer control unit 115 is connected to the A/D 104, the photometric evaluation unit 106, the lens control unit 107, the distance information calculation unit 108, the infrared sensor 109, the signal processing unit 110, the conversion unit 111, the correction coefficient calculation unit 112, the compression unit 113, and the output unit 114.
- An external I/F unit 116, which includes a power switch, a shutter button, and an interface for switching various modes during shooting, is also bidirectionally connected to the control unit 115.
- Pre-shooting mode can be entered by selecting this option.
- The video signal captured through the lens system 100, the aperture 101, and the CCD 102 is converted into a digital signal by the A/D 104 and transferred to the buffer 105.
- the gradation width of the digitized video signal is, for example, 12 bits.
- the video signal in the buffer 105 is transferred to the photometric evaluation unit 106, the distance information calculation unit 108, and the signal processing unit 110.
- The photometric evaluation unit 106 obtains the luminance level of the video signal and controls the aperture 101, the electronic shutter speed of the CCD 102, and so on to achieve proper exposure.
- The distance information calculation unit 108 calculates distance information indicating the distance between the imaging device and the subject, using the external infrared sensor 109.
- Alternatively, contrast information of the video signal is detected, and an in-focus image is obtained by controlling the AF motor 103 so that the contrast is maximized at the position in the image for which distance information is desired.
- Distance information is then calculated from the lens position (or the AF motor control information) when the in-focus image is obtained. The calculated distance information is transferred to the lens control unit 107 and the correction coefficient calculation unit 112.
- Full shooting is performed by fully pressing the shutter button via the external I/F unit 116, and the video signal is transferred to the buffer 105 in the same way as in pre-shooting.
- Full shooting is performed based on the exposure conditions obtained by the photometric evaluation unit 106 and the focusing conditions obtained by the lens control unit 107, and these shooting conditions are transferred to the control unit 115.
- the video signal in the buffer 105 is transferred to the signal processing unit 110.
- Under the control of the control unit 115, the signal processing unit 110 reads the single-plate video signal in the buffer 105, performs known interpolation processing, white balance processing, enhancement processing, and the like to generate a video signal in the three-plate state, and transfers it to the conversion unit 111.
- The correction coefficient calculation unit 112 calculates a correction coefficient that changes the characteristics of the gradation conversion curve, based on the distance information indicating the distance between the imaging device and the subject obtained through the distance information calculation unit 108.
- The correction coefficient is calculated so that the gradient of the gradation conversion curve is suppressed monotonically with respect to the distance.
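- As a concrete illustration of the correction coefficient described above, the following sketch assumes a simple reciprocal form for k(d); the text only requires that k(d) decrease monotonically with the distance d and refers to its own equation (2), which is not reproduced here, so the exact functional form is an assumption.

```python
def correction_coefficient(distance_m, const=1.0):
    """Hypothetical correction coefficient k(d).

    The description only requires k(d) to decrease monotonically with the
    subject distance d; this reciprocal form stands in for the
    unreproduced equation (2).
    """
    return const / (const + max(distance_m, 0.0))

# Nearer subjects receive a larger coefficient (stronger local contrast).
for d in (0.5, 1.0, 2.0, 5.0, 10.0):
    print(f"d = {d:4.1f} m  ->  k(d) = {correction_coefficient(d):.3f}")
```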
- The distance information obtained by the distance information calculation unit 108 covers a plurality of points in the video signal.
- Distance information for positions corresponding to a plurality of regions on the image, each consisting of a single pixel or a plurality of pixels, is calculated either by using the external infrared sensor 109 or from the contrast of the video signal at those positions, so that distance information for a plurality of points is obtained.
- The conversion unit 111 sets a gradation conversion curve using the local region histogram and the correction coefficient calculated by the correction coefficient calculation unit 112, and performs gradation conversion processing on the video signal.
- The video signal after the gradation conversion processing is transferred to the compression unit 113.
- In the compression unit 113, the video signal obtained through the conversion unit 111 is subjected to compression processing such as the well-known JPEG, and the compressed video signal is transferred to the output unit 114.
- The output unit 114 records and stores the compressed signal on a memory card.
- FIG. 2 shows an example of the configuration of the conversion unit 111.
- The conversion unit 111 comprises a buffer 200, a local region extraction unit 201, a representative point extraction unit 202, a histogram creation unit 203, a clipping unit 204, a cumulative normalization unit 205, a gradation curve creation unit 206, and a gradation conversion unit 207.
- The signal processing unit 110 is connected to the buffer 200, and the buffer 200 and the representative point extraction unit 202 are connected to the local region extraction unit 201.
- The local region extraction unit 201 is connected to the histogram creation unit 203, and the histogram creation unit 203 and the correction coefficient calculation unit 112 are connected to the clipping unit 204.
- The clipping unit 204 is connected to the cumulative normalization unit 205, and the cumulative normalization unit 205 is connected to the gradation curve creation unit 206.
- The gradation curve creation unit 206 is connected to the gradation conversion unit 207, and the gradation conversion unit 207 is connected to the compression unit 113.
- The control unit 115 is bidirectionally connected to the local region extraction unit 201, the representative point extraction unit 202, the histogram creation unit 203, the clipping unit 204, the cumulative normalization unit 205, the gradation curve creation unit 206, and the gradation conversion unit 207.
- the video signal transferred from the signal processing unit 110 is stored in the buffer 200.
- The representative point extraction unit 202 extracts, via the control unit 115, one or more representative points of the video signal for which the distance information calculation unit 108 has obtained distance information to the subject.
- The local region extraction unit 201 extracts a rectangular region of a predetermined size centered on the pixel of each representative point extracted by the representative point extraction unit 202; in this example, a local region of 16 × 16 pixels is extracted.
- The histogram creation unit 203 creates a histogram for each local region extracted by the local region extraction unit 201 and transfers the created histogram information to the clipping unit 204.
- The clipping unit 204 clips the histogram created by the histogram creation unit 203, based on the correction coefficient obtained by the correction coefficient calculation unit 112 using the distance information of the representative point being processed.
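- The local-region histogram step can be pictured with the short sketch below; the 16 × 16 window follows the example in the text, while the image representation (a 2-D list of 12-bit luminance values) and the function names are illustrative assumptions rather than elements defined by the patent.

```python
def extract_local_region(image, cx, cy, size=16):
    """Return a size x size block of 12-bit values centered on (cx, cy),
    clamped to the image borders."""
    height, width = len(image), len(image[0])
    half = size // 2
    x0, y0 = max(cx - half, 0), max(cy - half, 0)
    x1, y1 = min(x0 + size, width), min(y0 + size, height)
    return [row[x0:x1] for row in image[y0:y1]]

def create_histogram(region, bits=12):
    """Histogram of the local region over the full 12-bit input range."""
    hist = [0] * (1 << bits)
    for row in region:
        for value in row:
            hist[value] += 1
    return hist
```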
- FIG. 3 is an explanatory diagram of the clip process.
- FIG. 3(a) shows the original histogram from the histogram creation unit 203, with the clip value indicated against the frequency on the vertical axis.
- FIG. 3(b) shows the histogram after clip processing, in which frequencies exceeding the clip value have been replaced by the clip value.
- FIG. 3(c) shows the gradation conversion curves obtained by accumulating and normalizing the original histogram and the clipped histogram.
- The clip processing suppresses the histogram frequencies and, as a result, the gradient of the gradation conversion curve. Setting the clip value lower in the clip processing lowers the gain applied to the signal and leaves the contrast in the local region largely unchanged, while setting the clip value higher increases the gain applied to the signal and has the effect of increasing the contrast in the local region.
- The clip value C is calculated by equation (1), where k(d) represents the correction coefficient for the distance d obtained from the correction coefficient calculation unit 112.
- The correction coefficient k(d) is preferably monotonically decreasing with respect to the distance d indicated by the distance information, and is expressed, for example, by the function of equation (2).
- The clipped histogram is transferred to the cumulative normalization unit 205.
- The cumulative normalization unit 205 creates a cumulative histogram by accumulating the clipped histogram and generates a gradation conversion curve by normalizing it according to the gradation width.
- The gradation conversion curve here has a 12-bit input and a 12-bit output.
- The gradation conversion curve is transferred to the gradation curve creation unit 206.
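- A minimal sketch of the clip-and-accumulate step follows. The clip value is assumed to scale with k(d), since equations (1) and (2) are referred to but not reproduced in this text, and the curve maps a 12-bit input to a 12-bit output as stated above.

```python
def clip_histogram(hist, clip_value):
    """Replace frequencies above the clip value with the clip value."""
    return [min(count, clip_value) for count in hist]

def histogram_to_curve(hist, out_max=4095):
    """Accumulate and normalize a histogram into a gradation conversion
    curve (12-bit input, 12-bit output)."""
    total = sum(hist) or 1
    curve, running = [], 0
    for count in hist:
        running += count
        curve.append(round(running * out_max / total))
    return curve

def tone_curve_for_representative_point(hist, k_d, base_clip=64):
    """Assumed form of equation (1): the clip value C scales with k(d)."""
    clipped = clip_histogram(hist, max(1, round(base_clip * k_d)))
    return histogram_to_curve(clipped)
```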
- The gradation curve creation unit 206 calculates a gradation conversion curve for every pixel of the video signal based on the gradation conversion curves at the plurality of points obtained by the cumulative normalization unit 205.
- The gradation conversion curve for each pixel is calculated, for example, by the linear interpolation processing shown in FIG. 4.
- The gradation conversion curve f(x, y) of the target pixel (x, y) is calculated from the curves of the four surrounding representative points using equation (3).
- The values of points A and B in FIG. 4 are first obtained by linear interpolation using the values to their left and right, and linear interpolation is then performed in the vertical direction between the obtained points A and B.
- Alternatively, the values of points C and D in the figure are obtained by linear interpolation using the values above and below them, and linear interpolation is then performed in the horizontal direction between points C and D to obtain the gradation conversion curve f(x, y) of the pixel (x, y).
- The calculated gradation conversion curve is transferred to the gradation conversion unit 207.
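- The interpolation of equation (3) amounts to bilinear interpolation between the curves of the four surrounding representative points; the sketch below assumes normalized weights for the pixel's position inside the rectangle formed by those points, since the exact notation of equation (3) is not reproduced in this text.

```python
def interpolate_curve(curves, wx, wy):
    """Bilinearly blend four representative-point curves.

    curves: dict with keys 'tl', 'tr', 'bl', 'br' (top/bottom, left/right),
            each a list of 4096 output values.
    wx, wy: horizontal / vertical position of the target pixel inside the
            rectangle, each normalized to [0, 1].
    """
    top = [(1 - wx) * a + wx * b for a, b in zip(curves['tl'], curves['tr'])]
    bottom = [(1 - wx) * a + wx * b for a, b in zip(curves['bl'], curves['br'])]
    return [round((1 - wy) * t + wy * b) for t, b in zip(top, bottom)]
```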
- The gradation conversion unit 207 performs gradation conversion processing on the pixel of interest in the buffer 200 based on the gradation conversion curve transferred from the gradation curve creation unit 206, and then performs division processing so that the result conforms to the gradation width at the time of output (assumed to be 8 bits in this embodiment).
- The 8-bit video signal is transferred to the compression unit 113.
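- Applying the per-pixel curve and reducing the 12-bit result to the 8-bit output width can be sketched as follows; a simple right shift by four bits is assumed for the division processing mentioned above.

```python
def convert_pixel(value_12bit, curve):
    """Apply the per-pixel gradation conversion curve, then reduce the
    12-bit output to 8 bits (assumed here to be a divide-by-16)."""
    converted = curve[value_12bit]      # 12-bit in, 12-bit out
    return min(255, converted >> 4)     # match the 8-bit output width
```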
- In the above description, the gradation conversion curve is calculated based on the local region histogram, but the configuration is not limited to this.
- FIG. 5 shows a configuration in which the local region extraction unit 201, the histogram creation unit 203, the clipping unit 204, and the cumulative normalization unit 205 are omitted from the conversion unit 111 shown in FIG. 2, and a gradation conversion curve ROM 208 and a gradation curve changing unit 209 are added.
- The basic configuration is the same as that of the conversion unit shown in FIG. 2, and the same components are given the same names and numbers. Only the differences are described below.
- The gradation conversion curve ROM 208 is connected to the gradation curve changing unit 209.
- The correction coefficient calculation unit 112 is connected to the gradation curve changing unit 209, and the gradation curve changing unit 209 is connected to the gradation curve creation unit 206.
- The gradation curve changing unit 209 changes the gradation conversion curve read from the gradation conversion curve ROM 208 using the correction coefficient, so that an adaptive gradation conversion curve corresponding to the distance to the subject can be obtained.
- Since this gradation conversion processing has the same basic configuration as conventional processing with correction processing added, compatibility with conventional devices is high and implementation is easy. Because the gradation conversion curve is set based on a histogram, gradation conversion can be performed adaptively for various scenes, and a high-quality video signal can be obtained even for scenes with a large contrast ratio. In addition, by imposing restrictions on the gradation characteristics independently for each region, side effects such as an increase in noise components and color reproduction failure can be suppressed.
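- For the FIG. 5 variant, the text does not spell out how the gradation curve changing unit 209 applies the correction coefficient to the fixed curve read from the gradation conversion curve ROM 208; the sketch below therefore assumes one plausible scheme, scaling the stored curve's deviation from the identity mapping by k(d), purely as an illustration.

```python
def change_rom_curve(rom_curve, k_d, out_max=4095):
    """Hypothetical gradation-curve changing step: the deviation of the
    stored curve from the identity mapping is scaled by k(d), so distant
    subjects (small k) receive a nearly linear curve."""
    n = len(rom_curve)
    identity = [round(i * out_max / (n - 1)) for i in range(n)]
    return [round(ident + k_d * (rom - ident))
            for ident, rom in zip(identity, rom_curve)]
```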
- FIG. 6 is a flowchart relating to the software processing of the gradation conversion processing in the first embodiment according to the present invention.
- In Step 1, the unprocessed video signal and header information including supplementary information such as imaging conditions are read. In Step 2, known interpolation processing, white balance processing, enhancement processing, and the like are performed.
- For the video signal processed in Step 2, distance information is calculated in Step 5, and the correction coefficient is calculated in Step 6 based on the distance information.
- Meanwhile, in Step 3, local regions centered on the representative points are sequentially extracted from the video signal processed in Step 2.
- In Step 4, a histogram of each extracted local region is created.
- In Step 7, clipping processing is performed on the histogram obtained in Step 4 based on the correction coefficient from Step 6.
- In Step 8, a gradation conversion curve is generated by accumulating and normalizing the clipped histogram. Steps 3 to 8 are performed for all representative points for which distance information can be calculated.
- In Step 9, gradation conversion processing is performed on the pixel of interest based on the gradation conversion curve from Step 8.
- In Step 10, it is determined whether all pixels of interest have been processed. If not, the process returns to Step 9; if all pixels of interest have been processed, the process proceeds to Step 11. In Step 11, compression processing such as the well-known JPEG is performed. In Step 12, the processed signal is output and the program ends.
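- The loop of Steps 3 to 10 can be summarized by the skeleton below, which simply strings together the sketch functions introduced earlier in this description (correction_coefficient, extract_local_region, create_histogram, tone_curve_for_representative_point, convert_pixel); the curve_for_pixel callback, standing in for the interpolation of Step 9, is an assumed parameter.

```python
def gradation_conversion_loop(processed, rep_points, curve_for_pixel):
    """Skeleton of Steps 3 to 10 of the flowchart (Steps 1-2 and 11-12,
    the raw read and the JPEG output, are omitted here).

    processed: 2-D list of 12-bit values after the Step 2 signal processing.
    rep_points: list of (x, y, distance_m) representative points (Step 5).
    curve_for_pixel: callback mapping (curves, x, y) to the interpolated
                     curve of the pixel, e.g. built on interpolate_curve().
    """
    curves = {}
    for x, y, distance in rep_points:
        k = correction_coefficient(distance)                   # Step 6
        region = extract_local_region(processed, x, y)         # Step 3
        hist = create_histogram(region)                        # Step 4
        curves[(x, y)] = tone_curve_for_representative_point(hist, k)  # Steps 7-8
    # Steps 9-10: convert every pixel of interest with its own curve.
    return [[convert_pixel(value, curve_for_pixel(curves, x, y))
             for x, value in enumerate(row)]
            for y, row in enumerate(processed)]
```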
- In the above example, the correction coefficient for each pixel is calculated based on the distance information to the subject, and the gradation conversion curve is set using the calculated correction coefficient; however, the present invention is not limited to this.
- a configuration can also be used in which the tone conversion curve is directly set using distance information without using a correction coefficient.
- For example, a method is conceivable in which gradation conversion curves corresponding to distance information values are prepared in advance, a gradation conversion curve corresponding to the distance information is selected for each pixel, and gradation conversion processing is performed using the selected gradation conversion curve.
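- One way to picture this correction-coefficient-free variant is a small set of preset curves indexed by distance range; the distance breakpoints and gamma values below are illustrative assumptions, not values taken from the patent.

```python
def make_gamma_curve(gamma, bits=12):
    """Build a simple gamma-shaped 12-bit-to-12-bit curve."""
    out_max = (1 << bits) - 1
    return [round(((v / out_max) ** gamma) * out_max) for v in range(1 << bits)]

# Hypothetical preset curves keyed by upper distance limit in metres.
PRESET_CURVES = [
    (1.0, make_gamma_curve(0.45)),    # closer than 1 m (assumed breakpoint)
    (5.0, make_gamma_curve(0.60)),    # 1 m to 5 m
    (None, make_gamma_curve(0.80)),   # beyond 5 m
]

def select_curve_by_distance(distance_m):
    for limit, curve in PRESET_CURVES:
        if limit is None or distance_m < limit:
            return curve
```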
- FIG. 7 shows an example of the configuration of the conversion unit 111 for performing the above processing.
- In this configuration, the correction coefficient calculation unit 112 and the gradation curve changing unit 209 are omitted from the configuration shown in FIG. 5, and a gradation curve setting unit 210 is added.
- The basic configuration is equivalent to that of the conversion unit 111 shown in FIG. 5, and the same components are given the same names and numbers. Only the differences from the configuration shown in FIG. 5 are described below.
- The gradation conversion curve ROM 208 is connected to the gradation curve setting unit 210.
- The gradation curve setting unit 210 is connected to the gradation curve creation unit 206, and the control unit 115 is connected to the gradation curve setting unit 210.
- In the gradation conversion curve ROM 208, the relationship between distance information and gradation conversion curves is recorded in advance.
- The gradation curve setting unit 210 reads, from the gradation conversion curve ROM 208, the gradation conversion curve corresponding to the distance information supplied from the control unit 115.
- The present invention is not limited to a configuration in which gradation conversion curves corresponding to distance information are stored in the gradation conversion curve ROM 208 as in this example.
- It is also possible to store in advance, as a table, the relationship between the distance information and luminance information and the output signal value after gradation conversion, and to perform gradation conversion processing by referring to the table based on the distance information and luminance information of the target pixel.
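- The table-based alternative can be read as a two-dimensional lookup keyed by distance class and quantized luminance; the class boundaries, bin width, and toy table contents below are assumptions intended only to show the access pattern.

```python
DISTANCE_LIMITS = [1.0, 5.0]   # metres; three classes: near / mid / far
LUMA_STEP = 256                # 4096 / 256 = 16 luminance bins

# Toy stand-in for the stored table: brighter output for nearer classes.
TABLE = [[min(255, (bin_index * 16 + 8) * (3 - cls) // 3)
          for bin_index in range(16)]
         for cls in range(3)]

def distance_class(distance_m):
    for cls, limit in enumerate(DISTANCE_LIMITS):
        if distance_m < limit:
            return cls
    return len(DISTANCE_LIMITS)

def lookup_output(distance_m, luminance_12bit):
    return TABLE[distance_class(distance_m)][luminance_12bit // LUMA_STEP]
```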
- FIGS. 8 to 13 show the second embodiment of the present invention.
- FIG. 8 is a configuration diagram of the second embodiment, FIG. 9 is a configuration diagram of the shooting situation estimation unit 117, FIG. 10 is an explanatory diagram of the division pattern for evaluation photometry, FIG. 11 is an explanatory diagram of the classification patterns of shooting scenes, FIG. 12 is a characteristic diagram of gradation curve settings for the classification patterns, and FIG. 13 is a flowchart of the gradation conversion processing in the second embodiment.
- The configuration of the second embodiment of the present invention will be described with reference to FIG. 8.
- This embodiment has a configuration in which a shooting situation estimation unit 117 is added to the first embodiment.
- the basic configuration is the same as in the first embodiment, and the same configuration is given the same name and number. Only the portions different from the first embodiment will be described below.
- The photometric evaluation unit 106 and the lens control unit 107 are connected to the shooting situation estimation unit 117.
- The shooting situation estimation unit 117 is connected to the correction coefficient calculation unit 112, and the control unit 115 is bidirectionally connected to the shooting situation estimation unit 117.
- the second embodiment is basically the same as the first embodiment, and only different parts will be described.
- The video signal in the three-plate state, which has been subjected to known interpolation processing, white balance processing, enhancement processing, and the like by the signal processing unit 110, is transferred to the conversion unit 111.
- The shooting situation estimation unit 117 estimates the shooting situation, such as landscape, portrait, or backlit, and transfers the estimated shooting situation to the correction coefficient calculation unit 112.
- The correction coefficient calculation unit 112 calculates the correction coefficient based on the distance information to the subject obtained by the distance information calculation unit 108 and the shooting situation information from the shooting situation estimation unit 117.
- The conversion unit 111 sets a gradation conversion curve using the local region histogram and the correction coefficient calculated by the correction coefficient calculation unit 112, and performs gradation conversion processing on the video signal.
- The video signal after the gradation conversion processing is transferred to the compression unit 113.
- FIG. 9 shows an example of the configuration of the shooting situation estimation unit 117.
- The shooting situation estimation unit 117 comprises a subject distribution estimation unit 300, an in-focus position estimation unit 301, and an integration unit 302.
- Information from the photometric evaluation unit 106 and the lens control unit 107 is transferred to the subject distribution estimation unit 300 and the in-focus position estimation unit 301.
- The subject distribution estimation unit 300 and the in-focus position estimation unit 301 are connected to the integration unit 302, and the integration unit 302 is connected to the correction coefficient calculation unit 112.
- The in-focus position estimation unit 301 obtains distance information from the distance information calculation unit 108 via the control unit 115. Based on the distance information at the in-focus position, the scene is classified into, for example, two types, landscape (5 m or more) and portrait (1 m to 5 m), and the classification result is transferred to the integration unit 302.
- FIG. 10 is an explanatory diagram showing an example of a division pattern for evaluation photometry. The image is divided into 13 regions, and the luminance value a_i of each region is obtained.
- The subject distribution estimation unit 300 calculates, for example, the parameter S of equation (5) from the luminance values a_i of the regions as photometric information, thereby evaluating the luminance distribution.
- S represents the difference between the luminance of the central region and that of the peripheral regions. For example, in portrait shooting, S takes a large positive value when shooting in a dark place and a large negative value when shooting in a backlit environment.
- the subject distribution estimation unit 300 calculates the photometry information as described above and transfers it to the integration unit 302.
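- A sketch of the photometric parameter follows. The text divides the frame into 13 regions and describes S as a center-versus-periphery luminance difference; which region indices form the central group, and the exact form of equation (5), are not given here, so the grouping below is an assumption.

```python
def photometric_parameter(region_luma, center_indices=(0, 1, 2)):
    """Assumed form of the parameter S: mean luminance of the central
    regions minus mean luminance of the remaining peripheral regions.

    region_luma: the 13 average luminance values a_1 .. a_13.
    """
    center = [region_luma[i] for i in center_indices]
    periphery = [v for i, v in enumerate(region_luma) if i not in center_indices]
    return sum(center) / len(center) - sum(periphery) / len(periphery)
```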
- The integration unit 302 estimates the shooting situation based on the focusing information and the photometric information.
- FIG. 11 is an explanatory diagram showing a case where four types of shooting situations are estimated from focusing and photometry information.
- Based on the focusing information, scenes classified as portrait are further classified into Type 1 to Type 3 according to the photometric information, while scenes classified as landscape are classified as Type 4.
- Type 1 is a standard scene, Type 2 is a dark-place strobe scene, Type 3 is a backlit scene, and Type 4 is a landscape scene.
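- Combining the focus-based class with S gives the four scene types; the threshold on S below is an illustrative assumption, since the text only states the sign and relative magnitude of S for each case.

```python
def classify_scene(af_class, s_value, s_threshold=1000):
    """af_class: 'landscape' (5 m or more) or 'portrait' (1 m to 5 m).
    Returns one of the four scene types of FIG. 11."""
    if af_class == 'landscape':
        return 'Type 4: landscape scene'
    if s_value > s_threshold:       # subject much brighter than surroundings
        return 'Type 2: dark-place strobe scene'
    if s_value < -s_threshold:      # subject much darker than surroundings
        return 'Type 3: backlit scene'
    return 'Type 1: standard scene'
```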
- FIG. 12 is an explanatory diagram showing the correction coefficient calculation method in the correction coefficient calculation unit 112 corresponding to the four types of shooting situations shown in FIG. 11.
- When the scene is classified as Type 1, the correction coefficient is calculated so that the gain is greatly increased for pixels close to the subject and is left substantially unchanged for pixels farther away.
- Specifically, a clip value corresponding to the distance is set from equation (2) and the histogram generated from the region around the representative point pixel is clipped, or a gradation conversion curve optimal for the distance is selected.
- When the scene is classified as Type 2, the correction coefficient is calculated so that the brightness of nearby pixels is raised overall and the brightness of distant pixels is lowered overall, that is, so that the gain becomes larger for closer pixels.
- Specifically, the clip value is set by adding the distance d to the constant const in equation (2) and the histogram is clipped, or a gradation conversion curve optimal for the distance is selected.
- When the scene is classified as Type 3, the correction coefficient or gradation conversion curve is selected so that the gain is increased for nearby pixels and decreased for pixels farther away.
- When the scene is classified as Type 4, the correction coefficient or gradation conversion curve is selected so that the gain becomes higher the farther away the pixels are.
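- The per-type behaviour described above can be condensed into a distance-dependent gain selector; the functional forms and constants are assumptions intended only to mirror the qualitative rules (more gain for near pixels in Types 1 to 3, more gain for far pixels in Type 4).

```python
def correction_gain(scene_type, distance_m, const=1.0):
    """Hypothetical gain weighting per scene type and pixel distance."""
    near_weight = const / (const + distance_m)        # larger for near pixels
    far_weight = distance_m / (const + distance_m)    # larger for far pixels
    if scene_type == 'Type 1':
        return 1.0 + near_weight          # boost near subject, far pixels almost unchanged
    if scene_type == 'Type 2':
        return 0.5 + near_weight          # brighten near pixels, darken far pixels overall
    if scene_type == 'Type 3':
        return 0.5 + 1.5 * near_weight    # strong boost near the subject, cut the bright background
    return 0.5 + 1.5 * far_weight         # Type 4: higher gain for distant scenery
```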
- Since the shooting scene is estimated at the time of shooting and the correction coefficient is obtained accordingly, the gradation conversion processing can be controlled for each shooting scene, and a subjectively favorable image corresponding to the subject can be obtained.
- In the above description, processing by hardware is assumed, but the configuration is not limited to this; the video signal processing may also be carried out as a series of predetermined steps, that is, as a video signal processing method.
- For example, a configuration is also possible in which the signal from the CCD 102 is output as unprocessed raw data together with the shooting information from the control unit 115 as header information, and the processing is performed separately by software.
- FIG. 13 shows a flowchart relating to the software processing of the gradation conversion processing according to the second embodiment of the present invention. Note that the same step numbers are assigned to the same processing steps as in the flowchart of the gradation conversion processing in the first embodiment shown in FIG. 6.
- In Step 1, the unprocessed video signal and header information including supplementary information such as imaging conditions are read.
- In Step 2, known interpolation processing, white balance processing, and enhancement processing are performed.
- For the video signal processed in Step 2, distance information is calculated in Step 5, and focusing information is obtained from the distance information as described above; for example, the in-focus position is classified into two types, landscape and portrait.
- In Step 6, the correction coefficient is calculated based on the estimated scene type.
- Meanwhile, in Step 3, local regions centered on the representative points are sequentially extracted from the video signal processed in Step 2, and in Step 4 a histogram of each extracted region is created.
- In Step 7, clipping processing is performed on the histogram obtained in Step 4 based on the correction coefficient from Step 6.
- In Step 8, a gradation conversion curve is generated by accumulating and normalizing the clipped histogram. The processing up to this point is performed for all representative points for which distance information can be calculated.
- In Step 9, gradation conversion processing is performed on the target pixel based on the gradation conversion curve from Step 8.
- In Step 10, it is determined whether all pixels of interest have been processed. If not, the process returns to Step 3; if all pixels of interest have been processed, the process proceeds to Step 11. In Step 11, compression processing such as the well-known JPEG is performed. In Step 12, the processed signal is output and the program ends.
- the shooting situation is automatically estimated based on the focusing and photometry information, but the present invention is not limited to this, and a configuration in which the shooting situation is manually specified is also possible.
- As described above, according to the present invention, it is possible to provide a video signal processing device, a video signal processing method, and a video signal processing program that perform high-quality gradation conversion according to the subject.
- Such a video signal processing apparatus is particularly preferably applied to an imaging apparatus.
Landscapes
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Studio Devices (AREA)
- Facsimile Image Signal Circuits (AREA)
- Image Processing (AREA)
- Picture Signal Circuits (AREA)
Abstract
A video signal imaged via a CCD (102) is transferred via an A/D (104) to a buffer (105). The video signal in the buffer (105) is transferred to a light measurement evaluating unit (106), a distance information calculation unit (108), and a signal processing unit (110). The distance information calculation unit (108) calculates distance information on a distance between an imaging device and an object and transfers the distance information to a lens control unit (107) and a correction coefficient calculation unit (112). The correction coefficient calculation unit (112) calculates a correction coefficient according to the distance information on the distance between the imaging device and the object obtained via the distance information calculation unit (108). A conversion unit (111) sets a gradation conversion curve by using a histogram of a local region and the correction coefficient calculated by the correction coefficient calculation unit (112) and performs a gradation conversion process on the video signal.
Description
映像信号処理装置と映像信号処理方法、 および映像信号処理 プログラム Video signal processing device, video signal processing method, and video signal processing program
技 術 分 野 Technical field
本発明は、 映像信号に対する信号処理を行う映像信号処理装置と映 像信号処理方法、 および映像信号処理プログラムに係わり、 特に、 映 像信号に対する階調変換処理を行う映像信号処理装置と映像信号処理 方法、 および映像信号処理プログラムに関するものである。 The present invention relates to a video signal processing device, a video signal processing method, and a video signal processing program for performing signal processing on a video signal, and in particular, a video signal processing device and a video signal processing for performing gradation conversion processing on a video signal. The present invention relates to a method and a video signal processing program.
背 景 技 術 Background technology
一般的なデジタルカメラゃビデオカメラ等の電子力メラ (映像信号処 理装置) において、 デジタル信号処理での階調飛びなどのまるめ誤差 による画質劣化を防止するために、 入力および処理系における信号の 階調ビッ ト幅を出力信号の階調 b i t幅より広く設定する場合がある。 この場合、 出力系の階調ビッ ト幅に合致するように入力系の信号に対 して階調変換を行う必要がある。 In general digital cameras and electronic cameras (video signal processing devices) such as video cameras, in order to prevent image quality degradation due to rounding errors such as gradation skip in digital signal processing, The gradation bit width may be set wider than the gradation bit width of the output signal. In this case, it is necessary to perform gradation conversion on the input signal so as to match the gradation bit width of the output system.
従来は、 標準的なシーンに対する固定的な階調変換により処理が行 われていた。 また、 固定的に階調変換を行わずに、 映像信号を複数の 領域に分割し、 各領域ごとに独立に階調変換を行う手法も提案されて いる。 例えば、 特開 2 0 0 1 — 1 1 8 0 6 2号公報ではテクスチャ情 報に基づき映像信号を複数の領域に分割し、 分割した各領域に対して 適応的に階調変換を行う例が開示されている。 In the past, processing was performed by fixed gradation conversion for standard scenes. In addition, a method has been proposed in which the video signal is divided into a plurality of areas without performing fixed gradation conversion, and gradation conversion is performed independently for each area. For example, Japanese Patent Laid-Open No. 2 0 0 1 — 1 1 8 0 6 2 discloses an example in which a video signal is divided into a plurality of regions based on texture information, and gradation conversion is adaptively performed on each divided region. It is disclosed.
また、 特開平 5— 6 8 2 0 5号公報には、 被写体までの距離情報を 用いて主要被写体領域を検出し、 検出した領域に重点を置いて撮影時 の露光量を制御する例が開示されている。 特許文献 2に示されている 方法では、 測距した情報を基に主要被写体領域を抽出し、 抽出した領 域に重点を置いて露光量の制御を行っているため、 主要被写体に応じ た適切な露光が行える。
従来技術では、 映像信号全体に対して平均的に階調の圧縮を行うか、 または r変換などの固定的な階調変換を行っていた。 この場合、 逆光 などの標準的なシーンとは異なる場合に主要被写体が適切な階調とな らず、 主観的に好ましい画像にならないという課題がある。 Japanese Patent Application Laid-Open No. 5-6820 205 discloses an example in which a main subject region is detected using distance information to a subject, and an exposure amount at the time of photographing is controlled with emphasis on the detected region. Has been. In the method disclosed in Patent Document 2, the main subject area is extracted based on the measured information, and the exposure amount is controlled with emphasis on the extracted area. Exposure. In the prior art, gradation compression is performed on the entire video signal on average, or fixed gradation conversion such as r conversion is performed. In this case, when the scene is different from a standard scene such as backlight, there is a problem that the main subject does not have an appropriate gradation and the image is not subjectively preferable.
特開 2 0 0 1 — 1 1 8 0 6 2号公報に示される方法では、 領域ごと に独立に階調変換を行うために、 逆光のような明暗比の大きいシーン でも好ましい画像が得られるが、 個々の階調変換に制限が設けられて いない。 そのため、 極端な階調変換が行われる場合があり、 ノイズ成 分の増加や色再現の破綻、 階調飛びによる画質劣化などの副作用の発 生が新たな課題となる。 さらに、 画像中で手前と奥に存在する領域が 同様のヒス トグラム分布を持つ場合にも同じ階調変換処理が行われる ため、 コ ン ト ラス ト感の欠落した平坦な画像となってしまうという問 題があつた。 In the method disclosed in Japanese Patent Laid-Open No. 2 0 0 1 — 1 1 8 0 6 2, tone conversion is performed independently for each region, so that a preferable image can be obtained even in a scene with a large contrast ratio such as backlight. There are no restrictions on individual tone conversion. For this reason, extreme gradation conversion may be performed, and side effects such as increased noise components, color reproduction failure, and image quality deterioration due to gradation skipping become new challenges. In addition, the same tone conversion process is performed when the regions existing in the foreground and the back of the image have the same histogram distribution, resulting in a flat image with no sense of contrast. There was a problem.
特開平 5— 6 8 2 0 5号公報に示される方法では、 主要被写体と背 景の明暗の比が大きいシーンでは、 背景に対しては適切な露光が行わ れないため、 極喘に明るくなつたり暗くなつたりするという課題があ る。 また、 上記方法では、 距離の最も近い領域を主要被写体の領域で あると しているが、 主要被写体として撮影したい物体の手前に他の物 体が存在する場合、 その手前の物体に適切な露光量で撮影を行ってし まうため、 主要被写体に応じた適切な階調の高品位な画像を得ること ができないという問題があった。 In the method disclosed in Japanese Patent Application Laid-Open No. 5-6 8 20 5, in a scene where the ratio of the main subject to the background is large, the background is not properly exposed, so that it is extremely bright. There is a problem of becoming darker or darker. In the above method, the closest area is the area of the main subject, but if there is another object in front of the object to be photographed as the main object, an appropriate exposure is applied to the object in front of it. Since there is a large amount of shooting, there is a problem that it is not possible to obtain a high-quality image with an appropriate gradation according to the main subject.
本発明は、 上記問題点に鑑み、 映像信号に対して、 被写体に応じた 高品位な階調変換を行なう映像信号処理装置と映像信号処理方法、 お よび映像信号処理プログラムの提供を目的とする。 発 明 の 開 示 In view of the above problems, an object of the present invention is to provide a video signal processing device, a video signal processing method, and a video signal processing program for performing high-quality gradation conversion corresponding to a subject for a video signal. . Disclosure of invention
上記目的を達成するために、 本発明の映像信号処理装置は、 撮像手
段から得られた撮像対象の映像信号に対し. 階調変換処理を行,う 像. 信号処理装置において、 前記撮影対象と前記撮影手段との距離情報を 取得する距離情報取得手段と、 前記距離情報を用いて前記映像信号の 注目画素に対して階調変換処理を行う階調変換手段とを有することを 特徴とする。 In order to achieve the above object, the video signal processing apparatus of the present invention provides an image pickup hand. In the signal processing apparatus, distance information acquisition means for acquiring distance information between the imaging object and the imaging means, and the distance. Gradation conversion means for performing gradation conversion processing on the target pixel of the video signal using information.
本発明に関する実施形態は、 図 1〜図 7に示される第 1 の実施形態 および図 8〜図 1 3に示される第 2の実施形態が対応する。 本発明の 構成である距離情報取得手段は、 図 1、 図 8に示される距離情報算出 部 1 0 8が該当し、 階調変換手段は、 図 1、 図 2、 図 5、 図 7に示さ れる変換部 1 1 1 が該当する。 この発明の好ま しい適用例は、 図 1、 図 8に示される距離情報算出部 1 0 8にて被写体との距離を求め、 変 換部 1 1 1 にて各注目画素に対して距離情報を用いて階調変換処理を 行う映像信号処理装置である。 Embodiments relating to the present invention correspond to the first embodiment shown in FIGS. 1 to 7 and the second embodiment shown in FIGS. 8 to 13. The distance information acquisition means, which is the configuration of the present invention, corresponds to the distance information calculation unit 10 8 shown in FIGS. 1 and 8, and the gradation conversion means is shown in FIGS. 1, 2, 5, and 7. Applicable to the conversion unit 1 1 1. A preferred application example of the present invention is that the distance information calculation unit 10 8 shown in FIGS. 1 and 8 obtains the distance to the subject, and the conversion unit 1 1 1 provides the distance information for each pixel of interest. It is a video signal processing device that uses it to perform gradation conversion processing.
本発明は、 入力映像信号の注目画素ごとに距離情報を基に独立に階 調変換処理を行う。 このような構成としているので、 明暗比の小さな シーンに対しても主観的に好ましい映像信号が得られる。 The present invention performs gradation conversion processing independently based on distance information for each target pixel of an input video signal. With such a configuration, a subjectively preferable video signal can be obtained even for a scene with a small contrast ratio.
また、 上記目的を達成するために、 本発明の映像信号処理方法は、 撮像手段により撮像対象の映像信号を取得する段階と、 前記撮影対象 と前記撮影手段との距離情報を取得する段階と、 前記距離情報を用い て前記映像信号の注目画素に対して階調変換処理を行う段階と、 から なることを特徴とする。 In order to achieve the above object, the video signal processing method of the present invention includes a step of acquiring a video signal of an imaging target by an imaging unit, a step of acquiring distance information between the imaging target and the imaging unit, Performing a gradation conversion process on the target pixel of the video signal using the distance information.
( A ) 本発明の映像信号処理方法の適用例をその構成と共に説明す る。 (A ) は、 図 1〜図 7に示されている第 1の実施形態、 図 8〜図 1 3に示されている第 2の実施形態が対応する。 (A ) の構成である 「撮 像手段により撮像対象の映像信号を取得する段階」 は、 図 1、 図 8に 示されている C C D 1 0 2による撮影が該当する。 「前記撮影対象と前 記撮影手段との距離情報を取得する段階」 は、 図 1、 図 8の距離情報
算出部 1 0 8の処理が該当する。 「前記距離情報を用いて前記映像信号 の注目画素に対し前記補正係数を用いて適応的に階調変換処理 荇'う ' 段階」 は、 図 1、 図 8の変換部 1 1 1 の処理が該当する。 (A ) の好ま しい適用例は、 前記のように図 1、 図 8に示されている撮像装置 (映 像信号処理装置) における画像処理方法である。 しかしながら、 (9 ) の発明は、 図 1、 図 8に示されている構成の撮像装置における映像信 号処理方法には限定されず、 前記各段階の処理を行う構成を備えた映 像信号処理装置であれば、 適用可能である。 (A) An application example of the video signal processing method of the present invention will be described together with its configuration. (A) corresponds to the first embodiment shown in FIGS. 1 to 7 and the second embodiment shown in FIGS. 8 to 13. The stage of acquiring the video signal to be imaged by the imaging means, which is the configuration of (A), corresponds to the imaging by the CCD 100 shown in FIGS. “The step of obtaining the distance information between the photographing object and the photographing means” is the distance information of FIG. 1 and FIG. The processing of the calculation unit 1 0 8 corresponds. The “gradation conversion process adaptively using the correction coefficient for the target pixel of the video signal using the distance information” is performed by the conversion unit 1 1 1 in FIG. 1 and FIG. Applicable. A preferred application example of (A) is the image processing method in the imaging device (video signal processing device) shown in FIGS. 1 and 8 as described above. However, the invention of (9) is not limited to the video signal processing method in the imaging apparatus having the configuration shown in FIGS. 1 and 8, and the video signal processing having a configuration for performing the processing of each stage described above. Any device is applicable.
( A ) は、 撮影手段により撮影対象の映像信号の取得、 前記撮影対 象と前記撮影手段との距離情報の取得、 前記距離情報を用いて前記映 像信号の注目画素に対し階調変換処理を行う処理、 を一連の工程で行 つているので、 適切な階調でノィズの増加や色再現の破綻などの副作 用の少ない、 被写体に応じた高品位な画像を生成することができる。 (A) is the acquisition of the video signal to be imaged by the imaging means, the acquisition of the distance information between the imaging object and the imaging means, and the gradation conversion process for the target pixel of the video signal using the distance information. Since this process is performed in a series of steps, it is possible to generate high-quality images according to the subject with appropriate gradation and few side effects such as noise increase and color reproduction failure.
(B) Another video signal processing method of the present invention comprises a step of acquiring a video signal of an imaging target by an imaging means, a step of acquiring distance information between the imaging target and the imaging means, a step of calculating a correction coefficient for a pixel of interest of the video signal using the distance information, and a step of adaptively performing gradation conversion processing on the pixel of interest using the correction coefficient.
An application example of (B) will be described together with its configuration. (B) corresponds to the first embodiment shown in FIGS. 1 to 7 and the second embodiment shown in FIGS. 8 to 13. In the configuration of (B), the "step of acquiring a video signal of an imaging target by an imaging means" corresponds to imaging by the CCD 102 shown in FIGS. 1 and 8. The "step of acquiring distance information between the imaging target and the imaging means" corresponds to the processing of the distance information calculation unit 108 in FIGS. 1 and 8. The "step of calculating a correction coefficient for the pixel of interest of the video signal using the distance information" corresponds to the processing of the correction coefficient calculation unit 112 in FIGS. 1 and 8. The "step of adaptively performing gradation conversion processing on the pixel of interest using the correction coefficient" corresponds to the processing of the conversion unit 111 in FIGS. 1 and 8.
A preferred application example of (B) is the image processing method in the imaging apparatus (video signal processing apparatus) shown in FIGS. 1 and 8 as described above. However, (B) is not limited to the video signal processing method in the imaging apparatus having the configuration shown in FIGS. 1 and 8, and is applicable to any video signal processing apparatus having a configuration for performing the processing of each of the above steps.
In (B), the acquisition of the video signal of the imaging target by the imaging means, the acquisition of distance information between the imaging target and the imaging means, the calculation of a correction coefficient for the pixel of interest of the video signal using the distance information, and the adaptive gradation conversion processing of the pixel of interest using the correction coefficient are performed as a series of steps. It is therefore possible to generate a high-quality image suited to the subject, with appropriate gradation and few side effects such as increased noise or a breakdown of color reproduction.
(C) In order to achieve the above object, a video signal processing program of the present invention causes a computer to execute a procedure of reading an unprocessed video signal obtained at the time of shooting, a procedure of acquiring distance information between the shooting target and the shooting means, and a procedure of performing gradation conversion processing on a pixel of interest of the video signal using the distance information.
An application example of (C) will be described together with its configuration. (C) corresponds to the first embodiment shown in FIGS. 1 to 7 and the second embodiment shown in FIGS. 8 to 13. In the configuration of (C), the "procedure of reading an unprocessed video signal obtained at the time of shooting" corresponds to the process in which the computer reads a video signal output as unprocessed raw data from the CCD 102 shown in FIG. 1, with the shooting-time information from the control unit 115 output as header information. The "procedure of acquiring distance information between the shooting target and the shooting means" corresponds to the computer performing the processing of the distance information calculation unit 108 in FIGS. 1 and 8. The "procedure of performing gradation conversion processing on the pixel of interest of the video signal using the distance information" corresponds to the computer performing the processing of the conversion unit 111 in FIGS. 1 and 8.
(D) Another video signal processing program of the present invention causes a computer to execute a procedure of reading an unprocessed video signal obtained at the time of shooting, a procedure of calculating distance information from the signal-processed video signal, a procedure of calculating a correction coefficient based on the distance information, a procedure of sequentially extracting local regions centered on a pixel of interest from the signal-processed video signal, a procedure of creating a histogram of each extracted region, a procedure of performing clipping processing on the histogram based on the correction coefficient, a procedure of generating a gradation conversion curve by accumulating and normalizing the clipped histogram, and a procedure of performing gradation conversion processing on the pixel of interest based on the gradation conversion curve.
An application example of (D) will be described together with its configuration. (D) corresponds to the first embodiment shown in FIG. 6 and the second embodiment shown in FIG. 13, and its elements correspond to the steps in FIGS. 6 and 13. Specifically, the "procedure of reading an unprocessed video signal obtained at the time of shooting" corresponds to Step 1, the "procedure of calculating distance information from the signal-processed video signal" to Step 5, the "procedure of calculating a correction coefficient based on the distance information" to Step 6, the "procedure of sequentially extracting local regions centered on a pixel of interest from the signal-processed video signal" to Step 3, the "procedure of creating a histogram of each extracted region" to Step 4, the "procedure of performing clipping processing on the histogram based on the correction coefficient" to Step 7, the "procedure of generating a gradation conversion curve by accumulating and normalizing the clipped histogram" to Step 8, and the "procedure of performing gradation conversion processing on the pixel of interest based on the gradation conversion curve" to Step 9.
According to the video signal processing program of the present invention, a high-quality image suited to the subject can be obtained with few side effects such as increased noise or a breakdown of color reproduction.
According to the present invention, it is possible to provide a video signal processing device, a video signal processing method, and a video signal processing program that perform high-quality, subject-dependent gradation conversion on a video signal with few side effects.

Brief Description of Drawings
FIG. 1 is a configuration diagram of the first embodiment.
FIG. 2 is a configuration diagram of the conversion unit 111 in the first embodiment.
FIG. 3 is an explanatory diagram of the clipping process.
FIG. 4 is an explanatory diagram of linear interpolation.
FIG. 5 is a second configuration diagram of the conversion unit 111 in the first embodiment.
FIG. 6 is a flowchart of the gradation conversion processing in the first embodiment.
FIG. 7 is a third configuration diagram of the conversion unit 111.
FIG. 8 is a configuration diagram of the second embodiment.
FIG. 9 is a configuration diagram of the shooting situation estimation unit 117.
FIG. 10 is an explanatory diagram of a division pattern for evaluation photometry.
FIG. 11 is an explanatory diagram of classification patterns of shooting scenes.
FIG. 12 is a characteristic diagram of gradation curve settings for the classification patterns.
FIG. 13 is a flowchart of the gradation conversion processing in the second embodiment.

BEST MODE FOR CARRYING OUT THE INVENTION
Hereinafter, a first embodiment of the present invention will be described with reference to the drawings. The first embodiment is shown in FIGS. 1 to 7: FIG. 1 is a configuration diagram of the first embodiment, FIG. 2 is a configuration diagram of the conversion unit 111, FIG. 3 is an explanatory diagram of the clipping process, FIG. 4 is an explanatory diagram of linear interpolation, FIG. 5 is a second configuration diagram of the conversion unit 111, FIG. 6 is a flowchart of the gradation conversion processing in the first embodiment, and FIG. 7 is a third configuration diagram of the conversion unit 111.
The configuration of the first embodiment will be described with reference to FIG. 1. An image captured through the lens system 100, the aperture 101, and the CCD 102 is converted into a digital signal by the A/D converter 104 (hereinafter simply written as A/D 104 in the specification and drawings). The video signal from the A/D 104 is transferred via the buffer 105 to the photometric evaluation unit 106, the distance information calculation unit 108, and the signal processing unit 110. The photometric evaluation unit 106 is connected to the aperture 101 and the CCD 102, and the distance information calculation unit 108 is connected to the lens control unit 107 and the correction coefficient calculation unit 112. The lens control unit 107 is connected to the AF motor 103, and the infrared sensor 109 is connected to the distance information calculation unit 108. The correction coefficient calculation unit 112 is connected to the conversion unit 111, and the signal processing unit 110 is connected to the conversion unit 111. The conversion unit 111 is connected to the compression unit 113, and the compression unit 113 is connected to the output unit 114. A control unit 115 such as a microcomputer is bidirectionally connected to the A/D 104, the photometric evaluation unit 106, the lens control unit 107, the distance information calculation unit 108, the infrared sensor 109, the signal processing unit 110, the conversion unit 111, the correction coefficient calculation unit 112, the compression unit 113, and the output unit 114. An external I/F unit 116, which includes a power switch, a shutter button, and an interface for switching among various shooting modes, is also bidirectionally connected to the control unit 115.
Next, the first embodiment will be described along the signal flow shown in FIG. 1. After shooting condition parameters such as ISO sensitivity and shutter speed have been set via the external I/F unit 116, half-pressing the shutter button enters the pre-shooting mode. The video signal captured through the lens system 100, the aperture 101, and the CCD 102 is converted into a digital signal by the A/D 104 and transferred to the buffer 105. In the present embodiment, the gradation width of the digitized video signal is assumed to be, for example, 12 bits. The video signal in the buffer 105 is transferred to the photometric evaluation unit 106, the distance information calculation unit 108, and the signal processing unit 110. The photometric evaluation unit 106 obtains the luminance level of the video signal and controls the aperture 101, the electronic shutter speed of the CCD 102, and so on so that a proper exposure is obtained. The distance information calculation unit 108 calculates distance information indicating the distance between the imaging device and the subject using the external infrared sensor 109. Alternatively, contrast information of the video signal is detected, and a focused image is acquired by controlling the AF motor 103 so that the contrast is maximized at the position in the image for which distance information is desired; the distance information is then calculated from the lens position (or the AF motor control information) at which the focused image is obtained. The calculated distance information is transferred to the lens control unit 107 and the correction coefficient calculation unit 112.
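As an illustration of the second approach (deriving the distance from the autofocus result), the following Python sketch interpolates a calibration table that maps focus lens positions to subject distances. The table values and the function name are hypothetical examples, not taken from the patent; a real camera would use values measured for its own lens system 100.

# Minimal sketch: estimating the subject distance from the AF lens position.
# The calibration table is a hypothetical example; actual values depend on
# the lens system 100 and the AF motor 103.
CALIBRATION = [(0, 0.5), (200, 1.0), (400, 2.0), (600, 5.0), (800, 20.0)]  # (lens position, distance in m)

def distance_from_lens_position(pos):
    """Linearly interpolate the calibration table at the focused lens position."""
    if pos <= CALIBRATION[0][0]:
        return CALIBRATION[0][1]
    for (p0, d0), (p1, d1) in zip(CALIBRATION, CALIBRATION[1:]):
        if pos <= p1:
            t = (pos - p0) / (p1 - p0)
            return d0 + t * (d1 - d0)
    return CALIBRATION[-1][1]

print(distance_from_lens_position(500.0))  # -> 3.5 m for this table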
Next, full shooting is performed by fully pressing the shutter button via the external I/F unit 116, and the video signal is transferred to the buffer 105 in the same manner as in pre-shooting. Full shooting is performed on the basis of the exposure conditions obtained by the photometric evaluation unit 106 and the focusing conditions obtained by the lens control unit 107, and these shooting conditions are transferred to the control unit 115. The video signal in the buffer 105 is transferred to the signal processing unit 110. Under the control of the control unit 115, the signal processing unit 110 reads the single-chip video signal from the buffer 105, generates a three-chip signal by applying known interpolation processing, white balance processing, enhancement processing, and the like, and transfers it to the conversion unit 111. The correction coefficient calculation unit 112 calculates a correction coefficient for changing the characteristics of the gradation conversion curve, based on the distance information, obtained via the distance information calculation unit 108, indicating the distance between the imaging device and the subject. For example, the correction coefficient is calculated so that the slope of the gradation conversion curve is monotonically suppressed with respect to the distance. In the present embodiment, the distance information obtained by the distance information calculation unit 108 is assumed to be available at a plurality of points in the video signal. Distance information at a plurality of points is acquired by calculating distance information at positions corresponding to a plurality of regions of the image using the external infrared sensor 109, or by calculating distance information from the contrast of the video signal at positions corresponding to a plurality of regions each containing one pixel or a plurality of pixels. The conversion unit 111 sets a gradation conversion curve using the histogram of a local region and the correction coefficient calculated by the correction coefficient calculation unit 112, and performs gradation conversion processing on the video signal. The video signal after the gradation conversion processing is transferred to the compression unit 113. The compression unit 113 applies known compression processing such as JPEG to the video signal obtained via the conversion unit 111, and the compressed video signal is transferred to the output unit 114. The output unit 114 records and stores the compressed signal on a memory card or the like.
FIG. 2 shows an example of the configuration of the conversion unit 111. In FIG. 2, the conversion unit 111 comprises a buffer 200, a local region extraction unit 201, a representative point extraction unit 202, a histogram creation unit 203, a clipping unit 204, a cumulative normalization unit 205, a gradation curve creation unit 206, and a gradation conversion unit 207. The signal processing unit 110 is connected to the buffer 200, and the buffer 200 and the representative point extraction unit 202 are connected to the local region extraction unit 201. The local region extraction unit 201 is connected to the histogram creation unit 203, and the histogram creation unit 203 and the correction coefficient calculation unit 112 are connected to the clipping unit 204.
The clipping unit 204 is connected to the cumulative normalization unit 205, and the cumulative normalization unit 205 is connected to the gradation curve creation unit 206. The gradation curve creation unit 206 is connected to the gradation conversion unit 207, and the gradation conversion unit 207 is connected to the compression unit 113. The control unit 115 is bidirectionally connected to the local region extraction unit 201, the representative point extraction unit 202, the histogram creation unit 203, the clipping unit 204, the cumulative normalization unit 205, the gradation curve creation unit 206, and the gradation conversion unit 207.
The video signal transferred from the signal processing unit 110 is stored in the buffer 200. Via the control unit 115, the representative point extraction unit 202 extracts one or more representative points of the video signal for which distance information to the subject has been obtained by the distance information calculation unit 108. The local region extraction unit 201 extracts a rectangular local region of a predetermined size centered on the pixel of each extracted representative point, for example a 16 x 16 pixel region in this example. The histogram creation unit 203 creates a histogram for each local region extracted by the local region extraction unit 201 and transfers the created histogram information to the clipping unit 204. The clipping unit 204 performs clip processing on the histogram created by the histogram creation unit 203, based on the correction coefficient obtained by the correction coefficient calculation unit 112 using the distance information of the representative point being processed.
FIG. 3 is an explanatory diagram of the clip processing. FIG. 3(a) shows the original histogram from the histogram creation unit 203 and the clip value applied to the frequency on the vertical axis. FIG. 3(b) shows the histogram after clip processing, in which frequencies equal to or greater than the clip value have been replaced by the clip value. FIG. 3(c) shows the gradation conversion curves obtained by accumulating and normalizing the original histogram and the clipped histogram. The clip processing suppresses the histogram frequencies and, as a result, suppresses the slope of the gradation conversion curve. Setting a lower clip value in the clip processing lowers the gain applied to the signal and leaves the contrast in the local region largely unchanged. Conversely, setting a higher clip value raises the gain applied to the signal and has the effect of increasing the contrast in the local region. In the present embodiment, the clip value C is calculated by equation (1).
[Equation 1]

C = W · k(d) · N    (1)

Here, W is a predetermined weighting coefficient, and N is the total number of pixels in the local region. k(d) denotes the correction coefficient for the distance d obtained from the correction coefficient calculation unit 112. The correction coefficient k(d) is preferably a monotonically decreasing function of the distance d indicated by the distance information, and is expressed, for example, by the function of equation (2).
[Equation 2]

k(d) = const / d    (2)

Here, const denotes a predetermined constant. As shown in equation (2), the shorter the distance from the imaging device to the subject, the higher the clip value used to generate the gradation conversion curve is set, so that the gradation conversion processing increases the contrast. The clipped histogram is transferred to the cumulative normalization unit 205. The cumulative normalization unit 205 creates a cumulative histogram by accumulating the clipped histogram, and generates a gradation conversion curve by normalizing it to the gradation width.
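To make the clipping and cumulative-normalization steps concrete, the following Python sketch builds a gradation conversion curve for one local region using equations (1) and (2). Only the 12-bit gradation width is taken from the text; the weight W, the constant const, and the function and variable names are illustrative assumptions.

import numpy as np

GRAD_LEVELS = 4096  # 12-bit gradation width used in this embodiment

def clip_value(distance_m, region_pixels, weight=0.02, const=1.0):
    """Equation (1): C = W * k(d) * N, with k(d) = const / d from equation (2)."""
    k = const / max(distance_m, 1e-6)  # correction coefficient, larger for nearer subjects
    return weight * k * region_pixels

def tone_curve_from_region(region, distance_m):
    """Clip the histogram of a 16 x 16 local region, then accumulate and normalize."""
    hist, _ = np.histogram(region, bins=GRAD_LEVELS, range=(0, GRAD_LEVELS))
    c = clip_value(distance_m, region.size)
    clipped = np.minimum(hist, c)              # replace frequencies above C with C
    cdf = np.cumsum(clipped)                   # cumulative histogram
    curve = cdf / cdf[-1] * (GRAD_LEVELS - 1)  # normalize to the gradation width
    return curve.astype(np.uint16)             # 12-bit input, 12-bit output

# A nearby subject region gets a higher clip value and therefore more contrast.
region = np.random.randint(1000, 3000, size=(16, 16))
near_curve = tone_curve_from_region(region, distance_m=1.0)
far_curve = tone_curve_from_region(region, distance_m=10.0)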
In the present embodiment, since the gradation width of the video signal is 12 bits, the gradation conversion curve maps a 12-bit input to a 12-bit output. The gradation conversion curve is transferred to the gradation curve creation unit 206. The gradation curve creation unit 206 calculates gradation conversion curves for all pixels of the video signal from the gradation conversion curves obtained at the plurality of points by the cumulative normalization unit 205. For a pixel position for which no distance information has been obtained, the gradation conversion curve is calculated, for example, by the linear interpolation processing shown in FIG. 4. In the present embodiment, the gradation conversion curve f(x, y) of the pixel of interest (x, y) is calculated from the four surrounding representative points using equation (3).
[Equation 3]

f(x, y) = (M+1-x)(N+1-y) f(M, N) + (x-M)(N+1-y) f(M+1, N)
        + (M+1-x)(y-N) f(M, N+1) + (x-M)(y-N) f(M+1, N+1)    (3)

Here, (M, N), (M+1, N), (M, N+1), and (M+1, N+1) denote the positions of the four representative points surrounding the pixel of interest (x, y), with the coordinates taken so that neighboring representative points are a unit distance apart.
In the calculation of equation (3), the values at points A and B in FIG. 4 are first obtained by linear interpolation using the representative points to their left and right, and the resulting values at A and B are then linearly interpolated in the vertical direction. Similarly, the values at points C and D in the figure are obtained by linear interpolation using the representative points above and below them, and the gradation conversion curve f(x, y) of the pixel (x, y) is then calculated by linear interpolation in the horizontal direction between C and D. The calculated gradation conversion curve is transferred to the gradation conversion unit 207. The gradation conversion unit 207 performs gradation conversion processing on the pixel of interest in the buffer 200 based on the gradation conversion curve transferred from the gradation curve creation unit 206. Thereafter, division processing is performed so that the signal conforms to the gradation width at output (assumed to be 8 bits in this embodiment). The 8-bit video signal is transferred to the compression unit 113.
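A minimal Python sketch of this per-pixel interpolation of tone curves is given below. It assumes that the representative points lie on a regular grid with unit spacing, which is an illustrative simplification; the patent only requires the four representative points surrounding the pixel of interest.

import numpy as np

def interpolate_curve(x, y, curves):
    """Bilinearly interpolate the tone curve at pixel (x, y), as in equation (3).

    curves[n][m] is the tone conversion curve (a 1-D array) of the representative
    point at grid position (m, n); the grid spacing is assumed to be 1.
    """
    m, n = int(np.floor(x)), int(np.floor(y))
    dx, dy = x - m, y - n
    return ((1 - dx) * (1 - dy) * curves[n][m]
            + dx * (1 - dy) * curves[n][m + 1]
            + (1 - dx) * dy * curves[n + 1][m]
            + dx * dy * curves[n + 1][m + 1])

# Example: four neighboring representative-point curves over a 12-bit range
grid = [[np.linspace(0, 4095, 4096) for _ in range(2)] for _ in range(2)]
curve_xy = interpolate_curve(0.25, 0.75, grid)  # tone curve for pixel (0.25, 0.75)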
Although a gradation conversion curve based on the histogram of a local region is calculated in the above example, the configuration is not necessarily limited to this. For example, as shown in FIG. 5, a configuration that uses a preset gradation conversion curve is also possible. FIG. 5 shows a configuration in which, relative to the conversion unit 111 shown in FIG. 2, the local region extraction unit 201, the histogram creation unit 203, the clipping unit 204, and the cumulative normalization unit 205 are omitted, and a gradation conversion curve ROM 208 and a gradation curve changing unit 209 are added. The basic configuration is the same as that of the conversion unit shown in FIG. 2, and identical components are given the same names and numbers. Only the differences are described below.
The gradation conversion curve ROM 208 is connected to the gradation curve changing unit 209. The correction coefficient calculation unit 112 is connected to the gradation curve changing unit 209, and the gradation curve changing unit 209 is connected to the gradation curve creation unit 206. The gradation curve changing unit 209 corrects the gradation conversion curve read from the gradation conversion curve ROM 208 based on the correction coefficient from the correction coefficient calculation unit 112. For example, if the gradation conversion curve for an input luminance value i (i = 0 to 1) is given by γ(i), the corrected gradation conversion curve γ'(i) is expressed by equation (4).
[Equation 4]

γ'(i) = γ(i)^k(d)    (4)

The gradation conversion curve ROM 208 records in advance the relationship between luminance values and the gradation conversion curve. Here, for example, the correction coefficient k(d) for the distance d is desirably set so that its value increases with the magnitude of the distance d. By setting a standard gradation conversion curve in advance in this way, the processing speed can be increased. The corrected gradation conversion curve is transferred to the gradation curve creation unit 206. The subsequent processing is the same as that described with reference to FIG. 2.
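The following Python sketch illustrates one way such a distance-dependent correction of a preset curve could behave. The exponent form used here for equation (4) and the specific k(d) are assumptions made for illustration; only the general behavior, a preset standard curve reshaped by a coefficient that grows with distance, is taken from the text.

import numpy as np

def preset_gamma(i):
    """Preset standard tone curve for normalized luminance i in [0, 1], standing in
    for the curve stored in the gradation conversion curve ROM 208."""
    return np.clip(i, 0.0, 1.0) ** 0.45

def corrected_curve(i, distance_m, base=1.0, slope=0.1):
    """Equation (4) as reconstructed here: gamma'(i) = gamma(i) ** k(d), with a
    hypothetical k(d) that increases with the distance d."""
    k = base + slope * distance_m
    return preset_gamma(i) ** k

i = np.linspace(0.0, 1.0, 4096)            # normalized 12-bit input range
near = corrected_curve(i, distance_m=1.0)  # milder correction for near subjects
far = corrected_curve(i, distance_m=10.0)  # stronger correction for far subjects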
With the configuration of FIG. 5, an adaptive gradation conversion curve corresponding to the distance to the subject can be obtained. Such an adaptive gradation conversion curve yields a subjectively preferable image with appropriate gradation characteristics according to distance and a sense of depth. Since the gradation conversion processing has the same configuration as before with only a correction step added, it has high affinity with conventional devices and is easy to implement. Because the gradation conversion curve is set based on a histogram, gradation conversion can be performed adaptively for a variety of scenes, and a high-quality video signal can be obtained even for scenes with a large contrast ratio. Furthermore, by setting a restriction on the gradation characteristics independently for each region, side effects such as an increase in noise components and a breakdown of color reproduction can be suppressed. In addition, fixing a standard gradation conversion curve enables high-speed processing.
Although the above configuration example assumes processing by hardware, the invention is not limited to such a configuration. It can also be configured so that the video signal processing is carried out through predetermined steps, that is, as a video signal processing method. Furthermore, a configuration is also possible in which the signal from the CCD 102 is output as unprocessed raw data together with the shooting-time information from the control unit 115 as header information, and is processed separately by software. FIG. 6 shows a flowchart of the software processing of the gradation conversion processing in the first embodiment of the present invention. In Step 1, the unprocessed video signal and the header information including accompanying information such as the imaging conditions are read. In Step 2, known interpolation processing, white balance processing, enhancement processing, and the like are performed. From the video signal processed in Step 2, distance information is calculated in Step 5, and a correction coefficient is calculated in Step 6 based on the distance information. In parallel, from the video signal processed in Step 2, local regions centered on the representative points are sequentially extracted in Step 3. Next, in Step 4, a histogram of each extracted local region is created. In Step 7, clipping processing is performed on the histogram obtained in Step 4, based on the correction coefficient from Step 6. In Step 8, a gradation conversion curve is generated by accumulating and normalizing the clipped histogram. The processing of Steps 3 to 8 is performed for all representative points for which distance information can be calculated. In Step 9, gradation conversion processing is performed on the pixel of interest based on the gradation conversion curve from Step 8. In Step 10, it is determined whether all pixels of interest have been processed; if not, the processing returns to Step 9, and if so, it proceeds to Step 11. In Step 11, known compression processing such as JPEG is performed. In Step 12, the processed signal is output and the program ends.
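The flowchart of FIG. 6 maps onto a short software outline. The following Python sketch is a simplified, self-contained illustration, not the patent's implementation: the image is assumed to be already developed and single-channel after Step 2, and, for brevity, each output pixel uses the tone curve of its nearest representative point instead of the bilinear interpolation of equation (3).

import numpy as np

GRAD_LEVELS = 4096  # 12-bit input range

def tone_curve(region, distance_m, weight=0.02, const=1.0):
    """Steps 4 to 8: histogram, distance-dependent clip, accumulate, normalize."""
    hist, _ = np.histogram(region, bins=GRAD_LEVELS, range=(0, GRAD_LEVELS))
    clip = weight * (const / max(distance_m, 1e-6)) * region.size
    cdf = np.cumsum(np.minimum(hist, clip))
    return cdf / cdf[-1] * (GRAD_LEVELS - 1)

def process(image, rep_points, distances):
    """Steps 3 to 10 for a developed 12-bit single-channel image."""
    curves = []
    for (x, y), d in zip(rep_points, distances):
        region = image[max(y - 8, 0):y + 8, max(x - 8, 0):x + 8]  # Step 3: 16 x 16 region
        curves.append(tone_curve(region, d))                      # Steps 4 to 8
    out = np.empty(image.shape, dtype=np.uint8)
    pts = np.array(rep_points, dtype=float)
    for (py, px), value in np.ndenumerate(image):                 # Steps 9 and 10
        nearest = np.argmin((pts[:, 0] - px) ** 2 + (pts[:, 1] - py) ** 2)
        out[py, px] = int(curves[nearest][int(value)]) >> 4       # 12-bit to 8-bit by division
    return out

# Example run on a synthetic 32 x 32 12-bit image with two representative points
img = np.random.randint(0, GRAD_LEVELS, size=(32, 32))
result = process(img, rep_points=[(8, 8), (24, 24)], distances=[1.0, 8.0])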
In the configuration example of FIG. 5 above, a correction coefficient is calculated for each pixel based on the distance information to the subject, and the gradation conversion curve is set using the calculated correction coefficient; however, the invention is not limited to this. A configuration is also possible in which the gradation conversion curve is set directly from the distance information without using a correction coefficient. For example, gradation conversion curves corresponding to the values of the distance information may be set in advance, a gradation conversion curve corresponding to the distance information may be selected for each pixel, and the gradation conversion processing may be performed using the selected curve.
FIG. 7 shows an example of the configuration of the conversion unit 111 for performing the above processing. The configuration of FIG. 7 is obtained from the configuration shown in FIG. 5 by omitting the correction coefficient calculation unit 112 and the gradation curve changing unit 209 and adding a gradation curve setting unit 210. The basic configuration is the same as that of the conversion unit 111 shown in FIG. 5, and identical components are given the same names and numbers. Only the differences from the configuration shown in FIG. 5 are described below.
In FIG. 7, the gradation conversion curve ROM 208 is connected to the gradation curve setting unit 210. The gradation curve setting unit 210 is connected to the gradation curve creation unit 206, and the control unit 115 is connected to the gradation curve setting unit 210. The gradation conversion curve ROM 208 records in advance the relationship between the distance information and gradation conversion curves. The gradation curve setting unit 210 reads from the gradation conversion curve ROM 208 the gradation conversion curve corresponding to the distance information supplied from the control unit 115.
The subsequent processing is the same as in FIG. 5. The invention is not limited to a configuration in which the gradation conversion curve ROM 208 stores gradation conversion curves according to the distance information, as in this example. For example, the relationship between the distance information and luminance information and the corresponding output signal values after gradation conversion may be stored in advance in the gradation conversion curve ROM 208 as a table, and the gradation conversion processing may be performed by referring to the table based on the distance information and luminance information of the pixel of interest.
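A minimal sketch of this lookup-table variant is given below in Python. The table contents and the quantization of the distance information are invented for illustration; the text only specifies that the ROM holds output signal values indexed by distance information and luminance.

import numpy as np

GRAD_LEVELS = 4096  # 12-bit luminance
DIST_BINS = 8       # hypothetical quantization of the distance information

# Hypothetical ROM table: output value for (distance bin, input luminance).
# Nearer subjects (smaller bin index) get a lower exponent here, i.e. a curve
# that lifts dark tones more strongly.
lum = np.linspace(0.0, 1.0, GRAD_LEVELS)
rom_table = np.stack(
    [(lum ** (0.6 + 0.1 * b)) * (GRAD_LEVELS - 1) for b in range(DIST_BINS)]
).astype(np.uint16)

def convert_pixel(luminance, distance_m, max_distance=16.0):
    """Table lookup: distance and luminance of the pixel of interest -> output value."""
    b = min(int(distance_m / max_distance * DIST_BINS), DIST_BINS - 1)
    return int(rom_table[b, int(luminance)])

print(convert_pixel(luminance=2048, distance_m=1.5))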
FIGS. 8 to 13 show a second embodiment of the present invention. FIG. 8 is a configuration diagram of the second embodiment, FIG. 9 is a configuration diagram of the shooting situation estimation unit 117, FIG. 10 is an explanatory diagram of a division pattern for evaluation photometry, FIG. 11 is an explanatory diagram of classification patterns of shooting scenes, FIG. 12 is a characteristic diagram of gradation curve settings for the classification patterns, and FIG. 13 is a flowchart of the gradation conversion processing in the second embodiment.
The configuration of the second embodiment of the present invention will be described with reference to FIG. 8. This embodiment is the first embodiment with a shooting situation estimation unit 117 added. The basic configuration is the same as that of the first embodiment, and identical components are given the same names and numbers. Only the portions that differ from the first embodiment are described below. The photometric evaluation unit 106 and the lens control unit 107 are connected to the shooting situation estimation unit 117. The shooting situation estimation unit 117 is connected to the correction coefficient calculation unit 112, and the control unit 115 is bidirectionally connected to the shooting situation estimation unit 117.
Next, the second embodiment will be described along the signal flow shown in FIG. 8. The second embodiment is basically the same as the first embodiment, and only the differences are described. The three-chip video signal produced by the signal processing unit 110 through known interpolation processing, white balance processing, enhancement processing, and the like is transferred to the conversion unit 111. The shooting situation estimation unit 117 estimates the shooting situation, such as landscape, portrait, or backlight, based on the exposure conditions obtained by the photometric evaluation unit 106 and the focusing conditions obtained by the lens control unit 107, and transfers the estimated shooting situation to the correction coefficient calculation unit 112.
The correction coefficient calculation unit 112 calculates the correction coefficient based on the distance information to the subject obtained by the distance information calculation unit 108 and the information on the shooting situation from the shooting situation estimation unit 117. The conversion unit 111 sets a gradation conversion curve using the histogram of the local region and the correction coefficient calculated by the correction coefficient calculation unit 112, and performs gradation conversion processing on the video signal. The video signal after the gradation conversion processing is transferred to the compression unit 113.
FIG. 9 shows an example of the configuration of the shooting situation estimation unit 117. The shooting situation estimation unit 117 comprises a subject distribution estimation unit 300, a focus position estimation unit 301, and an integration unit 302. Information from the photometric evaluation unit 106 and the lens control unit 107 is transferred to the subject distribution estimation unit 300 and the focus position estimation unit 301. The subject distribution estimation unit 300 and the focus position estimation unit 301 are connected to the integration unit 302, and the integration unit 302 is connected to the correction coefficient calculation unit 112. The focus position estimation unit 301 obtains distance information from the distance information calculation unit 108 via the control unit 115. Based on the distance information at the in-focus position, the subject is classified into, for example, two types, landscape (5 m or more) and portrait (1 m to 5 m), and the classification result is transferred to the integration unit 302 as focusing information.
Meanwhile, the subject distribution estimation unit 300 obtains information on the photometric evaluation via the control unit 115. FIG. 10 is an explanatory diagram showing an example of a division pattern for photometric evaluation, in which the image is divided into 13 regions and a luminance value a_i (i = 1 to 13) is obtained for each region. The subject distribution estimation unit 300 calculates, for example, the parameter of equation (5) as photometric information from the luminance values a_i of the regions, thereby obtaining the luminance distribution.
[Equation 5]

S = a_c - a_e    (5)

Here, a_c denotes the luminance of the central region of FIG. 10 and a_e that of the peripheral regions.
S represents the difference between the luminance of the central region and that of the peripheral regions. For example, in portrait shooting with a strobe fired in a dark place, S takes a large positive value, whereas in a backlit environment S takes a large negative value. The subject distribution estimation unit 300 calculates photometric information of this kind and transfers it to the integration unit 302. The integration unit 302 estimates the shooting situation based on the focusing information and the photometric information.
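The following Python sketch shows how a photometric parameter of this kind could be computed from the 13 region luminance values. Which regions count as central and which as peripheral is an assumption made here for illustration; the actual layout is defined by the division pattern of FIG. 10.

import numpy as np

def photometric_parameter(a, center_ids=(0,), edge_ids=tuple(range(9, 13))):
    """Signed difference between central and peripheral luminance (equation (5)).

    a: the 13 region luminance values a_1..a_13 (0-indexed here).
    center_ids / edge_ids: hypothetical region indices; FIG. 10 defines the real layout.
    """
    a = np.asarray(a, dtype=float)
    return a[list(center_ids)].mean() - a[list(edge_ids)].mean()

# Strobe in a dark place: bright center, dark surroundings -> large positive S
a_strobe = [3000] + [300] * 12
# Backlight: dark center, bright surroundings -> large negative S
a_backlit = [400] + [2800] * 12
print(photometric_parameter(a_strobe), photometric_parameter(a_backlit))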
FIG. 11 is an explanatory diagram showing a case in which four types of shooting situations are estimated from the focusing and photometric information. In the example of FIG. 11, the situations are classified into three types of portrait (Type 1 to Type 3) and landscape (Type 4). When the focusing information indicates portrait, comparing the photometric parameter S with a predetermined threshold th further classifies the situation into a standard scene (Type 1), a dark-place strobe scene (Type 2), or a backlit scene (Type 3). The integration unit 302 transfers the estimated shooting situation to the correction coefficient calculation unit 112.
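A compact Python sketch of this two-stage classification (focus distance first, then the photometric parameter S) is shown below. The distance boundaries follow the text (1 m to 5 m for portrait, 5 m or more for landscape); the threshold value th is a placeholder.

def classify_scene(focus_distance_m, s, th=500.0):
    """Return the scene type of FIG. 11 from focusing and photometric information."""
    if focus_distance_m >= 5.0:
        return "Type4"   # landscape
    if s > th:
        return "Type2"   # portrait, strobe in a dark place
    if s < -th:
        return "Type3"   # portrait, backlit
    return "Type1"       # portrait, standard scene

print(classify_scene(2.0, s=2700.0))  # -> Type2
print(classify_scene(8.0, s=0.0))     # -> Type4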
FIG. 12 is an explanatory diagram showing how the correction coefficient calculation unit 112 calculates the correction coefficient for each of the four types of shooting situations shown in FIG. 11. For example, when the situation is classified as Type 1, the correction coefficient is calculated so that the gain is raised substantially for pixels close to the subject, while the gain is left almost unchanged for distant pixels. Specifically, a clip value corresponding to the distance is set from equation (2) and clip processing is applied to the histogram generated from the neighborhood of the representative point pixel, or a gradation conversion curve that gives the optimum processing for the distance is selected. When the situation is classified as Type 2, the luminance of nearby pixels is high overall and that of distant pixels is low overall, so the correction coefficient is calculated so that the gain is lowered for nearby pixels and raised substantially for distant pixels. Specifically, the clip value is set by multiplying the constant const of equation (2) by the distance d and clip processing is applied to the histogram, or the optimum gradation conversion curve is selected according to the distance. When the shooting situation is classified as Type 3, the luminance of nearby pixels is low overall whereas that of distant pixels is high overall, so the correction coefficient or gradation conversion curve is selected so that the gain is higher for nearby pixels and lower for distant pixels. In the case of Type 4, the main subject is located far away, so the correction coefficient or gradation conversion curve is selected so that the gain is raised substantially for distant pixels.
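As an illustration, the per-type behavior of FIG. 12 could be folded into the distance-dependent coefficient as in the Python sketch below. The functional forms and constants are assumptions; the text only specifies the direction in which the gain should change with distance for each type.

def correction_coefficient(distance_m, scene_type, const=1.0):
    """Hypothetical k(d) per scene type, following the tendencies of FIG. 12."""
    d = max(distance_m, 1e-6)
    if scene_type == "Type1":
        return 1.0 + const / d  # near: large boost; far: tends toward unchanged
    if scene_type == "Type2":
        return const * d        # near: lower gain; far: higher gain
    if scene_type == "Type3":
        return const / d        # near: higher gain; far: lower gain
    return const * d            # Type4: raise the gain for the distant main subject

# The resulting coefficient would scale the clip value C = W * k(d) * N or select
# among preset gradation conversion curves, as described above.
print(correction_coefficient(1.0, "Type1"), correction_coefficient(10.0, "Type4"))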
With the above configuration, an adaptive gradation conversion curve can be obtained based on the distance to the subject and information on the shooting situation. Since the shooting scene is estimated at the time of shooting and the correction coefficient is obtained accordingly, the gradation conversion processing can be controlled for each shooting scene, and a subjectively preferable image suited to the subject can be obtained. Although the above configuration example assumes processing by hardware, the invention is not limited to such a configuration. It can also be configured so that the video signal processing is carried out through predetermined steps, that is, as a video signal processing method. Furthermore, a configuration is also possible in which the signal from the CCD 102 is output as unprocessed raw data together with the shooting-time information from the control unit 115 as header information, and is processed separately by software.
FIG. 13 shows a flowchart of the software processing of the gradation conversion processing in the second embodiment of the present invention. Processing steps identical to those in the flowchart of the gradation conversion processing in the first embodiment shown in FIG. 6 are given the same step numbers. In Step 1, the unprocessed video signal and the header information including accompanying information such as the imaging conditions are read. In Step 2, known interpolation processing, white balance processing, enhancement processing, and the like are performed. From the video signal processed in Step 2, distance information is calculated in Step 5, and focusing information is obtained from this distance information as described above; for example, the focus position is classified into the two types of landscape and portrait. In Step 13, based on the photometric evaluation, the luminance distribution, which changes depending on, for example, backlight or a strobe in a dark place, is calculated. In Step 14, the focus position and the shooting environment are integrated, and the shooting situation is classified into, for example, the four types shown in FIG. 11. In Step 6, the correction coefficient is calculated based on the estimated scene type.
Meanwhile, from the video signal processed in Step 2, local regions centered on the pixel of interest are sequentially extracted in Step 3. Next, in Step 4, a histogram of each extracted region is created. In Step 7, clipping processing is performed on the histogram obtained in Step 4, based on the correction coefficient from Step 6. In Step 8, a gradation conversion curve is generated by accumulating and normalizing the clipped histogram. The processing up to this point is performed for all representative points for which distance information can be calculated. In Step 9, gradation conversion processing is performed on the pixel of interest based on the gradation conversion curve from Step 8. In Step 10, it is determined whether all pixels of interest have been processed; if not, the processing returns to Step 3, and if so, it proceeds to Step 11. In Step 11, known compression processing such as JPEG is performed. In Step 12, the processed signal is output and the program ends. Although the shooting situation is estimated automatically based on the focusing and photometric information in the above configuration example, the invention is not limited to this, and a configuration in which the shooting situation is specified manually is also possible.

Industrial Applicability
As described above, according to the present invention, it is possible to provide a video signal processing device, a video signal processing method, and a video signal processing program that perform high-quality gradation conversion according to the subject. Such a video signal processing device is particularly suitable for application to an imaging apparatus.
As described above, according to the present invention, it is possible to provide a video signal processing device, a video signal processing method, and a video signal processing program that perform high-quality gradation conversion according to a subject. Such a video signal processing apparatus is particularly preferably applied to an imaging apparatus.
Claims
1. A video signal processing device for performing gradation conversion processing on a video signal of a shooting target obtained from an imaging means, the device comprising: distance information acquisition means for acquiring distance information between the shooting target and the imaging means; and gradation conversion means for performing gradation conversion processing on a pixel of interest of the video signal using the distance information.
2. The video signal processing device according to claim 1, wherein the gradation conversion means further comprises: histogram calculation means for calculating a histogram of the pixel of interest and its neighboring region; clipping means for performing clipping processing on the histogram based on the distance information; and gradation conversion curve setting means for setting a gradation conversion curve based on the clipped histogram.
3. The video signal processing apparatus according to claim 1, wherein the gradation conversion means further comprises gradation conversion curve storage means for storing a predetermined gradation conversion curve with respect to the distance information.
4. The video signal processing apparatus according to claim 1, comprising: correction coefficient calculation means for calculating a correction coefficient for the pixel of interest of the video signal using the distance information; and gradation conversion means for performing gradation conversion processing on the pixel of interest using the correction coefficient.
5. The video signal processing apparatus according to claim 1, comprising: luminance distribution calculation means for calculating a luminance distribution of the video signal; shooting situation estimation means for estimating a shooting situation based on the distance information acquired by the distance information acquisition means and the luminance distribution calculated by the luminance distribution calculation means; correction coefficient calculation means for calculating a correction coefficient for the pixel of interest of the video signal using the distance information and the shooting situation information; and gradation conversion means for performing gradation conversion processing on the pixel of interest using the correction coefficient.
6. The video signal processing apparatus according to claim 4, wherein the gradation conversion means further comprises: histogram calculation means for calculating a histogram of the pixel of interest and a neighboring region; and gradation conversion curve setting means for setting a gradation conversion curve based on the correction coefficient and the histogram.
7. The video signal processing apparatus according to claim 4, wherein the gradation conversion means further comprises: histogram calculation means for calculating a histogram of the pixel of interest and a neighboring region; clipping means for performing clipping processing on the histogram based on the correction coefficient; and gradation conversion curve setting means for setting a gradation conversion curve based on the clipped histogram.
8. The video signal processing apparatus according to claim 4, wherein the gradation conversion means further comprises: gradation conversion curve storage means for storing a predetermined gradation conversion curve with respect to luminance; and gradation curve changing means for changing the gradation conversion curve based on the correction coefficient.
9. A video signal processing method comprising: a step of acquiring a video signal of an imaging subject by an imaging means; a step of acquiring distance information between the imaging subject and the imaging means; and a step of performing gradation conversion processing on a pixel of interest of the video signal using the distance information.
10. A video signal processing method comprising: a step of acquiring a video signal of an imaging subject by an imaging means; a step of acquiring distance information between the imaging subject and the imaging means; a step of calculating a correction coefficient for a pixel of interest of the video signal using the distance information; and a step of adaptively performing gradation conversion processing on the pixel of interest using the correction coefficient.
11. A video signal processing program for causing a computer to execute: a procedure for reading an unprocessed video signal captured at the time of shooting; a procedure for acquiring distance information between the photographic subject and the imaging means; and a procedure for performing gradation conversion processing on a pixel of interest of the video signal using the distance information.
12. A video signal processing program for causing a computer to execute: a procedure for reading an unprocessed video signal captured at the time of shooting; a procedure for calculating distance information from the signal-processed video signal; a procedure for calculating a correction coefficient based on the distance information; a procedure for sequentially extracting local regions centered on a pixel of interest from the signal-processed video signal; a procedure for creating a histogram of each extracted region; a procedure for performing clipping processing on the histogram based on the correction coefficient; a procedure for generating a gradation conversion curve by accumulating and normalizing the clipped histogram; and a procedure for performing gradation conversion processing on the pixel of interest based on the gradation conversion curve.
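As a further illustration of the curve-modification variant recited in claims 4 and 8, the sketch below assumes a stored gamma-style base curve and a simple exponent adjustment driven by a distance-derived correction coefficient; the base curve, the inverse-distance weighting, and all constants are assumptions made for illustration and are not limitations taken from the claims.

```python
# Sketch only: modifying a stored gradation conversion curve with a
# distance-derived correction coefficient (claims 4 and 8 style).
import numpy as np

def stored_base_curve(bins=256, gamma=1.0 / 2.2):
    """Gradation conversion curve storage means: a predetermined curve over luminance."""
    x = np.arange(bins) / (bins - 1)
    return (bins - 1) * np.power(x, gamma)

def modified_curve(correction_coeff, bins=256):
    """Gradation curve changing means: bend the stored curve by the coefficient.
    A coefficient above 1 brightens (lifts shadows); below 1 darkens."""
    base = stored_base_curve(bins) / (bins - 1)        # normalize to [0, 1]
    return (bins - 1) * np.power(base, 1.0 / correction_coeff)

def convert_pixel(luma_value, distance, d_ref=2.0):
    """Apply the modified curve to one pixel of interest; the coefficient is a
    simple inverse-distance weighting against an assumed reference distance."""
    coeff = float(np.clip(d_ref / max(distance, 1e-3), 0.5, 2.0))
    curve = modified_curve(coeff)
    return int(round(curve[int(luma_value)]))
```

With these assumed constants, a near subject (coefficient above 1) lifts mid-tones relative to the stored curve, while a distant one (coefficient below 1) pulls them back down.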
1. A procedure for causing a computer to read an unprocessed video signal at the time of shooting; a procedure for calculating distance information from the video signal that has been signal-processed; a procedure for calculating a correction coefficient based on the distance information; A procedure for sequentially extracting a local region centered on a pixel of interest from the signal-processed video signal, a procedure for creating a histogram of the extracted region, and a histogram for the histogram based on the correction coefficient. A procedure for performing the ripping process, a procedure for generating a gradation conversion curve by accumulating and normalizing the histogram after the clipping process, and a step for the target pixel based on the gradation conversion curve. A video signal processing program characterized by executing a key conversion process and
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2006-158104 | 2006-06-07 | ||
JP2006158104A JP2007329619A (en) | 2006-06-07 | 2006-06-07 | Video signal processor, video signal processing method and video signal processing program |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2007142048A1 true WO2007142048A1 (en) | 2007-12-13 |
Family
ID=38801308
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2007/060758 WO2007142048A1 (en) | 2006-06-07 | 2007-05-22 | Video signal processing device, video signal processing method, and video signal processing program |
Country Status (2)
Country | Link |
---|---|
JP (1) | JP2007329619A (en) |
WO (1) | WO2007142048A1 (en) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2010103668A1 (en) * | 2009-03-11 | 2010-09-16 | Olympus Corporation | Image processing apparatus, image processing method, and image processing program |
Families Citing this family (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2009090992A1 (en) | 2008-01-17 | 2009-07-23 | Nikon Corporation | Electronic camera |
JP5181687B2 (en) * | 2008-01-17 | 2013-04-10 | 株式会社ニコン | Electronic camera |
WO2009093386A1 (en) * | 2008-01-21 | 2009-07-30 | Olympus Corporation | Image processing apparatus, image processing program, computer readable storage medium having image processing program stored therein, and image processing method |
JP5052365B2 (en) * | 2008-02-15 | 2012-10-17 | オリンパス株式会社 | Imaging system, image processing method, and image processing program |
JP5117217B2 (en) * | 2008-02-15 | 2013-01-16 | オリンパス株式会社 | Imaging system, image processing method, and image processing program |
JP6701640B2 (en) * | 2015-08-07 | 2020-05-27 | コニカミノルタ株式会社 | Color measuring device, color measuring system, and color measuring method |
WO2023047693A1 (en) * | 2021-09-27 | 2023-03-30 | 富士フイルム株式会社 | Image processing device, image processing method, and program |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2001238129A (en) * | 2000-02-22 | 2001-08-31 | Olympus Optical Co Ltd | Image processing apparatus and recording medium |
JP2003069821A (en) * | 2001-08-23 | 2003-03-07 | Olympus Optical Co Ltd | Imaging system |
JP2003244620A (en) * | 2002-02-19 | 2003-08-29 | Fuji Photo Film Co Ltd | Image processing method and apparatus, and program |
JP2004215100A (en) * | 2003-01-07 | 2004-07-29 | Nikon Corp | Image pickup device |
JP2005318063A (en) * | 2004-04-27 | 2005-11-10 | Olympus Corp | Video signal processing apparatus and program, and video signal recording medium |
- 2006-06-07: JP JP2006158104A patent/JP2007329619A/en not_active Withdrawn
- 2007-05-22: WO PCT/JP2007/060758 patent/WO2007142048A1/en active Application Filing
Also Published As
Publication number | Publication date |
---|---|
JP2007329619A (en) | 2007-12-20 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US7548689B2 (en) | Image processing method | |
JP4218723B2 (en) | Image processing apparatus, imaging apparatus, image processing method, and program | |
US9025049B2 (en) | Image processing method, image processing apparatus, computer readable medium, and imaging apparatus | |
JP5520038B2 (en) | Video processing apparatus and video processing method | |
CN106358030B (en) | Image processing apparatus and image processing method | |
JP4628937B2 (en) | Camera system | |
CN101800858B (en) | Image capturing apparatus and control method thereof | |
JP4468734B2 (en) | Video signal processing apparatus and video signal processing program | |
WO2007142048A1 (en) | Video signal processing device, video signal processing method, and video signal processing program | |
JP4914300B2 (en) | Imaging apparatus and imaging control method | |
US20100245632A1 (en) | Noise reduction method for video signal and image pickup apparatus | |
KR101309008B1 (en) | Image processing method and image processing apparatus | |
WO2008056565A1 (en) | Image picking-up system and image processing program | |
JP2008104009A (en) | Imaging apparatus and method | |
KR20130061083A (en) | Image pickup apparatus, control method for image pickup apparatus, and storage medium | |
WO2007077730A1 (en) | Imaging system and image processing program | |
WO2015119271A1 (en) | Image processing device, imaging device, image processing method, computer-processable non-temporary storage medium | |
JP4999871B2 (en) | Imaging apparatus and control method thereof | |
JP5911525B2 (en) | Image processing apparatus and method, image processing program, and imaging apparatus | |
JP2007329620A (en) | Imaging device and video signal processing program | |
WO2006109702A1 (en) | Image processing device, imaging device, and image processing program | |
KR20120122574A (en) | Apparatus and mdthod for processing image in a digital camera | |
US20130089270A1 (en) | Image processing apparatus | |
JP4705146B2 (en) | Imaging apparatus and imaging method | |
JP2009022044A (en) | Image processing apparatus and image processing program |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
 | 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 07744192; Country of ref document: EP; Kind code of ref document: A1 |
 | 122 | Ep: pct application non-entry in european phase | Ref document number: 07744192; Country of ref document: EP; Kind code of ref document: A1 |