WO2004066637A1 - Imaging system and image processing program - Google Patents
Imaging system and image processing program
- Publication number
- WO2004066637A1 (PCT/JP2004/000395)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- interpolation
- video signal
- signal
- unit
- color
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/80—Camera processing pipelines; Components thereof
- H04N23/84—Camera processing pipelines; Components thereof for processing colour signals
- H04N23/843—Demosaicing, e.g. interpolating colour pixel values
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/10—Circuitry of solid-state image sensors [SSIS]; Control thereof for transforming different wavelengths into image signals
- H04N25/11—Arrangement of colour filter arrays [CFA]; Filter mosaics
- H04N25/13—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements
- H04N25/134—Arrangement of colour filter arrays [CFA]; Filter mosaics characterised by the spectral characteristics of the filter elements based on three different wavelength filter elements
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N2209/00—Details of colour television systems
- H04N2209/04—Picture signal generators
- H04N2209/041—Picture signal generators using solid-state devices
- H04N2209/042—Picture signal generators using solid-state devices having a single pick-up sensor
- H04N2209/045—Picture signal generators using solid-state devices having a single pick-up sensor using mosaic colour filter
- H04N2209/046—Colour interpolation to calculate the missing colour values
Definitions
- the present invention relates to an imaging system and an image processing program for obtaining a high-quality video signal by interpolating missing color signals by adaptively combining a plurality of interpolation methods.
- a single-plate CCD has a color filter arranged on its front surface; such systems are roughly classified into a complementary color system and a primary color system depending on the type of the color filter.
- Japanese Unexamined Patent Application Publication Nos. 7-236147 and 8-289670 describe techniques that detect a correlation or an edge and perform the interpolation processing along the direction of high correlation or along the flat direction where the edge strength is low.
- Japanese Patent Application Laid-Open No. 2000-151989 describes a technique for combining different interpolation methods, such as using cubic interpolation for the R and G signals and nearest-neighbor interpolation for the B signal when performing enlargement processing.
- Japanese Patent Application Laid-Open No. 2000-224601 describes a technique for adaptively switching between interpolation based on hue correlation and linear interpolation.
- means for selecting a direction and interpolating, as described in Japanese Patent Application Laid-Open Nos. 7-236147 and 8-289670, function well when the video signal has a single edge structure; however, when there are multiple edge structures, as in a textured image, the direction selection may fail and the interpolation accuracy may decrease.
- interpolation based on color correlation, as described in the above-mentioned Japanese Patent Application Laid-Open No. 2000-224601, permits highly accurate interpolation even when there are a plurality of edge structures, such as a texture, as long as a single color correlation holds; however, when that is not the case, the estimation of the color correlation fails and artifacts may occur.
- means for combining different interpolation methods, as described in the above-mentioned Japanese Patent Application Laid-Open No. 2000-151989, perform each interpolation process in the region for which it is well suited, so that higher-precision interpolation can be achieved overall; the issue, however, is how to properly control the switching of the interpolation processing.
- because the switching method described in Japanese Patent Application Laid-Open No. 2000-151989 is fixed, the accuracy is low when, for example, the B (blue) signal has a complicated edge structure, and in some cases the advantages of each interpolation method could not be fully exploited.
- furthermore, since the switching control is performed based on an original signal in which one or more color signals are missing, there is a problem that the control method becomes complicated and the processing time becomes longer.
- the present invention has been made in view of the above circumstances, and an object of the present invention is to provide an imaging system and an image processing program that can more accurately interpolate missing color signals in a video signal. Disclosure of the invention
- an imaging system according to the present invention processes a video signal in which, of a predetermined plural number of color signals constituting the video signal of each pixel, one or more are missing depending on the pixel position. The system comprises: a first interpolation unit for interpolating the missing color signals from the video signal by a first interpolation method based on edge detection; means for determining, from the video signal and the color signals interpolated by the first interpolation unit, whether the interpolation of the missing color signals is insufficient; and a second interpolation unit for interpolating the missing color signals from the video signal by a second interpolation method based on color correlation, different from the first interpolation method, when the interpolation is determined to be insufficient.
- FIG. 1 is a block diagram illustrating a configuration of an imaging system according to a first embodiment of the present invention.
- FIG. 2 is a diagram showing a color arrangement of a color filter according to the first embodiment.
- FIG. 3 is a block diagram showing the configuration of the first interpolation unit in the first embodiment.
- FIG. 4 is a diagram for explaining an interpolation method based on an edge direction in the first embodiment.
- FIG. 5 is a block diagram showing the configuration of the verification unit according to the first embodiment.
- FIG. 6 is a diagram for explaining regression of the color correlation to a linear form in the first embodiment.
- FIG. 7 is a block diagram showing the configuration of the second interpolation unit in the first embodiment.
- FIG. 8 is a flowchart showing interpolation processing by the image processing program according to the first embodiment.
- FIG. 9 is a block diagram showing a configuration of an imaging system according to the second embodiment of the present invention.
- FIG. 10 is a block diagram showing one configuration example and another configuration example of the separation unit in the second embodiment.
- FIG. 11 is a diagram for explaining edge extraction in the second embodiment.
- FIG. 12 is a block diagram illustrating a configuration of a first interpolation unit according to the second embodiment.
- FIG. 13 is a block diagram showing the configuration of the verification unit in the second embodiment.
- FIG. 14 is a table for explaining hue classes in the second embodiment.
- FIG. 15 is a flowchart showing the interpolation processing by the image processing program in the second embodiment.
- FIG. 16 is a block diagram showing the configuration of the imaging system according to the third embodiment of the present invention.
- FIG. 17 is a block diagram showing the configuration of the verification unit in the third embodiment.
- FIG. 18 is a flowchart showing the interpolation processing by the image processing program in the third embodiment.
- FIGS. 1 to 8 show a first embodiment of the present invention.
- FIG. 1 is a block diagram showing the configuration of the imaging system
- FIG. 2 is a diagram showing the color arrangement of the color filter
- FIG. 3 is a block diagram showing the configuration of the first interpolation unit
- FIG. 4 is a diagram for explaining interpolation based on the edge direction
- FIG. 5 is a block diagram showing the configuration of the verification unit
- FIG. 6 is a diagram for explaining regression of the color correlation to a linear form
- FIG. 7 is a block diagram showing the configuration of the second interpolation unit
- FIG. 8 is a flowchart showing the interpolation processing by the image processing program.
- as shown in FIG. 1, this imaging system includes: a lens system 1 for forming an image of a subject; an aperture 2 disposed within the lens system 1 to define the range of the light beam passing through it; a low-pass filter 3 for removing unnecessary high-frequency components from the light beam imaged by the lens system 1; a CCD 4, an image sensor that converts the optical subject image formed through the low-pass filter 3 into an electrical video signal and outputs it; an A/D converter 5 that converts the analog video signal output from the CCD 4 into a digital signal; and an image buffer 6 for temporarily storing the digital image data output from the A/D converter 5.
- the system further includes: a photometric evaluation unit 7 that performs a photometric evaluation of the subject based on the image data stored in the image buffer 6 and controls the aperture 2 and the CCD 4 based on the evaluation result; an in-focus detection unit 8 that performs AF detection based on the image data stored in the image buffer 6 and drives an AF motor 9 based on the detection result; and the AF motor 9, which is controlled by the in-focus detection unit 8 to drive the focus lens and the like included in the lens system 1.
- for the interpolation processing, the system includes: a first interpolation unit 10, described in detail later, which interpolates the missing color signals of the image data recorded in the image buffer 6 based on the edge direction; a work buffer 11 that temporarily stores the original image data transferred from the image buffer 6 via the first interpolation unit 10 together with the missing-color interpolation data processed by the first interpolation unit 10, and onto which the missing-color interpolation data processed by a second interpolation unit 13 described later is overwritten; a verification unit 12, an accuracy verification means that verifies, based on the original image data and the interpolation data from the first interpolation unit 10 stored in the work buffer 11, the areas in which a single color correlation is established; and the second interpolation unit 13, described in detail later, which reads the original image data from the image buffer 6 for the areas determined by the verification unit 12 to have a single color correlation and performs interpolation of the missing color signals based on the color correlation.
- the system also includes: a signal processing unit 14 that, after the processing by the second interpolation unit 13 is completed, performs known enhancement processing and compression processing on the interpolated image data output from the work buffer 11; an output unit 15 for outputting the image data from the signal processing unit 14 for recording on, for example, a memory card; an external I/F unit 17 serving as information acquisition means, with interfaces to a mode switch for switching between various shooting modes and image quality modes, a switch for switching the interpolation processing, and the like; and a control unit 16, a control means consisting of a microcomputer or the like that is bidirectionally connected to the photometric evaluation unit 7, the in-focus detection unit 8, the first interpolation unit 10, the verification unit 12, the second interpolation unit 13, the signal processing unit 14, the output unit 15, and the external I/F unit 17, integrally controls the imaging system including these, and also serves as information acquisition means and judgment means.
- an imaging system having a single-plate primary color system color filter is assumed.
- specifically, a primary color Bayer type color filter as shown in FIG. 2 is assumed.
- this primary color Bayer type color filter has a basic arrangement of 2 × 2 pixels in which, as shown in Fig. 2(A), two G (green) pixels are arranged diagonally and R (red) and B (blue) are arranged in the remaining two pixels; this basic arrangement is repeated two-dimensionally in the vertical and horizontal directions to cover every pixel on the CCD 4, resulting in the filter arrangement shown in Fig. 2(B).
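- to make the arrangement concrete, the following is a minimal Python sketch that tiles the 2 × 2 basic arrangement over a sensor-sized grid; which diagonal carries the G pixels and the phase of R and B are assumptions here, since Fig. 2 itself is not reproduced in this text.

```python
import numpy as np

def bayer_cfa_mask(height, width):
    """Tile the 2x2 primary color Bayer basic arrangement over the sensor.

    Two G pixels sit on one diagonal of each 2x2 cell, with R and B on
    the other (the exact phase is an assumption; Fig. 2 of the patent
    fixes the actual layout).
    """
    mask = np.empty((height, width), dtype='<U1')
    mask[0::2, 0::2] = 'R'   # even rows, even columns
    mask[0::2, 1::2] = 'G'   # even rows, odd columns
    mask[1::2, 0::2] = 'G'   # odd rows, even columns
    mask[1::2, 1::2] = 'B'   # odd rows, odd columns
    return mask

print(bayer_cfa_mask(4, 4))   # the basic arrangement repeated vertically and horizontally
```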
- the user can set image quality modes such as the compression ratio and image size, shooting modes such as character image shooting and movie shooting, and switching of the interpolation processing via the external I/F unit 17. After these settings are made, the user enters the pre-shooting mode by half-pressing the shutter button, a two-stage push-button switch.
- the video signal captured and output by the CCD 4 via the lens system 1, the aperture 2, and the low-pass filter 3 is converted into a digital signal by the A/D converter 5 and transferred to the image buffer 6.
- the video signal in the image buffer 6 is then transferred to the photometric evaluation unit 7 and the focus detection unit 8.
- the photometric evaluation unit 7 obtains the luminance level in the image, and controls the aperture value of the aperture 2 and the electronic shutter speed of the CCD 4, etc., so that appropriate exposure is obtained.
- the in-focus detection unit 8 detects the edge intensity in the image and controls the AF motor 9 so that the edge intensity is maximized, thereby obtaining an in-focus image.
- this completes the preparation for the main shooting; the main shooting is then performed when it is detected via the external I/F unit 17 that the shutter button has been fully pressed.
- this actual photographing is performed based on the exposure conditions obtained by the photometric evaluation unit 7 and the focusing conditions obtained by the in-focus detection unit 8, and these photographing conditions are transferred to the control unit 16.
- the video signal is transferred to the image buffer 6 and stored in the same manner as in the pre-shooting.
- the first interpolation unit 10 reads out the video signal related to the actual shooting stored in the image buffer 6 under the control of the control unit 16 and performs an interpolation process based on the edge direction.
- as described above, in the present embodiment a single-plate imaging system with primary color filters arranged in front of the CCD 4 is assumed, so two color signals are missing for each pixel. Therefore, in this interpolation process, the two missing color signals are generated and output as interpolation signals.
- the first interpolation unit 10 transfers the interpolation signal obtained by the interpolating process and the original signal read from the image buffer 6 to the working buffer 11. In this way, when the interpolated signal and the original signal are stored in the working buffer 11, a three-plate signal in which three RGB signals are arranged for one pixel is obtained.
- the verification unit 12 sequentially reads the three-plate signal stored in the work buffer 11 in units of a predetermined local area (for example, 5 × 5 pixels). Then, the verification unit 12 regresses the correlation in the local area to a linear form, verifies whether a single color correlation is established, and transfers the verification result to the second interpolation unit 13.
- based on the control of the control unit 16, the second interpolation unit 13 reads from the image buffer 6 the original signal corresponding to each local area determined by the verification unit 12 to have a single color correlation, and performs interpolation processing based on the color correlation.
- the interpolation signal from the second interpolation unit 13 is transferred to the work buffer 11 and recorded so as to overwrite the interpolation result from the first interpolation unit 10.
- in this way, for a local area determined by the verification unit 12 to have a single color correlation, the stored interpolation signal is replaced with the interpolation signal from the second interpolation unit 13.
- the control unit 16 causes the verification by the verification unit 12, and the interpolation by the second interpolation unit 13 performed only when necessary according to the verification result, to be carried out for all the signals in the work buffer 11, and then controls so that the three-plate signal in the work buffer 11 is transferred to the signal processing unit 14.
- the signal processing unit 14 performs known enhancement processing and compression processing on the interpolated video signal based on the control of the control unit 16 and transfers the signal to the output unit 15.
- the output unit 15 outputs the image data from the signal processing unit 14 for recording on, for example, a memory card.
- as shown in FIG. 3, the first interpolation unit 10 includes: an extraction unit 21, an extraction means for sequentially extracting regions of a predetermined size from the image data stored in the image buffer 6; an area buffer 22 for storing the image data of the region extracted by the extraction unit 21; an edge extraction unit 23, an edge extraction means for calculating the edge components of the region stored in the area buffer 22; a weight calculation unit 24, a weight calculation means for normalizing the edge components calculated by the edge extraction unit 23 and calculating weight coefficients; a weight buffer 25 for storing the weight coefficients calculated by the weight calculation unit 24; an interpolation unit 26, an interpolation signal calculation means for calculating color difference components as interpolation signals for the pixel of interest in the region stored in the area buffer 22; an interpolation value buffer 27 for storing the color difference components calculated by the interpolation unit 26; and a calculation unit 28, a calculation means for calculating the missing color component at the pixel position of interest from the color difference components stored in the interpolation value buffer 27 and the weight coefficients stored in the weight buffer 25, and for outputting it to the area buffer 22 and the work buffer 11.
- the calculation result from the calculation unit 28 is also stored in the area buffer 22 so that missing components relating to other colors can be interpolated using the calculated missing color component.
- the control unit 16 is bidirectionally connected to the extraction unit 21, the edge extraction unit 23, the weight calculation unit 24, the interpolation unit 26, and the calculation unit 28, and controls these units.
- the extraction unit 21 sequentially extracts regions of a predetermined size (for example, 6 × 6 pixel size) from the image buffer 6 based on the control of the control unit 16, and transfers them to the area buffer 22.
- the first interpolation unit 10 performs interpolation processing of the central 2 × 2 pixel positions using such a 6 × 6 pixel size region.
- when extracting the 6 × 6 pixel size regions, the extraction unit 21 therefore shifts the position by two pixels in the X or Y direction, sequentially extracting regions that overlap by four pixels in the X or Y direction.
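- the overlapping extraction described above can be sketched as follows (a minimal Python illustration with hypothetical names; the 6 × 6 size and 2-pixel step come from the description, everything else is scaffolding):

```python
import numpy as np

def extract_regions(img, size=6, step=2):
    """Sketch of the extraction unit 21: slide a size x size window in
    steps of `step` pixels, so successive regions overlap by size - step
    pixels (4 pixels for the 6x6 / 2-pixel case described above).
    Yields the top-left corner and a view of each region."""
    height, width = img.shape
    for y in range(0, height - size + 1, step):
        for x in range(0, width - size + 1, step):
            yield (y, x), img[y:y + size, x:x + size]

mosaic = np.arange(12 * 12).reshape(12, 12)
for (y, x), region in extract_regions(mosaic):
    pass  # each region serves to interpolate its central 2x2 pixels
```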
- based on the control of the control unit 16, the edge extraction unit 23 and the interpolation unit 26 first perform the interpolation processing for the missing component G22 at the R22 position and the missing component G33 at the B33 position.
- the edge extraction unit 23 uses the values of the peripheral pixels as shown in FIG. 4(B) for the R22 pixel, calculates the edge components in four directions (up, down, left, and right) by Equation 1 below, and transfers them to the weight calculation unit 24.
- the weight calculation unit 24 divides the edge components in the four directions calculated by the edge extraction unit 23 by their sum total to calculate normalized weight coefficients, as shown in Expressions 2 and 3 below, and transfers them to the weight buffer 25 for storage.
- the interpolating unit 26 interpolates the color difference components in the four directions (up, down, left, and right) for the R22 pixel as shown in the following Expression 4, and transfers the result to the interpolation value buffer 27 for storage.
- similarly, G33 at the B33 position is calculated using the values of the peripheral pixels as shown in Fig. 4(C), and transferred to the area buffer 22 and the work buffer 11 for storage.
- Equations 6 to 10 corresponding to Equations 1 to 5 above when calculating G33 are as follows.
- the difference when calculating G33 is that, whereas for G22 the color difference components in the four directions are Cr components (G − R), for G33 they are Cb components (G − B).
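- since Equations 1 to 5 themselves are not reproduced in this text, the following Python sketch only illustrates the structure of the processing described above: four directional edge components around an R pixel, weights derived from them (taken here as inversely proportional to edge strength, an assumption consistent with interpolating along flat directions), directional color-difference (Cr = G − R) estimates, and a weighted combination that recovers the missing G.

```python
import numpy as np

def interpolate_g_at_r(img, y, x):
    """Sketch of the edge-direction interpolation of the missing G at an
    R pixel of a Bayer mosaic `img` (y, x at least two pixels from the
    border).  The exact forms of the patent's Equations 1-5 are not
    reproduced here, so the formulas below are plausible stand-ins."""
    r = float(img[y, x])
    g = {'up':    float(img[y - 1, x]), 'down':  float(img[y + 1, x]),
         'left':  float(img[y, x - 1]), 'right': float(img[y, x + 1])}
    # Same-color (R) neighbors two pixels away in each direction.
    r2 = {'up':    float(img[y - 2, x]), 'down':  float(img[y + 2, x]),
          'left':  float(img[y, x - 2]), 'right': float(img[y, x + 2])}

    # Directional edge components (assumed form of Equation 1).
    edge = {d: abs(r - r2[d]) for d in g}
    # Normalized weights, assumed inversely proportional to edge
    # strength (stand-in for Expressions 2 and 3).
    inv = {d: 1.0 / (edge[d] + 1e-6) for d in g}
    total = sum(inv.values())
    w = {d: inv[d] / total for d in g}

    # Directional color-difference estimates Cr = G - R (Expression 4)
    # and the weighted reconstruction G22 = R22 + sum(Wk * Crk)
    # (Expression 5).
    cr = {d: g[d] - 0.5 * (r + r2[d]) for d in g}
    return r + sum(w[d] * cr[d] for d in g)
```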
- the interpolation process for the G signal as described above is performed for all the signals on the image buffer 6 by sequentially extracting regions of 6 × 6 pixel size, and the area buffer 22 and the work buffer 11 store all the interpolated G signals.
- next, under the control of the control unit 16, regions of 6 × 6 pixel size are again sequentially extracted, and the edge extraction unit 23 and the interpolation unit 26 interpolate the missing components R32 and B32 at the G32 position, the missing components R23 and B23 at the G23 position, the remaining missing component B22 at the R22 position, and the remaining missing component R33 at the B33 position, as shown in Figs. 4(D) to 4(G).
- these processes are performed using the G signal calculated as described above.
- FIG. 4(D) shows the peripheral pixels used when interpolating R23 and B23 at the G23 position.
- FIG. 4(E) shows the peripheral pixels used when interpolating R32 and B32 at the G32 position.
- Equations 21 to 25 corresponding to Equations 1 to 5 above when calculating R32 are as follows.
- R32 = G32 − Σ Crk · Wk
- FIG. 4(F) shows the peripheral pixels used when interpolating B22 at the R22 position.
- Formulas 31 to 35 corresponding to Formulas 1 to 5 above when calculating B22 are as follows.
- FIG. 4 (G) shows the state of the peripheral pixels used when interpolating R33 at position B33.
- Equations 36 to 40 corresponding to Equations 1 to 5 above when calculating R33 are as follows.
- as shown in FIG. 5, the verification unit 12 includes: an extraction unit 31, an extraction means for sequentially extracting regions of a predetermined size from the three-plate image data stored in the work buffer 11; an area buffer 32 for storing the image data of the region extracted by the extraction unit 31; a correlation calculation unit 33, a correlation calculation means for calculating a linear form indicating the color correlation of the region stored in the area buffer 32; a coefficient buffer 34 for storing the bias term of the linear form calculated by the correlation calculation unit 33, which is used to determine whether the region is a single-hue region; and a correlation verification unit 35, which compares the absolute value of the bias term stored in the coefficient buffer 34 with a predetermined threshold value and, when the absolute value of the bias term is equal to or smaller than the threshold value, transfers the correlation information to the second interpolation unit 13.
- the control unit 16 is bidirectionally connected to the extraction unit 31, the correlation calculation unit 33, and the correlation verification unit 35, and controls them.
- the extraction unit 31 sequentially extracts regions of a predetermined size (for example, a 5 × 5 pixel size region) from the work buffer 11 based on the control of the control unit 16, and transfers them to the area buffer 32.
- under the control of the control unit 16, the correlation calculation unit 33 regresses the correlation between the RGB signals of the region stored in the area buffer 32 to a linear form using the known least-squares approximation.
- Fig. 6(A) shows an input image composed of a single hue, and Fig. 6(B) shows the regression of the color correlation between the R and G signals to a linear form for that input image.
- regression of the color correlation between the G and B signals, and of that between the R and B signals, to a linear form is performed in the same way as shown in Fig. 6(B).
- on the other hand, when the input image is composed of a plurality of hues (for example, a first hue region A and a second hue region B in the illustrated example) as shown in Fig. 6(C), multiple linear forms are required to indicate the color correlation, as shown in Fig. 6(D); when these are regressed to one linear form, the bias term β deviates from a value near 0.
- for the input image (here, the input area), the correlation calculation unit 33 calculates the bias term β shown in Expression 41 and transfers it to the coefficient buffer 34.
- the correlation verification unit 35 compares the absolute value of the bias term β stored in the coefficient buffer 34 with a predetermined threshold value and, when the absolute value of the bias term is equal to or smaller than the threshold value, transfers the position information of the corresponding area to the second interpolation unit 13. Thereafter, upon receiving from the control unit 16 information indicating that the interpolation processing in the second interpolation unit 13 has been completed, the correlation verification unit 35 moves to the next area and performs the processing described above.
- when the absolute value of the bias term exceeds the threshold value, the correlation verification unit 35 shifts to the next area without transferring anything to the second interpolation unit 13, and performs the processing described above.
- the verification unit 12 performs such processing for all signals on the work buffer 11 while sequentially shifting the area.
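- Expression 41 is not reproduced in this text, but the verification step can be sketched as follows: regress one color plane against another over the local area by least squares as G ≈ α·R + β (the gain-plus-bias form is an assumption consistent with the description), and treat the area as having a single color correlation when |β| is at or below a threshold.

```python
import numpy as np

def has_single_color_correlation(r_block, g_block, threshold):
    """Sketch of the verification unit 12: regress G against R over a
    local area (e.g. 5x5 of the three-plate signal) as G ~ alpha*R + beta
    by least squares and test the bias term.  A single hue yields a line
    passing near the origin, so |beta| <= threshold is read as "a single
    color correlation is established"."""
    r = r_block.ravel().astype(float)
    g = g_block.ravel().astype(float)
    design = np.column_stack([r, np.ones_like(r)])
    (alpha, beta), *_ = np.linalg.lstsq(design, g, rcond=None)
    return abs(beta) <= threshold
```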
- as shown in FIG. 7, the second interpolation unit 13 includes: an extraction unit 41, an extraction means for sequentially extracting, from the image data stored in the image buffer 6, the regions of a predetermined size determined by the verification unit 12 to be single-hue regions; an area buffer 42 for storing the image data of the region extracted by the extraction unit 41; a correlation calculation unit 43 for calculating the linear form indicating the color correlation of the region stored in the area buffer 42; and a calculation unit 44 for calculating the color signals missing from the original signal stored in the area buffer 42 using the linear form, and outputting them to the work buffer 11.
- the control unit 16 is bidirectionally connected to the extraction unit 41, the correlation calculation unit 43, and the calculation unit 44, and controls them.
- based on the control of the control unit 16, when the position information of a region determined to be a single-hue region is transferred from the verification unit 12, the extraction unit 41 extracts that region from the image buffer 6 and transfers it to the area buffer 42.
- the correlation calculation unit 43 regresses the correlation as a linear form from the single-plate original signal on the area buffer 42.
- the correlation calculation unit 43 finds a linear form as shown in Expression 42 between each pair of the R–G, G–B, and R–B signals, and transfers the result to the calculation unit 44.
- the calculation unit 44 calculates the missing color signals based on the linear form shown in Expression 42 and the original signal on the area buffer 42, and transfers them to the work buffer 11.
- in this way, the interpolation value obtained by the first interpolation unit 10 and stored in the work buffer 11 is overwritten by the interpolation value obtained by the second interpolation unit 13 as described above.
- after calculating all the missing color signals in the area, the calculation unit 44 notifies the control unit 16 that the processing has been completed.
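- Expression 42 is likewise not reproduced; under the same assumption of a linear relation per signal pair, the processing of the correlation calculation unit 43 and the calculation unit 44 can be sketched as below (how sample pairs are formed from the single-plate mosaic, e.g. from neighboring samples, is left abstract here):

```python
import numpy as np

def fit_linear_form(x_samples, y_samples):
    """Sketch of the correlation calculation unit 43: least-squares
    regression y ~ alpha*x + beta between two color signals (assumed
    form of Expression 42)."""
    design = np.column_stack([x_samples.astype(float),
                              np.ones(x_samples.size)])
    (alpha, beta), *_ = np.linalg.lstsq(design, y_samples.astype(float),
                                        rcond=None)
    return alpha, beta

def predict_missing(known, alpha, beta):
    """Sketch of the calculation unit 44: predict a missing color signal
    at a pixel from the color signal that is present there."""
    return alpha * float(known) + beta
```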
- the present invention is not limited to such a configuration.
- for example, the signal from the CCD 4 can be regarded as unprocessed raw (Raw) data and output after adding the color filter information, the image size, and the like as header information, and then processed on an external computer or the like using an image processing program provided as separate software.
- the original signal consisting of Raw data and the header information are read (step S1), and the original signal is extracted in units of block areas of a predetermined size (step S2). Then, for the extracted area, the edge components for each direction are calculated as shown in Equation 1 above (step S3), and the weight coefficient for each direction is calculated as shown in Equations 2 and 3 above (step S4).
- next, the interpolation values of the color difference signal for each direction are obtained as shown in Equation 4 above (step S5). Subsequently, based on the weight coefficients obtained in step S4 and the interpolation values obtained in step S5, the G signal is calculated and output as shown in Equation 5 above (step S6).
- it is then determined whether such processing has been completed in all the block areas extracted corresponding to all signals (step S7); if not completed, the flow returns to step S2 and the processing described above is repeated for the next block area.
- if completed, block areas of a predetermined size are extracted again (step S8), the edge components for each direction are calculated (step S9), and the weight coefficient for each direction is calculated (step S10).
- interpolation values of the color difference signal for each direction are obtained for the area extracted in step S8 (step S11). Subsequently, based on the weight coefficients obtained in step S10 and the interpolation values obtained in step S11, the missing R and B signals are calculated and output as described above (step S12).
- it is determined whether such processing has been completed in all the block areas extracted corresponding to all the signals (step S13); if not completed, the process returns to step S8 and the above-described processing is repeated for the next block area.
- if completed, the first interpolation signals output in steps S6 and S12 are extracted in units of block areas of a predetermined size (step S14), and a linear form indicating the correlation between the color signals is obtained as shown in Equation 41 above (step S15).
- the absolute value of the bias term of the linear form obtained in step S15 is compared with a predetermined threshold value Th (step S16); if the bias term is equal to or less than the threshold value Th, the original signal is extracted for the corresponding block area of the predetermined size (step S17), and a linear form indicating the correlation between the color signals is obtained as shown in Equation 42 above (step S18).
- the missing color signals are then calculated and output based on the linear form (step S19); this output overwrites the G signal output in step S6 and the R and B signals output in step S12.
- if the bias term is larger than the threshold value Th in step S16, or when the processing in step S19 is completed, it is determined whether the processing has been completed in all the block areas extracted for all signals (step S20); if not completed, the process returns to step S14 and the above processing is repeated for the next block area.
- if completed, the interpolation signal is output (step S21), and this processing ends.
- the processing is always performed by combining the first interpolation processing and the second interpolation processing, but the present invention is not limited to this.
- for example, when an image quality mode with a high compression rate that does not require high-definition interpolation processing is selected via the external I/F unit 17, or when a shooting mode requiring high-speed processing, such as movie shooting, is selected, it is also possible to perform only the first interpolation processing without performing the second interpolation processing.
- in that case, the operation of the verification unit 12 may be stopped, and the control unit 16 may control so that signals are not transferred to the second interpolation unit 13.
- the control unit 16 determines whether or not to perform the second interpolation processing based on image quality information relating to the image quality of the video signal, such as the compression ratio and the image size, and on shooting-mode information of the imaging system, such as character image shooting and moving image shooting.
- the primary color Bayer type single plate CCD has been described as an example, but the present invention is not limited to this.
- the present invention can be similarly applied to a single-chip CCD having a complementary color filter, or to a two-chip imaging system or a three-chip imaging system in which pixels are shifted.
- the number of color signals is not limited to three; for example, six colors may be used in a system that performs more accurate color reproduction, and broadly, any predetermined number of two or more colors may be used. However, it is well known that this predetermined plural number needs to be 3 or more in order to obtain an image that is recognized as a normal color image by the human visual mechanism.
- according to the first embodiment, it is possible to adaptively switch between the first interpolation processing, which performs interpolation based on the edge direction, and the second interpolation processing, which performs interpolation based on the color correlation, making highly accurate interpolation processing possible.
- since the switching of the interpolation processing is performed using a signal in the three-plate state with no missing color signals, obtained from both the original signal and the first interpolation signal, high-accuracy and high-speed switching processing becomes possible.
- by performing the interpolation processing based on the edge direction, high-precision interpolation processing can be performed in a region having a single edge structure. Further, by performing the interpolation processing based on the color correlation, it is possible to perform high-precision interpolation processing in a region consisting of a single hue.
- verification based on regressing the color correlation to a linear form has a high affinity with the interpolation method using the color correlation, and is suitable for controlling the switching between the interpolation method using the color correlation and the other interpolation method.
- in addition, by obtaining information such as when high-precision interpolation is not required due to high compression, or when high-speed processing is prioritized as in movie shooting, the control of whether or not to perform the second interpolation can be automated, and operability is improved.
- since the processing can also be performed by software using an image processing program, the degree of freedom regarding the processing is improved.
- a high-precision interpolation process can be performed because a plurality of interpolation means are adaptively combined.
- since both the original signal and the interpolated color signals are used in combination for the switching control of the plurality of interpolation means, high-precision and high-speed switching is possible.
- FIGS. 9 to 15 show a second embodiment of the present invention.
- FIG. 9 is a block diagram showing the configuration of the imaging system
- FIG. 10 is a block diagram showing one configuration example and another configuration example of the separation unit
- FIG. 11 is a diagram for explaining edge extraction
- FIG. 12 is a block diagram showing the configuration of the first interpolation unit
- FIG. 13 is a block diagram showing the configuration of the verification unit
- FIG. 14 is a table for explaining hue classes
- FIG. 15 is a flowchart showing the interpolation processing by the image processing program.
- the same parts as those in the above-described first embodiment are denoted by the same reference numerals, and description thereof will be omitted. Only different points will be mainly described.
- the imaging system according to the second embodiment has a configuration in which a separation unit 51 serving as separation means and an adjustment unit 52 serving as adjustment means are added to the configuration of the first embodiment described above.
- the separation unit 51 reads the video signal (original signal) of a predetermined local area from the image buffer 6 and determines, as a predetermined characteristic of the video signal, whether the area is a flat area or an edge area; if it is a flat area, the video signal of the local area is output to the first interpolation unit 10, and if it is an edge area, it is output to the second interpolation unit 13.
- the separation unit 51 also outputs to the adjustment unit 52 area information indicating whether the area is a flat area or an edge area.
- based on the area information from the separation unit 51 and hue information on whether the local area is a single-hue or multiple-hue area, the adjustment unit 52 causes the first interpolation unit 10 or the second interpolation unit 13 to re-interpolate where necessary, so that the interpolation signal already stored in the work buffer 11 is replaced by the newly obtained interpolation signal.
- in the second embodiment, the verification unit 12 outputs the verified hue information to the adjustment unit 52 instead of to the second interpolation unit 13.
- the work buffer 11 outputs the interpolation signal not only to the verification unit 12 but also to the adjustment unit 52.
- furthermore, the first interpolation unit 10 in the second embodiment performs linear interpolation on the R signal and the B signal.
- the control unit 16 is also bidirectionally connected to the separation unit 51 and the adjustment unit 52 added in the second embodiment, and controls these units.
- the video signal stored in the image buffer 6 is sequentially transferred to the separation unit 51 in units of a predetermined local area (for example, 8 ⁇ 8 pixels) under the control of the control unit 16.
- the separation unit 51 calculates edge components in a plurality of predetermined directions for the central 2 × 2 pixels of the transferred 8 × 8 pixel area. Then, the separation unit 51 compares the calculated edge components with a predetermined threshold value, counts the total number of valid edge components equal to or larger than the threshold value, and judges from this total whether the area is a flat area or an edge area. When the local area is determined to be a flat area, the separation unit 51 transfers the 8 × 8 pixel local area to the first interpolation unit 10; when it is determined to be an edge area, the 8 × 8 pixel local area is transferred to the second interpolation unit 13.
- the separation unit 51 obtains area information indicating whether the area is a flat area or an edge area for all areas in units of 2 ⁇ 2 pixels at the center of the 8 ⁇ 8 pixel area, and adjusts the area information. Transfer to Part 52.
- the first interpolation unit 10 calculates the missing color signals for the central 2 × 2 pixels by the known linear interpolation process for the R and B signals and the known cubic interpolation process for the G signal. The first interpolation unit 10 then outputs the calculated interpolation signals and the original signal to the work buffer 11.
- the second interpolation unit 13 performs an interpolation process based on color correlation on the central 2 × 2 pixels, and outputs the calculated interpolation signals and the original signal to the work buffer 11.
- the control unit 16 sequentially transfers the three-plate signals stored in the work buffer 11 to the verification unit 12 in units of a predetermined local area (for example, 8 × 8 pixels).
- the verification unit 12 calculates a hue class for each pixel in the local area, classifying it into one of 13 hue classes (see FIG. 14). Then, based on the distribution of the hue classes obtained for each pixel, the verification unit 12 determines whether the local area is a single-hue area or a multiple-hue area, and transfers the obtained hue information to the adjustment unit 52.
- the adjustment unit 52 adjusts whether or not to perform the interpolation process again, based on the area information from the separation unit 51 indicating whether the area is a flat area or an edge area and the hue information from the verification unit 12 indicating whether the local area is a single-hue or multiple-hue area.
- specifically, the adjustment unit 52 does not perform the interpolation process again in the case of a “flat and multiple-hue area” or an “edge and single-hue area”; on the other hand, in the case of a “flat and single-hue area” or an “edge and multiple-hue area”, it adjusts so that the interpolation process is performed again.
- of the cases where the interpolation processing is performed again, a “flat and single-hue area” is an area that was interpolated by the first interpolation unit 10 in the first interpolation processing.
- the adjusting unit 52 extracts the original signal of the corresponding area from the working buffer 11 and transfers it to the second interpolating unit 13 so that the interpolating process is performed based on the color correlation. .
- conversely, an “edge and multiple-hue area” is an area that was interpolated by the second interpolation unit 13 in the first interpolation processing.
- in this case, the adjustment unit 52 extracts the original signal of the corresponding area from the work buffer 11 and transfers it to the first interpolation unit 10 so that interpolation is performed by the linear interpolation processing and the cubic interpolation processing.
- in either case, the interpolation signal generated by the second-pass interpolation is output to the work buffer 11 and overwrites the interpolation signal generated by the first-pass interpolation for the corresponding area.
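- the adjustment rule described above amounts to checking whether the first-pass routing (flat → first interpolation, edge → second interpolation) agrees with what the hue information recommends (single hue → color correlation, multiple hues → linear/cubic); a minimal sketch of that decision, with hypothetical names:

```python
def reinterpolation_target(is_flat, is_single_hue):
    """Sketch of the adjustment unit 52's decision.

    First pass: flat areas went to the first (linear/cubic)
    interpolation, edge areas to the second (color-correlation)
    interpolation.  Re-interpolation is needed only when the hue
    information disagrees with that routing.  Returns the unit that
    should redo the area, or None to keep the first-pass result."""
    if is_flat and is_single_hue:
        return "second_interpolation_unit_13"  # color correlation fits better
    if not is_flat and not is_single_hue:
        return "first_interpolation_unit_10"   # correlation unreliable here
    return None                                # flat & multi-hue, edge & single hue
```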
- after the verification in the verification unit 12 and the adjustment work in the adjustment unit 52 are completed for all the signals in the work buffer 11, the control unit 16 controls so that the three-plate signal is transferred to the signal processing unit 14.
- as shown in Fig. 10(A), the separation unit 51 of this example includes: an extraction unit 61, an extraction means for sequentially extracting regions of a predetermined size from the image data stored in the image buffer 6; an area buffer 62 for storing the image data of the region extracted by the extraction unit 61; an edge extraction unit 63, an edge extraction means for calculating the edge components of the region stored in the area buffer 62; a video signal separation unit 64, a video signal separation means that compares the calculated edge components with a predetermined threshold value, counts the total number of valid edge components equal to or greater than the threshold value, determines that the region is an edge region when this total is a majority and a flat region when it is less than half, and outputs the determination result to the adjustment unit 52 and to a transfer unit 65 described later; and the transfer unit 65, a video signal separation means that transfers the original signal from the area buffer 62 to the first interpolation unit 10 described above when the region is determined to be a flat area, and to the second interpolation unit 13 when it is determined to be an edge area.
- the control unit 16 is bidirectionally connected to the extraction unit 61, the edge extraction unit 63, the video signal separation unit 64, and the transfer unit 65, and controls these units.
- the extraction unit 61 sequentially extracts block areas of a predetermined size (for example, 8 × 8 pixels) from the image buffer 6 based on the control of the control unit 16, and transfers them to the area buffer 62.
- since the subsequent edge extraction processing is performed at the central 2 × 2 pixel position of each area, when extracting the 8 × 8 pixel size areas the extraction unit 61 shifts the position by two pixels in the X or Y direction, sequentially extracting areas that overlap by six pixels in the X or Y direction.
- the edge extraction unit 63 calculates edge components for each of the RGB signals at the central 2 × 2 pixels of the area composed of the original signals stored in the area buffer 62.
- the edge components are calculated by taking the absolute values of the differences in the four directions of up, down, left, and right for the R signal and the B signal, and in the four 45-degree diagonal directions for the G signal.
- for the R signal, for example, the absolute values of the differences between the pixel of interest and the signals R0, R3, R1, and R2, which are separated from it by one pixel vertically and horizontally, are obtained as shown in Expression 43 below.
- the edge component obtained by the edge extraction unit 63 is transferred to the video signal separation unit 64.
- the video signal separation unit 64 compares the received edge components with a predetermined threshold value (for example, 256 when the output width of the A/D converter 5 is 12 bits), and regards edge components equal to or larger than the threshold value as valid edge components.
- if the total number of valid edge components is a majority of the whole, that is, 8 or more in the above example, since the total number of edge components is 16, the area is determined to be an edge area; on the other hand, if the total number of valid edge components is less than a majority (that is, 7 or less), the area is determined to be a flat area.
- the result of the determination by the video signal separating unit 64 is transferred to the adjusting unit 52 and also to the transferring unit 65.
- when the determination result indicates a flat area, the transfer unit 65 transfers the original signal from the area buffer 62 to the first interpolation unit 10; on the other hand, when it indicates an edge area, the original signal is transferred to the second interpolation unit 13.
- the control unit 16 controls the separation unit 51 so as to perform the above-described processing on all the original signals on the image buffer 6.
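- Expression 43 is only partially reproduced above, but the separation rule itself is simple; here is a sketch under the stated parameters (8 × 8 area, central 2 × 2 pixels, four directional differences per pixel for 16 components in total, threshold 256 for 12-bit data, majority vote). For simplicity it uses plain same-color differences at a step of two pixels for every pixel, whereas the patent uses up/down/left/right directions for R and B but 45-degree diagonals for G.

```python
import numpy as np

def classify_area(block, threshold=256):
    """Sketch of the separation unit 51 (Fig. 10(A) variant): count the
    valid edge components (>= threshold) among the 16 directional
    differences of the central 2x2 pixels; a majority (8 or more)
    marks the 8x8 area as an edge area, otherwise it is flat."""
    valid = 0
    for y in (3, 4):                      # central 2x2 of the 8x8 block
        for x in (3, 4):
            center = int(block[y, x])
            for dy, dx in ((-2, 0), (2, 0), (0, -2), (0, 2)):
                if abs(center - int(block[y + dy, x + dx])) >= threshold:
                    valid += 1
    return "edge" if valid >= 8 else "flat"
```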
- as shown in FIG. 12, the first interpolation unit 10 includes: an extraction unit 71, an extraction means for sequentially extracting regions of a predetermined size from the video signal from the separation unit 51; an area buffer 72 for storing the image data of the region extracted by the extraction unit 71; an RB linear interpolation unit 73, a calculation means that interpolates the R and B signals missing in the region stored in the area buffer 72 by known linear interpolation and outputs them to the work buffer 11 described above; and a G cubic interpolation unit 74, a calculation means that interpolates the missing G signal of the region stored in the area buffer 72 by known cubic interpolation and outputs it to the work buffer 11.
- the control unit 16 is bidirectionally connected to the extraction unit 71, the RB linear interpolation unit 73, and the G cubic interpolation unit 74, and controls these units.
- based on the control of the control unit 16, the extraction unit 71 extracts the signal from the separation unit 51 in units of block areas of a predetermined size (for example, 8 × 8 pixels) and transfers them to the area buffer 72.
- the RB linear interpolation unit 73 applies the known linear interpolation process to the missing R and B signals for the central 2 × 2 pixels of the 8 × 8 pixel area stored in the area buffer 72, and outputs the result to the work buffer 11.
- the G cubic interpolation unit 74 calculates the missing G signal for the central 2 × 2 pixels of the 8 × 8 pixel area stored in the area buffer 72 by the known cubic interpolation process, and outputs it to the work buffer 11.
- as shown in FIG. 13, the verification unit 12 includes: an extraction unit 81, an extraction means for sequentially extracting regions of a predetermined size from the three-plate image data stored in the work buffer 11; an area buffer 82 for storing the image data of the region extracted by the extraction unit 81; a hue calculation unit 83, a hue calculation means for calculating the hue class of the region stored in the area buffer 82 based on the magnitude relation of the RGB values; a coefficient buffer 84 for storing the coefficients indicating the hue classes calculated by the hue calculation unit 83; and a hue verification unit 85, a hue verification means that checks the distribution of the hue classes based on the coefficients stored in the coefficient buffer 84, determines whether the region is a single-hue region or a multiple-hue region, and outputs the verification result to the adjustment unit 52.
- the control unit 16 is bidirectionally connected to the extraction unit 81, the hue calculation unit 83, and the hue verification unit 85, and controls these units.
- based on the control of the control unit 16, the extraction unit 81 sequentially extracts regions of a predetermined size (for example, 8 × 8 pixel size) from the work buffer 11 and transfers them to the area buffer 82.
- the hue calculation unit 83 calculates a hue class based on the magnitude relationship of the RGB values for each pixel of the area stored in the area buffer 82 under the control of the control unit 16.
- of the hue classes classified into 13 according to the magnitude relationship of the RGB values, for example, class 1 is B > R > G, class 2 is R = B > G, class 3 is R > B > G, class 4 is R > G > B, class 7 is G > R > B, class 9 is G > B > R, and class 11 is B > G > R.
- the hue calculation unit 83 determines the magnitude relationship of the RGB values for each pixel and assigns a coefficient indicating the corresponding hue class; the classification results based on the hue classes obtained in this way are transferred to the coefficient buffer 84 and stored.
- the hue verification unit 85 checks the distribution of the hue classes stored in the coefficient buffer 84 based on the control of the control unit 16, and determines whether the area is a single-hue area or a multiple-hue area. The area is judged to be of a single hue when a certain percentage (for example, 70%) or more of the area is composed of one hue class. In the present embodiment, an area of 8 × 8 pixel size is assumed, so 64 pixels are classified into hue classes; if 45 or more pixels (70% or more) belong to the same hue class, the area is determined to be a single-hue area, and if fewer than 45 pixels do, it is determined to be a multiple-hue area.
- the result of the judgment by the hue verifying unit 85 is transferred to the adjusting unit 52.
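- since only seven of the 13 hue classes are spelled out above, the sketch below keys each pixel by the ordering (with ties) of its R, G, and B values rather than by the class numbers of Fig. 14, and applies the 70% rule for the single-hue decision; the equality tolerance `tol` is a hypothetical parameter.

```python
import numpy as np

def hue_class(r, g, b, tol=0):
    """Assign a hue-class key from the magnitude relation of (R, G, B).
    The patent enumerates 13 classes (strict orderings plus ties such
    as class 2's R = B > G); here the relation string itself serves as
    the class key, since the full table of Fig. 14 is not reproduced."""
    def rel(a, c):
        if a > c + tol:
            return '>'
        if c > a + tol:
            return '<'
        return '='
    return rel(r, g) + rel(g, b) + rel(r, b)

def is_single_hue_area(block_rgb, ratio=0.70):
    """Sketch of the hue verification unit 85: an 8x8 area (64 pixels)
    is a single-hue area when at least 70% (45 pixels) share a class."""
    h, w, _ = block_rgb.shape
    classes = [hue_class(*block_rgb[y, x]) for y in range(h) for x in range(w)]
    dominant = max(set(classes), key=classes.count)
    return classes.count(dominant) >= int(np.ceil(ratio * h * w))
```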
- in the above description, the separation unit 51 separates the original signal based on the edge information.
- the present invention is not limited to such a configuration.
- for example, a type using correlation information as shown in FIG. 10(B) is also possible.
- another configuration example of the separation unit 51 will be described with reference to FIG. 10(B).
- the basic configuration of the separation unit 51 shown in FIG. 10(B) is almost the same as that of the separation unit 51 shown in FIG. 10(A); the same components are given the same reference numerals and names, and their description is omitted.
- the separating unit 51 shown in FIG. 10 (B) is obtained by replacing the edge extracting unit 63 in the separating unit 51 shown in FIG. 10 (A) with a correlation calculating unit 66 which is a correlation calculating means. This is the main difference.
- the correlation calculation unit 66 regresses the correlation between the color signals in the area read from the area buffer 62 to a linear form as shown in Expression 42 of the first embodiment, and transfers the constant term of the linear form to the video signal separation unit 64.
- the video signal separation unit 64 compares the absolute value of the above constant term with a predetermined threshold value; when the absolute value is less than the threshold value it determines that the area is a single-hue area, and otherwise that it is a multiple-hue area. The video signal separation unit 64 then transfers this determination result to the adjustment unit 52 and also to the transfer unit 65.
- when the determination result sent from the video signal separation unit 64 indicates a multiple-hue area, the transfer unit 65 transfers the original signal from the area buffer 62 to the first interpolation unit 10; when it indicates a single-hue area, the original signal is transferred to the second interpolation unit 13.
- In the second embodiment as well, processing by hardware is assumed, but the present invention is not limited to such a configuration. For example, the signal from the CCD 4 may be output as unprocessed raw (Raw) data, with the filter information, image size, and the like added as header information, and then processed on an external computer or the like by an image processing program provided as separate software.
- In step S31, the original signal composed of the Raw data and the header information are read.
- the original signal is extracted in units of a block area of a predetermined size (step S32), and as shown in FIG. 11, edge components in a plurality of directions are extracted (step S33).
- the total number of the edge components is output and stored as separation information for determining whether the area is a flat area or an edge area (step S34).
- After step S34, the total number of edge components is compared with a predetermined threshold value Th to determine whether the area is a flat area or an edge area (step S35). In the case of an edge area, the process proceeds to step S42 (see FIG. 15(B)) described later, and after the processing up to step S45 is performed, the process proceeds to the next step S36; in the case of a flat area, the process proceeds to step S46 (see FIG. 15(C)) described later, and after the processing up to step S48 is performed, the process proceeds to the next step S36.
- In step S36, it is determined whether or not extraction has been completed for all block areas corresponding to all signals; if not, the flow returns to step S32 to extract the next block area.
- Next, the signal after the interpolation processing is extracted in units of a block area of a predetermined size (step S37), a hue map is calculated by classifying the pixels into the 13 hue classes described above, and it is determined whether the block area is a single hue area or an area containing a plurality of hues (step S38).
- In step S39, the separation information indicating a flat area or an edge area, which was output in step S35, is read in and passed to the next step S40.
- In step S40, based on this information, it is selected which of the interpolation processing shown in FIG. 15(B) and the interpolation processing shown in FIG. 15(C) is to be performed.
- In the case of an "edge area containing a plurality of hues", the process proceeds to step S42 (see FIG. 15(B)) described later, and after the processing up to step S45 is performed, the process proceeds to the next step S41. In the case of a "flat area of a single hue", the process proceeds to step S46 (see FIG. 15(C)) described later, and after the processing up to step S48 is performed, the process proceeds to the next step S41.
- In step S41, it is determined whether or not extraction has been completed for all block areas corresponding to all signals; if not, the flow returns to step S37 to extract the next block area, and if completed, this processing ends.
- In the processing of FIG. 15(B), the missing R signal and the missing B signal are calculated by linear interpolation (step S43), and the missing G signal is calculated by cubic interpolation (step S44). In step S45, the original signal and the interpolation signal are output together, and the process returns to the processing of FIG. 15(A).
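- The two interpolation kernels of steps S43 and S44 can be illustrated in one dimension as follows (a sketch; the cubic weights (-1, 9, 9, -1)/16 are the usual midpoint cubic-convolution weights and are an assumption, since the text does not spell them out):

```python
def linear_interp(left, right):
    # Linear interpolation from the two nearest samples, as used for
    # the missing R and B signals (step S43).
    return 0.5 * (left + right)

def cubic_interp(p0, p1, p2, p3):
    # Cubic interpolation at the midpoint of p1 and p2 from the four
    # nearest samples, as used for the missing G signal (step S44).
    return (-p0 + 9.0 * p1 + 9.0 * p2 - p3) / 16.0
```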
- In the processing of FIG. 15(C), the correlation between the color signals is obtained as a linear form, and the missing color signals are calculated based on the obtained linear form (step S47). In step S48, the original signal and the interpolation signal are output together, and the process returns to the processing of FIG. 15(A).
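- As a sketch of step S47, assuming again the least-squares line of the earlier sketch and, for simplicity, that the reference color plane is known at every pixel of the block (which is not strictly true for Bayer data):

```python
import numpy as np

def fill_g_from_linear_form(r_plane, g_plane, g_known):
    # Fit g = a*r + b where G is known (g_known True), then read the
    # missing G values off the fitted line.
    a, b = np.polyfit(r_plane[g_known], g_plane[g_known], 1)
    g_out = g_plane.copy()
    g_out[~g_known] = a * r_plane[~g_known] + b
    return g_out
```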
- In the above description, the processing is always performed by combining the first interpolation processing and the second interpolation processing, but the present invention is not limited to this. For example, when only the first interpolation processing is to be used, the control unit 16 may control the verification unit 12 and the adjustment unit 52 so that their operations are stopped.
- Whether or not to stop these operations is determined by the control unit 16 based on at least one of: image quality information relating to the quality of the video signal, such as the compression ratio and the image size; shooting mode information set in the imaging system, such as character image shooting or moving image shooting; and interpolation processing switching information that can be set manually by the user.
- Further, in the above description, the first interpolation processing is the interpolation processing based on the edge direction, but the present invention is not limited to this; the first interpolation processing may be any processing having characteristics different from those of the second interpolation processing. In other words, the first interpolation processing and the second interpolation processing need only be a combination of processings having different characteristics.
- Further, a primary-color Bayer type single-plate CCD has been described as an example, but the present invention is not limited to this; it can be similarly applied to a single-plate CCD having complementary color filters, or to a two-plate imaging system or a three-plate imaging system in which pixels are shifted.
- According to the second embodiment, the first interpolation processing or the second interpolation processing is first selected roughly based on the original signal, and the accuracy is then verified precisely based on the original signal and the interpolation signal, the interpolation processing being redone only where needed. The area in which the interpolation processing is redone is therefore reduced, and the processing speed can be improved.
- Further, since the G signal, which is close to the luminance signal, is processed by cubic interpolation and the other color signals, the R and B signals, are processed by linear interpolation, the processing matches the characteristics of human vision, and high-speed interpolation can be performed while suppressing deterioration of the overall image quality.
- Since the hue information is obtained from a signal in a three-plate state with no missing color signals, obtained by combining the original signal and the interpolation signal, high-accuracy verification can be performed. Moreover, the calculation of the hue information involves only a small amount of computation and can therefore be performed at high speed.
- Further, since the original signal is separated based on edge information or correlation information, appropriate processing can be performed at high speed. The edge information has a high affinity with the interpolation method using the edge direction, and is therefore suitable for controlling the switching between the interpolation method using the edge direction and other interpolation methods. The correlation information, on the other hand, has a high affinity with the interpolation method using color correlation, and is therefore suitable for controlling the switching between the interpolation method using color correlation and other interpolation methods.
- FIGS. 16 to 18 show a third embodiment of the present invention: FIG. 16 is a block diagram showing the configuration of the imaging system, FIG. 17 is a block diagram showing the configuration of the verification unit, and FIG. 18 is a flowchart showing the interpolation processing performed by the image processing program.
- the imaging system according to the third embodiment has a configuration in which a selection unit 90 as a selection unit is added to the configuration of the above-described first embodiment.
- That is, the selection unit 90 selects one of the interpolation signal by the first interpolation unit 10 and the interpolation signal by the second interpolation unit 13, both stored in the work buffer 11, and outputs the selected signal to the signal processing unit 14.
- the verification unit 12 outputs the verification result to the selection unit 90 instead of the second interpolation unit 13.
- the working buffer 11 outputs the interpolation signal to the verification unit 12 and also outputs the interpolation signal to the selection unit 90 instead of the signal processing unit 14.
- the control unit 16 is also bidirectionally connected to the selection unit 90 added in the third embodiment, and controls the selection unit 90.
- The video signal stored in the image buffer 6 is interpolated independently by the first interpolation unit 10 and the second interpolation unit 13 under the control of the control unit 16. As in the first embodiment described above, the interpolation processing by the first interpolation unit 10 is based on the edge direction, and the interpolation processing by the second interpolation unit 13 is based on color correlation.
- The interpolation signal by the first interpolation unit 10 and the interpolation signal by the second interpolation unit 13 are transferred to the work buffer 11 and stored independently without being overwritten.
- Under the control of the control unit 16, the signals stored in the work buffer 11 are sequentially transferred to the verification unit 12 in units of a predetermined local area (for example, 8 × 8 pixels). The three-plate-state signals transferred at this time are the one composed of the original signal and the interpolation signal by the first interpolation unit 10, and the one composed of the original signal and the interpolation signal by the second interpolation unit 13.
- The verification unit 12 obtains a luminance signal from the RGB three-plate signals and calculates an edge component for each pixel by known Laplacian processing or the like. The verification unit 12 then compares each edge component with a predetermined threshold value, regards the components equal to or greater than the threshold as valid edge components, and obtains the total number of valid edge components in the local area.
- This is done for each of the two three-plate signals, and the verification unit 12 transfers the larger of the two totals of valid edge components to the selection unit 90 as selection information.
- Using the total number of valid edge components from the verification unit 12, the selection unit 90 selects either the interpolation signal by the first interpolation unit 10 or the interpolation signal by the second interpolation unit 13. That is, when the total number of valid edge components is equal to or greater than a predetermined threshold, the selection unit 90 judges the area to be an edge area and selects the interpolation signal by the first interpolation unit 10; when it is less than the threshold, the selection unit 90 judges the area to be a flat area and selects the interpolation signal by the second interpolation unit 13.
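- The verification and selection rule can be sketched as follows; the BT.601 luminance weights and the use of scipy's Laplacian are assumptions, and edge_thresh / count_thresh stand in for the two thresholds mentioned in the text:

```python
import numpy as np
from scipy.ndimage import laplace

def count_valid_edges(rgb, edge_thresh):
    # Luminance -> Laplacian -> number of pixels whose edge component
    # reaches the threshold (valid edge components).
    y = 0.299 * rgb[..., 0] + 0.587 * rgb[..., 1] + 0.114 * rgb[..., 2]
    return int(np.count_nonzero(np.abs(laplace(y)) >= edge_thresh))

def select_interpolation(rgb_first, rgb_second, edge_thresh, count_thresh):
    # Keep the larger of the two valid-edge totals as the selection
    # information, then pick the edge-direction result (first) for
    # edge areas and the color-correlation result (second) otherwise.
    n = max(count_valid_edges(rgb_first, edge_thresh),
            count_valid_edges(rgb_second, edge_thresh))
    return rgb_first if n >= count_thresh else rgb_second
```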
- The control unit 16 controls the verification by the verification unit 12 and the selection by the selection unit 90 so that they are performed for all the signals in the work buffer 11, and the selected signals are sequentially transferred to the signal processing unit 14.
- As shown in FIG. 17, the verification unit 12 comprises: an extraction unit 91 serving as extraction means for sequentially extracting areas of a predetermined size from the image data stored in the work buffer 11; an area buffer 92 for storing the area extracted by the extraction unit 91; an edge calculation unit 93 for calculating edge components from the area stored in the area buffer 92; a coefficient buffer 94 for storing the calculated edge components; and an edge verification unit 95 serving as edge verification means, which compares the edge components stored in the coefficient buffer 94 with a predetermined threshold value to obtain the total number of valid edge components equal to or greater than the threshold, and outputs the larger of the total based on the interpolation signal by the first interpolation unit 10 and the total based on the interpolation signal by the second interpolation unit 13 to the selection unit 90 as selection information.
- the control unit 16 is bidirectionally connected to the extraction unit 91, the edge calculation unit 93, and the edge verification unit 95, and controls these.
- The extraction unit 91 sequentially extracts areas of a predetermined size (for example, 8 × 8 pixels) from the work buffer 11 under the control of the control unit 16, and transfers them to the area buffer 92.
- The edge calculation unit 93 calculates the luminance signal Y from the RGB values of each pixel in the area stored in the area buffer 92 according to Equation 46, and obtains the edge components by applying known Laplacian processing to the calculated luminance signal Y. Since the Laplacian processing uses a 3 × 3 filter, the edge components are obtained for the central 6 × 6 pixels of the 8 × 8 pixel area. Therefore, when extracting areas of 8 × 8 pixel size, the extraction unit 91 shifts the position in the X or Y direction by 6 pixels at a time, so that adjacent areas overlap by 2 pixels in the X or Y direction, and performs the extraction sequentially in this manner.
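- The overlapping extraction and the 3 × 3 Laplacian can be sketched as below; Equation 46 is not reproduced in this text, so the luminance computation is left to the caller, and the stride of 6 with a 2-pixel overlap follows directly from the 6 × 6 valid output of a 3 × 3 filter on an 8 × 8 area:

```python
import numpy as np
from scipy.ndimage import convolve

# 3x3 Laplacian kernel; its output is fully valid only for the
# central 6x6 pixels of an 8x8 area, hence the stride of 6 below.
LAPLACIAN = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]], dtype=float)

def extract_areas(luma, size=8, stride=6):
    # Advance by 6 pixels so that neighbouring 8x8 areas overlap by
    # 2 pixels in the X and Y directions.
    h, w = luma.shape
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            yield luma[y:y + size, x:x + size]

def edge_components(area):
    # Laplacian of the 8x8 luminance area, trimmed to the central
    # 6x6 pixels where the 3x3 filter result is valid.
    return convolve(area, LAPLACIAN)[1:-1, 1:-1]
```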
- Under the control of the control unit 16, the edge verification unit 95 sequentially reads the edge components stored in the coefficient buffer 94 and compares them with a predetermined threshold value; for example, when the output width of the A/D converter 5 is 12 bits, the threshold is set to 256, and edge components equal to or greater than this threshold are regarded as valid edge components. The edge verification unit 95 then obtains the total number of components determined to be valid edge components.
- Such processing in the edge verification unit 95 is performed, under the control of the control unit 16, on both the three-plate signal composed of the original signal and the interpolation signal by the first interpolation unit 10 and the three-plate signal composed of the original signal and the interpolation signal by the second interpolation unit 13, which are stored in the work buffer 11.
- The edge verification unit 95 transfers the larger of the totals of valid edge components obtained as described above to the selection unit 90 as selection information.
- Also in the third embodiment, the signal from the CCD 4 may be output as unprocessed raw (Raw) data, with the filter information, image size, and the like added as header information, and then processed on an external computer or the like by an image processing program provided as separate software.
- the original signal is extracted in units of a block area of a predetermined size (step S52), and as shown in FIG. 4 of the first embodiment, the edge components in a plurality of directions are extracted. Then, a weighting factor for each direction is calculated (step S53).
- Then, an interpolation value of the color difference signal for each direction is obtained (step S55).
- a final interpolation signal is calculated and output based on the weighting factor obtained in step S54 and the interpolation value obtained in step S55 (step S56).
- In step S57, it is determined whether or not extraction has been completed for all block areas corresponding to all signals; if not, the flow returns to step S52 to extract the next block area, and when extraction is completed, the process proceeds to step S62 described later.
- Meanwhile, the original signal read in step S51 is also extracted in units of a block area of a predetermined size (step S58), the correlation between the color signals is obtained as a linear form as in Equation 42 of the first embodiment described above (step S59), and an interpolation signal is calculated and output based on the obtained linear form (step S60).
- In step S61, it is determined whether or not extraction has been completed for all block areas corresponding to all signals; if not, the process returns to step S58 to extract the next block area, and if completed, the process proceeds to step S62 described later.
- In step S62, the original signal, the interpolation signal output in step S56, and the interpolation signal output in step S60 are extracted in units of a block area of a predetermined size. The total number of valid edge components is calculated (step S63), and an interpolation signal is selected based on the selection information thus obtained (step S64). The selected interpolation signal is output (step S65), and it is then determined whether or not extraction has been completed for all block areas corresponding to all signals (step S66); if not, the process returns to step S62 to extract the next block area, and if completed, this processing ends.
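- A block-wise driver for the FIG. 18 flow might look as follows; it reuses count_valid_edges from the earlier sketch, and interp_first / interp_second are hypothetical placeholders for the edge-direction and color-correlation interpolators (all thresholds illustrative):

```python
def process_blocks(raw, interp_first, interp_second,
                   edge_thresh=256, count_thresh=8, size=8, stride=6):
    # Run both interpolations over the whole image (steps S52-S61),
    # then per block keep the result chosen by the valid-edge-count
    # rule (steps S62-S66).
    first = interp_first(raw)    # edge-direction interpolation
    second = interp_second(raw)  # color-correlation interpolation
    out = second.copy()
    h, w = first.shape[:2]
    for y in range(0, h - size + 1, stride):
        for x in range(0, w - size + 1, stride):
            blk = (slice(y, y + size), slice(x, x + size))
            n = max(count_valid_edges(first[blk], edge_thresh),
                    count_valid_edges(second[blk], edge_thresh))
            if n >= count_thresh:
                out[blk] = first[blk]  # edge area
    return out
```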
- In the above description, the processing is always performed by combining the first interpolation processing and the second interpolation processing, but the present invention is not limited to this. For example, the control unit 16 may perform control so that the operation of the verification unit 12 and the operation of the second interpolation unit 13 are stopped and the selection unit 90 selects only the interpolation signal by the first interpolation unit 10.
- Whether or not to stop these operations is determined by the control unit 16 based on at least one of: image quality information relating to the quality of the video signal, such as the compression ratio and the image size; shooting mode information set in the imaging system, such as character image shooting or moving image shooting; and interpolation processing switching information that can be set manually by the user.
- Further, a primary-color Bayer type single-plate CCD has been described as an example, but the present invention is not limited to this; it can be similarly applied to a single-plate CCD having complementary color filters, or to a two-plate imaging system or a three-plate imaging system in which pixels are shifted.
- According to the third embodiment, substantially the same effects as those of the first and second embodiments described above can be obtained. In addition, interpolation is performed by both the first interpolation processing based on the edge direction and the second interpolation processing based on color correlation, the accuracy of the results is verified, and one of them is adaptively selected, so that a highly accurate interpolation signal can be obtained.
- Further, since the accuracy is verified using the original signal, the first interpolation signal, and the second interpolation signal, and the selection is made based on the verification result, highly accurate adaptive control is achieved and a high-quality interpolation signal can be obtained.
- Since the verification is performed based on edge information, the accuracy can be improved. The edge information has a high affinity with the interpolation method using the edge direction, and is therefore suitable for controlling the switching between the interpolation method using the edge direction and other interpolation methods.
- Further, control can be performed so that the operation of the second interpolation unit 13 and the operation of the verification unit 12 are stopped and only the signal of the first interpolation processing is selected; by controlling the selection unit 90 in this manner, the processing time can be shortened and the power consumption can be reduced.
- In addition, by acquiring information such as whether high-precision interpolation is unnecessary because of high compression, or whether high-speed processing has priority as in moving image shooting, the control of whether or not to perform this processing can be automated, which improves operability.
- Furthermore, since the interpolation processing can be switched manually according to the user's intention, the degree of freedom of the processing is improved.