WO1999067743A1 - Image correcting method and image inputting device - Google Patents

Image correcting method and image inputting device

Info

Publication number
WO1999067743A1
WO1999067743A1 (PCT/JP1999/003243)
Authority
WO
WIPO (PCT)
Prior art keywords
image
input device
fourier transform
image correction
section
Prior art date
Application number
PCT/JP1999/003243
Other languages
French (fr)
Japanese (ja)
Inventor
Yoshikazu Ichiyama
Original Assignee
Yoshikazu Ichiyama
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Yoshikazu Ichiyama filed Critical Yoshikazu Ichiyama
Publication of WO1999067743A1 publication Critical patent/WO1999067743A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N23/00Cameras or camera modules comprising electronic image sensors; Control thereof
    • H04N23/80Camera processing pipelines; Components thereof
    • H04N23/81Camera processing pipelines; Components thereof for suppressing or minimising disturbance in the image signal generation

Definitions

  • the present invention relates to a method of correcting an image obtained from a digital image input device, such as a digital camera or video camera, which converts an image into an electric signal and records it.
  • in particular, it relates to an image correction method and an image input device capable of individually correcting the various distortions of the imaging optical system and defects in the conversion function of the image sensor. Background art
  • an image of the subject is formed by the imaging optical system; a film camera records it through a chemical change, while a digital camera records it by converting it into an electric signal for each pixel with an image sensor.
  • a CCD is normally used as the image sensor.
  • for color imaging, the structure of the image sensor requires a color filter, and the primary colors of the pixels, such as red, green, and blue, do not necessarily see the same conditions from the imaging optical system, so severe requirements regarding aberration are imposed on it.
  • an object of the present invention is to realize, by digital technology, an image faithful to the subject even with an inexpensive and simple imaging optical system, or when adjustment after manufacturing is insufficient: a system that might be called a computer lens. The purpose is to propose a digital image correction method and an image input device that realize such a system. Disclosure of the invention
  • the basic concept of the present invention is to capture an image of a calibration figure into a digital image input device such as a digital camera or video camera, derive and store image correction information toward the desired image separately for each device, and, each time an image is captured or afterwards, perform a predetermined operation based on the image correction information to correct and complete the image.
  • the image correction system can be implemented by a control unit and calibration program built into the digital image input device, by a dedicated device, or by image correction software on a personal computer.
  • the image correction method has a calibration mode and a shooting/correction mode.
  • in the calibration mode, a known calibration figure is photographed, the captured image is Fourier-transformed, and the image correction information is formed in the spatial frequency domain by dividing the Fourier transform form of the desired image of the calibration figure by that of the captured image.
  • in the shooting/correction mode, the captured image is Fourier-transformed and multiplied by the image correction information in the spatial frequency domain, and the corrected image is then obtained by inverse Fourier transform.
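The two modes just described can be sketched in a few lines of NumPy. This is an illustration, not the patent's implementation: the function names and the small regularizer `eps`, which keeps the spectral division stable where the captured spectrum is near zero, are assumptions added here.

```python
import numpy as np

def derive_correction_info(captured, desired, eps=1e-3):
    """Calibration mode: divide the Fourier transform of the desired
    image by that of the captured calibration image.  The division is
    written in regularized form (multiply by the conjugate, divide by
    the squared magnitude plus eps), an assumed stabilization."""
    F_captured = np.fft.fft2(captured)
    F_desired = np.fft.fft2(desired)
    return F_desired * np.conj(F_captured) / (np.abs(F_captured) ** 2 + eps)

def correct_image(image, correction_info):
    """Shooting/correction mode: multiply the image spectrum by the
    stored correction information, then inverse-transform."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * correction_info))
```

As a usage sketch, calibrating against a point light source (a delta image) and applying the result to a later photograph undoes a shift-invariant blur of the optical system, up to the regularization.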
  • this method requires considerable computation time and therefore needs a control unit with high computing power, but it has the advantage that loss of resolution can also be corrected.
  • image correction information is generated for each primary color of the image sensor, and correction is performed for each primary color, so that chromatic aberration in the imaging optical system and differences between the primary colors caused by the structure of the image sensor can be corrected.
  • by including in the calibration figure patterns that calibrate parameters affecting color reproducibility, such as lightness and saturation, correction information that corrects the intensity of each primary color for each pixel can be generated, and images can be corrected after shooting.
  • Fig. 1 is a diagram for explaining the basic concept of the present invention, which shows a model of an object, an optical system, an image, and the like, and also shows a coordinate system.
  • Figure 2 shows the intensity distribution of the corresponding image when the optical system has distortion, using a point light source as the object figure.
  • FIG. 3 shows the procedure of the image correction method according to the first embodiment.
  • FIG. 4 shows a calibration figure having a point light source for each section used in the second embodiment.
  • Figure 5 shows an example of an image corresponding to a calibration figure having a point light source for each section.
  • Figure 6 shows the procedure for the calibration mode of the second embodiment.
  • Fig. 7 shows the procedure of the shooting/correction mode of the second embodiment.
  • Figure 8 shows an example of image segmentation in the shooting and correction mode.
  • Figure 9 shows a diagram for explaining the concept of the segmented window function.
  • FIG. 10 is an external view of a digital camera according to the third embodiment.
  • FIG. 11 shows a functional block diagram of a digital camera according to the third embodiment.
  • the present invention relates to a digital image input device having at least an imaging optical system and an image sensor, which converts the image of a subject into a digital electric signal and captures it, and to a method of compensating for its imperfections.
  • the basic concept of the present invention is described first with reference to FIGS. 1 and 2, after which the image correction method and the image input device of the present invention are described through embodiments.
  • FIG. 1 is a diagram for explaining the basic concept of the present invention and shows a model of the relationship among the subject, the imaging optical system, the image, and so on. In the figure, the subject is indicated by number 11, the lens serving as the imaging optical system by number 12, and the image formed on the image sensor by number 13.
  • the coordinates of the plane where the subject is located are (x, y), and the coordinates of the image plane are (x', y').
  • the amplitude distribution u(x', y') of the image can be approximated from the subject's amplitude distribution g(x, y) in the form of the convolution integral shown in Eq. (1).
  • the function h (x, y) shown in Eq. (1) is a transfer function that represents the characteristics of the optical system, and includes information such as various aberrations and chromatic aberrations of the optical system.
  • the transfer function of this optical system can easily be obtained by placing a point light source, represented by δ(x)δ(y), as the subject, since h(x, y) then becomes equal to the amplitude distribution of the image.
  • δ(x) and δ(y) denote delta functions.
  • the meaning of this transfer function will be described in more detail with reference to FIG. FIG. 2 (a) shows a subject, and in this case, an example using a point light source is shown at number 21.
  • its amplitude distribution has no spread, as shown at number 24 when viewed along line 22 and at number 25 when viewed along line 23.
  • the image formed on the image sensor via the imaging optical system has a spread and a bias as shown by 31 and 32 in Fig. 2 (b).
  • number 32 is a contour plot that visualizes the amplitude distribution of the image.
  • the amplitude distribution of the image along line 33 is shown at number 35, and the distribution along line 34 likewise spreads out from the original amplitude distribution of the point light source, becoming blurred and biased.
  • since the subject is a point light source, the distributions indicated by numbers 31 and 32 represent the transfer function itself, including imperfections such as distortion and chromatic aberration of the actual optical system.
  • u*(x', y', t) denotes the complex conjugate of u(x', y', t), and ⟨ ⟩ denotes the time average.
  • Eq. (4) can be expressed as the convolution integral shown in Eq. (5), whose Fourier transform form is Eq. (6).
  • J (x, y) indicates the intensity distribution of the calibration figure and is given by Eq. (7).
  • let the transfer function of the desired optical system be h₀; the Fourier transform form of the image intensity distribution I₀(x', y') in that case is given by Eq. (8).
  • the Fourier transform form of the desired image, from which the intensity distribution of the unknown object figure has been eliminated, is expressed by Equation (9).
  • therefore, if the function shown on the right side of Equation (10) is used as image correction information and stored for each image input device, the image can be corrected after photographing into the image the desired optical system would have produced.
  • the Fourier transform form of an image during normal shooting corresponds to F[I(x', y')]. Therefore, after multiplying by the image correction information, an image equivalent to one taken with the desired optical system is obtained by the inverse Fourier transform shown in Eq. (11).
  • the Fourier transforms and inverse transforms in the above equations are performed with the FFT algorithm. The products and divisions in the spatial frequency domain in Eqs. (9) and (10) are element-wise matrix operations; since these are well known, a detailed description is omitted.
  • the concept of the correction has been explained above; the procedure of the image correction method is shown in Fig. 3 as a first embodiment.
  • the first embodiment has a calibration mode and a shooting/correction mode, each carried out by the following procedure.
  • the shooting conditions, such as distance and aperture, are set in step (1), and the image of the calibration figure is captured in step (2) and used as the first image.
  • in step (3), the desired image corresponding to the calibration figure is read out and set as the second image.
  • in step (4), one of the primary colors of the image sensor is selected, and subsequent processing is applied to the image component of this primary color.
  • in step (5), the Fourier transform form of the second image is divided by that of the first image to obtain the image correction information.
  • in step (6), the selection of the primary color is changed and step (5) is repeated, yielding image correction information for each primary color.
  • in step (7), the aperture is changed, and steps (1)-(6) are repeated to obtain image correction information corresponding to a plurality of aperture values.
  • in step (8), the image to be corrected is captured; in step (9), a primary color of the image sensor is selected, and the image of that primary color component is processed thereafter.
  • in step (10), the image correction information corresponding to the aperture used when photographing the image to be corrected is read out. If no stored information matches that aperture, the image correction information is obtained by interpolating between the neighbouring entries.
  • in step (11), the Fourier transform form of the image from step (8) is multiplied by the image correction information, and in step (12) an inverse Fourier transform is performed to obtain the corrected image for one primary color.
  • in step (13), the primary color is changed, and steps (10)-(12) are repeated to obtain a corrected image for each primary color, completing the image correction.
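The shooting/correction loop of steps (9)-(13) amounts to correcting each primary-color plane with its own correction matrix. A minimal sketch, assuming the image is an H×W×3 array and the per-color correction matrices are already in the frequency domain (the function name is an assumption for this illustration):

```python
import numpy as np

def correct_rgb(image, correction_per_color):
    """Correct each primary-color plane separately with its own
    frequency-domain correction information, then reassemble."""
    out = np.empty_like(image, dtype=float)
    for c in range(image.shape[-1]):
        spec = np.fft.fft2(image[..., c]) * correction_per_color[c]
        out[..., c] = np.real(np.fft.ifft2(spec))
    return out
```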
  • image correction information is obtained for each primary color of the image sensor, and correction is performed per primary-color component, because the optical system does not present the same conditions to every primary color. This must of course be taken into account in the design, but it also adds many adjustment steps in manufacturing; if image correction as in the first embodiment is assumed, these cost factors can be reduced considerably. In addition, a zoom lens, if present, is another element that changes the state of image distortion and must also be considered: in the procedure of the first embodiment, image correction information should be obtained for each zoom magnification as well as for each aperture.
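Step (10) above calls for interpolating correction information when no stored aperture matches. A sketch, assuming simple linear interpolation between the nearest stored apertures and clamping outside the stored range (the patent does not specify the interpolation rule, so both choices are assumptions):

```python
import bisect

import numpy as np

def correction_for_aperture(stored, aperture):
    """Return image correction information for the given aperture.

    `stored` maps aperture value -> complex correction matrix.  When no
    exact match exists, linearly interpolate between the two nearest
    stored entries; outside the stored range, clamp to the nearest."""
    keys = sorted(stored)
    if aperture in stored:
        return stored[aperture]
    if aperture <= keys[0]:
        return stored[keys[0]]
    if aperture >= keys[-1]:
        return stored[keys[-1]]
    i = bisect.bisect_left(keys, aperture)
    lo, hi = keys[i - 1], keys[i]
    t = (aperture - lo) / (hi - lo)
    return (1 - t) * stored[lo] + t * stored[hi]
```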
  • a point light source approximated by the delta function should ideally yield an image represented by the delta function; in that sense, the ideal transfer function is the delta function.
  • the Fourier transform form of the delta function is a constant
  • so if the real image of the point light source is Fourier-transformed, the ideal image correction information is the reciprocal of that transform.
  • with such ideal correction information, however, overcorrection may actually occur and introduce new distortion: the mathematical Fourier transform domain is infinite, whereas the aperture of the optical system is finite, and the distance between pixels of the image sensor that takes in the digital image is also finite.
  • therefore, the target design transfer function used in the image correction is set in consideration of the quality of the imaging optical system used, the resolution of the image sensor, and so on.
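As the passage above notes, the pure reciprocal of the point-source spectrum overcorrects where the finite aperture and pixel pitch leave little signal. One hedged way to set such a design transfer function is sketched below; the Gaussian target `H0`, the `cutoff`, and the regularizer `eps` are all assumptions standing in for design choices the text leaves open.

```python
import numpy as np

def design_correction_info(psf_image, cutoff=0.25, eps=1e-3):
    """Build correction information targeting a mild Gaussian design
    transfer function H0 instead of the pure reciprocal 1/F[psf],
    which would blow up where the spectrum is weak."""
    n = psf_image.shape[0]
    f = np.fft.fftfreq(n)
    FX, FY = np.meshgrid(f, f)
    H0 = np.exp(-(FX**2 + FY**2) / (2 * cutoff**2))  # design target
    P = np.fft.fft2(psf_image)
    # Regularized division: stays finite even where |P| is tiny.
    return H0 * np.conj(P) / (np.abs(P) ** 2 + eps)
```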
  • the point of the calibration is to accurately match conditions such as the position and size of the figure between the captured image and the image expected from the desirable optical system.
  • a point figure was used in the calibration figure described above; in addition to the central point figure, a similar point figure can be placed at positions away from the center, and the positions can be matched in the real-space domain before the image correction information is calculated in the spatial frequency domain.
  • if the aperture of the imaging optical system is sufficiently small and the light passing through it stays near the central axis, a single transfer function can be considered correct over a considerable range; if the aperture is large, however, the image shows large, position-dependent distortion.
  • when a plurality of point light sources 43 are dispersed throughout the figure, as shown in Fig. 4, their images take different forms depending on position, as shown in exaggerated form at numbers 53, 54, and 55 in Fig. 5.
  • if a zoom lens is present, the transfer function also differs at each magnification.
  • the image correction method shown in the second embodiment provides a practical solution to this situation and is described below with reference to FIGS. 4 to 9. Fig. 4 shows the calibration figure 41 used in the second embodiment.
  • FIG. 5 shows an image 51 in which the graphic 41 for calibration is captured, and is assumed to be divided into a plurality of sections indicated by reference numeral 52.
  • the image of the point light source 43 exists in those sections, but the shape differs between the center and the periphery as shown by numbers 53, 54, and 55 depending on the location of the section.
  • based on this situation, the concept of the image correction method in the second embodiment is to divide the screen into a plurality of sections, obtain image correction information in the frequency domain for each section, correct the image section by section, and combine the results to complete the image.
  • FIG. 6 illustrates the procedure of the calibration mode of the image correction method in the second embodiment, and FIG. 7 the procedure of its shooting/correction mode.
  • the screen is divided into multiple sections in step (1).
  • step (2) the photographing conditions such as the distance from the calibration figure and the aperture are set.
  • step (3) as shown in Fig. 4, an image of a calibration figure in which point light sources are arranged for each section is captured.
  • step (4) the primary color of the image sensor is selected, and the image of this primary color component is processed thereafter.
  • step (5) a Fourier transform of the image of the point light source is performed for each section.
  • in step (6), a Fourier transform of the correct image that the point light source should produce is performed.
  • step (7) the Fourier transform form of the image obtained in step (6) is divided by the Fourier transform form obtained in step (5) to obtain divided image correction information in the frequency domain.
  • in step (8), another primary color is selected and steps (5)-(7) are repeated.
  • in step (9), the aperture is changed, and steps (3)-(8) are repeated to obtain sectioned image correction information in the frequency domain for multiple apertures.
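Steps (5)-(7) of the calibration mode can be sketched as follows. The tiling into non-overlapping sections, the function name, and the stabilizer `eps` are simplifying assumptions for illustration:

```python
import numpy as np

def calibrate_sections(calib_image, desired_image, size, eps=1e-3):
    """For each section, divide the Fourier transform of the desired
    section by that of the captured one, giving per-section correction
    matrices keyed by the section's top-left corner."""
    corrections = {}
    for y in range(0, calib_image.shape[0], size):
        for x in range(0, calib_image.shape[1], size):
            C = np.fft.fft2(calib_image[y:y + size, x:x + size])
            D = np.fft.fft2(desired_image[y:y + size, x:x + size])
            # Regularized division, stable where |C| is small.
            corrections[(y, x)] = D * np.conj(C) / (np.abs(C) ** 2 + eps)
    return corrections
```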
  • FIG. 7 shows the procedure of the shooting/correction mode. An image is taken in step (10). In step (11), a primary color of the image sensor is selected, and subsequent processing is applied to the image component of that primary color.
  • in step (12), a section of the screen is selected.
  • in step (13), the image in the section is multiplied by the window function and then Fourier-transformed.
  • in step (14), the sectioned image correction information in the frequency domain appropriate to the aperture used at the time of shooting is read out; if no entry matches that aperture, the information is interpolated or extrapolated from the neighbouring sectioned image correction information.
  • in step (15), the Fourier transform obtained in step (13) is multiplied by the sectioned image correction information read in step (14), and the corrected image for the section is obtained by inverse Fourier transform.
  • in step (16), a new section is set so that adjacent sections overlap, and steps (13)-(15) are repeated, scanning the entire screen to complete the corrected image.
  • step (17) the selection of the primary color is changed, and steps (12) to (16) are repeated.
  • the sections and calculations in steps (12) and (13), and the corrected image for each section obtained in step (15), are explained with reference to FIGS. 8 and 9.
  • number 81 indicates a section corresponding to section 52 in FIG. 5, but the sections selected in step (12) of FIG. 7 are the larger sections 82, 83, 84, and so on.
  • the window function multiplied in step (13) is a weight function that is zero at the boundary of the section and 1.0 at its center. It is a surface distributed two-dimensionally over (x, y); curve 91 in Fig. 9 shows its cross section relative to section 82.
  • in step (16), adjacent sections overlap so that the corrected images join smoothly; for example, the selection moves from section 82 to the neighbouring section 83, or up and down to section 84, and in this way the entire screen is scanned in turn.
  • one of the important objectives of the present invention is to use an inexpensive, modest imaging optical system and to raise the image quality to the level of high-end products by digital processing.
  • the second embodiment allows the transfer function of the imaging optical system to vary with location, on the assumption that it varies continuously; image correction information is obtained for each section, and the images are corrected piecewise and then combined.
  • in the first embodiment, the Fourier transform and inverse transform are computed over the entire screen, whereas in the second embodiment the screen is divided into multiple sections and the transforms are performed section by section. According to recent trends, image sensors with more than 100,000 pixels are becoming common.
  • the Fourier transform and the inverse transform are indispensable on the premise of correction in the frequency domain.
  • although the formulas in the above description are written as continuous quantities, the pixels of the image sensor exist discretely, and the formulas are converted into discrete form in accordance with this pixel distribution; since such discrete expressions are well known, they are not given here.
  • since the discrete Fourier transform can be computed at high speed, the Fourier transform and the inverse Fourier transform are performed using the FFT.
  • this image correction method corrects an image on the basis of image correction information unique to each digital image input device to obtain a correct image. The correction information can be collected and generated, and the necessary programs can be installed on a personal computer or implemented in dedicated equipment; of course, they can also be built into individual devices such as digital cameras.
  • FIG. 10 shows the appearance of a digital camera 100 according to a third embodiment of the present invention.
  • number 101 is the lens which is the imaging optical system
  • number 102 is the viewfinder and the liquid crystal display for displaying the image after shooting
  • number 103 is the shutter button
  • reference numeral 104 denotes a mode selector for switching various functions
  • reference numeral 105 denotes a power switch
  • reference numeral 106 denotes a connector for connection to an external device.
  • FIG. 11 is a functional block diagram of a digital camera 100 according to a third embodiment of the present invention.
  • the image of the subject is formed on the image sensor 111 by the lens 101, which is the imaging optical system; pressing the shutter button 103 activates the camera, and the control unit 112 reads the image from the image sensor 111.
  • number 113 indicates the control memory in the control unit 112, and number 115 indicates the battery.
  • the mode can be switched between the normal shooting mode and the calibration mode by the instruction of the mode selector 104.
  • in the calibration mode, a predetermined calibration figure 116 is used as the subject, and various conditions such as the distance between the calibration figure 116 and the lens 101, the brightness, the aperture, and the shutter speed are set.
  • the control unit 112 then loads the image of the calibration figure 116 into the memory 114.
  • the control unit 112 divides the screen according to a predetermined division method, performs a Fourier transform for each section, and, based on the Fourier transform form of the image required by the design optical system, stored in part of the control memory 113, calculates the sectioned image correction information and stores it in the control memory 113.
  • these processes and operations are executed by the control unit 112 under a program following the procedure shown in the second embodiment; this processing program is stored in the control memory 113.
  • in the shooting mode, the image captured through the lens 101 and the image sensor 111 is recorded in the memory 114.
  • the control unit 112 executes the image correction in accordance with the program stored in the control memory 113, using the image correction information held there, and stores the corrected image in the memory 114.
  • it would be more realistic, however, to store the image together with its shooting conditions in the memory 114 immediately after normal shooting, without performing image correction, and to perform the correction separately on an instruction from the mode selector 104.
  • alternatively, in the calibration mode only the image of the calibration figure is captured, and in the shooting mode only the shooting conditions and the image are recorded; the corrections can then be carried out later, collectively, on a personal computer or a dedicated machine.
  • by including correction patterns for color reproducibility in the calibration figure used in the calibration mode and deriving correction information for each pixel, it is also possible to resolve sensitivity differences that depend on the location on the image sensor.
  • patterns whose parameters related to color reproducibility, such as lightness and saturation, differ are appropriately placed on the screen, and the image is captured into the digital camera. Since the color attributes of each pattern are known in advance, the readings obtained from the captured image are used as image correction information; when an image is corrected, the output of each pixel is read and adjusted in accordance with this information.
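As one hedged, concrete instance of such per-pixel correction, an image of a uniform calibration patch whose true level is known yields a gain map that equalizes sensor sensitivity (a flat-field style correction). The patent's patterns are more general, so treat the function names and setup below as illustrative assumptions:

```python
import numpy as np

def derive_pixel_gains(captured_flat_field, expected_level):
    """Per-pixel gain from an image of a uniform patch whose true
    level is known; the floor avoids division by dead pixels."""
    return expected_level / np.maximum(captured_flat_field, 1e-6)

def apply_pixel_gains(image, gains):
    """Apply the per-pixel sensitivity correction."""
    return image * gains
```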
  • in this way the color conversion function of the image sensor is corrected, including chromatic aberration in the imaging optical system, so that color fidelity can be improved.
  • the image correction method has been described in the first and second embodiments, and the third embodiment has described an example in which the method is programmed into a digital image input device, which then executes the calibration and the image correction itself.
  • as described in the embodiments with a digital camera, the present invention stores, in a digital image input device, image correction information capable of correcting the imperfections of the imaging optical system or the image-signal conversion system of each individual device, and uses it, either each time an image is obtained or collectively afterwards, to obtain a correct or higher-quality image; this is easily realized by the present invention. As described above, the present invention shows that a digital image of practical quality can be obtained by digital technology even with an inexpensive, low-quality imaging optical system or image sensor.
  • the present invention can also provide a way of realizing low cost for the system as a whole, including the manufacturing process. In other words, if any adjustment of chromatic aberration or of the balance between the primary colors of the image sensor is required during manufacturing, that adjustment can be omitted by reading the calibration figure and leaving the rest to the calibration system in the device, which offers a way to reduce costs.
  • it should be recognized that, in this approach, calibration is performed after the production of digital image input devices such as digital cameras and video cameras, and the specified quality and performance are achieved with the help of digital processing technology.
  • the present invention is also effective in this respect: a calibration program can be built into the apparatus, or re-calibration can be performed by a specialty store or a manufacturer equipped with a calibration control apparatus, so that the design performance is maintained. This, too, is one of the important objects of the present invention.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Image Processing (AREA)

Abstract

An image correcting method and a digital image input device, such as a digital camera or video camera, in which various aberrations of the imaging optical system and errors in the color conversion function of the image sensor are corrected for each individual device by digital techniques, so that practically sufficient image quality is realized with a cheap, simple imaging optical system. An image of a predetermined calibration figure is captured; image correction information for converting it into the image that should be obtained is derived for each digital image input device and stored in the device or system. After normal shooting, the image is converted into a correct one using this image correction information. The method comprises Fourier-transforming the picked-up image, correcting it in the spatial frequency domain using image correction information that corrects the transfer function of the optical system toward a design transfer function, and forming the corrected image by inverse Fourier transform.

Description

明細書 画像補正方法及び画像入力機器  Description Image correction method and image input device
技術分野 Technical field
本発明は画像を電気信号に変えて記録するディジタル式のカメラ, ビデオカメラ等の ディジタル画像入力機器から得られる画像の補正方法に係わり, 特に結像光学系の諸々 の歪み, イメージセンサーの変換機能の不具合等を個々に補正できるような画像補正方 法及び画像入力機器に係わる。 背景技術  The present invention relates to a method of correcting an image obtained from a digital image input device such as a digital camera or a video camera which converts an image into an electric signal and records the image, and in particular, various distortions of an imaging optical system and a function of converting an image sensor. The present invention relates to an image correction method and an image input device capable of individually correcting defects and the like of the above. Background art
従来のカメラでは, 結像光学系で被写体の像を結像させ, フィルム式カメラでは化学 変化により, ディジタルカメラではイメージセンサーでその画素毎に電気信号に変換し て画像を記録する。 イメージセンサ一には通常 C C Dによるイメージセンサ一が使用さ れるが, カラーの場合はイメージセンサーの構造に色フィルターが必要であり, 画素の 原色例えば赤, 緑, 青等が結像光学系から見て必ずしも同条件ではなく結像光学系には 収差に関して厳しい条件が賦課されている。 もちろん従来から画像の被写体に対する忠 実度の観点では結像光学系には共通問題として色収差を含む様々な収差が存在し, また 画像に記録する過程では特に色の再現性に関してフィルム式, イメージセンサ一共に種 々問題を有している。  In a conventional camera, an image of a subject is formed by an imaging optical system, and in a film camera, the image is recorded by converting it into an electric signal for each pixel by an image sensor in a digital camera due to a chemical change. Normally, an image sensor using a CCD is used for the image sensor. However, in the case of color, a color filter is required for the structure of the image sensor, and the primary colors of pixels, such as red, green, and blue, are viewed from the imaging optical system. However, the conditions are not always the same, and severe conditions regarding the aberration are imposed on the imaging optical system. Of course, from the viewpoint of the fidelity of the image to the subject, there are various aberrations including chromatic aberration as a common problem in the image forming optical system. Both have various problems.
It is difficult to solve all of these problems completely, and in the end an averagely satisfactory compromise must be accepted; even so, a great deal of effort is devoted to both the materials and the design of the lenses that make up the imaging optical system, and delicate adjustment work is required during production.
Accordingly, an object of the present invention is to propose a digital image correction method and image input device that realize, by digital techniques, an image faithful to the subject even with an inexpensive, simple imaging optical system, or when adjustment after manufacture is insufficient; a system that might be called a computer lens.

Disclosure of the Invention
The basic concept of the present invention is to capture an image of a calibration figure with a digital image input device such as a digital camera or video camera, to derive and store, separately for each individual device, image correction information for converting it into the desired image, and then, each time an image is captured, or afterwards, to correct and complete the image by performing predetermined operations based on that image correction information.
The system that performs the image correction can be realized either by a control unit and calibration program built into the digital image input device, by a separate dedicated device, or by providing image correction software on a personal computer.
The image correction method has a calibration mode and a shooting/correction mode. In the calibration mode, a known calibration figure is photographed, the captured image is Fourier-transformed, and the image correction information is formed in the spatial frequency domain by dividing the Fourier transform of the image that ought to be obtained for the calibration figure by that of the captured image. In the shooting/correction mode, the photographed image is Fourier-transformed, multiplied by the image correction information in the spatial frequency domain, and the corrected image is obtained by inverse Fourier transform. Since this scheme requires considerable computation time, a control unit with high computing power must be used, but it has the advantage that loss of resolution can also be made a subject of correction.
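The two modes just described can be sketched in outline as follows. This is a minimal single-channel illustration, not the implementation of the patent; the function names, the use of NumPy's FFT, and the small `eps` regularizer are assumptions introduced here.

```python
import numpy as np

def calibrate(captured, ideal, eps=1e-6):
    """Calibration mode: divide the Fourier transform of the image that
    ought to be obtained ('ideal') by that of the captured calibration
    image, giving frequency-domain image correction information.
    eps is an assumed guard against dividing by near-zero spectral
    values; it is not part of the method as stated."""
    F_cap = np.fft.fft2(captured)
    F_ideal = np.fft.fft2(ideal)
    return F_ideal * np.conj(F_cap) / (np.abs(F_cap) ** 2 + eps)

def correct(image, info):
    """Shooting/correction mode: multiply the captured image's spectrum
    by the stored correction information, then inverse-transform."""
    return np.real(np.fft.ifft2(np.fft.fft2(image) * info))
```

When the captured calibration image already equals the image that ought to be obtained (a perfect device), the correction information is nearly unity and `correct()` leaves images essentially unchanged.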
Furthermore, not only geometric distortion but also the chromatic aberration of the imaging optical system, errors in the color conversion function of the image sensor, and the like can be made subjects of correction. That is, by generating image correction information for each primary color of the image sensor and correcting each primary color separately, unfavorable conditions, such as chromatic aberration in the imaging optical system or the differing environments the primary colors experience owing to the structure of the image sensor, can be eliminated. Alternatively, by including in the calibration image patterns from which parameters affecting color reproducibility, such as lightness and saturation, can be calibrated, correction information capable of adjusting the strength of each primary color of each pixel can be generated and applied to images after shooting.

BRIEF DESCRIPTION OF THE FIGURES
Fig. 1 is a diagram for explaining the basic concept of the present invention, showing the subject, the optical system, the image, and so on in model form, together with the coordinate system.

Fig. 2 shows the intensity distribution of the image corresponding to a point light source used as the subject figure when the optical system has distortion.

Fig. 3 shows the procedure of the image correction method of the first embodiment.

Fig. 4 shows a calibration figure, used in the second embodiment, having a point light source in each section.

Fig. 5 shows an example of the image corresponding to the calibration figure having a point light source in each section.

Fig. 6 shows the calibration-mode procedure of the second embodiment.

Fig. 7 shows the shooting/correction-mode procedure of the second embodiment.

Fig. 8 shows an example of how the image is divided into sections in the shooting/correction mode.

Fig. 9 is a diagram for explaining the concept of the section window function.

Fig. 10 is an external view of the digital camera of the third embodiment.

Fig. 11 is a functional block diagram of the digital camera of the third embodiment.

BEST MODE FOR CARRYING OUT THE INVENTION
The present invention proposes, for a digital image input device that has at least an imaging optical system and an image sensor and converts an image of the subject into digital electric signals, a method of correcting a photographed image afterwards using image correction information specific to that device. The basic concepts underlying the invention are explained with reference to Figs. 1 and 2, after which the image correction method and image input device of the invention are described through embodiments.

Fig. 1 illustrates the basic concept of the invention, showing in model form the relationship among the subject, the imaging optical system, the image, and so on. In the figure, the subject is indicated by numeral 11, the lens serving as the imaging optical system by 12, and the image input to the image sensor by 13; the coordinates in the plane of the subject are (x, y) and those in the plane of the image are (x', y'). The amplitude distribution u(x', y') of the image can be approximated from the amplitude distribution g(x, y) of the subject in the form of the convolution integral shown in Eq. (1). The function h(x, y) in Eq. (1) is the transfer function characterizing the optical system, and contains information on its various aberrations, chromatic aberration, and so on. This transfer function is easily obtained by placing as the subject a point light source represented by δ(x)δ(y), since h(x, y) then equals the amplitude distribution of the image; here δ(x) and δ(y) denote delta functions.

The meaning of the transfer function is explained in more detail with reference to Fig. 2. Fig. 2(a) shows the subject, in this case a point light source indicated by numeral 21. Its amplitude distribution, viewed along line 22 or along line 23, is the spread-free distribution shown at 24 and 25 respectively. The image formed on the image sensor through the imaging optical system, however, has spread and bias, as shown at 31 and 32 in Fig. 2(b). Numeral 32 is a contour-style diagram that visualizes the amplitude distribution of the image; the amplitude distribution along line 33 is shown at 35, and that along line 34 at 36, both being spread out, blurred, or biased relative to the distribution of the original point light source.

As explained in connection with Eq. (1), since the subject here is a point light source, the distributions shown at 31 and 32 represent the transfer function, including the various imperfections of the actual optical system such as distortion and chromatic aberration. Writing the Fourier transforms of u(x', y'), g(x, y), and h(x, y) as F[u(x', y')], F[g(x, y)], and F[h(x, y)], the Fourier-transformed expression of the convolution integral of Eq. (1) is Eq. (2), where X and Y are spatial frequencies and the Fourier transform is defined by Eq. (3).
(1)  u(x', y') = ∫∫ g(x, y) h(x' − x, y' − y) dx dy

(2)  F[u(x', y')] = F[g(x, y)] F[h(x, y)]

(3)  F[u(x', y')] = ∫∫ u(x', y') exp[−i2π(Xx' + Yy')] dx' dy'

However, the signal detected by the image sensor is not the amplitude but an amount of energy, so we consider the time average of the squared amplitude, that is, the relation between intensity distributions. Writing the amplitudes of the subject figure and of the image with a time term as g(x, y, t) and u(x', y', t) respectively, the intensity distribution I(x', y') of the image is given by Eq. (4).
Here u*(x', y', t) is the complex conjugate of u(x', y', t), and ⟨ ⟩ denotes a time average.

(4)  I(x', y') = ⟨u(x', y', t) u*(x', y', t)⟩
             = ⟨∫∫ g(x, y, t) h(x' − x, y' − y) dx dy ∫∫ g*(x, y, t) h*(x' − x, y' − y) dx dy⟩
Since the light concerned is in general spatially incoherent, Eq. (4) reduces to the convolution relation of Eq. (5), whose Fourier-transformed form is Eq. (6).

(5)  I(x', y') = ∫∫ J(x, y) |h(x' − x, y' − y)|² dx dy

(6)  F[I(x', y')] = F[J(x, y)] F[|h(x, y)|²]
Here J(x, y) denotes the intensity distribution of the calibration figure and is given by Eq. (7).

(7)  J(x, y) = ⟨g(x, y, t) g*(x, y, t)⟩
On the other hand, if the transfer function of the desired optical system is written h₀(x, y), the Fourier transform of the corresponding image intensity distribution I₀(x', y') is given by Eq. (8). Eliminating the intensity distribution of the unknown subject figure between Eqs. (6) and (8), the Fourier transform of the desired image is expressed as Eq. (9).

(8)  F[I₀(x', y')] = F[J(x, y)] F[|h₀(x, y)|²]

(9)  F[I₀(x', y')] = F[I(x', y')] F[|h₀(x, y)|²] / F[|h(x, y)|²]

Accordingly, if the function on the right-hand side of Eq. (10) is taken as the image correction information and stored for each individual image input device, an image taken later can be corrected into the image the desired optical system would have produced. That is, since the Fourier transform of an image taken in normal shooting corresponds to F[I(x', y')], multiplying it by the image correction information according to Eq. (9) and then performing the inverse Fourier transform gives an image equivalent to one taken with the desired optical system. The inverse Fourier transform is given by Eq. (11).

(10)  F[I₀(x', y')] / F[I(x', y')] = F[|h₀(x, y)|²] / F[|h(x, y)|²]

(11)  I₀(x', y') = ∫∫ F[I₀(x', y')] exp[i2π(Xx' + Yy')] dX dY

Here the image correction information of Eq. (10) must be obtained and stored in advance for each individual image input device. As is clear from the derivation of Eq. (10) from Eqs. (6) and (8), it can be computed by dividing the Fourier transform of the image obtained with the desired optical system, given as a design value, by the Fourier transform of the intensity distribution of the image obtained by photographing the calibration figure. In particular, if the calibration figure is a point figure or a point light source, the captured image in Eq. (6) is simply |h(x, y)|², derived from the transfer function h(x, y) itself, which makes the relation easy to understand. In practice, since the pixels of the image sensor exist discretely over a limited range, the Fourier transforms and inverse transforms in the above equations are carried out exclusively by the FFT algorithm, and the products and divisions in the spatial frequency domain in Eqs. (9) and (10) are carried out as matrix operations; as these points are well known, their detailed description is omitted.

The concept of image correction according to the invention has now been explained with reference to Figs. 1 and 2; as a first embodiment, Fig. 3 shows the procedure of the image correction method. The first embodiment has a calibration mode and a shooting/correction mode, each carried out by the following procedures.
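Before turning to those procedures, the derivation just completed can be checked numerically. The sketch below is an illustration under assumed values, not the patent's implementation: it simulates Eq. (5) with a synthetic box blur standing in for |h(x, y)|², takes the desired transfer function h₀ as a delta (so F[|h₀|²] is constant), forms the correction information of Eq. (10) from a point-source calibration shot, and applies Eqs. (9) and (11) to a later capture. The small `eps` regularizer is an addition, in the spirit of the over-correction caveat the description raises.

```python
import numpy as np

n = 64
# |h|^2 of the real optics: a 3x3 box blur (an assumed stand-in)
h2 = np.zeros((n, n))
for dx in (-1, 0, 1):
    for dy in (-1, 0, 1):
        h2[dx % n, dy % n] = 1.0 / 9.0

def capture(scene):
    """Eq. (5): the captured intensity is the scene convolved with
    |h|^2 (done here as a circular convolution via the FFT)."""
    return np.real(np.fft.ifft2(np.fft.fft2(scene) * np.fft.fft2(h2)))

# Calibration: photograph a point light source; with h0 a delta, the
# image that ought to be obtained is the point itself.
point = np.zeros((n, n)); point[0, 0] = 1.0
F_cap = np.fft.fft2(capture(point))   # = F[|h|^2]
F_ideal = np.fft.fft2(point)          # = F[|h0|^2], a constant
eps = 1e-9                            # assumed regularizer
correction = F_ideal * np.conj(F_cap) / (np.abs(F_cap) ** 2 + eps)  # Eq. (10)

# Normal shooting and correction, Eqs. (9) and (11):
scene = np.zeros((n, n)); scene[10:20, 30:40] = 1.0
blurred = capture(scene)
restored = np.real(np.fft.ifft2(np.fft.fft2(blurred) * correction))
```

On this synthetic example the restored frame matches the original scene far more closely than the blurred capture does, which is exactly the content of Eq. (9).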
先ず校正モードでは, ステップ (1) で撮影の条件である距離, 絞り等を設定し, ス テツプ (2) で校正用図形の画像を取り込み第一の画像とする。 ステップ (3) で校正 用図形に対応する有るべき画像を読み出し第二の画像とする。 ステップ (4) でィメー ジセンサーの原色の一つを選択し, 以後画像はこの原色の成分間で処理をする。 ステツ プ (5 ) で第二の画像のフーリエ変換形を第一の画像のフーリエ変換形で除して画像補 正情報とする。 ステップ (6 ) で原色の選択を変えてステップ (5 ) を繰り返して各原 色毎の画像補正情報を求める。 更にステップ (7 ) で絞りを変えてステップ (1 ) 一 ( 6 ) を繰り返して複数の絞りの値に対応する画像補正情報を求める。 First, in the calibration mode, the conditions for shooting, such as the distance and aperture, are set in step (1), and the image of the calibration figure is captured in step (2) and used as the first image. In step (3), a desired image corresponding to the graphic for calibration is read out and set as a second image. In step (4) One of the primary colors of the di-sensor is selected and the image is processed between the components of this primary color. In step (5), the Fourier transform of the second image is divided by the Fourier transform of the first image to obtain image correction information. In step (6), the selection of primary colors is changed, and step (5) is repeated to obtain image correction information for each primary color. Further, in step (7), the aperture is changed, and steps (1) and (6) are repeated to obtain image correction information corresponding to a plurality of aperture values.
撮影 '補正モードでは, ステップ (8 ) で撮影した画像を取り込み, ステップ (9 ) でイメージセンサ一の原色を選択して以後はこの原色成分の画像について処理をする。 ステップ (1 0 ) で補正対象の画像撮影時の絞りに対応する画像補正情報を読み出す。 絞りに合致した条件の画像補正情報が無い時には前後の画像補正情報から内挿して求め る。 ステップ (1 1 ) でステップ (8 ) の画像のフーリエ変換形に画像補正情報を乗じ, ステップ (1 2 ) でフーリエ逆変換を行って一つの原色に関する補正画像を得る。 ステ ップ (1 3 ) で原色を変えてステップ (1 0 ) — (1 2 ) を繰り返して各原色毎の補正 された画像を得て画像補正を完成させる。 イメージセンサーの原色毎に画像補正情報を求め, 原色成分毎に補正を実施するのは 光学系がそれぞれの原色間で必ずしも同一条件ではないからである。 設計上は当然に配 慮が必要であるが, 調整工程も少なからず存在し, 第一の実施例のように画像補正を前 提とすることになればそれらによるコストアップ要因を少なからず軽減できる。 また, 画像の歪みの状態を変化させる要素として他にもしズームレンズが有ればそれもまた考 慮しなければならない。 第一の実施例の手順に於いて, 絞りと同様にズームレンズによ る拡大倍率の各々について画像補正情報を求めておくべきであろう。 撮影に於いて, 原則的には被写体を忠実に画像とするのが望ましい事は当然である。 デルタ関数で近似される点光源はやはりデルタ関数で表される画像となるのが理想であ り, その意味で理想的な伝達関数はデルタ関数となる。 理論的にデルタ関数のフーリエ 変換形は定数となるので点光源に対する実画像をフーリエ変換して理想的な画像補正情 報はその逆数となる。 し力 しながら, このような条件では実際には過補正となって別の 歪みを引き起こす可能性がある。 すなわち, 数学上のフーリエ変換領域は無限遠である が, 光学系の開口は有限であり, またディジタル画像を取り入れるためのイメージセン サ一の画素間の距離も有限である。 この点からもフーリエ変換された画像の含む周波数 成分には低域周波数も高域周波数も限界があり, 存在していないような周波数成分まで も増幅或いは圧縮して捕償をすることには無理がある。 画像補正の過程で目標とする設 計伝達関数は採用した結像光学系の品質, ィメージセンサーの分解能等を考慮して設定 する。 校正システムを用いる場合の要点は, 校正時に於ける取り込んだ画像と望ましい光学 系であるべきとした画像との位置, 図形の大きさ等の条件整合を正確に行うことである。 上述の校正用図形では点図形を用いたが, 中央の点図形に加えて離れた位置にも同様な 点図形を配し, 空間周波数領域で画像補正情報を演算する前の実空間領域でそれぞれの 位置, 大きさ等をそれらの点図形の中心を基準に正規化を実施すれば問題の解消は容易 である。 但し, 移動及び伸縮させるのは望ましい光学系であるべきとした画像の側であ り, 画像補正情報の演算に用いるのは領域中央近辺の点図形のみで他は除外して処理を 簡単化する。 画像の補正を周波数領域での画像補正情報により実施する第一の実施例は画像の分解 能補償もある程度可能で応用分野の広いものであるが, その反面で実際に適用可能な条 件には厳しい面がある。 即ち, その理論的な根拠は (1 ) 式の近似成立の妥当性にある 力 実際には光軸に関して被写体が何処に有るかで伝達関数は変化する。 結像光学系の 絞りにより開口が十分に小で結像光学系を通過する光線が中心軸近傍のみであればかな りの範囲で正しいとされ, 逆に開口が大きいと中心軸から離れた被写体の像は歪みが大 となる。 例えば, 図 4のように複数の点光源 4 3が全体に分散している場合, その画像 は誇張して表せば, 図 5の番号 5 3 , 5 4, 5 5のようにそれぞれ異なった形態となる。 また, もし光学系がズームレンズをも含むなら倍率の各状態でも伝達関数は異なる事に なる。 第二の実施例に示す画像補正方法はそのような実際への応用面に当たって現実的 な解決を提供するもので以下に図 4から図 9を用いて説明する。 図 4は第二の実施例で使用する校正用図形 41を示し, 全体を複数の区分 42に分割 し, それぞれに点光源 43を配置する。 また, 図 5は校正用図形 41を取り込んだ画像 51を示し, 番号 52で示す複数の区分に分割されているものとする。 それらの区分に は点光源 43の像が存在するが, 区分の場所によって番号 53, 54, 
55に示すよう に中心部と周辺部とで異なった形となる。 第二の実施例に示す画像補正方法の考え方は このような実情に即し, 画面を複数の区分に分割し, 区分毎に周波数領域での画像補正 情報を求め, 区分毎に画像を補正し, 画像を連結合成して完成させるものである。 第二の実施例での画像補正方法を図 6では校正モードの手順を, 図 7では撮影■補正 のモードの手順をそれぞれ説明する。 図 6に示す校正モードでは, ステップ (1) で画 面を複数の区分に分割する。 ステップ (2) で校正用図形との距離, 絞り等の撮影条件 を設定する。 ステップ (3) で図 4のように区分毎に点光源を配した校正用図形の画像 を取り込む。 ステップ (4) でイメージセンサーの原色を選択し, 以後この原色成分の 画像に関して処理をする。 ステップ (5) で区分毎に点光源の画像のフーリエ変換を行 う。 ステップ (6) で点光源のあるべき正しい画像のフーリエ変換を行う。 これは全区 間共通であり, フーリエ変換形は予め求めてこれをメモリーに蓄えておいても良い。 但 し, 位置, 大きさ等の正規化は第一の実施例で述べたと同様な処理は必要である。 ステ ップ (7) でステップ (6) で求めた画像のフーリエ変換形をステップ (5) で求めた フーリエ変換形で除して周波数領域での区分画像補正情報とする。 ステップ (8) で別 の原色を選択してステップ (5) - (7) を繰り返す。 ステップ (9) で絞りを変え, ステップ (3) — (8) を繰り返して複数の絞りに対して周波数領域での区分画像補正 情報を求める。 図 7は撮影 '補正モードでの手順を示す。 ステップ (10) で画像撮影する。 ステツ プ (1 1) でイメージセンサーの原色を選択, 以後この原色の成分の画像間で処理を行 う。 ステップ (12) で画面内の区分を選択。 ステップ (1 3) で区分内の画像に窓関 数を乗じてフーリエ変換をする。 ステップ (14) で撮影時の絞りに適合する周波数領 域での区分画像情報を読み出す。 撮影時の絞りに適合した区分画像補正情報が無ければ 前後の区分画像補正情報から内揷或いは外挿する。 ステップ (1 5) でステップ (1 3) で求めたフーリエ変換形に, ステップ (14) で読み出した周波数領域での区分画 像補正情報を乗じ, フーリエ逆変換により区分毎の補正画像を得る。 ステップ (16) で隣接する部分が重なり合うよう新たな区分を設定, ステップ (1 3) — (15) を繰 り返して画面の全領域を走査して補正画像を完成させる。 ステップ (1 7) で原色の選 択を変え, ステップ (1 2) - (16) を繰り返す。 ここで, ステップ (12) , (13) での区分及び演算, 及びステップ (15) で得 る区分毎の補正画像について図 8, 図 9を用いて説明する。 図 8に於いて, 番号 81は 図 5に於ける区分 52に対応する区分を示すが, 図 7のステップ (12) に示す区分は これより大の区分 82, 83, 84等を示す。 ステップ (13) で乗じる窓関数は区分 の境界でゼロ, 中心で 1. 
0になるような重み関数である。 図 9に区分 82と対比させ て示す番号 91の曲線がそれであり, (x, y) の二次元に分布する曲面であるが, 図 9では断面を示している。 フ一リエ変換は周期的な関数を対象にしているのでこのよう にして境界条件を合わせる。 したがって, このように窓関数を乗じて演算を施し, フ一 リエ逆変換を経てステップ (1 5) で補正された画像を得ても区分の境界近傍では正し い結果とならないので, その区分の中心部 85の部分のみを補正された画像とする。 ス テツプ (16) ではこの補正された画像が繋がるように隣接する区分は重なり合うよう, 例えば区分 82の次は区分 83, 或いは上下には区分 84のように区分を選定移動させ て全画面を順次走査する。 本発明の重要な目的の一つはあまり性能は良くないが安価な結像光学系を用い, ディ ジタル処理によって画質を高級品並に向上させる事である。 その観点に立てば, むしろ 図 5に点光源の画像が示されるような品質の良くない結像光学系を持つディジタル画像 入力機器こそが対象であるべきである。 第二の実施例は結像光学系の伝達関数は場所に よって変化する事を許容しつつ連続的に変化する事を前提に画面上の複数の点での伝達 関数から内挿して任意の点での画像補正情報を得, 区分的に画像を補正, 連結合成する。 第一の実施例では全画面を対象にフーリエ変換, 及びその逆変換の演算を行った。 しか し, 第二の実施例では全画面を複数の区分に分割し, 区分毎にフーリエ変換, 逆変換の 演算を実施している。 最近の傾向では 1 0 0万以上もの画素を有するイメージセンサ一 が一般的になりつつあるが, 1 0 0万もの画素を対象に一括してフーリエ変換, 逆変換 を行うよりは区分的に演算する方が遙かに容易であり, また高速化の為に演算ュニット を専用ハードウエア化する事も容易になる。 第一, 第二の実施例では周波数領域での補正を前提にフーリエ変換, 逆変換が必須で ある。 上記説明での数式は連続量での表現であるが, イメージセンサー上の画素は離散 的に存在するのでこの画素分布に合わせて離散的な計算形態に変換して実施する。 それ らについては周知の事柄であるので特に離散的な表現式まで引用しては説明しなかった。 当然に離散的なモデルで高速にフーリエ変換をするので F F Tを用いてのフーリエ変換, フーリェ逆変換を行うことになるが, 演算時間はフーリェ変換に F F Tを使用しても画 素数が多いのでかなり要し, 撮影と並行しての画像補正実施は現段階の演算ュニッ卜で は未だ実用レベルではないかもしれない。 しかしながら, これは第二の実施例のように 区分を決めて F F Tの演算を専用ハ一ドウエア化が出来ればかなりの時間短縮は可能に なる。 本発明の第一及ぴ第二の実施例の画像補正方法について説明した。 この画像補正方法 はディジタル画像入力機器による画像をそのディジタル画像入力機器固有の画像補正情 報を基に補正を行って正しい画像を得る方法であり, 画像補正情報は予め用意しても, 事後に収集生成しても可能であり, パーソナルコンピューターに必要なプログラムを搭 載してもまた専用の機器によっても可能である。 もちろん, 個々のディジタルカメラ等 の機器に搭載することも可能であり, 第三の実施例としてディジタルカメラを例に図 1 0, 図 1 1を用いて説明する。 図 1 0は本発明の第三の実施例であるディジタルカメラ 1 0 0の外観を示す。 同図に 於いては, 番号 1 0 1は結像光学系であるレンズを, 番号 1 0 2はファインダー及び撮 影後の画像表示の為の液晶ディスプレイを, 番号 1 0 3はシャッターボタンを, 番号 1 0 4は各種機能切り替えの為のモードセレクタ一を, 番号 1 0 5は電源スィッチを, 番 号 1 0 6は外部機器との接続コネクターをそれぞれ示す。 図 1 1は, 本発明に於ける第三の実施例であるディジタルカメラ 1 0 0の機能プロッ ク図を示す。 結像光学系であるレンズ 1 0 1によって被写体の画像はイメージセンサ一 1 1 1上に形成され, シャッターボタン 1 0 3を押す事により起動されて制御部 1 1 2 はイメージセンサ一 1 1 1により画像を電気信号に変えてメモリ一 1 1 4に記憶させる。 番号 1 1 3は制御部 1 1 2内の制御メモリー, 番号 1 1 5は電池を, それぞれ示す。 通常の撮影モードか, 或は校正モードかはモードセレクタ一 1 0 4の指示により切り 替える。 校正モードでは, 被写体として予め定めた校正用図形 
1 1 6を用い, 校正用図 形 1 1 6とレンズ 1 0 1間の距離, 或は明るさ, 更には絞り, シャッター速度等諸種の 条件を設定し, 校正用図形 1 1 6の画像を制御部 1 1 2はメモリー 1 1 4に取り込む。 その後制御部 1 1 2は予め決められた分割方法に従って画面を分割し, 区分毎にフーリ ェ変換を行い, 制御メモリー 1 1 3の一部に記憶させてあった設計上望ましい光学系か らの画像のフーリエ変換形とから, 区分画像補正情報を演算導出して制御メモリー 1 1 3に記憶する。 これらの処理, 演算は第二の実施例に於いて示された手順に沿うプログ ラムに指示されて制御部 1 1 2が実行する。 この処理プログラムは制御メモリー 1 1 3 に記憶されている。 また通常の撮影時には, レンズ 1 0 1, イメージセンサー 1 0 2に より取り込まれた画像をメモリー 1 1 4に記録し, 前記制御メモリー 1 1 3に記憶せし められている画像捕正情報と併せて制御メモリ一 1 1 3に格納されているプログラムの 指示により制御部 1 1 2が画像補正を実行してメモリ一 1 1 4に蓄える。 しかしながら, フーリエ変換に F F Tの演算方式を用いたとしてもフーリエ変換, 逆 変換の処理には少なからぬ時間を要するので, 通常の撮影後には直ちに画像補正を実行 せずに撮影条件と共に画像をメモリー 1 1 4に蓄え, 別途モードセレクタ一 1 0 4から の指示で画像補正を実施する方が現実的であろう。 或いは校正モードでは校正用図形の 画像のみを取り込み, 撮影モードではやはり撮影条件と画像のみを取り込んでパ一ソナ ルコンピューター或いは専用機で後でまとめて画像捕正を実行する方法も可能である。 本発明の実施例としては特に図を以て説明はしなかったが, 色彩情報の再現性を改善 する事も本発明には含まれる。 色彩の再現もまたカラー画像では大きな問題ではあるが, 結像光学系に色収差はあり, またイメージセンサーにも色に対する感度差, 場合によつ てはそれぞれの画素レベルでも感度差が存在する。 これらの誤差を全て製造プロセスに 帰して結局コストを上げるのは社会的な損失であるが, 本発明に依ればまた歪み改善等 と同様な考えで解決を図る事が出来る。 すなわち, イメージセンサーの原色毎に画像補 正情報を持って補正することにより, 結像光学系の色収差を完全に補正出来る。 更に校 正モ一ドで使用する校正用図形内に色彩の再現性を検査できるようなパターンを含ませ て画素毎に補正情報を導出する事でイメージセンサーの場所による感度差の解決も可能 である。 例えば, 色彩の再現性に関係のあるパラメーター, 明度, 彩度等を異ならせた パターンを画面内に適度に配置してディジタルカメラ内に画像を取り込む。 画像内のそ れぞれのパターン内の色彩の属性は予め判っているので得られた画像からどう読み替え るかを画像補正情報とする。 このように画像補正情報を決め, 画像の補正時にこれら画 像補正情報に従って画素毎にその出力を読み替えて補正する。 この過程で結像光学系で の色収差も含んでイメージセンサーの色変換機能を補正して色彩に関する忠実度を向上 させる事が出来る。 以上, 実施例を用いて説明したように, 第一, 第二のの実施例で画像補正方法を説明 し, 第三の実施例でディジタル画像入力機器に画像補正方法をプロダラム化して格納し, 校正及び画像補正を実行する例を説明した。 実施の形態は本発明に従って, 画像入力機 器内に校正に必要なプログラム或いはデータ等全てをもって完結したシステムとする事 もできるし, また汎用のパーソナルコンピュータ一等のシステムに必要なプログラムを 用いて撮影後に画像補正する事も可能であり, それらのプログラムは本発明の対象にな る。 後者の場合, 撮影画像とディジタル画像入力機器の製造番号等を含む固有の識別番 号と結合させる事で画像補正時の錯綜を防止する事も重要となろう。 産業上の利用可能性 In the "photographing" correction mode, the image captured in step (8) is fetched, and in step (9), the primary color of the image sensor is selected, and thereafter, the image of this primary color component is processed. 
In step (10), image correction information corresponding to the aperture at the time of photographing the image to be corrected is read. When there is no image correction information of a condition that matches the aperture, the image correction information is obtained by interpolating from the preceding and following image correction information. In step (11), the Fourier transform form of the image in step (8) is multiplied by image correction information, and in step (12), inverse Fourier transform is performed to obtain a corrected image for one primary color. In step (13), the primary colors are changed, and steps (10)-(12) are repeated to obtain a corrected image for each primary color to complete the image correction. Image correction information is obtained for each primary color of the image sensor, and correction is performed for each primary color component because the optical system does not always have the same conditions between the primary colors. Naturally, care must be taken into account in the design, but there are quite a few adjustment steps, and if image correction is assumed as in the first embodiment, the cost increase factors can be reduced considerably. . In addition, if there is another zoom lens as an element that changes the state of image distortion, it must also be considered. In the procedure of the first embodiment, image correction information should be obtained for each of the magnifications by the zoom lens as well as the aperture. In photographing, it is, of course, generally desirable to faithfully image the subject. Ideally, a point light source approximated by the delta function would be an image represented by the delta function, and in that sense, the ideal transfer function would be the delta function. Theoretically, since the Fourier transform form of the delta function is a constant, the real image for the point light source is Fourier transformed and the ideal image correction information is the reciprocal thereof. 
However, under these conditions, overcorrection may actually occur and another distortion may occur. That is, the mathematical Fourier transform domain is infinite However, the aperture of the optical system is finite, and the distance between pixels of the image sensor for taking in digital images is also finite. From this point as well, the frequency components included in the Fourier-transformed image have both low-frequency and high-frequency limits, and it is impossible to amplify or compress even frequency components that do not exist. There is. The target design transfer function in the process of image correction is set in consideration of the quality of the imaging optical system used, the resolution of the image sensor, and the like. The point of using the calibration system is to accurately match the conditions such as the position and the size of the figure between the captured image and the image that should be a desirable optical system at the time of calibration. A point figure was used in the calibration figure described above. In addition to the central point figure, a similar point figure was also placed at a distant position, and in the real space area before image correction information was calculated in the spatial frequency domain. The problem can be easily solved by normalizing the position, size, etc. of the points based on the centers of the point figures. However, moving and expanding / contracting is performed on the side of the image that should be a desirable optical system, and only the point figure near the center of the area is used to calculate the image correction information, simplifying the process by excluding others. . In the first embodiment, in which image correction is performed using image correction information in the frequency domain, image resolution can be compensated to some extent and the field of application is wide, but on the other hand, conditions that can be applied in practice are: There is a tough side. 
That is, the theoretical basis is based on the validity of the approximation of Eq. (1). Force The transfer function actually changes depending on where the subject is located with respect to the optical axis. If the aperture of the imaging optical system has a sufficiently small aperture and the light passing through the imaging optical system is only near the central axis, it is considered correct in a considerable range if the aperture is large. The image has large distortion. For example, if a plurality of point light sources 43 are dispersed throughout as shown in Fig. 4, the image can be exaggerated and expressed in different forms as indicated by numbers 53, 54, 55 in Fig. 5. Becomes Also, if the optical system includes a zoom lens, the transfer function will be different at each magnification. The image correction method shown in the second embodiment provides a practical solution in such practical application, and will be described below with reference to FIGS. Fig. 4 shows a calibration graphic 41 used in the second embodiment. The whole is divided into a plurality of sections 42, and a point light source 43 is arranged in each section. FIG. 5 shows an image 51 in which the graphic 41 for calibration is captured, and is assumed to be divided into a plurality of sections indicated by reference numeral 52. The image of the point light source 43 exists in those sections, but the shape differs between the center and the periphery as shown by numbers 53, 54, and 55 depending on the location of the section. The concept of the image correction method described in the second embodiment is based on this situation, and the screen is divided into a plurality of sections, image correction information in the frequency domain is obtained for each section, and the image is corrected for each section. , Images are combined and completed. FIG. 6 illustrates the procedure of the calibration mode in the image correction method in the second embodiment, and FIG. 
7 illustrates the procedure of the imaging / correction mode in FIG. In the calibration mode shown in Fig. 6, the screen is divided into multiple sections in step (1). In step (2), the photographing conditions such as the distance from the calibration figure and the aperture are set. In step (3), as shown in Fig. 4, an image of a calibration figure in which point light sources are arranged for each section is captured. In step (4), the primary color of the image sensor is selected, and the image of this primary color component is processed thereafter. In step (5), a Fourier transform of the image of the point light source is performed for each section. In step (6), a Fourier transform of a correct image that should have a point light source is performed. This is common to all sections, and the Fourier transform type may be obtained in advance and stored in memory. However, normalization of the position, size, etc. requires the same processing as described in the first embodiment. In step (7), the Fourier transform form of the image obtained in step (6) is divided by the Fourier transform form obtained in step (5) to obtain divided image correction information in the frequency domain. In step (8), select another primary color and repeat steps (5)-(7). In step (9), the aperture is changed, and steps (3)-(8) are repeated to obtain the segmented image correction information in the frequency domain for multiple apertures. FIG. 7 shows the procedure in the photographing'correction mode. Take an image in step (10). In step (11), the primary colors of the image sensor are selected, and processing is performed between images of the components of these primary colors. Select the section in the screen in step (12). In step (13), the image in the section is multiplied by the window function to perform Fourier transform. In step (14), the divided image information in the frequency range suitable for the aperture at the time of shooting is read. 
If there is no sectioned image correction information suited to the aperture at the time of shooting, it is interpolated or extrapolated from the stored information for the neighboring apertures. In step (15), the Fourier transform obtained in step (13) is multiplied by the sectioned image correction information in the frequency domain read in step (14), and a corrected image for the section is obtained by the inverse Fourier transform. In step (16), a new section is set so that adjacent sections overlap, and steps (13)-(15) are repeated, scanning the entire screen to complete the corrected image. In step (17), the selection of the primary color is changed, and steps (12)-(16) are repeated. Here, the sections and calculations of steps (12) and (13) and the corrected image for each section obtained in step (15) are described with reference to Figs. 8 and 9. In Fig. 8, the number 81 indicates a section corresponding to section 52 in Fig. 5, whereas the section selected in step (12) of Fig. 7 is one of the larger sections 82, 83, 84, and so on. The window function multiplied in step (13) is a weight function that becomes zero at the boundary of the section and 1.0 at its center. It is the curve numbered 91 shown in Fig. 9 against section 82; the function is actually a two-dimensional surface over (x, y), of which Fig. 9 shows a cross section. Since the Fourier transform presupposes a periodic function, the boundary conditions are adjusted in this way. Consequently, even after the image is multiplied by the window function and the corrected image of step (15) is obtained through the inverse Fourier transform, the result is not correct near the boundary of the section; only the portion at the center 85 of the section is properly corrected. In step (16), therefore, adjacent sections are made to overlap so that the corrected images connect.
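The window function of step (13) and the section-wise correction of step (15) can be sketched as follows (illustrative; the separable Hann window is one plausible realization of curve 91, since the description fixes only the boundary behaviour, 1.0 at the center and zero at the boundary, not the exact shape):

```python
import numpy as np

def section_window(size):
    # Weight function of step (13): 1.0 at the centre of the section,
    # falling to zero at its boundary (curve 91 in Fig. 9). A separable
    # Hann window is assumed here; the description fixes only the
    # boundary behaviour, not the exact shape.
    w = np.hanning(size)
    return np.outer(w, w)

def correct_section(section, H):
    # Steps (13)-(15): window the section image, Fourier transform,
    # multiply by the sectioned correction information H, and invert.
    # Only the central part of the result (number 85) is trustworthy.
    windowed = section * section_window(section.shape[0])
    return np.real(np.fft.ifft2(np.fft.fft2(windowed) * H))

size = 33                    # odd, so the window peaks at exactly 1.0
w2d = section_window(size)
# With H = 1 (no correction) the section is returned windowed unchanged.
flat = correct_section(np.ones((size, size)),
                       np.ones((size, size), dtype=complex))
```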
For example, the section is moved from section 82 to the adjacent section 83, or to section 84 above or below it, and the entire screen is scanned in this way. One of the important objectives of the present invention is to use an inexpensive, and therefore imperfect, imaging optical system and to raise the image quality to the level of high-end products by digital processing. From that point of view, digital image input devices with poor-quality imaging optics, producing point-source images like those shown in Fig. 5, are precisely the intended targets. The second embodiment allows the transfer function of the imaging optical system to change with location, on the assumption that it changes continuously: sectioned image correction information is obtained, and the images are corrected piecewise and combined. In the first embodiment, the Fourier transform and its inverse are computed over the entire screen, whereas in the second embodiment the screen is divided into multiple sections and the transforms are performed section by section. Image sensors with more than 100,000 pixels are becoming common, and rather than performing a Fourier transform or inverse transform on all of those pixels at once, it is much easier to operate piecewise; it also becomes easier to implement the arithmetic unit as dedicated hardware for higher speed. In the first and second embodiments, the Fourier transform and its inverse are indispensable, since correction is performed in the frequency domain. Although the formulas in the above description are expressed as continuous quantities, the pixels on the image sensor exist discretely, and the calculations are converted to discrete form in accordance with this pixel distribution.
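The overlapped, section-by-section scanning described above (step (16)) can be sketched as follows (illustrative; the half-section stride, the choice of retained central region, and the untreated outermost margin are simplifications assumed here):

```python
import numpy as np

def scan_and_combine(image, s, correct):
    # Step (16): slide a section of size s over the screen with a
    # half-section stride, so adjacent sections overlap; since each
    # corrected section is reliable only near its centre, only the
    # central region is written into the output. The outermost margin
    # is left untreated in this simplified sketch.
    h, w = image.shape
    out = np.zeros_like(image)
    q, step = s // 4, s // 2
    for top in range(0, h - s + 1, step):
        for left in range(0, w - s + 1, step):
            sec = correct(image[top:top + s, left:left + s])
            out[top + q:top + s - q, left + q:left + s - q] = \
                sec[q:s - q, q:s - q]
    return out

img = np.arange(64.0 * 64.0).reshape(64, 64)
result = scan_and_combine(img, 16, lambda sec: sec)   # identity "correction"
```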
Since the discrete forms are well known, they have not been written out here. Naturally, the discrete Fourier transform can be computed at high speed, so the Fourier transform and its inverse are performed using the FFT. Even with the FFT, however, the number of pixels is large, and image correction performed in parallel with shooting may not reach a practical level with current arithmetic units. If, as in the second embodiment, the division into sections is fixed and the FFT operation is implemented as dedicated hardware, the time can be reduced significantly. The image correction methods according to the first and second embodiments of the present invention have now been described. These methods correct an image from a digital image input device on the basis of image correction information unique to that device to obtain a correct image. The correction information can be collected and generated and the necessary programs installed on a personal computer, or the processing can be performed by dedicated equipment; of course, it can also be built into individual devices such as digital cameras. The third embodiment is described with reference to Figs. 10 and 11, taking a digital camera as an example. Fig. 10 shows the appearance of a digital camera 100 according to the third embodiment of the present invention. In the figure, number 101 is the lens forming the imaging optical system, number 102 is the viewfinder and liquid crystal display for displaying the image after shooting, number 103 is the shutter button, number 104 is a mode selector for switching among various functions, number 105 is the power switch, and number 106 is a connector for connection to external devices. Fig. 11 is a functional block diagram of the digital camera 100 according to the third embodiment of the present invention.
The image of the subject is formed on the image sensor 111 by the lens 101, the imaging optical system; when the shutter button 103 is pressed, the control unit 112 causes the image sensor 111 to convert the image into an electric signal and stores it in the memory 114. Number 113 indicates the control memory in the control unit 112, and number 115 indicates the battery. The mode can be switched between the normal shooting mode and the calibration mode by the mode selector 104. In the calibration mode, a predetermined calibration figure 116 is used as the subject; after various conditions such as the distance between the calibration figure 116 and the lens 101, the brightness, the aperture, and the shutter speed have been set, the control unit 112 loads the image of the calibration figure 116 into the memory 114. The control unit 112 then divides the screen according to a predetermined division method, performs a Fourier transform for each section, and, on the basis of the Fourier transform of the image required by design from the optical system, stored in part of the control memory 113, calculates the sectioned image correction information and stores it in the control memory 113. These processes and operations are executed by the control unit 112 under a program following the procedure shown in the second embodiment; this processing program is stored in the control memory 113. During normal shooting, the image captured through the lens 101 by the image sensor 111 is recorded in the memory 114, and the control unit 112, following the program stored in the control memory 113, executes the image correction using the image correction information stored there and stores the corrected image in the memory 114.
However, even with the FFT algorithm, the Fourier transform and its inverse require a considerable amount of time, so it would be more realistic not to perform the image correction immediately after normal shooting, but to store the image in the memory 114 together with the shooting conditions and to perform the image correction separately on an instruction from the mode selector 104. Alternatively, in the calibration mode only the image of the calibration figure is captured, and in the shooting mode only the shooting conditions and the image are recorded; the images can then be corrected collectively later using a computer or a dedicated machine. Although not specifically illustrated in the drawings, improving the reproducibility of color information is also included in the present invention. Color reproduction is a major problem in color images: there are chromatic aberrations in the imaging optics, and there are differences in color sensitivity in image sensors, in some cases at the level of individual pixels. It is a social loss to raise all costs by eliminating every one of these errors in the manufacturing process. According to the present invention, however, these problems can be solved in the same way as the distortions described above. That is, the chromatic aberration of the imaging optical system can be corrected by performing the correction with image correction information obtained for each primary color of the image sensor. Furthermore, by including patterns for color reproducibility in the calibration figure used in the calibration mode and deriving correction information for each pixel, it is possible to eliminate sensitivity differences that depend on the location on the image sensor. For example, a pattern with different parameters related to color reproducibility, such as lightness and saturation,
is appropriately placed on the screen, and its image is captured into the digital camera. Since the color attributes of each pattern in the image are known in advance, the way they read out in the captured image is used as image correction information. The image correction information is determined in this way, and when an image is corrected, the output of each pixel is read and corrected in accordance with it. In this process the color conversion function of the image sensor is corrected, including the chromatic aberration of the imaging optical system, so that the fidelity of color can be improved. As described above, the image correction method itself was presented in the first and second embodiments, and the third embodiment described an example in which the method is programmed and stored in the digital image input device, which then executes the calibration and the image correction. According to the present invention, it is possible to provide a complete system in which all the programs and data necessary for calibration reside in the image input device, or to correct images after shooting using the necessary programs on a system such as a general-purpose personal computer; those programs are also subjects of the present invention. In the latter case it will be important to combine the captured image with a unique identification number, including the serial number of the digital image input device, to prevent confusion during image correction.

Industrial applicability
As described using a digital camera as an embodiment, the present invention stores, in a digital image input device, image correction information capable of correcting the imperfections of the imaging optical system or of the image-to-electric-signal conversion system of each individual device, and uses that information, either each time an image is obtained or collectively afterwards, to obtain a correct or higher-quality image; it has been explained that this can easily be realized by the present invention. It has thus been shown that, by means of the present invention, a digital image of practical quality can be obtained through digital techniques even when an inexpensive, low-quality imaging optical system or image sensor is used.
The present invention can also provide a low-cost method at the level of the whole system, including not only the finished device but also the manufacturing process. That is, if any adjustment for chromatic aberration or between the primary colors of the image sensor is required in the manufacturing process, the calibration figure can be read in and the adjustment left to the calibration system inside the device, so that the adjustment step in the manufacturing process can be omitted; this too is a means of cost reduction.
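The kind of per-pixel, per-primary sensitivity correction that such an in-device calibration system could take over from the factory can be sketched as follows (illustrative; a single primary color and a flat gray calibration patch are assumed, and the gain map plays the role of the per-pixel image correction information described above):

```python
import numpy as np

def per_pixel_gain(captured, reference, floor=1e-6):
    # The calibration pattern's true value is known in advance, so the
    # ratio reference / captured is a per-pixel gain map for one
    # primary colour; `floor` avoids dividing by dead pixels.
    return reference / np.maximum(captured, floor)

# Toy sensor (one primary): every pixel has a slightly different sensitivity.
reference = np.full((4, 4), 0.5)            # known flat grey patch
sensitivity = np.array([[0.9, 1.0, 1.1, 1.0],
                        [1.0, 0.8, 1.0, 1.2],
                        [1.1, 1.0, 0.9, 1.0],
                        [1.0, 1.2, 1.0, 0.8]])
captured = reference * sensitivity           # what the sensor reports
gain = per_pixel_gain(captured, reference)   # stored as correction info

scene = np.random.default_rng(0).uniform(0.1, 0.9, (4, 4))
corrected = (scene * sensitivity) * gain     # shooting, then correction
```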
In general, devices and materials often cannot maintain their initial performance as time passes after manufacture, owing to deterioration or drift of adjustment points. The present invention performs calibration after the manufacture of digital image input devices such as digital cameras and video cameras and realizes their quality and performance with the help of digital processing technology; it should be recognized that the calibration points themselves also drift over time. The present invention is effective in this respect as well: a calibration program can be built into the device, or recalibration can be carried out by a specialty store or the manufacturer using calibration control equipment, so that the design performance is maintained. This is also one of the important objects of the present invention.

Claims

1. In an image input device having at least an optical system and an image sensor, an image correction method and image input device characterized in that an image corresponding to a known calibration figure is obtained, image correction information unique to the optical system is formed in the spatial frequency domain, and the output image of the image input device is corrected with that image correction information, comprising at least the following steps:

(1) Capture the known calibration figure as a first image and apply a Fourier transform.

(2) Apply a Fourier transform to the correct image that should correspond to the calibration figure, as a second image.

(3) Divide the Fourier transform of the second image by the Fourier transform of the first image to form and store image correction information in the frequency domain.

(4) Apply a Fourier transform to the output image of the image input device, multiply it by the image correction information formed in step (3), and then apply an inverse Fourier transform to obtain the corrected image.
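Steps (1) to (4) above can be sketched for a whole frame as follows (an illustrative NumPy fragment; the blur kernel standing in for the optical system's transfer function, the point figure used as the calibration target, and the epsilon regularizing the division are assumptions for illustration):

```python
import numpy as np

def calibrate(first_image, second_image, eps=1e-6):
    # Steps (1)-(3): Fourier transform of the correct (second) image
    # divided by that of the captured (first) image; eps regularizes
    # the division.
    return np.fft.fft2(second_image) / (np.fft.fft2(first_image) + eps)

def correct(output_image, H):
    # Step (4): transform, multiply by the stored correction
    # information, inverse-transform.
    return np.real(np.fft.ifft2(np.fft.fft2(output_image) * H))

# Hypothetical device whose optics degrade every image with the same
# shift-invariant kernel (the assumption underlying this claim).
n = 64
kernel = np.zeros((n, n))
kernel[0, 0], kernel[0, 1], kernel[1, 0] = 0.6, 0.2, 0.2
blur = lambda img: np.real(np.fft.ifft2(np.fft.fft2(img) * np.fft.fft2(kernel)))

point = np.zeros((n, n)); point[0, 0] = 1.0   # point-figure calibration target
H = calibrate(blur(point), point)

scene = np.random.default_rng(1).uniform(0.0, 1.0, (n, n))
restored = correct(blur(scene), H)
```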
2. The image correction method and image input device according to claim 1, characterized in that the calibration figure in steps (1) and (2) is a point figure or point light source.

3. The image correction method and image input device according to claim 1, characterized in that image correction information is obtained for each primary color of the image sensor, and the corrected image is obtained in step (4) by correcting each primary color.

4. The image correction method and image input device according to claim 1, characterized in that groups of image correction information are derived and stored by steps (1), (2), and (3) for each of a plurality of imaging conditions such as aperture and focal length, and optimum image correction information is generated by interpolation or extrapolation within those groups according to the imaging conditions and used for the image correction in step (4).
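The interpolation and extrapolation of the preceding claim can be sketched as follows (illustrative; linear interpolation over stored aperture values is one plausible realization, and the F-numbers and gain values are hypothetical):

```python
import numpy as np

def interpolate_info(apertures, infos, f_number):
    # Correction information is stored for a few imaging conditions
    # (here, aperture F-numbers); for an intermediate condition the
    # optimum information is generated by linear interpolation between
    # the nearest stored neighbours, or extrapolation beyond the ends
    # (t falls outside [0, 1] there).
    apertures = np.asarray(apertures, dtype=float)
    i = int(np.clip(np.searchsorted(apertures, f_number),
                    1, len(apertures) - 1))
    a0, a1 = apertures[i - 1], apertures[i]
    t = (f_number - a0) / (a1 - a0)
    return (1.0 - t) * infos[i - 1] + t * infos[i]

stored_apertures = [2.8, 5.6, 11.0]          # hypothetical calibration points
stored_infos = [np.full((8, 8), g) for g in (1.2, 1.1, 1.05)]
H_mid = interpolate_info(stored_apertures, stored_infos, 4.2)  # between F2.8 and F5.6
```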
5. The image correction method and image input device according to claims 1, 3, and 4, characterized in that the target image is divided into a plurality of sections, the output image captured from the known calibration figure is Fourier transformed section by section to form sectioned image correction information unique to each section in the spatial frequency domain, and the output image of the digital image input device is corrected section by section in the spatial frequency domain with that sectioned image correction information and then joined and combined, comprising the following steps:

(1) Divide the target screen into a plurality of sections.

(2) Capture the known calibration figure as a first image and apply a Fourier transform for each section.

(3) Apply a Fourier transform for each section to the correct image that should correspond to the calibration figure, as a second image.

(4) Divide the Fourier transform of each section of the second image by the Fourier transform of the corresponding section of the first image to form and store sectioned image correction information in the frequency domain.

(5) Apply a Fourier transform to the output image of the digital image input device for each section, multiply it by the sectioned image correction information in the frequency domain formed in step (4), and then apply an inverse Fourier transform for each section to obtain a corrected image for each section.

(6) Join and combine the corrected images for each section obtained in step (5) to obtain the corrected image.
6. The image correction method and image input device according to claim 5, characterized in that the calibration figure in steps (2) and (3) is a point figure or point light source arranged in each of the plurality of sections.

7. The image correction method and image input device according to claim 5, characterized in that the sections divided in step (1) overlap each other between adjacent sections.
PCT/JP1999/003243 1998-06-22 1999-06-17 Image correcting method and image inputting device WO1999067743A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP19106998 1998-06-22
JP10/191069 1998-06-22

Publications (1)

Publication Number Publication Date
WO1999067743A1 true WO1999067743A1 (en) 1999-12-29

Family

ID=16268364

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP1999/003243 WO1999067743A1 (en) 1998-06-22 1999-06-17 Image correcting method and image inputting device

Country Status (1)

Country Link
WO (1) WO1999067743A1 (en)


Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0221374A (en) * 1987-12-29 1990-01-24 Minolta Camera Co Ltd Picture reader
JPH0232477A (en) * 1988-07-22 1990-02-02 Tokyo Graphic Aatsu:Kk Method and device for correcting picture information
JPH0442376A (en) * 1990-06-08 1992-02-12 Photo Composing Mach Mfg Co Ltd Digital picture processing method


Cited By (32)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US8103123B2 (en) 2001-06-28 2012-01-24 Nokia Corporation Method and apparatus for image improvement
US7295345B2 (en) * 2003-04-29 2007-11-13 Eastman Kodak Company Method for calibration independent defect correction in an imaging system
WO2005069216A1 (en) * 2004-01-15 2005-07-28 Matsushita Electric Industrial Co., Ltd. Measuring method for optical transfer function, image restoring method, and digital imaging device
JP2008533550A (en) * 2005-01-19 2008-08-21 ドゥ ラブズ Method for manufacturing an image recording and / or reproduction device and device obtained by said method
KR101226423B1 (en) 2005-01-19 2013-01-24 디엑스오 랩스 Method for production of an image recording and/or reproduction device and device obtained by said method
JP2009124568A (en) * 2007-11-16 2009-06-04 Fujinon Corp Imaging system, imaging apparatus with the imaging system, portable terminal apparatus, onboard equipment, and medical apparatus
JP2009124567A (en) * 2007-11-16 2009-06-04 Fujinon Corp Imaging system, imaging apparatus with the imaging system, portable terminal equipment, onboard apparatus, medical apparatus, and manufacturing method of the imaging system
TWI402552B (en) * 2007-11-16 2013-07-21 Fujifilm Corp A photographing system, a photographing device including the photographing system, a portable terminal device, a vehicle-mounted device, and a medical device
JP2009124569A (en) * 2007-11-16 2009-06-04 Fujinon Corp Imaging system, imaging apparatus with the imaging system, portable terminal apparatus, onboard equipment, and medical apparatus
TWI401482B (en) * 2007-11-16 2013-07-11 Fujifilm Corp A photographing system, a manufacturing method thereof, a photographing device, a mobile terminal device, a vehicle-mounted device, and a medical device
TWI401481B (en) * 2007-11-16 2013-07-11 Fujifilm Corp A photographing system, a manufacturing method thereof, a photographing device, a mobile terminal device, a vehicle-mounted device, and a medical device
US8094207B2 (en) 2007-11-16 2012-01-10 Fujifilm Corporation Imaging system, imaging apparatus, portable terminal apparatus, onboard apparatus, and medical apparatus, and method of manufacturing the imaging system
US8149287B2 (en) 2007-11-16 2012-04-03 Fujinon Corporation Imaging system using restoration processing, imaging apparatus, portable terminal apparatus, onboard apparatus and medical apparatus having the imaging system
US8134609B2 (en) 2007-11-16 2012-03-13 Fujinon Corporation Imaging system, imaging apparatus, portable terminal apparatus, onboard apparatus, medical apparatus and method of manufacturing the imaging system
JP2009122514A (en) * 2007-11-16 2009-06-04 Fujinon Corp Imaging system, imaging apparatus having the imaging system, portable terminal apparatus, onboard apparatus, medical apparatus, and method of manufacturing the imaging system
US8054368B2 (en) 2007-11-16 2011-11-08 Fujinon Corporation Imaging system, imaging apparatus, portable terminal apparatus, onboard apparatus, and medical apparatus
JP2009141742A (en) * 2007-12-07 2009-06-25 Fujinon Corp Imaging system, imaging apparatus with the imaging system, mobile terminal device, on-vehicle device, and medical device
JP2009159603A (en) * 2007-12-07 2009-07-16 Fujinon Corp Imaging system, imaging apparatus with the system, portable terminal apparatus, on-vehicle apparatus, medical apparatus, and manufacturing method of imaging system
JP2009139698A (en) * 2007-12-07 2009-06-25 Fujinon Corp Imaging system, imaging apparatus, portable terminal apparatus, in-vehicle apparatus and medical apparatus with imaging system
US8111318B2 (en) 2007-12-07 2012-02-07 Fujinon Corporation Imaging system, imaging apparatus, portable terminal apparatus, onboard apparatus, medical apparatus and method of manufacturing the imaging system
US8077247B2 (en) 2007-12-07 2011-12-13 Fujinon Corporation Imaging system, imaging apparatus, portable terminal apparatus, onboard apparatus, medical apparatus and method of manufacturing the imaging system
JP2009139697A (en) * 2007-12-07 2009-06-25 Fujinon Corp Imaging system, and imaging apparatus, portable terminal apparatus, in-vehicle apparatus and medical apparatus with imaging system, and method for manufacturing imaging system
WO2010038411A1 (en) * 2008-09-30 2010-04-08 Canon Kabushiki Kaisha Image processing method, image processing apparatus, and image pickup apparatus
KR101219412B1 (en) 2008-09-30 2013-01-11 캐논 가부시끼가이샤 Image processing method, image processing apparatus, and image pickup apparatus
US20100079626A1 (en) * 2008-09-30 2010-04-01 Canon Kabushiki Kaisha Image processing method, image processing apparatus, and image pickup apparatus
US8477206B2 (en) * 2008-09-30 2013-07-02 Canon Kabushiki Kaisha Image processing apparatus and image processing method for performing image restoration using restoration filter
US20100079615A1 (en) * 2008-09-30 2010-04-01 Canon Kabushiki Kaisha Image processing method, image processing apparatus, image pickup apparatus, and storage medium
CN102165761A (en) * 2008-09-30 2011-08-24 佳能株式会社 Image processing method, image processing apparatus, and image pickup apparatus
CN102165761B (en) * 2008-09-30 2013-08-14 佳能株式会社 Image processing method, image processing apparatus, and image pickup apparatus
US8605163B2 (en) * 2008-09-30 2013-12-10 Canon Kabushiki Kaisha Image processing method, image processing apparatus, image pickup apparatus, and storage medium capable of suppressing generation of false color caused by image restoration
US9041833B2 (en) 2008-09-30 2015-05-26 Canon Kabushiki Kaisha Image processing method, image processing apparatus, and image pickup apparatus
JP2012090310A (en) * 2011-12-09 2012-05-10 Canon Inc Image processing method, image processing apparatus, image pickup apparatus, and program

Similar Documents

Publication Publication Date Title
RU2716843C1 (en) Digital correction of optical system aberrations
WO1999067743A1 (en) Image correcting method and image inputting device
US7683950B2 (en) Method and apparatus for correcting a channel dependent color aberration in a digital image
US9516285B2 (en) Image processing device, image processing method, and image processing program
US7356198B2 (en) Method and system for calculating a transformed image from a digital image
US7529424B2 (en) Correction of optical distortion by image processing
US8692909B2 (en) Image processing device and image pickup device using the same
US8754957B2 (en) Image processing apparatus and method
US8830341B2 (en) Selection of an optimum image in burst mode in a digital camera
CN112116539B (en) Optical aberration blurring removal method based on deep learning
JP2002077645A (en) Image processor
EP3891693A1 (en) Image processor
CN102714737A (en) Image processing device and image capture apparatus using same
CN113676629A (en) Image sensor, image acquisition device, image processing method and image processor
JP7504629B2 (en) IMAGE PROCESSING METHOD, IMAGE PROCESSING APPARATUS, IMAGE PROCESSING PROGRAM, AND STORAGE MEDIUM
JP5159715B2 (en) Image processing device
GB2460241A (en) Correction of optical lateral chromatic aberration
JP2001197354A (en) Digital image pickup device and image restoring method
JP6807538B2 (en) Image processing equipment, methods, and programs
JPH11205652A (en) Learning digital image input device
JP5197447B2 (en) Image processing apparatus and imaging apparatus
CN101505431A (en) Shadow compensation method and apparatus for image sensor
JP4139587B2 (en) Interpolation apparatus and method for captured image in single-plate color digital camera
JP3806477B2 (en) Image processing device
JP2023069527A (en) Image processing device and image processing method

Legal Events

Date Code Title Description
AK Designated states

Kind code of ref document: A1

Designated state(s): JP US

DFPE Request for preliminary examination filed prior to expiration of 19th month from priority date (pct application filed before 20040101)
NENP Non-entry into the national phase

Ref country code: JP

Ref document number: 2000556336

Format of ref document f/p: F