WO2014076915A1 - Image correction device and image correction method - Google Patents

Image correction device and image correction method

Info

Publication number
WO2014076915A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
correction value
correction
value calculation
offset
Prior art date
Application number
PCT/JP2013/006595
Other languages
French (fr)
Japanese (ja)
Inventor
彰彦 池谷
Original Assignee
NEC Corporation (日本電気株式会社)
Priority date
Filing date
Publication date
Application filed by NEC Corporation (日本電気株式会社)
Publication of WO2014076915A1 publication Critical patent/WO2014076915A1/en

Classifications

    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N5/00Details of television systems
    • H04N5/30Transforming light or analogous information into electric information
    • H04N5/33Transforming infrared radiation
    • HELECTRICITY
    • H04ELECTRIC COMMUNICATION TECHNIQUE
    • H04NPICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N25/00Circuitry of solid-state image sensors [SSIS]; Control thereof
    • H04N25/60Noise processing, e.g. detecting, correcting, reducing or removing noise

Definitions

  • the present invention relates to an image correction apparatus that corrects fixed pattern noise, an image correction method, and a program therefor.
  • various related techniques are known for removing information corresponding to noise from an input image mixed with noise and obtaining an image that accurately corresponds to subject radiation.
  • Patent Document 1 discloses an infrared imaging device that removes fixed pattern noise.
  • the fixed pattern noise is noise caused by variations in the characteristics of the pixels of the infrared sensor.
  • the infrared imaging apparatus described in Patent Document 1 removes fixed pattern noise for each pixel by using a technique called Scene-based NUC (Nonuniformity Correction) or Scene-based FPN (Fixed Pattern Noise) correction.
  • the Scene-based NUC calculates a correction value from the time average and average deviation of a plurality of frames for each pixel based on the premise that “the time average and variance of each pixel value are constant for all pixels”. Then, the Scene-based NUC removes the fixed pattern noise for each pixel using the correction value, and estimates the original imaged object image.
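As a rough, hypothetical illustration of this premise (a sketch of the general idea of scene-based NUC, not the specific method of Patent Document 1), per-pixel gain and offset maps can be derived from temporal statistics; the NumPy formulation below is an assumption for illustration only.

```python
import numpy as np

def scene_based_nuc(frames, eps=1e-6):
    """Sketch of scene-based non-uniformity correction.

    frames: ndarray of shape (T, H, W) -- a sequence in which the input
    light to every pixel is assumed to vary enough that the temporal mean
    and deviation of every pixel should, ideally, be the same.
    Returns per-pixel gain and offset maps and the corrected frames.
    """
    mean = frames.mean(axis=0)                    # per-pixel temporal mean
    dev = np.abs(frames - mean).mean(axis=0)      # per-pixel mean absolute deviation

    # Under the premise, every pixel should share the global statistics.
    target_mean = mean.mean()
    target_dev = dev.mean()

    gain = target_dev / (dev + eps)               # equalize per-pixel response
    offset = target_mean - gain * mean            # equalize per-pixel level

    corrected = gain[None, :, :] * frames + offset[None, :, :]
    return gain, offset, corrected
```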
  • the infrared imaging device has a mechanism for changing the field of view of the infrared sensor with respect to incident infrared light.
  • the accuracy of the corrected imaged-object image in Scene-based NUC rests on the premise that “the time average and variance of the pixel values are constant for all pixels”. Therefore, Scene-based NUC requires a large number of input images in which the input light to each pixel changes.
  • the infrared imaging apparatus described in Patent Document 1 discloses a technique for changing input light to each pixel without moving the camera body even if the subject is a stationary object.
  • Patent Document 2 discloses an imaging apparatus that corrects an output luminance value of an infrared imaging element.
  • the imaging apparatus described in Patent Document 2 images each of a plurality of imaging objects having “different uniform illuminances”. Then, from the captured images, the imaging apparatus obtains a luminance value unique to each imaging element (corresponding to a correction value), and corrects the output luminance value based on it so as to remove the fixed pattern noise for each pixel.
  • Such a correction technique is generally referred to as Reference based NUC (Nonuniformity Correction) or Calibration based NUC.
  • the infrared imaging device described in Patent Document 1 has a mechanism for changing the field of view of the infrared sensor with respect to incident infrared light, as described above.
  • the image pickup apparatus of Patent Document 2 using Reference based NUC requires a shutter mechanism that blocks light entering the image pickup element in the image pickup unit. The reason is that it is necessary to acquire an image of the imaging object having uniform illuminance in real time in order to dynamically perform appropriate correction.
  • An object of the present invention is to provide an image correction apparatus, an image correction method, and a program therefor that can solve the above-described problems.
  • An image correction apparatus according to one aspect includes correction value calculation means for calculating and outputting, based on a plurality of input images in which a noise image corresponding to variations in sensitivity of a plurality of imaging elements is superimposed on an imaged-object image corresponding to radiation of the imaged object, a correction value corresponding to an offset assumed to be a high-frequency component existing in common in the plurality of input images.
  • An image correction method according to one aspect calculates and outputs, based on a plurality of input images in which a noise image corresponding to variations in sensitivity of a plurality of imaging elements is superimposed on an imaged-object image corresponding to radiation of the imaged object, a correction value corresponding to an offset assumed to be a high-frequency component existing in common in the plurality of input images.
  • A non-transitory computer-readable recording medium according to one aspect records a program that causes a computer to execute a process of calculating and outputting, based on a plurality of input images in which a noise image corresponding to variations in sensitivity of a plurality of imaging elements is superimposed on an imaged-object image corresponding to radiation of the imaged object, a correction value corresponding to an offset assumed to be a high-frequency component existing in common in the plurality of input images.
  • the present invention has an effect that it is possible to obtain a correction value for removing fixed pattern noise without providing a mechanical structure in the imaging apparatus.
  • FIG. 1 is a block diagram showing a configuration of an image correction apparatus according to the first embodiment of the present invention.
  • FIG. 2 is a block diagram illustrating a configuration of a modified image correction apparatus according to the first embodiment.
  • FIG. 3 is a block diagram illustrating a configuration of an infrared imaging device including the image correction device according to the first embodiment.
  • FIG. 4 is a diagram illustrating an example of a change in luminance value of each pixel of fixed pattern noise.
  • FIG. 5 is a diagram illustrating an example of a change in luminance value of each pixel of the captured image.
  • FIG. 6 is a diagram illustrating an example of a change in luminance value of each pixel of the input image.
  • FIG. 7 is a diagram illustrating an example of a change in the luminance value of each pixel of the captured image.
  • FIG. 8 is a diagram illustrating an example of a change in luminance value of each pixel of the input image.
  • FIG. 9 is a block diagram illustrating a hardware configuration of a computer that implements the image correction apparatus according to the first embodiment.
  • FIG. 10 is a flowchart showing an outline of the operation of the image correction apparatus according to the first embodiment.
  • FIG. 11 is a block diagram showing a configuration of an image correction apparatus according to the second embodiment of the present invention.
  • FIG. 12 is a diagram illustrating an example of a temperature offset table in the second embodiment.
  • FIG. 13 is a block diagram illustrating a configuration of a modified image correction apparatus according to the second embodiment.
  • FIG. 14 is a block diagram showing a configuration of an image correction apparatus according to the third embodiment of the present invention.
  • FIG. 15 is a flowchart illustrating an outline of the operation of the image correction apparatus according to the third embodiment.
  • FIG. 16 is a flowchart showing an outline of the operation of the image correction apparatus in the fourth embodiment of the present invention.
  • FIG. 17 is a block diagram showing a configuration of an infrared imaging system according to the fifth embodiment of the present invention.
  • FIG. 1 is a block diagram showing a configuration of an image correction apparatus 100 according to the first embodiment of the present invention.
  • the image correction apparatus 100 includes a correction value calculation unit 110.
  • the correction value calculation unit 110 calculates and outputs a correction value based on a plurality of input images.
  • the input image is an image in which noise images corresponding to variations in sensitivity of a plurality of image sensors are superimposed on an imaged image corresponding to radiation of the imaged object.
  • the correction value is a correction value corresponding to an offset that is assumed to be “a high-frequency component that exists in common among the plurality of input images”.
  • the correction value calculation unit 110 includes a means for calculating a correction value (a processor or a circuit that executes a program) and a means for outputting the calculated correction value (a processor or a circuit that executes the program).
  • the image correction apparatus 100 can obtain a correction value for removing fixed pattern noise without providing a mechanical structure in the imaging apparatus.
  • FIG. 2 is a block diagram showing the configuration of the modified image correction apparatus 101 according to the present embodiment.
  • the image correction apparatus 101 includes a correction value calculation unit 110 and a correction processing unit 120.
  • the correction value calculation unit 110 in FIG. 2 is the correction value calculation unit 110 of the image correction apparatus 100 shown in FIG.
  • FIG. 3 is a block diagram showing a configuration of the infrared imaging apparatus 102 including the image correction apparatus 101 according to the present embodiment.
  • the infrared imaging device 102 includes an imaging unit 400, an image recording unit 450, and an image correction device 101.
  • the imaging unit 400 receives infrared light emitted from the imaging target, converts it into a digital signal indicating the luminance of each pixel, and outputs the digital signal.
  • the imaging unit 400 includes a lens 410, an infrared imaging element unit 420, an amplification circuit 430, and an A / D (Analogue / Digital) conversion circuit 440.
  • the lens 410 receives the infrared light emitted from the imaging object, refracts the infrared light, and forms a real image on the infrared imaging element unit 420 side.
  • the infrared imaging element unit 420 includes a plurality of infrared light receiving elements (also referred to as imaging elements, not shown) arranged on a plane so as to correspond to the pixels of the captured image. Each infrared light receiving element receives the infrared light that has passed through the lens 410, converts it into an electrical signal, and outputs it.
  • each of the infrared light receiving elements generates unique noise due to variations in sensitivity of the infrared light receiving elements.
  • the electric signal output from the infrared imaging element unit 420 includes the noise.
  • the amplification circuit 430 receives the electrical signal output from the infrared light receiving element (infrared imaging element unit 420), and outputs an analog signal amplified so that the A / D conversion circuit 440 can process it.
  • the A / D conversion circuit 440 receives the analog signal output from the amplification circuit 430, converts it into a digital signal, and outputs it.
  • the image recording unit 450 receives the digital signal output from the imaging unit 400, generates an image based on the digital signal, and records it in a recording unit (not shown) in the image recording unit 450.
  • This image is an image including a noise image (hereinafter referred to as fixed pattern noise) corresponding to noise generated in the infrared imaging element unit 420.
  • the image correction apparatus 101 corrects the image recorded in the image recording unit 450 so as to remove fixed pattern noise using the image itself, and outputs the corrected image.
  • the constituent elements shown in FIG. 1 may be constituent elements in hardware units or constituent elements divided into functional units of the computer apparatus.
  • the components shown in FIG. 1 will be described as components divided into functional units of the computer apparatus.
  • the correction value is, for example, the offset itself.
  • the correction value may be a product of the offset and a predetermined value, or a value obtained by adding a predetermined value to the offset.
  • the input image is, for example, an image stored in the image recording unit 450.
  • the input image is an image in which a noise image is superimposed on the imaged body image.
  • the captured object image is an image corresponding to the radiation of the captured object.
  • the noise image is an image corresponding to variations in sensitivity of the plurality of infrared light receiving elements (also referred to as imaging elements). In other words, the offset is the noise image, under the assumption that the high-frequency component existing in common in the input images is noise arising from variations in sensitivity of the plurality of infrared light receiving elements.
  • the offset which is a high-frequency component that exists in common in the input image, will be described with a specific example.
  • FIG. 4 is a diagram illustrating a change in luminance value of each pixel of the noise image.
  • the horizontal axis (referred to as the p-axis) in FIG. 4 is the u coordinate position (v coordinate position is fixed) of the pixels constituting the image placed on the uv plane.
  • FIG. 6 is a diagram illustrating a change in the luminance value of each pixel of the input image in which the noise image illustrated in FIG. 4 is superimposed on the imaged-object image illustrated in FIG. 5.
  • the high-frequency components in the noise image are present at the same position even if the imaged images (FIGS. 5 and 7) are different.
  • This high frequency component is a high frequency component (offset) that exists in common in the input image.
  • i = 1, 2, ..., N, where N is the number of input images.
  • y_i is an input image treated as a vector (for example, an image corresponding to FIG. 6 or FIG. 8).
  • x_i is an imaged-object image treated as a vector (for example, an image corresponding to FIG. 5 or FIG. 7).
  • b_offset is the offset treated as a vector (that is, the noise image; for example, an image corresponding to FIG. 4).
  • Expression 2 is an expression showing the target energy (also referred to as target energy value) E, which is the average value of the energy of the high-frequency components of the images obtained by correcting the input images y_i with an offset b_offset.
  • C is, for example, a Laplacian filter matrix (high-pass filter). Note that C is not limited to a Laplacian filter matrix, but may be a matrix of any filter (for example, a Sobel filter) that can be used to detect a boundary (edge) of a region. Also, ||C(y_i - b_offset)||_2 denotes the L2 norm of C(y_i - b_offset).
  • each element of C may be a value that is appropriately set in advance in accordance with the content of the high-frequency component of “imaged object image and noise” obtained empirically or theoretically. With this setting, it is possible to arbitrarily control the content of the high-frequency component to be corrected (target to be reduced).
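The expressions themselves are not reproduced in this text. From the definitions of y_i, x_i, b_offset, and C above, Expressions 1 and 2 presumably have the following form (a hedged reconstruction, not a verbatim copy of the published formulas):

```latex
% Assumed form of Expression 1 (observation model): each input image is the
% imaged-object image plus the common offset (noise image).
y_i = x_i + b_{\mathrm{offset}}, \qquad i = 1, 2, \dots, N

% Assumed form of Expression 2 (target energy): the average energy of the
% high-frequency components of the images corrected with b_offset.
E = \frac{1}{N} \sum_{i=1}^{N} \left\| C \left( y_i - b_{\mathrm{offset}} \right) \right\|_2^{2}
```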
  • the correction value calculation unit 110 calculates the b_offset that minimizes the target energy E expressed by Expression 2 as “a correction value corresponding to the high-frequency component that exists in common in the N input images y_i”. Specifically, the correction value calculation unit 110 solves, for b_offset, the equation obtained by setting the partial derivative of Expression 2 with respect to b_offset to zero.
  • Any method may be used to solve this equation, for example a direct method (Gaussian elimination, LU decomposition into a lower triangular matrix L and an upper triangular matrix U, etc.), an iterative method (conjugate gradient method, etc.), or deconvolution in the frequency domain.
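As a concrete illustration of this step, the sketch below builds a Laplacian matrix C, forms the equation obtained by setting the derivative of the reconstructed Expression 2 to zero (C^T C (b_offset - y_mean) = 0, i.e. C b_offset = C y_mean), and solves it with an iterative least-squares method. The helper names and the choice of LSQR are illustrative assumptions, not the patented implementation.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def laplacian_matrix(h, w):
    """Sparse matrix form of a separable second-difference (Laplacian) filter,
    one possible choice of C for an h-by-w image flattened row-major."""
    lap1d = lambda n: sp.diags([1.0, -2.0, 1.0], [-1, 0, 1], shape=(n, n))
    return sp.kron(sp.identity(h), lap1d(w)) + sp.kron(lap1d(h), sp.identity(w))

def estimate_offset(input_images):
    """Estimate the b_offset that minimizes the (reconstructed) target energy E.

    input_images: list of (H, W) arrays y_i.
    Minimizing E is equivalent to solving C b = C y_mean; LSQR solves this in
    the least-squares sense and copes with the null space of the Laplacian.
    """
    h, w = input_images[0].shape
    C = laplacian_matrix(h, w)
    y_mean = np.mean([y.ravel() for y in input_images], axis=0)
    b = spla.lsqr(C, C @ y_mean)[0]
    return b.reshape(h, w)

# Usage sketch: the corrected images are simply y_i - b_offset.
# b = estimate_offset(frames); corrected = [y - b for y in frames]
```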
  • FIG. 9 is a diagram illustrating a hardware configuration of a computer 700 that realizes the image correction apparatus 100, the image correction apparatus 101, and the infrared imaging apparatus 102 according to the present embodiment.
  • the computer 700 includes a CPU (Central Processing Unit) 701, a storage unit 702, a storage device 703, an input unit 704, an output unit 705, and a communication unit 706. Furthermore, the computer 700 includes a recording medium (or storage medium) 707 supplied from the outside.
  • the recording medium 707 may be a non-volatile recording medium that stores information non-temporarily.
  • the CPU is also called a processor.
  • the computer 700 may be included as part of a large scale integrated circuit. That is, the image correction apparatus 100 may be included in a part of a large-scale integrated circuit.
  • the CPU 701 controls the overall operation of the computer 700 by operating an operating system (not shown).
  • the CPU 701 reads a program and data from a recording medium 707 mounted on the storage device 703, for example, and writes the read program and data to the storage unit 702.
  • the program is, for example, a program that causes the computer 700 to execute operations of flowcharts shown in FIGS. 10, 15, and 16 to be described later.
  • the CPU 701 executes various processes as the correction value calculation unit 110 and the correction processing unit 120 shown in FIGS. 1 and 2 according to the read program and based on the read data.
  • the CPU 701 may download a program or data to the storage unit 702 from an external computer (not shown) connected to a communication network (not shown).
  • the storage unit 702 stores programs and data.
  • the storage unit 702 may include an image recording unit 450.
  • the storage device 703 is, for example, an optical disc, a flexible disk, a magneto-optical disk, an external hard disk, or a semiconductor memory, and includes the recording medium 707.
  • the storage device 703 (recording medium 707) stores the program in a computer-readable manner.
  • the storage device 703 may store data.
  • the storage device 703 may include an image recording unit 450.
  • the input unit 704 may include the imaging unit 400.
  • the input unit 704 is realized by, for example, a mouse, a keyboard, a built-in key button, and the like, and is used for an input operation.
  • the input unit 704 is not limited to a mouse, a keyboard, and a built-in key button, and may be a touch panel, an accelerometer, a gyro sensor, a camera, or the like.
  • the output unit 705 is realized by a display, for example, and is used for confirming the output.
  • the communication unit 706 realizes an interface with the outside. In the case of the image correction apparatus 100 and the image correction apparatus 101, the communication unit 706 implements an interface with the image recording unit 450. In the case of the image correction device 100 and the image correction device 101, the communication unit 706 is included as a part of the correction value calculation unit 110 and a part of the correction processing unit 120.
  • each functional block of the image correction apparatus 100, the image correction apparatus 101, and the infrared imaging apparatus 102 shown in FIGS. 1, 2, and 3 is realized by the computer 700 having the hardware configuration shown in FIG. 9.
  • the means for realizing each unit included in the computer 700 is not limited to the above.
  • the computer 700 may be realized by a single physically integrated device, or by two or more physically separate devices connected by wire or wirelessly.
  • the recording medium 707 in which the above-described program code is recorded may be supplied to the computer 700, and the CPU 701 may read and execute the program code stored in the recording medium 707.
  • the CPU 701 may store the code of the program stored in the recording medium 707 in the storage unit 702, the storage device 703, or both. That is, the present embodiment includes an embodiment of a recording medium 707 that stores a program (software) executed by the computer 700 (CPU 701) temporarily or non-temporarily.
  • FIG. 10 is a flowchart showing the operation of the image correction apparatus 101 in this embodiment. Note that the processing according to this flowchart may be executed based on the above-described program control by the CPU. Further, the step name of the process is described by a symbol as in S601.
  • the correction value calculation unit 110 calculates and outputs a correction value (offset b_offset) based on a plurality of input images (S601).
  • the correction processing unit 120 corrects the input image using the correction value (offset b_offset) calculated by the correction value calculation unit 110 and outputs the corrected image (S602).
  • the first effect of the present embodiment described above is that a correction value for removing fixed pattern noise can be obtained without providing a mechanical structure in the imaging apparatus.
  • the reason is that the correction value calculation unit 110 calculates the correction value corresponding to the high-frequency component that exists in common in a plurality of input images.
  • the second effect of the present embodiment described above is that it is possible to remove fixed pattern noise from those input images without providing a mechanical structure in the imaging apparatus.
  • correction processing unit 120 uses the correction value calculated by the correction value calculation unit 110 to correct those input images.
  • FIG. 11 is a block diagram showing a configuration of an image correction apparatus 201 according to the second embodiment of the present invention.
  • the image correction apparatus 201 according to the present embodiment differs from the image correction apparatus 101 according to the first embodiment in that it includes a correction value calculation unit 210 instead of the correction value calculation unit 110 and a correction processing unit 220 instead of the correction processing unit 120.
  • the correction value is the difference between the offset and the temporary offset b_0.
  • the temporary offset b_0 is an initial value of the offset measured at the time of shipment of the infrared imaging element unit 420 shown in FIG. 3.
  • FIG. 12 is a diagram illustrating an example of the temporary offset table. As shown in FIG. 12, the table holds pairs of ambient temperature and offset.
  • the correction value calculation unit 210 may hold a temporary offset table in an internal storage unit (not shown).
  • Expression 4 is an expression that adds, to the average value of the energy of the high-frequency components of the “images obtained by correcting the input images y_i with the offset b_offset”, the energy of the “vector obtained by subtracting the temporary offset b_0 from the offset b_offset” (referred to as bar b).
  • Expression 5 is an expression obtained by rewriting Expression 4 so that it is expressed in terms of bar b.
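Again, the expressions are not reproduced in this text; from the description they presumably take the following form, where λ denotes the weight on the second term mentioned below (a hedged reconstruction):

```latex
% Assumed form of Expression 4: Expression 2 plus a term penalizing the
% distance between the offset and the temporary offset b_0.
E = \frac{1}{N} \sum_{i=1}^{N} \left\| C \left( y_i - b_{\mathrm{offset}} \right) \right\|_2^{2}
    + \lambda \left\| b_{\mathrm{offset}} - b_0 \right\|_2^{2}

% Assumed form of Expression 5: the same energy rewritten in terms of
% \bar{b} = b_offset - b_0 (so that y_i - b_offset = (y_i - b_0) - \bar{b}).
E = \frac{1}{N} \sum_{i=1}^{N} \left\| C \left( (y_i - b_0) - \bar{b} \right) \right\|_2^{2}
    + \lambda \left\| \bar{b} \right\|_2^{2}
```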
  • the correction value calculation unit 210 calculates the bar b that minimizes the target energy E represented by Expression 5 as “a correction value corresponding to the high-frequency component that is common to the N input images y_i”. Specifically, the correction value calculation unit 210 solves, for bar b, the equation obtained by setting the partial derivative of Expression 5 with respect to bar b to zero.
  • any method such as a direct method (Gauss elimination method, LU decomposition, etc.), an iterative method (conjugate gradient method, etc.), a deconvolution in frequency space, or the like may be used.
  • the second term of Expression 5, ||bar b||_2^2, constrains the solution of the equation so that each element of bar b has a value close to zero. That is, the second term serves to reduce the difference between the offset b_offset (the currently estimated noise) and the temporary offset b_0 (the noise measured as an initial value). In this way, the correction value calculation unit 210 calculates a more stable correction value than the correction value calculation unit 110.
  • a stable correction value is a correction value that does not result in a correction far removed from the temporary offset b_0.
  • a correction far removed from the temporary offset b_0 is, for example, a correction that sets the luminance of each element (pixel) to its maximum or minimum value merely in order to minimize the target energy E.
  • the stable correction value is an effective correction value even when a plurality of input images are similar.
  • the stable correction value is a correction value that leaves the high-frequency component of the image to be captured and removes the high-frequency component that is noise.
  • the weight applied to the second term is a predetermined value, determined theoretically or empirically so that the balance between the effect of the first term (reducing the high-frequency components) and the effect of the second term (stabilizing the offset b_offset) in Expression 5 is optimal.
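Under the reconstruction above, setting the derivative of Expression 5 with respect to bar b to zero gives the linear system (C^T C + λI) bar_b = C^T C z_mean, with z_mean the mean of (y_i - b_0). A hedged Python sketch of this second-embodiment style calculation, reusing the hypothetical laplacian_matrix helper from the earlier sketch, might look as follows; the conjugate gradient solver is one of the iterative methods mentioned above.

```python
import numpy as np
import scipy.sparse as sp
import scipy.sparse.linalg as spla

def estimate_offset_regularized(input_images, b0, lam=0.1):
    """Estimate b_offset in the second-embodiment style (sketch).

    Solves (C^T C + lam*I) b_bar = C^T C z_mean, where z_mean is the mean of
    (y_i - b0), then returns b_offset = b0 + b_bar. lam plays the role of the
    weight on the second term of the reconstructed Expression 5.
    """
    h, w = input_images[0].shape
    C = laplacian_matrix(h, w)          # hypothetical helper from the earlier sketch
    z_mean = np.mean([(y - b0).ravel() for y in input_images], axis=0)

    A = (C.T @ C + lam * sp.identity(h * w)).tocsr()   # symmetric positive definite
    rhs = C.T @ (C @ z_mean)
    b_bar, info = spla.cg(A, rhs)
    if info != 0:
        raise RuntimeError("conjugate gradient did not converge")
    return b0 + b_bar.reshape(h, w)
```

Because the second term makes the system positive definite, the solution is unique and stays close to the temporary offset b_0, which is exactly the stabilizing effect described above.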
  • FIG. 13 is a block diagram illustrating a configuration of a modified image correction apparatus 200 according to the present embodiment.
  • the image correction apparatus 200 includes a correction value calculation unit 210 and does not include a correction processing unit 220. That is, the image correction apparatus 200 outputs the correction value calculated by the correction value calculation unit 210.
  • the first effect of the present embodiment described above is that, in addition to the effect of the first embodiment, it is possible to output a stable correction value.
  • correction value calculation unit 210 calculates the correction value so as to reduce the difference between the offset b_offset and the temporary offset b_0, in addition to reducing the high-frequency components.
  • the second effect of the present embodiment described above is that the fixed pattern noise of those input images can be more stably removed.
  • correction processing unit 220 corrects these input images using the stable correction value calculated by the correction value calculation unit 210.
  • FIG. 14 is a block diagram showing a configuration of an image correction apparatus 300 according to the third embodiment of the present invention.
  • the image correction apparatus 300 further includes a correction value calculation trigger generation unit 330 as compared with the image correction apparatus 100 according to the first embodiment.
  • the correction value calculation trigger generation unit 330 receives, for example, an input of a moving image, and notifies the correction value calculation unit 110 of a correction value calculation trigger each time it counts a predetermined number of frames that become the plurality of input images. In this case, the correction value calculation trigger generation unit 330 may count the frames continuously, or may count every predetermined number of frames. The number of frames to be counted and the predetermined counting interval may be set from the outside (for example, via the input unit 704 shown in FIG. 9).
  • the correction value calculation trigger generation unit 330 notifies the correction value calculation unit 110 of a correction value calculation trigger at a predetermined time (for example, at regular time intervals or at a specific time).
  • the predetermined time may be set from the outside (for example, the input unit 704 shown in FIG. 9).
  • the correction value calculation trigger generation unit 330 may monitor the output of the correction value by the correction value calculation unit 110. If the correction value calculation unit 110 does not output a correction value within a predetermined time, the correction value calculation trigger generation unit 330 may notify the correction value calculation unit 110 of the next correction value calculation trigger at an earlier timing, regardless of the above-described predetermined time.
  • in correspondence with the correction value calculation trigger, the correction value calculation trigger generation unit 330 may also output, from the received moving image, a predetermined number of frames serving as the plurality of input images to the correction value calculation unit 110.
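A minimal sketch of such a frame-counting trigger generator is shown below; the class and method names are illustrative assumptions, not part of the described apparatus.

```python
class CorrectionTriggerGenerator:
    """Counts incoming frames and fires a correction value calculation trigger
    every `frames_per_trigger` frames, handing the buffered frames to a
    callback (for example, the correction value calculation unit)."""

    def __init__(self, frames_per_trigger, on_trigger):
        self.frames_per_trigger = frames_per_trigger   # may be set from the outside
        self.on_trigger = on_trigger
        self.buffer = []

    def push_frame(self, frame):
        self.buffer.append(frame)
        if len(self.buffer) >= self.frames_per_trigger:
            self.on_trigger(list(self.buffer))          # notify with the collected frames
            self.buffer.clear()
```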
  • the image correction apparatus 101 shown in FIG. 2, the image correction apparatus 201 shown in FIG. 11, and the image correction apparatus 200 shown in FIG. 13 may also include a correction value calculation trigger generation unit 330.
  • the correction processing unit 120 of the image correction apparatus 101 receives and holds the correction value every time the correction value calculation unit 110 outputs a new correction value.
  • the correction processing unit 220 of the image correction apparatus 201 receives and holds the correction value every time the correction value calculation unit 210 outputs a new correction value. Then, each of the correction processing unit 120 and the correction processing unit 220 corrects and outputs the input image using the stored correction value.
  • FIG. 15 is a flowchart showing the operation of the image correction apparatus 300. Note that the processing according to this flowchart may be executed based on the above-described program control by the CPU.
  • the correction value calculation trigger generation unit 330 notifies the correction value calculation unit 110 of the correction value calculation trigger (S631).
  • the correction value calculation unit 110 calculates and outputs a correction value (offset b_offset) based on a plurality of input images (S632).
  • the effect of the present embodiment described above is that, in addition to the effect of the first embodiment, the load of generating correction values can be kept appropriate.
  • correction value calculation trigger generation unit 330 notifies the correction value calculation unit 110 of the correction value calculation trigger.
  • the configuration of the fourth embodiment may be the same as the configuration of the image correction apparatus 100 in FIG. 1 or the configuration of the image correction apparatus 300 in FIG.
  • the configuration of the modified example of the fourth embodiment may be the same as the configuration of the image correction apparatus 101 in FIG.
  • the configuration of the infrared imaging device 102 including the image correction device 100 of the fourth embodiment may be the same as the configuration shown in FIG.
  • the image correction apparatus 100 and the image correction apparatus 101 of the present embodiment are different in operation of the correction value calculation unit 110 from the image correction apparatus 100 and the image correction apparatus 101 of the first embodiment.
  • for example, consider a case in which the internal temperature of the infrared imaging device 102 rises rapidly during the time required to capture the plurality of input images. In this case, the characteristics of the infrared light receiving elements vary with the change in internal temperature, so b_offset itself changes, and a b_offset that makes the target energy E sufficiently low cannot be obtained.
  • in such a case, the correction value calculation unit 110, for example, does not output a correction value when it cannot calculate, at a certain correction value calculation opportunity, a correction value that makes the target energy E equal to or less than a predetermined value. Then, at the next correction value calculation opportunity, the correction value calculation unit 110 calculates and outputs a correction value based on a plurality of new input images.
  • in the meantime, the correction processing unit 120 corrects the input image using the correction value previously output from the correction value calculation unit 110.
  • the input image is a moving image frame.
  • the correction value calculation unit 110 uses, for example, 10 frames taken at arbitrary intervals (for example, selected every 5 frames) from consecutive frames of the moving image as the plurality of input images.
  • at a certain correction value calculation opportunity, the correction value calculation unit 110 calculates b_offset using the 10 frames as the plurality of input images.
  • the correction value calculation unit 110 calculates E by applying the calculated b_offset to Expression 2.
  • the correction value calculation unit 110 compares the calculated E with a predetermined threshold value, and when the calculated E is equal to or less than the threshold, outputs the b_offset as a correction value.
  • otherwise (when the calculated E exceeds the threshold), the correction value calculation unit 110 does not output the b_offset as a correction value.
  • the correction processing unit 120 may correct the input image using the correction value received before that.
  • the correction processing unit 120 may correct the input image using a predetermined correction value when no correction value has been received after initialization.
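Putting the steps of this embodiment together, a hedged sketch of the check (reusing the hypothetical laplacian_matrix and estimate_offset helpers from the first-embodiment sketch) might look like this; returning None corresponds to not outputting a correction value, so the caller keeps using the correction value it received earlier.

```python
import numpy as np

def calculate_and_validate_offset(frames, threshold):
    """Estimate b_offset from the given frames, evaluate the target energy E of
    the reconstructed Expression 2, and return b_offset only when E is at or
    below the threshold; otherwise return None (no new correction value)."""
    h, w = frames[0].shape
    C = laplacian_matrix(h, w)     # hypothetical helper from the earlier sketch
    b = estimate_offset(frames)    # hypothetical helper from the earlier sketch
    E = np.mean([np.sum((C @ (y.ravel() - b.ravel())) ** 2) for y in frames])
    return b if E <= threshold else None
```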
  • FIG. 16 is a flowchart showing the operation of the image correction apparatus 300 shown in FIG. 14 in the present embodiment. Note that the processing according to this flowchart may be executed based on the above-described program control by the CPU.
  • the correction value calculation trigger generation unit 330 notifies the correction value calculation unit 110 of a correction value calculation trigger including 10 frames (S641).
  • the correction value calculation unit 110 receives the correction value calculation trigger, and calculates b_offset using the 10 frames as the plurality of input images (S642).
  • the correction value calculation unit 110 calculates E by applying the calculated b_offset to Expression 2 (S643).
  • the correction value calculation unit 110 compares the calculated E with a predetermined threshold value (S644). When the calculated E is equal to or smaller than the threshold (YES in S644), the correction value calculation unit 110 outputs the b_offset as a correction value (S645), and the process returns to S641. When the calculated E exceeds the threshold (NO in S644), the correction value calculation unit 110 does not output the b_offset, and the process returns to S641.
  • the effect of the present embodiment described above is that, in addition to the effect of the first embodiment, it is possible to prevent an inappropriate correction value from being output.
  • the reason is that, when the correction value calculation unit 110 cannot calculate a correction value such that the target energy E is equal to or less than the predetermined value, it calculates and outputs the correction value based on a plurality of new input images instead of the plurality of input images used at that time.
  • FIG. 17 is a block diagram showing a configuration of an infrared imaging system 105 according to the fifth embodiment of the present invention.
  • the infrared imaging system 105 in the present embodiment includes an imaging unit 504 and an image processing unit 505.
  • the imaging unit 504 and the image processing unit 505 are connected via a network (not shown).
  • each of the imaging unit 504 and the image processing unit 505 may be an arbitrary number.
  • the imaging unit 504 transmits the digital signal output from the A / D conversion circuit 440 via a communication circuit 460 to a network (not shown).
  • the image processing unit 505 receives the digital signal output from the imaging unit 504 from a network (not shown) via the communication unit 510.
  • the communication unit 510 outputs the received digital signal to the image recording unit 450.
  • the image processing unit 505 may include, in place of the image correction apparatus 100, any of the image correction apparatus 101 shown in FIG. 2, the image correction apparatus 201 shown in FIG. 11, the image correction apparatus 200 shown in FIG. 13, and the image correction apparatus 300 shown in FIG. 14.
  • the effect of the present embodiment described above is that the reliability and availability of the infrared imaging system 105 can be improved in addition to the effects of the first embodiment and the second embodiment.
  • the reason is that the imaging unit 504 and the image processing unit 505 including the image correction apparatus 100 are connected via a network. That is, since a plurality of imaging units 504 with few components can be connected to one image processing unit 505, multiplexing and expansion of the imaging units 504 are facilitated.
  • each component described in each of the above embodiments does not necessarily need to be an independent entity.
  • a plurality of components may be realized as one module, or one component may be realized as a plurality of modules.
  • a component may be configured such that it is a part of another component, or such that a part of one component overlaps a part of another component.
  • each component, and the module that realizes each component, may be realized by hardware if necessary, may be realized by a computer and a program, or may be realized by a mixture of hardware modules, computers, and programs.
  • a plurality of operations are not limited to being executed at different timings. For example, another operation may occur during the execution of a certain operation, or the execution timing of a certain operation and another operation may partially or entirely overlap.
  • in each of the embodiments described above, it is described that a certain operation triggers another operation, but this description does not limit all relationships between that operation and the other operations. For this reason, when each embodiment is implemented, the relationships among the plurality of operations can be changed within a range that does not hinder the contents.
  • the specific description of each operation of each component does not limit each operation of each component. For this reason, each specific operation of each component may be changed, in implementing each embodiment, within a range that does not cause a problem with respect to its functional, performance, and other characteristics.
  • Appendix 1 An image correction apparatus including correction value calculation means for calculating and outputting, based on a plurality of input images in which a noise image corresponding to variations in sensitivity of a plurality of imaging elements is superimposed on an imaged-object image corresponding to radiation of the imaged object, a correction value corresponding to an offset assumed to be a high-frequency component existing in common in the plurality of input images.
  • the target energy value is an addition value obtained by further adding an energy value corresponding to a subtraction value when a temporary offset corresponding to an ambient temperature is subtracted from the offset to the energy value.
  • Appendix 6 The image correction apparatus according to any one of appendices 1 to 5, wherein the plurality of input images are a plurality of frames at arbitrary intervals among a plurality of continuous frames of moving image data.
  • the image correction unit corrects the input image using the correction value already output from the correction value calculation unit when the correction value is not newly output from the correction value calculation unit.
  • the image correction apparatus according to appendix 7.
  • Appendix 9 An imaging apparatus comprising the image correction apparatus according to any one of appendices 1 to 8.
  • the target energy value is an addition value obtained by further adding an energy value corresponding to a subtraction value when a temporary offset corresponding to an ambient temperature is subtracted from the offset to the energy value.
  • the image correction method as described.
  • Appendix 14 The image correction method according to any one of appendices 10 to 13, wherein an opportunity for calculating a correction value is generated and the correction value is calculated based on the opportunity.
  • Appendix 15 The image correction method according to any one of appendices 10 to 14, wherein the plurality of input images are a plurality of frames at arbitrary intervals among a plurality of consecutive frames of moving image data.
  • Appendix 16 The image correction method according to any one of appendices 10 to 15, wherein the input image is corrected and output using the correction value.
  • the target energy value is an addition value obtained by further adding an energy value corresponding to a subtraction value when a temporary offset corresponding to an ambient temperature is subtracted from the offset to the energy value.
  • Appendix 23 The program according to any one of appendices 18 to 22, wherein the plurality of input images are a plurality of the frames at arbitrary intervals among a plurality of consecutive frames of moving image data.
  • Appendix 24 The program according to any one of appendices 18 to 23, further causing a computer to execute a process of correcting and outputting the input image using the correction value.
  • An image correction apparatus in which the correction value calculation unit calculates and outputs, based on a plurality of input images in which noise images corresponding to variations in sensitivity of the plurality of imaging elements are superimposed on an imaged-object image corresponding to radiation of the imaged object, a correction value corresponding to an offset assumed to be a high-frequency component that exists in common in the plurality of input images.
  • the present invention can be applied to a device that receives and processes information from a sensor group in which each sensor generates unique noise due to variations in sensitivity of a plurality of sensors.

Landscapes

  • Engineering & Computer Science (AREA)
  • Multimedia (AREA)
  • Signal Processing (AREA)
  • Studio Devices (AREA)

Abstract

The present invention provides an image correction device for obtaining a correction value against noise caused by variation in sensitivity of an image-capturing sensor, without requiring a mechanical structure for that purpose in an image-capturing device. This image correction device is provided with a correction value calculation means for calculating and then outputting a correction value corresponding to an offset assumed to be a "high frequency component coexisting in a plurality of input images," the calculation being made on the basis of the plurality of input images, each of said images having a noise image corresponding to variation in sensitivity of a plurality of image-capturing sensors superposed on an image of a subject to be imaged corresponding to radiation from the subject to be imaged.

Description

Image correction apparatus and image correction method
 The present invention relates to an image correction apparatus that corrects fixed pattern noise, an image correction method, and a program therefor.
 In an imaging apparatus, various related techniques are known for removing information corresponding to noise from an input image mixed with noise and obtaining an image that accurately corresponds to subject radiation.
 Patent Document 1 discloses an infrared imaging device that removes fixed pattern noise. Here, the fixed pattern noise is noise caused by variations in the characteristics of the pixels of the infrared sensor.
 The infrared imaging apparatus described in Patent Document 1 removes fixed pattern noise for each pixel by using a technique called Scene-based NUC (Nonuniformity Correction) or Scene-based FPN (Fixed Pattern Noise) correction. Scene-based NUC calculates a correction value from the time average and average deviation of a plurality of frames for each pixel, based on the premise that “the time average and variance of each pixel value are constant for all pixels”. Then, Scene-based NUC removes the fixed pattern noise for each pixel using the correction value, and estimates the original imaged-object image.
 Here, in order to satisfy the premise that “the time average and variance of each pixel value are constant for all pixels”, the input light to each of the pixels of the infrared sensor needs to change uniformly. For this reason, the infrared imaging device has a mechanism for changing the field of view of the infrared sensor with respect to incident infrared light.
 In other words, the accuracy of the corrected imaged-object image in Scene-based NUC rests on the premise that “the time average and variance of the pixel values are constant for all pixels”. Therefore, Scene-based NUC requires a large number of input images in which the input light to each pixel changes. The infrared imaging apparatus described in Patent Document 1 discloses a technique for changing the input light to each pixel without moving the camera body, even if the subject is a stationary object.
 Patent Document 2 discloses an imaging apparatus that corrects an output luminance value of an infrared imaging element. The imaging apparatus described in Patent Document 2 images each of a plurality of imaging objects having “different uniform illuminances”. Then, from the captured images, the imaging apparatus obtains a luminance value unique to each imaging element (corresponding to a correction value). Based on the calculated luminance value, the imaging apparatus corrects the output luminance value so as to remove the fixed pattern noise for each pixel. Such a correction technique is generally referred to as Reference based NUC (Nonuniformity Correction) or Calibration based NUC.
 JP 2009-207072 A; JP 2011-044813 A
 However, the techniques described in the above-mentioned prior art documents have a problem in that a mechanical structure is required in the imaging device in order to obtain a correction value.
 For example, the infrared imaging device described in Patent Document 1 has a mechanism for changing the field of view of the infrared sensor with respect to incident infrared light, as described above.
 In addition, the imaging apparatus of Patent Document 2 using Reference based NUC requires a shutter mechanism that blocks light entering the imaging element in the image capturing unit. The reason is that it is necessary to acquire an image of an imaging object having uniform illuminance in real time in order to dynamically perform appropriate correction.
 An object of the present invention is to provide an image correction apparatus, an image correction method, and a program therefor that can solve the above-described problems.
 An image correction apparatus according to one aspect of the present invention includes correction value calculation means for calculating and outputting, based on a plurality of input images in which a noise image corresponding to variations in sensitivity of a plurality of imaging elements is superimposed on an imaged-object image corresponding to radiation of the imaged object, a correction value corresponding to an offset assumed to be a high-frequency component existing in common in the plurality of input images.
 An image correction method according to one aspect of the present invention calculates and outputs, based on a plurality of input images in which a noise image corresponding to variations in sensitivity of a plurality of imaging elements is superimposed on an imaged-object image corresponding to radiation of the imaged object, a correction value corresponding to an offset assumed to be a high-frequency component existing in common in the plurality of input images.
 A non-transitory computer-readable recording medium according to one aspect of the present invention records a program that causes a computer to execute a process of calculating and outputting, based on a plurality of input images in which a noise image corresponding to variations in sensitivity of a plurality of imaging elements is superimposed on an imaged-object image corresponding to radiation of the imaged object, a correction value corresponding to an offset assumed to be a high-frequency component existing in common in the plurality of input images.
 The present invention has an effect that a correction value for removing fixed pattern noise can be obtained without providing a mechanical structure in the imaging apparatus.
 FIG. 1 is a block diagram showing a configuration of an image correction apparatus according to the first embodiment of the present invention.
 FIG. 2 is a block diagram illustrating a configuration of a modified image correction apparatus according to the first embodiment.
 FIG. 3 is a block diagram illustrating a configuration of an infrared imaging device including the image correction device according to the first embodiment.
 FIG. 4 is a diagram illustrating an example of a change in luminance value of each pixel of fixed pattern noise.
 FIG. 5 is a diagram illustrating an example of a change in luminance value of each pixel of the imaged-object image.
 FIG. 6 is a diagram illustrating an example of a change in luminance value of each pixel of the input image.
 FIG. 7 is a diagram illustrating an example of a change in luminance value of each pixel of the imaged-object image.
 FIG. 8 is a diagram illustrating an example of a change in luminance value of each pixel of the input image.
 FIG. 9 is a block diagram illustrating a hardware configuration of a computer that implements the image correction apparatus according to the first embodiment.
 FIG. 10 is a flowchart showing an outline of the operation of the image correction apparatus according to the first embodiment.
 FIG. 11 is a block diagram showing a configuration of an image correction apparatus according to the second embodiment of the present invention.
 FIG. 12 is a diagram illustrating an example of a temperature offset table in the second embodiment.
 FIG. 13 is a block diagram illustrating a configuration of a modified image correction apparatus according to the second embodiment.
 FIG. 14 is a block diagram showing a configuration of an image correction apparatus according to the third embodiment of the present invention.
 FIG. 15 is a flowchart illustrating an outline of the operation of the image correction apparatus according to the third embodiment.
 FIG. 16 is a flowchart showing an outline of the operation of the image correction apparatus in the fourth embodiment of the present invention.
 FIG. 17 is a block diagram showing a configuration of an infrared imaging system according to the fifth embodiment of the present invention.
 Embodiments for carrying out the present invention will be described in detail with reference to the drawings. In each drawing and each embodiment described in the specification, the same reference numerals are given to similar components, and their description is omitted as appropriate.
 <<< First Embodiment >>>
 FIG. 1 is a block diagram showing a configuration of an image correction apparatus 100 according to the first embodiment of the present invention.
 Referring to FIG. 1, the image correction apparatus 100 according to the present embodiment includes a correction value calculation unit 110.
 The correction value calculation unit 110 calculates and outputs a correction value based on a plurality of input images. The input image is an image in which a noise image corresponding to variations in sensitivity of a plurality of imaging elements is superimposed on an imaged-object image corresponding to radiation of the imaged object. The correction value is a correction value corresponding to an offset assumed to be “a high-frequency component that exists in common in the plurality of input images”.
 For example, the correction value calculation unit 110 includes a means for calculating a correction value (a processor or a circuit that executes a program) and a means for outputting the calculated correction value (a processor or a circuit that executes a program).
 With the above-described configuration, the image correction apparatus 100 can obtain a correction value for removing fixed pattern noise without providing a mechanical structure in the imaging apparatus.
 Hereinafter, the image correction apparatus 100 according to the present embodiment will be described more specifically.
 FIG. 2 is a block diagram showing the configuration of a modified image correction apparatus 101 according to the present embodiment.
 Referring to FIG. 2, the image correction apparatus 101 according to the present embodiment includes a correction value calculation unit 110 and a correction processing unit 120. The correction value calculation unit 110 in FIG. 2 is the correction value calculation unit 110 of the image correction apparatus 100 shown in FIG. 1.
 FIG. 3 is a block diagram showing a configuration of an infrared imaging apparatus 102 including the image correction apparatus 101 according to the present embodiment.
 Referring to FIG. 3, the infrared imaging apparatus 102 according to the present embodiment includes an imaging unit 400, an image recording unit 450, and the image correction apparatus 101.
 First, an outline of the operation of the infrared imaging apparatus 102 according to the present embodiment will be described.
 The imaging unit 400 receives infrared light emitted from the imaged object, converts it into a digital signal indicating the luminance of each pixel, and outputs the digital signal.
 The imaging unit 400 includes a lens 410, an infrared imaging element unit 420, an amplification circuit 430, and an A/D (Analogue/Digital) conversion circuit 440.
 The lens 410 receives the infrared light emitted from the imaged object, refracts the infrared light, and forms a real image on the infrared imaging element unit 420 side.
 The infrared imaging element unit 420 includes a plurality of infrared light receiving elements (also referred to as imaging elements, not shown) arranged on a plane so as to correspond to the pixels of the captured image. Each infrared light receiving element receives the infrared light that has passed through the lens 410, converts it into an electrical signal, and outputs it.
 In the infrared imaging element unit 420, each of the infrared light receiving elements generates unique noise due to variations in sensitivity of the infrared light receiving elements. The electrical signal output from the infrared imaging element unit 420 includes this noise.
 The amplification circuit 430 receives the electrical signal output from the infrared light receiving elements (the infrared imaging element unit 420), and outputs an analog signal amplified so that the A/D conversion circuit 440 can process it.
 The A/D conversion circuit 440 receives the analog signal output from the amplification circuit 430, converts it into a digital signal, and outputs it.
 The image recording unit 450 receives the digital signal output from the imaging unit 400, generates an image based on the digital signal, and records it in a recording means (not shown) in the image recording unit 450. This image is an image including a noise image (hereinafter referred to as fixed pattern noise) corresponding to the noise generated in the infrared imaging element unit 420.
 The image correction apparatus 101 corrects the image recorded in the image recording unit 450 so as to remove the fixed pattern noise using the image itself, and outputs the corrected image.
 The above is the outline of the operation of the infrared imaging apparatus 102 according to the present embodiment.
 次に、本実施形態における画像補正装置101が備える各構成要素について説明する。尚、図1に示す構成要素は、ハードウェア単位の構成要素でも、コンピュータ装置の機能単位に分割した構成要素でもよい。ここでは、図1に示す構成要素は、コンピュータ装置の機能単位に分割した構成要素として説明する。 Next, each component included in the image correction apparatus 101 according to the present embodiment will be described. The constituent elements shown in FIG. 1 may be constituent elements in hardware units or constituent elements divided into functional units of the computer apparatus. Here, the components shown in FIG. 1 will be described as components divided into functional units of the computer apparatus.
 ===補正値算出部110===
 上述したように、補正値算出部110は、複数の入力画像に基づいて、それらの入力画像に共通して存在する高周波成分であると仮定したオフセットに対応する、補正値を算出する。補正値は、例えば、オフセットそのものである。尚、補正値は、オフセットと所定の値との積や、オフセットに対して所定の値を加算した値であってもよい。
=== Correction Value Calculation Unit 110 ===
As described above, the correction value calculation unit 110 calculates a correction value corresponding to an offset assumed to be a high-frequency component that exists in common in the input images, based on the plurality of input images. The correction value is, for example, the offset itself. The correction value may be a product of the offset and a predetermined value, or a value obtained by adding a predetermined value to the offset.
 ここで、その入力画像は、例えば、画像記録部450が記憶している画像である。その入力画像は、被撮像体画像にノイズ画像が重ねられた画像である。その被撮像体画像は、被撮像体の放射に対応する画像である。そのノイズ画像は、複数の赤外線受光素子(撮像素子とも呼ばれる)の感度のばらつきに対応する画像である。従って、換言すると、そのオフセットは、それらの入力画像に共通して存在する高周波成分を、複数の赤外線受光素子の感度のばらつきにより発生するノイズであると見做した場合の、ノイズ画像である。 Here, the input image is, for example, an image stored in the image recording unit 450. The input image is an image in which a noise image is superimposed on the imaged body image. The captured object image is an image corresponding to the radiation of the captured object. The noise image is an image corresponding to variations in sensitivity of a plurality of infrared light receiving elements (also referred to as imaging elements). Therefore, in other words, the offset is a noise image when a high-frequency component that exists in common in the input images is considered to be noise generated due to variations in sensitivity of the plurality of infrared light receiving elements.
 ここで、入力画像に共通して存在する高周波成分であるオフセットについて、具体例を示して、説明する。 Here, the offset, which is a high-frequency component that exists in common in the input image, will be described with a specific example.
 図4は、ノイズ画像の各画素の輝度値の変化を示す図である。図4の横軸(p軸と呼ぶ)は、u-v平面に置いた画像を構成する画素のu座標位置(v座標位置は固定)である。図4の縦軸は、輝度値である。図4を参照すると、p=p01、p=p02及びp=p03に、高周波成分がある。 FIG. 4 is a diagram illustrating a change in luminance value of each pixel of the noise image. The horizontal axis (referred to as the p-axis) in FIG. 4 is the u coordinate position (v coordinate position is fixed) of the pixels constituting the image placed on the uv plane. The vertical axis in FIG. 4 is the luminance value. Referring to FIG. 4, there are high frequency components at p = p 01 , p = p 02 and p = p 03 .
 図5は、図4と同じ座標系に示した、被撮像体画像の各画素の輝度値の変化を示す図である。図5を参照すると、p=p21、p=p22、p=p23、p=p24、p=p25及びp=p26に、高周波成分がある。 FIG. 5 is a diagram illustrating a change in luminance value of each pixel of the captured image, which is shown in the same coordinate system as FIG. Referring to FIG. 5, there are high frequency components at p = p 21 , p = p 22 , p = p 23 , p = p 24 , p = p 25 and p = p 26 .
 図6は、図5に示す被撮像体画像に、図4に示すノイズ画像が重ねられた入力画像の各画素の輝度値の変化を示す図である。図6を参照すると、p=p01、p=p02、p=p03、p=p21、p=p22、p=p23、p=p24、p=p25及びp=p26に、高周波成分がある。 FIG. 6 is a diagram illustrating a change in the luminance value of each pixel of the input image in which the noise image illustrated in FIG. 4 is superimposed on the imaging target image illustrated in FIG. Referring to FIG. 6, p = p 01 , p = p 02 , p = p 03 , p = p 21 , p = p 22 , p = p 23 , p = p 24 , p = p 25 and p = p 26 There is a high frequency component.
 図7は、図5の場合とは異なる被撮像体を撮像した場合の、被撮像体画像の各画素の輝度値の変化を示す図である。図7を参照すると、p=p31、p=p32、p=p33及びp=p34に、高周波成分がある。 FIG. 7 is a diagram illustrating a change in the luminance value of each pixel of the captured image when a captured object different from that in FIG. 5 is captured. Referring to FIG. 7, there are high frequency components at p = p 31 , p = p 32 , p = p 33 and p = p 34 .
 図8は、図7に示す被撮像体画像に、図4に示すノイズ画像が重ねられた入力画像の各画素の輝度値の変化を示す図である。図8を参照すると、p=p01、p=p02、p=p03、p=p31、p=p32、p=p33及びp=p34に、高周波成分がある。 FIG. 8 is a diagram illustrating a change in the luminance value of each pixel of the input image in which the noise image illustrated in FIG. 4 is overlaid on the imaging target image illustrated in FIG. Referring to FIG. 8, there are high-frequency components at p = p 01 , p = p 02 , p = p 03 , p = p 31 , p = p 32 , p = p 33 and p = p 34 .
 2つの入力画像(図6及び図8)において、被撮像体画像(図5及び図7)が異なっても、ノイズ画像(図4)における高周波成分は、同一の位置に存在する。この高周波成分が、入力画像に共通して存在する高周波成分(オフセット)である。 In the two input images (FIGS. 6 and 8), the high-frequency components in the noise image (FIG. 4) are present at the same position even if the imaged images (FIGS. 5 and 7) are different. This high frequency component is a high frequency component (offset) that exists in common in the input image.
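As a toy illustration of this point (not part of the original disclosure), the following Python sketch builds two synthetic input images that share the same fixed-pattern spikes but contain different scene content, and checks which high-frequency responses appear in both frames; the image size, pixel values, and the use of scipy's Laplacian filter are illustrative assumptions.

```python
# Toy sketch only: the high-frequency component common to all frames is the fixed pattern.
import numpy as np
from scipy.ndimage import laplace

fpn = np.zeros((64, 64))
fpn[10, 20] = fpn[30, 5] = fpn[50, 40] = 5.0          # fixed-pattern spikes (cf. FIG. 4)

def scene(shift):
    img = np.zeros((64, 64))
    img[:, 20 + shift:40 + shift] = 10.0               # a bright band whose edges move per frame
    return img

frames = [scene(s) + fpn for s in (0, 7)]              # two input images (cf. FIG. 6 and FIG. 8)
highpass = [np.abs(laplace(f)) for f in frames]        # per-frame high-frequency response
common = np.minimum(*highpass)                         # responses present in both frames
print(np.argwhere(common > 1.0))                       # only pixels around the fixed-pattern spikes remain
```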
 次に、補正値算出部110が、オフセットを算出する処理の一例を説明する。 Next, an example of processing in which the correction value calculation unit 110 calculates an offset will be described.
 N個の入力画像と被撮像体画像とノイズ画像との関係は、以下に示す式1で表される。尚、i=1,2,・・・,Nである。yはベクトルとして扱われる入力画像(例えば、図6、図8に対応する画像)である。xはベクトルとして扱われる被撮像体画像(例えば、図5、図7に対応する画像)である。boffsetはベクトルとして扱われるオフセット(即ち、ノイズ画像、例えば図4に対応する画像)である。
The relationship among the N input images, the imaged object images, and the noise image is expressed by Equation 1 below, where i = 1, 2, …, N. y_i is an input image treated as a vector (for example, an image corresponding to FIG. 6 or FIG. 8), x_i is an imaged object image treated as a vector (for example, an image corresponding to FIG. 5 or FIG. 7), and b_offset is the offset treated as a vector (that is, the noise image, for example an image corresponding to FIG. 4).

(Equation 1)  y_i = x_i + b_offset  (i = 1, 2, …, N)
 式2は、入力画像yをオフセットboffsetで補正した画像の、高周波成分のエネルギーの平均値であるターゲットエネルギー(ターゲットエネルギー値とも呼ばれる)Eを示す式である。ここで、Cは、例えば、ラプラシアンフィルタ行列(High-Passフィルタ)である。尚、Cは、ラプラシアンフィルタ行列に限らず、領域の境界(エッジ)を検出するために利用できる任意のフィルタ(例えば、ソーベル(Sobel)フィルタ)の行列であってもよい。また、「∥C(y-boffset)∥」は、「C(y-boffset)のL2ノルム」を示す。 Equation 2 expresses the target energy (also referred to as the target energy value) E, which is the average value of the energy of the high-frequency components of the images obtained by correcting the input images y_i with the offset b_offset. Here, C is, for example, a Laplacian filter matrix (a high-pass filter). C is not limited to a Laplacian filter matrix, and may be the matrix of any filter usable for detecting region boundaries (edges), for example a Sobel filter. "‖C(y_i − b_offset)‖_2" denotes the L2 norm of C(y_i − b_offset).
 また、Cの各要素の値は、経験的或いは理論的に取得される、「被撮像体画像及びノイズ」の高周波成分の内容などに応じて、予め適切に設定される値であってよい。この設定によって、補正の対象(削減する対象)とする高周波成分の内容を、任意に制御することができる。
The value of each element of C may be a value set appropriately in advance, according to the content of the high-frequency components of the imaged object image and the noise obtained empirically or theoretically. With this setting, the content of the high-frequency components to be corrected (to be reduced) can be controlled arbitrarily.

(Equation 2)  E = (1/N) Σ_{i=1}^{N} ‖C(y_i − b_offset)‖₂²
 補正値算出部110は、式2で表されるターゲットエネルギーEが最小になるboffsetを、「N個の入力画像yiに共通して存在する高周波成分に対応する、補正値」として算出する。具体的には、補正値算出部110は、式2のboffsetに関する偏微分を0として得られる方程式を、boffsetについて解く。方程式を解く手法としては、直接法(ガウス消去法、LU(a lower triangular matrix L and an upper triangular matrix U)分解など)、反復法(共役勾配法など)、周波数空間でのデコンボリューション(Deconvolution)など、任意の手法を使用してよい。 The correction value calculation unit 110 calculates the b_offset that minimizes the target energy E expressed by Equation 2 as the "correction value corresponding to the high-frequency component that exists in common in the N input images y_i". Specifically, the correction value calculation unit 110 solves, for b_offset, the equation obtained by setting the partial derivative of Equation 2 with respect to b_offset to zero. Any method may be used to solve the equation, such as a direct method (Gaussian elimination, LU (a lower triangular matrix L and an upper triangular matrix U) decomposition, and the like), an iterative method (such as the conjugate gradient method), or deconvolution in the frequency domain.
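To make this concrete, the following is a minimal Python sketch (an assumption, not the patented implementation) of one way the minimization of Equation 2 could be carried out: C is taken to be a Laplacian high-pass filter, the equation obtained by setting the partial derivative to zero, CᵀC·b_offset = CᵀC·ȳ with ȳ the average of the N input images, is solved with a conjugate-gradient solver, and the resulting offset is subtracted from an input image as the correction; all function names and solver settings are illustrative.

```python
# Sketch only, under the assumptions stated above; not the patented implementation itself.
import numpy as np
from scipy.ndimage import laplace
from scipy.sparse.linalg import LinearOperator, cg

def estimate_offset(frames):
    """Estimate b_offset that minimizes Equation 2 for a list of equally sized 2-D frames."""
    shape = frames[0].shape
    n = frames[0].size
    y_mean = np.mean(np.stack(frames), axis=0)            # average of the N input images

    def ctc(v):
        # Apply C^T C, with C a Laplacian high-pass filter; 'wrap' keeps the operator symmetric.
        img = v.reshape(shape)
        return laplace(laplace(img, mode='wrap'), mode='wrap').ravel()

    A = LinearOperator((n, n), matvec=ctc)
    rhs = ctc(y_mean.ravel())                              # dE/db_offset = 0  ->  C^T C b = C^T C y_mean
    b_offset, _info = cg(A, rhs, x0=np.zeros(n), maxiter=200)
    return b_offset.reshape(shape)

def correct(frame, b_offset):
    # The corrected image is the input image minus the estimated offset.
    return frame - b_offset
```

Note that Equation 2 determines b_offset only up to components that the filter C does not penalize, so the sketch starts the iterative solver from zero; the regularized formulation of the second embodiment described later removes this ambiguity.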
 ===補正処理部120===
 補正処理部120は、補正値算出部110が算出したオフセットboffsetを利用してその入力画像を補正し、出力する。具体的には、補正処理部120は、以下に示す式3により入力画像を補正する。
=== Correction Processing Unit 120 ===
The correction processing unit 120 corrects the input image using the offset b_offset calculated by the correction value calculation unit 110, and outputs the corrected image. Specifically, the correction processing unit 120 corrects the input image according to Equation 3 below.

(Equation 3)  x̂_i = y_i − b_offset
 以上が、画像補正装置101の機能単位の各構成要素についての説明である。 This completes the description of each component of the functional unit of the image correction apparatus 101.
 次に、画像補正装置100、画像補正装置101及び赤外線撮像装置102のハードウェア単位の構成要素について説明する。 Next, components in hardware units of the image correction apparatus 100, the image correction apparatus 101, and the infrared imaging apparatus 102 will be described.
 図9は、本実施形態における画像補正装置100、画像補正装置101及び赤外線撮像装置102を実現するコンピュータ700のハードウェア構成を示す図である。 FIG. 9 is a diagram illustrating a hardware configuration of a computer 700 that realizes the image correction apparatus 100, the image correction apparatus 101, and the infrared imaging apparatus 102 according to the present embodiment.
 図9に示すように、コンピュータ700は、CPU(Central Processing Unit)701、記憶部702、記憶装置703、入力部704、出力部705及び通信部706を含む。更に、コンピュータ700は、外部から供給される記録媒体(または記憶媒体)707を含む。記録媒体707は、情報を非一時的に記憶する不揮発性記録媒体であってもよい。 As shown in FIG. 9, the computer 700 includes a CPU (Central Processing Unit) 701, a storage unit 702, a storage device 703, an input unit 704, an output unit 705, and a communication unit 706. Furthermore, the computer 700 includes a recording medium (or storage medium) 707 supplied from the outside. The recording medium 707 may be a non-volatile recording medium that stores information non-temporarily.
 尚、CPUは、プロセッサーとも呼ばれる。また、コンピュータ700は、大規模集積回路の一部に含まれてもよい。即ち、画像補正装置100は、大規模集積回路の一部に含まれてもよい。 The CPU is also called a processor. The computer 700 may be included as part of a large scale integrated circuit. That is, the image correction apparatus 100 may be included in a part of a large-scale integrated circuit.
 CPU701は、オペレーティングシステム(不図示)を動作させて、コンピュータ700の、全体の動作を制御する。また、CPU701は、例えば記憶装置703に装着された記録媒体707から、プログラムやデータを読み込み、読み込んだプログラムやデータを記憶部702に書き込む。ここで、そのプログラムは、例えば、後述の図10、図15及び図16に示すフローチャートの動作をコンピュータ700に実行させるプログラムである。 The CPU 701 controls the overall operation of the computer 700 by operating an operating system (not shown). The CPU 701 reads a program and data from a recording medium 707 mounted on the storage device 703, for example, and writes the read program and data to the storage unit 702. Here, the program is, for example, a program that causes the computer 700 to execute operations of flowcharts shown in FIGS. 10, 15, and 16 to be described later.
 そして、CPU701は、読み込んだプログラムに従って、また読み込んだデータに基づいて、図1及び図2に示す補正値算出部110及び補正処理部120として各種の処理を実行する。 The CPU 701 executes various processes as the correction value calculation unit 110 and the correction processing unit 120 shown in FIGS. 1 and 2 according to the read program and based on the read data.
 尚、CPU701は、通信網(不図示)に接続されている外部コンピュータ(不図示)から、記憶部702にプログラムやデータをダウンロードするようにしてもよい。 Note that the CPU 701 may download a program or data to the storage unit 702 from an external computer (not shown) connected to a communication network (not shown).
 記憶部702は、プログラムやデータを記憶する。赤外線撮像装置102の場合、記憶部702は、画像記録部450を含んでよい。 The storage unit 702 stores programs and data. In the case of the infrared imaging device 102, the storage unit 702 may include an image recording unit 450.
 記憶装置703は、例えば、光ディスク、フレキシブルディスク、磁気光ディスク、外付けハードディスク及び半導体メモリであって、記録媒体707を含む。記憶装置703(記録媒体707)は、プログラムをコンピュータ読み取り可能に記憶する。また、記憶装置703は、データを記憶してもよい。赤外線撮像装置102の場合、記憶装置703は、画像記録部450を含んでよい。 The storage device 703 is, for example, an optical disk, a flexible disk, a magnetic optical disk, an external hard disk, and a semiconductor memory, and includes a recording medium 707. The storage device 703 (recording medium 707) stores the program in a computer-readable manner. The storage device 703 may store data. In the case of the infrared imaging device 102, the storage device 703 may include an image recording unit 450.
 赤外線撮像装置102の場合、入力部704は、撮像部400を含んでよい。また、入力部704は、例えばマウスやキーボード、内蔵のキーボタンなどで実現され、入力操作に用いられる。入力部704は、マウスやキーボード、内蔵のキーボタンに限らず、例えばタッチパネル、加速度計、ジャイロセンサ、カメラなどでもよい。 In the case of the infrared imaging device 102, the input unit 704 may include the imaging unit 400. The input unit 704 is realized by, for example, a mouse, a keyboard, a built-in key button, and the like, and is used for an input operation. The input unit 704 is not limited to a mouse, a keyboard, and a built-in key button, and may be a touch panel, an accelerometer, a gyro sensor, a camera, or the like.
 出力部705は、例えばディスプレイで実現され、出力を確認するために用いられる。 The output unit 705 is realized by a display, for example, and is used for confirming the output.
 通信部706は、外部とのインタフェースを実現する。画像補正装置100及び画像補正装置101の場合、通信部706は、画像記録部450とのインタフェースを実現する。画像補正装置100及び画像補正装置101の場合、通信部706は、補正値算出部110の一部及び補正処理部120の一部として含まれる。 The communication unit 706 realizes an interface with the outside. In the case of the image correction apparatus 100 and the image correction apparatus 101, the communication unit 706 implements an interface with the image recording unit 450. In the case of the image correction device 100 and the image correction device 101, the communication unit 706 is included as a part of the correction value calculation unit 110 and a part of the correction processing unit 120.
 以上説明したように、図1、図2及び図3のそれぞれに示す画像補正装置100、画像補正装置101及び赤外線撮像装置102のそれぞれの機能単位のブロックは、図9に示すハードウェア構成のコンピュータ700によって実現される。但し、コンピュータ700が備える各部の実現手段は、上記に限定されない。すなわち、コンピュータ700は、物理的に結合した1つの装置により実現されてもよいし、物理的に分離した2つ以上の装置を有線または無線で接続し、これら複数の装置により実現されてもよい。 As described above, the functional blocks of the image correction apparatus 100, the image correction apparatus 101, and the infrared imaging apparatus 102 shown in FIGS. 1, 2, and 3, respectively, are realized by the computer 700 having the hardware configuration shown in FIG. 9. However, the means for realizing each unit included in the computer 700 is not limited to the above. That is, the computer 700 may be realized by one physically integrated device, or by two or more physically separated devices connected by wire or wirelessly.
 尚、上述のプログラムのコードを記録した記録媒体707が、コンピュータ700に供給され、CPU701は、記録媒体707に格納されたプログラムのコードを読み出して実行するようにしてもよい。或いは、CPU701は、記録媒体707に格納されたプログラムのコードを、記憶部702、記憶装置703またはその両方に格納するようにしてもよい。すなわち、本実施形態は、コンピュータ700(CPU701)が実行するプログラム(ソフトウェア)を、一時的にまたは非一時的に、記憶する記録媒体707の実施形態を含む。 Note that the recording medium 707 in which the above-described program code is recorded may be supplied to the computer 700, and the CPU 701 may read and execute the program code stored in the recording medium 707. Alternatively, the CPU 701 may store the code of the program stored in the recording medium 707 in the storage unit 702, the storage device 703, or both. That is, the present embodiment includes an embodiment of a recording medium 707 that stores a program (software) executed by the computer 700 (CPU 701) temporarily or non-temporarily.
 以上が、本実施形態における画像補正装置100、画像補正装置101及び赤外線撮像装置102を実現するコンピュータ700の、ハードウェア単位の各構成要素についての説明である。 The above is a description of each hardware component of the computer 700 that implements the image correction apparatus 100, the image correction apparatus 101, and the infrared imaging apparatus 102 in the present embodiment.
 次に本実施形態の動作について、図1~図10を参照して詳細に説明する。 Next, the operation of this embodiment will be described in detail with reference to FIGS.
 図10は、本実施形態における画像補正装置101の動作を示すフローチャートである。尚、このフローチャートによる処理は、前述したCPUによるプログラム制御に基づいて、実行されても良い。また、処理のステップ名については、S601のように、記号で記載する。 FIG. 10 is a flowchart showing the operation of the image correction apparatus 101 in this embodiment. Note that the processing according to this flowchart may be executed based on the above-described program control by the CPU. Further, the step name of the process is described by a symbol as in S601.
 補正値算出部110は、複数の入力画像に基づいて、補正値(オフセットboffset)を算出し、出力する(S601)。 The correction value calculation unit 110 calculates and outputs a correction value (offset b offset ) based on a plurality of input images (S601).
 次に、補正処理部120は、補正値算出部110が算出した補正値(オフセットboffset)を利用してその入力画像を補正し、出力する(S602)。 Next, the correction processing unit 120 corrects the input image using the correction value (offset b offset ) calculated by the correction value calculation unit 110 and outputs the corrected image (S602).
 以上が、本実施形態の動作の説明である。 The above is the description of the operation of the present embodiment.
 上述した本実施形態における第1の効果は、撮像装置に機械的な構造を設けることなく、固定パターンノイズを除去するための補正値を得ることを可能にする点である。 The first effect of the present embodiment described above is that a correction value for removing fixed pattern noise can be obtained without providing a mechanical structure in the imaging apparatus.
 その理由は、補正値算出部110が、複数の入力画像に共通して存在する高周波成分に対応する補正値を算出するようにしたからである。 The reason is that the correction value calculation unit 110 calculates the correction value corresponding to the high-frequency component that exists in common in a plurality of input images.
 上述した本実施形態における第2の効果は、撮像装置に機械的な構造を設けることなく、それらの入力画像の固定パターンノイズを除去することを可能にする点である。 The second effect of the present embodiment described above is that it is possible to remove fixed pattern noise from those input images without providing a mechanical structure in the imaging apparatus.
 その理由は、補正処理部120が、補正値算出部110の算出した補正値を利用してそれらの入力画像を補正するようにしたからである。 The reason is that the correction processing unit 120 uses the correction value calculated by the correction value calculation unit 110 to correct those input images.
 <<<第2の実施形態>>>
 次に、本発明の第2の実施形態について図面を参照して詳細に説明する。以下、本実施形態の説明が不明確にならない範囲で、前述の説明と重複する内容については説明を省略する。
<<< Second Embodiment >>>
Next, a second embodiment of the present invention will be described in detail with reference to the drawings. Hereinafter, the description overlapping with the above description is omitted as long as the description of the present embodiment is not obscured.
 図11は、本発明の第2の実施形態に係る画像補正装置201の構成を示すブロック図である。 FIG. 11 is a block diagram showing a configuration of an image correction apparatus 201 according to the second embodiment of the present invention.
 図11を参照すると、本実施形態における画像補正装置201は、第1の実施形態の画像補正装置101と比べて、補正値算出部110に替えて補正値算出部210を、補正処理部120に替えて補正処理部220を、含む。 Referring to FIG. 11, the image correction apparatus 201 according to the present embodiment includes, compared with the image correction apparatus 101 of the first embodiment, a correction value calculation unit 210 in place of the correction value calculation unit 110 and a correction processing unit 220 in place of the correction processing unit 120.
 ===補正値算出部210===
 補正値算出部210は、複数の入力画像に基づいて、それらの入力画像に共通して存在する高周波成分に対応する、補正値を算出する。ここで、補正値は、そのオフセットと仮オフセットbとの差分である。仮オフセットbは、例えば、生産工場等において図3に示す赤外線撮像素子部420の出荷時に測定された、オフセットの初期値である。
=== Correction Value Calculation Unit 210 ===
Based on a plurality of input images, the correction value calculation unit 210 calculates a correction value corresponding to the high-frequency component that exists in common among those input images. Here, the correction value is the difference between the offset and a temporary offset b_0. The temporary offset b_0 is, for example, an initial value of the offset measured at a production factory or the like at the time of shipment of the infrared imaging element unit 420 shown in FIG. 3.
 図12は、仮オフセットテーブルの例を示す図である。図12に示すように、初期オフセットテーブルは、周辺温度とオフセットとの組を保持する。 FIG. 12 is a diagram illustrating an example of a temporary offset table. As shown in FIG. 12, the initial offset table holds a set of ambient temperature and offset.
 補正値算出部210は、内部の記憶手段(不図示)に仮オフセットテーブルを保持するようにしてよい。 The correction value calculation unit 210 may hold a temporary offset table in an internal storage unit (not shown).
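A minimal sketch of how such a table could be held and consulted follows (illustrative only; the temperatures are placeholders, and a real table would store one offset image measured per ambient temperature at shipment):

```python
# Sketch only: a provisional-offset table keyed by ambient temperature (cf. FIG. 12).
import numpy as np

offset_table = {t: np.zeros((64, 64)) for t in (0, 20, 40)}   # placeholder offset images per deg C

def provisional_offset(ambient_temperature):
    # Pick the entry whose temperature is closest to the measured ambient temperature.
    nearest = min(offset_table, key=lambda t: abs(t - ambient_temperature))
    return offset_table[nearest]
```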
 式4は、「入力画像yをオフセットboffsetで補正した画像」の高周波成分のエネルギーの平均値と、「オフセットboffsetから仮オフセットbを減算したベクトル(バーbと呼ぶ)」のエネルギーに重み付けをした値とを加算した、ターゲットエネルギーEを示す式である。尚、αは重みである。
Equation 4 expresses the target energy E obtained by adding, to the average value of the energy of the high-frequency components of the images obtained by correcting the input images y_i with the offset b_offset, a weighted value of the energy of the vector obtained by subtracting the temporary offset b_0 from the offset b_offset (this vector is referred to as b̄). Here, α is the weight.

(Equation 4)  E = (1/N) Σ_{i=1}^{N} ‖C(y_i − b_offset)‖₂² + α‖b_offset − b_0‖₂²
 式5は、バーbで方程式を表すように、式4を変形した式である。
Equation 5 is obtained by rewriting Equation 4 so that the equation is expressed in terms of b̄ (= b_offset − b_0).

(Equation 5)  E = (1/N) Σ_{i=1}^{N} ‖C(y_i − b_0 − b̄)‖₂² + α‖b̄‖₂²

 補正値算出部210は、式5で表されるターゲットエネルギーEが最小になるバーbを、『N個の入力画像yiに共通して存在する高周波成分に対応する、補正値』として算出する。具体的には、補正値算出部210は、式5のバーbに関する偏微分を0として得られる方程式を、バーbについて解く。方程式を解く手法としては、直接法(ガウス消去法、LU分解など)、反復法(共役勾配法など)、周波数空間でのデコンボリューション(Deconvolution)など、任意の手法を使用してよい。

The correction value calculation unit 210 calculates the bar b that minimizes the target energy E represented by Expression 5 as “a correction value corresponding to a high-frequency component that is common to the N input images yi”. Specifically, the correction value calculation unit 210 solves for the bar b an equation obtained by setting the partial differentiation with respect to the bar b in Expression 5 to zero. As a method for solving the equation, any method such as a direct method (Gauss elimination method, LU decomposition, etc.), an iterative method (conjugate gradient method, etc.), a deconvolution in frequency space, or the like may be used.
 式5の第2項である「α∥バーb∥ 」は、バーbの各要素が0に近い値を持つように、方程式の解を制約する。即ち、その第2項は、オフセットboffset(現状の推定ノイズ)と仮オフセットb(初期値として計測されたノイズ)との差異を、より小さくするように働く。こうして、補正値算出部210は、補正値算出部110に比べて、より安定した補正値を算出する。 The second term of Equation 5, α‖b̄‖₂², constrains the solution of the equation so that each element of b̄ has a value close to zero. That is, the second term acts to reduce the difference between the offset b_offset (the currently estimated noise) and the temporary offset b_0 (the noise measured as the initial value). In this way, the correction value calculation unit 210 calculates a more stable correction value than the correction value calculation unit 110.
 ここで、安定した補正値とは、仮オフセットbからかけ離れた補正にならないような補正値である。その仮オフセットbからかけ離れた補正とは、例えば、ターゲットエネルギーEを最小にするために、各要素(画素)の輝度を最大値/最小値にしてしまうような補正である。 Here, a stable correction value is a correction value that does not result in a correction far removed from the temporary offset b_0. A correction far removed from the temporary offset b_0 is, for example, a correction that drives the luminance of each element (pixel) to its maximum or minimum value in order to minimize the target energy E.
 また、安定した補正値とは、複数の入力画像が類似している場合にも有効な補正値である。換言すると、安定した補正値とは、被撮像体画像の高周波成分を残し、ノイズである高周波成分を除去する補正値である。 Also, the stable correction value is an effective correction value even when a plurality of input images are similar. In other words, the stable correction value is a correction value that leaves the high-frequency component of the image to be captured and removes the high-frequency component that is noise.
 尚、重みのαは、式5において、第1項の効果(高周波成分を削減)と第2項の効果(オフセットboffsetの安定化)とのバランスが最適になるように、理論的及び経験的に予め定められた値である。 Note that the weight α is a value determined in advance, theoretically and empirically, so that the balance between the effect of the first term (reduction of high-frequency components) and the effect of the second term (stabilization of the offset b_offset) in Equation 5 is optimal.
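Continuing the earlier sketch for the first embodiment, the regularized problem of Equation 5 could be solved in the same way: setting the partial derivative with respect to b̄ to zero gives (CᵀC + αI)·b̄ = CᵀC·(ȳ − b_0), which the sketch below solves; the function names and the default value of α are illustrative assumptions, not values from the disclosure.

```python
# Sketch only: solving Equation 5 for b_bar with a conjugate-gradient solver.
import numpy as np
from scipy.ndimage import laplace
from scipy.sparse.linalg import LinearOperator, cg

def estimate_offset_regularized(frames, b0, alpha=0.1):
    """Estimate the full offset b0 + b_bar, where b_bar minimizes Equation 5."""
    shape = frames[0].shape
    n = frames[0].size
    y_mean = np.mean(np.stack(frames), axis=0)

    def ctc(img):
        return laplace(laplace(img, mode='wrap'), mode='wrap')   # C^T C with a Laplacian C

    def matvec(v):
        img = v.reshape(shape)
        return (ctc(img) + alpha * img).ravel()                  # (C^T C + alpha * I) v

    A = LinearOperator((n, n), matvec=matvec)
    rhs = ctc(y_mean - b0).ravel()
    b_bar, _info = cg(A, rhs, maxiter=200)
    return b0 + b_bar.reshape(shape)                             # full offset used by the correction processing unit 220
```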
 ===補正処理部220===
 補正処理部220は、補正値算出部210が算出したバーbを利用してそれらの入力画像を補正し、出力する。具体的には、補正処理部220は、以下に示す式6により入力画像を補正する。
=== Correction Processing Unit 220 ===
The correction processing unit 220 corrects the input images using the b̄ calculated by the correction value calculation unit 210, and outputs the corrected images. Specifically, the correction processing unit 220 corrects an input image according to Equation 6 below.

(Equation 6)  x̂_i = y_i − b_0 − b̄

 図13は、本実施形態に係る変形例の画像補正装置200の構成を示す、ブロック図である。画像補正装置200は、補正値算出部210を含み、補正処理部220を含まない。即ち、画像補正装置200は、補正値算出部210が算出した補正値を出力する。

FIG. 13 is a block diagram illustrating a configuration of a modified image correction apparatus 200 according to the present embodiment. The image correction apparatus 200 includes a correction value calculation unit 210 and does not include a correction processing unit 220. That is, the image correction apparatus 200 outputs the correction value calculated by the correction value calculation unit 210.
 尚、画像補正装置200、画像補正装置201のハードウェア単位の構成要素は、図9に示す構成要素と同様である。 Note that the components of the image correction device 200 and the image correction device 201 in hardware units are the same as those shown in FIG.
 上述した本実施形態における第1の効果は、第1の実施形態の効果に加えて、安定した補正値を出力することを可能にする点である。 The first effect of the present embodiment described above is that, in addition to the effect of the first embodiment, it is possible to output a stable correction value.
 その理由は、補正値算出部210が、高周波成分の削減に加えて、オフセットboffsetと仮オフセットbとの差異をより小さくするように補正値を算出するようにしたからである。 This is because the correction value calculation unit 210 calculates the correction value so as to reduce the difference between the offset b offset and the temporary offset b 0 in addition to the reduction of the high frequency component.
 上述した本実施形態における第2の効果は、それらの入力画像の固定パターンノイズをより安定的に除去することを可能にする点である。 The second effect of the present embodiment described above is that the fixed pattern noise of those input images can be more stably removed.
 その理由は、補正処理部220が、補正値算出部210の算出した安定した補正値を利用してそれらの入力画像を補正するようにしたからである。 The reason is that the correction processing unit 220 corrects these input images using the stable correction value calculated by the correction value calculation unit 210.
 <<<第3の実施形態>>>
 次に、本発明の第3の実施形態について図面を参照して詳細に説明する。以下、本実施形態の説明が不明確にならない範囲で、前述の説明と重複する内容については説明を省略する。
<<< Third Embodiment >>>
Next, a third embodiment of the present invention will be described in detail with reference to the drawings. Hereinafter, the description overlapping with the above description is omitted as long as the description of the present embodiment is not obscured.
 図14は、本発明の第3の実施形態に係る画像補正装置300の構成を示すブロック図である。 FIG. 14 is a block diagram showing a configuration of an image correction apparatus 300 according to the third embodiment of the present invention.
 図14を参照すると、本実施形態における画像補正装置300は、第1の実施形態の画像補正装置100と比べて、補正値算出契機発生部330を、更に含む。 Referring to FIG. 14, the image correction apparatus 300 according to the present embodiment further includes a correction value calculation trigger generation unit 330 as compared with the image correction apparatus 100 according to the first embodiment.
 ===補正値算出契機発生部330===
 補正値算出契機発生部330は、補正値算出の契機を補正値算出部110に通知する。
=== Correction Value Calculation Trigger Generation Unit 330 ===
The correction value calculation trigger generation unit 330 notifies the correction value calculation unit 110 of the correction value calculation trigger.
 補正値算出契機発生部330は、例えば、動画の入力を受け付け、複数の入力画像となる、予め定められた数のフレームを計数するたびに、補正値算出の契機を補正値算出部110に通知する。この場合、補正値算出契機発生部330は、そのフレームを連続して計数してもよいし、所定の枚数おきにそのフレームを計数してもよい。その計数するフレームの枚数及び何枚おきに計数するかの所定の枚数は、外部(例えば、図9に示す入力部704)から設定するようにしてよい。 For example, the correction value calculation trigger generation unit 330 receives input of a moving image and notifies the correction value calculation unit 110 of a correction value calculation trigger each time it has counted a predetermined number of frames serving as the plurality of input images. In this case, the correction value calculation trigger generation unit 330 may count consecutive frames, or may count every predetermined number of frames. The number of frames to be counted and the interval at which they are counted may be set externally (for example, via the input unit 704 shown in FIG. 9).
 また、補正値算出契機発生部330は、予め定められた時間(例えば、一定時間毎にあるいは特定の時刻)に、補正値算出の契機を補正値算出部110に通知する。その予め定められた時間は、外部(例えば、図9に示す入力部704)から設定するようにしてよい。 Also, the correction value calculation trigger generation unit 330 notifies the correction value calculation unit 110 of a correction value calculation trigger at a predetermined time (for example, at regular time intervals or at a specific time). The predetermined time may be set from the outside (for example, the input unit 704 shown in FIG. 9).
 また、補正値算出契機発生部330は、補正値算出部110による補正値の出力を監視するようにしてもよい。そして、所定の時間以内に補正値算出部110が補正値を出力しない場合、補正値算出契機発生部330は、上述の予め定められた時間に係わらず、より早いタイミングで次の補正値算出の契機を補正値算出部110に通知するようにしてもよい。 Further, the correction value calculation trigger generation unit 330 may monitor the output of the correction value by the correction value calculation unit 110. If the correction value calculation unit 110 does not output a correction value within a predetermined time, the correction value calculation trigger generation unit 330 calculates the next correction value at an earlier timing regardless of the above-described predetermined time. An opportunity may be notified to the correction value calculation unit 110.
 補正値算出契機発生部330は、例えば、その受け付けた動画から、その複数の入力画像となる予め定められた数のそのフレームを、補正値算出の契機に対応させて補正値算出部110に出力するようにしてよい。 For example, the correction value calculation trigger generation unit 330 may output, from the received moving image, the predetermined number of frames serving as the plurality of input images to the correction value calculation unit 110 together with the correction value calculation trigger.
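As one possible concrete form (an assumption, not part of the original disclosure), a frame-counting trigger could be sketched as follows; the default batch size and sampling interval are illustrative.

```python
# Sketch only: a frame-counting correction value calculation trigger.
class CalculationTrigger:
    def __init__(self, frames_per_trigger=10, skip=5):
        self.frames_per_trigger = frames_per_trigger   # number of input images per calculation
        self.skip = skip                               # keep one frame out of every `skip` frames
        self.seen = 0
        self.batch = []

    def on_frame(self, frame):
        """Return the collected frames when a calculation trigger occurs, otherwise None."""
        self.seen += 1
        if self.seen % self.skip != 0:
            return None
        self.batch.append(frame)
        if len(self.batch) < self.frames_per_trigger:
            return None
        batch, self.batch = self.batch, []
        return batch                                   # notify the correction value calculation unit
```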
 尚、図2に示す画像補正装置101、図11に示す画像補正装置201及び図13に示す画像補正装置200もまた、補正値算出契機発生部330を含むようにしてもよい。 Note that the image correction apparatus 101 shown in FIG. 2, the image correction apparatus 201 shown in FIG. 11, and the image correction apparatus 200 shown in FIG. 13 may also include a correction value calculation trigger generation unit 330.
 この場合、画像補正装置101の補正処理部120は、補正値算出部110が新たな補正値を出力するたびにその補正値を受け取り、保持する。また、画像補正装置201の補正処理部220は、補正値算出部210が新たな補正値を出力するたびにその補正値を受け取り、保持する。そして、補正処理部120及び補正処理部220のそれぞれは、保持している補正値を用いて、入力画像を補正し、出力する。 In this case, the correction processing unit 120 of the image correction apparatus 101 receives and holds the correction value every time the correction value calculation unit 110 outputs a new correction value. The correction processing unit 220 of the image correction apparatus 201 receives and holds the correction value every time the correction value calculation unit 210 outputs a new correction value. Then, each of the correction processing unit 120 and the correction processing unit 220 corrects and outputs the input image using the stored correction value.
 尚、画像補正装置300のハードウェア単位の構成要素は、図9に示す構成要素と同様である。 Note that the components of the image correction apparatus 300 in hardware units are the same as those shown in FIG.
 次に本実施形態の動作について、図14~図15を参照して詳細に説明する。 Next, the operation of this embodiment will be described in detail with reference to FIGS.
 図15は、画像補正装置300の動作を示すフローチャートである。尚、このフローチャートによる処理は、前述したCPUによるプログラム制御に基づいて、実行されても良い。 FIG. 15 is a flowchart showing the operation of the image correction apparatus 300. Note that the processing according to this flowchart may be executed based on the above-described program control by the CPU.
 補正値算出契機発生部330は、補正値算出の契機を補正値算出部110に通知する(S631)。 The correction value calculation trigger generation unit 330 notifies the correction value calculation unit 110 of the correction value calculation trigger (S631).
 次に、補正値算出部110は、複数の入力画像に基づいて、補正値(オフセットboffset)を算出し、出力する。(S632)。 Next, the correction value calculation unit 110 calculates and outputs a correction value (offset b offset ) based on a plurality of input images. (S632).
 以上が、本実施形態の動作の説明である。 The above is the description of the operation of the present embodiment.
 上述した本実施形態における効果は、第1の実施形態の効果に加えて、適切な負荷で補正値を生成する負荷を適切に保つことが可能になる点である。 The effect of the present embodiment described above is that, in addition to the effect of the first embodiment, the processing load for generating the correction value can be kept at an appropriate level.
 その理由は、補正値算出契機発生部330が、補正値算出の契機を補正値算出部110に通知するようにしたからである。 The reason is that the correction value calculation trigger generation unit 330 notifies the correction value calculation unit 110 of the correction value calculation trigger.
 <<<第4の実施形態>>>
 次に、本発明の第4の実施形態について図面を参照して詳細に説明する。以下、本実施形態の説明が不明確にならない範囲で、前述の説明と重複する内容については説明を省略する。
<<< Fourth Embodiment >>>
Next, a fourth embodiment of the present invention will be described in detail with reference to the drawings. Hereinafter, the description overlapping with the above description is omitted as long as the description of the present embodiment is not obscured.
 第4の実施形態の構成は、図1の画像補正装置100の構成、または図14の画像補正装置300の構成、と同様であってよい。第4の実施形態の変形例の構成は、図2の画像補正装置101の構成と同様であってよい。また、第4の実施形態の画像補正装置100を含む赤外線撮像装置102の構成は、図3に示す構成と同様であってよい。 The configuration of the fourth embodiment may be the same as the configuration of the image correction apparatus 100 in FIG. 1 or the configuration of the image correction apparatus 300 in FIG. The configuration of the modified example of the fourth embodiment may be the same as the configuration of the image correction apparatus 101 in FIG. Further, the configuration of the infrared imaging device 102 including the image correction device 100 of the fourth embodiment may be the same as the configuration shown in FIG.
 本実施形態の画像補正装置100及び画像補正装置101は、第1の実施形態の画像補正装置100及び画像補正装置101と比べて、補正値算出部110の動作が異なる。 The image correction apparatus 100 and the image correction apparatus 101 of the present embodiment are different in operation of the correction value calculation unit 110 from the image correction apparatus 100 and the image correction apparatus 101 of the first embodiment.
 ===補正値算出部110===
 補正値算出部110は、ターゲットエネルギーEが所定値以下になるような補正値を算出できない場合、その場合の複数の入力画像に替えて、新たな複数の入力画像に基づいて補正値を算出し、出力する。
=== Correction Value Calculation Unit 110 ===
When the correction value calculation unit 110 cannot calculate a correction value such that the target energy E is equal to or less than a predetermined value, it calculates and outputs a correction value based on a new plurality of input images instead of the plurality of input images used in that case.
 ターゲットエネルギーEが所定値以下になるような補正値を算出できない場合は、例えば、赤外線撮像装置102の電源が投入された直後において、赤外線撮像装置102の内部温度が急激(複数の入力画像の撮像に掛かる時間に対して)に上昇する場合などである。この場合、内部温度の変化に伴って赤外線受光素子の特性が変化し、boffsetが変化するため、ターゲットエネルギーEが所望どおりの低さになるようなboffsetが得られない。 A case in which a correction value that makes the target energy E equal to or less than the predetermined value cannot be calculated is, for example, a case in which the internal temperature of the infrared imaging device 102 rises rapidly (relative to the time required to capture the plurality of input images) immediately after the infrared imaging device 102 is powered on. In this case, the characteristics of the infrared light receiving elements change as the internal temperature changes and b_offset changes accordingly, so a b_offset that makes the target energy E as low as desired cannot be obtained.
 補正値算出部110は、例えば、ある補正値算出の契機において、ターゲットエネルギーEが所定値以下になるような補正値を算出できなかった場合、補正値を出力しない。そして、補正値算出部110は、次の補正値算出の契機において、新たな複数の入力画像に基づいて補正値を算出し、出力する。 The correction value calculation unit 110, for example, does not output a correction value when a correction value that causes the target energy E to be a predetermined value or less cannot be calculated at a certain correction value calculation. Then, the correction value calculation unit 110 calculates and outputs a correction value based on a plurality of new input images at the time of the next correction value calculation.
 補正値算出部110から補正値が出力されなかった場合、補正処理部120は、それ以前に補正値算出部110から出力された補正値を利用して、入力画像を補正する。 When the correction value is not output from the correction value calculation unit 110, the correction processing unit 120 corrects the input image using the correction value output from the correction value calculation unit 110 before that.
 例えば、入力画像が動画のフレームである場合について、具体的に説明する。 For example, the case where the input image is a moving image frame will be specifically described.
 補正値算出部110は、例えば、その動画の連続するフレームの内の、任意の間隔(例えば5フレームおきに選択)の、10枚のフレームを複数の入力画像とする。 The correction value calculation unit 110 uses, for example, 10 frames at arbitrary intervals (for example, selected every 5 frames) in a continuous frame of the moving image as a plurality of input images.
 補正値算出部110は、ある補正値算出の契機において、その10枚のフレームを複数の入力画像として、boffsetを算出する。 The correction value calculation unit 110 calculates b offset using the 10 frames as a plurality of input images at a certain correction value calculation opportunity.
 次に、補正値算出部110は、この算出したboffsetを式2に適用してEを算出する。 Next, the correction value calculation unit 110 calculates E by applying the calculated b offset to Equation 2.
 次に、補正値算出部110は、算出したEを予め定められた閾値と比較し、算出したEが閾値以下の場合、そのboffsetを補正値として出力する。 Next, the correction value calculation unit 110 compares the calculated E with a predetermined threshold value, and when the calculated E is equal to or less than the threshold value, outputs the b offset as a correction value.
 また、補正値算出部110は、算出したEが閾値を超える場合、そのboffsetを補正値として出力しない。 Further, when the calculated E exceeds the threshold value, the correction value calculation unit 110 does not output the b offset as a correction value.
 この場合、補正処理部120は、補正値算出部110が補正値を出力しなかった場合、それより以前に受け取っていた補正値を用いて、入力画像を補正するようにしてよい。尚、補正処理部120は、初期化後に一度も補正値を受け取っていない場合、予め定められた補正値を利用して入力画像を補正するようにしてもよい。 In this case, when the correction value calculation unit 110 does not output the correction value, the correction processing unit 120 may correct the input image using the correction value received before that. The correction processing unit 120 may correct the input image using a predetermined correction value when no correction value has been received after initialization.
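The check described above could be sketched as follows (illustrative only; it reuses the estimate_offset sketch given for the first embodiment, and the threshold value is an arbitrary assumption):

```python
# Sketch only: recompute the target energy E of Equation 2 and output the new
# correction value only if E is at or below a predetermined threshold.
import numpy as np
from scipy.ndimage import laplace

def target_energy(frames, b_offset):
    return np.mean([np.sum(laplace(f - b_offset, mode='wrap') ** 2) for f in frames])

def calculate_or_keep_previous(frames, previous_offset, threshold=1.0e3):
    b_offset = estimate_offset(frames)                 # from the first-embodiment sketch above
    if target_energy(frames, b_offset) <= threshold:
        return b_offset                                # output the newly calculated correction value
    return previous_offset                             # otherwise keep using the previous one
```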
 図11に示す画像補正装置201及び図13に示す画像補正装置200の補正値算出部210も同様にしてよい。 The correction value calculation units 210 of the image correction apparatus 201 shown in FIG. 11 and of the image correction apparatus 200 shown in FIG. 13 may operate in the same manner.
 次に本実施形態の動作について、図14及び図16を参照して詳細に説明する。 Next, the operation of this embodiment will be described in detail with reference to FIGS.
 図16は、本実施形態における、図14に示す画像補正装置300の動作を示すフローチャートである。尚、このフローチャートによる処理は、前述したCPUによるプログラム制御に基づいて、実行されても良い。 FIG. 16 is a flowchart showing the operation of the image correction apparatus 300 shown in FIG. 14 in the present embodiment. Note that the processing according to this flowchart may be executed based on the above-described program control by the CPU.
 補正値算出契機発生部330は、10枚のフレームを含む補正値算出の契機を補正値算出部110に通知する(S641)。 The correction value calculation trigger generation unit 330 notifies the correction value calculation unit 110 of a correction value calculation trigger including 10 frames (S641).
 次に、補正値算出部110は、補正値算出の契機を受け取り、その10枚のフレームを複数の入力画像として、boffsetを算出する(S642)。 Next, the correction value calculation unit 110 receives an opportunity for calculating the correction value, and calculates b offset using the 10 frames as a plurality of input images (S642).
 次に、補正値算出部110は、この算出したboffsetを式2に適用してEを算出する(S643)。 Next, the correction value calculation unit 110 calculates E by applying the calculated b offset to Equation 2 (S643).
 次に、補正値算出部110は、算出したEを予め定められた閾値と比較する(S644)。算出したEが閾値以下の場合(S644でYES)、補正値算出部110は、そのboffsetを補正値として出力する(S645)。そして、処理は、S641に戻る。 Next, the correction value calculation unit 110 compares the calculated E with a predetermined threshold value (S644). When the calculated E is equal to or smaller than the threshold (YES in S644), the correction value calculation unit 110 outputs the b offset as a correction value (S645). Then, the process returns to S641.
 また、算出したEが閾値を超える場合(S644でNO)、処理は、S641に戻る。 If the calculated E exceeds the threshold (NO in S644), the process returns to S641.
 以上が、本実施形態の動作の説明である。 The above is the description of the operation of the present embodiment.
 上述した本実施形態における効果は、第1の実施形態の効果に加えて、不適切な補正値の出力を防止することを可能にする点である。 The effect of the present embodiment described above is that, in addition to the effect of the first embodiment, it is possible to prevent an inappropriate correction value from being output.
 その理由は、補正値算出部110が、ターゲットエネルギーEが所定値以下になるような補正値を算出できない場合、その場合の複数の入力画像に替えて、新たな複数の入力画像に基づいて補正値を算出し、出力するようにしたからである。 The reason is that, when the correction value calculation unit 110 cannot calculate a correction value such that the target energy E is equal to or less than the predetermined value, it calculates and outputs the correction value based on a new plurality of input images instead of the plurality of input images used in that case.
 <<<第5の実施形態>>>
 次に、本発明の第5の実施形態について図面を参照して詳細に説明する。以下、本実施形態の説明が不明確にならない範囲で、前述の説明と重複する内容については説明を省略する。
<<< Fifth Embodiment >>>
Next, a fifth embodiment of the present invention will be described in detail with reference to the drawings. Hereinafter, the description overlapping with the above description is omitted as long as the description of the present embodiment is not obscured.
 図17は、本発明の第5の実施形態に係る赤外線撮像システム105の構成を示すブロック図である。 FIG. 17 is a block diagram showing a configuration of an infrared imaging system 105 according to the fifth embodiment of the present invention.
 図17を参照すると、本実施形態における赤外線撮像システム105は、撮像部504と画像処理部505とを備える。撮像部504と画像処理部505とは、図示しないネットワークで接続されている。また、撮像部504及び画像処理部505のそれぞれは、任意の台数であってよい。
===撮像部504===
 撮像部504は、レンズ410、赤外線撮像素子部420、増幅回路430、A/D変換回路440及び通信回路460を含む。
Referring to FIG. 17, the infrared imaging system 105 in the present embodiment includes an imaging unit 504 and an image processing unit 505. The imaging unit 504 and the image processing unit 505 are connected via a network (not shown). In addition, each of the imaging unit 504 and the image processing unit 505 may be an arbitrary number.
=== Imaging Unit 504 ===
The imaging unit 504 includes a lens 410, an infrared imaging element unit 420, an amplification circuit 430, an A / D conversion circuit 440, and a communication circuit 460.
 撮像部504は、通信回路460を介して、A/D変換回路440が出力したデジタル信号を、図示しないネットワークに送信する。
===画像処理部505===
 画像処理部505は、画像記録部450、画像補正装置100及び通信部510を含む。
The imaging unit 504 transmits the digital signal output from the A / D conversion circuit 440 via a communication circuit 460 to a network (not shown).
=== Image Processing Unit 505 ===
The image processing unit 505 includes an image recording unit 450, the image correction device 100, and a communication unit 510.
 画像処理部505は、通信部510を介して、図示しないネットワークから撮像部504が出力したデジタル信号を受信する。 The image processing unit 505 receives the digital signal output from the imaging unit 504 from a network (not shown) via the communication unit 510.
 通信部510は、受信したそのデジタル信号を画像記録部450に出力する。 The communication unit 510 outputs the received digital signal to the image recording unit 450.
 尚、画像処理部505のハードウェア単位の構成要素は、図9に示す構成要素と同様である。 Note that the components of the image processing unit 505 in hardware units are the same as those shown in FIG.
 尚、画像処理部505は、画像補正装置100に替えて、図2に示す画像補正装置101、図11に示す画像補正装置201、図13に示す画像補正装置200及び図14に示す画像補正装置300のいずれかを含むようにしてもよい。 Note that, instead of the image correction apparatus 100, the image processing unit 505 may include any one of the image correction apparatus 101 shown in FIG. 2, the image correction apparatus 201 shown in FIG. 11, the image correction apparatus 200 shown in FIG. 13, and the image correction apparatus 300 shown in FIG. 14.
 上述した本実施形態における効果は、第1の実施形態及び第2の実施形態の効果に加えて、赤外線撮像システム105の信頼性、可用性を向上させることが可能になる点である。 The effect of the present embodiment described above is that the reliability and availability of the infrared imaging system 105 can be improved in addition to the effects of the first embodiment and the second embodiment.
 その理由は、撮像部504と画像補正装置100を含む画像処理部505とをネットワークで接続するようにしたからである。即ち、1台の画像処理部505に、構成要素の少ない撮像部504を複数台接続できるので、撮像部504の多重化や増設が容易になるからである。 The reason is that the imaging unit 504 and the image processing unit 505 including the image correction apparatus 100 are connected via a network. That is, since a plurality of imaging units 504 with few components can be connected to one image processing unit 505, multiplexing and expansion of the imaging units 504 are facilitated.
 以上の各実施形態で説明した各構成要素は、必ずしも個々に独立した存在である必要はない。例えば、各構成要素は、複数の構成要素が1個のモジュールとして実現されたり、1つの構成要素が複数のモジュールで実現されたりしてもよい。また、各構成要素は、ある構成要素が他の構成要素の一部であったり、ある構成要素の一部と他の構成要素の一部とが重複していたり、といったような構成であってもよい。 The components described in each of the above embodiments do not necessarily need to exist independently of one another. For example, a plurality of components may be realized as one module, or one component may be realized as a plurality of modules. A component may also be configured such that it is a part of another component, or such that a part of one component overlaps a part of another component.
 以上説明した各実施形態における各構成要素及び各構成要素を実現するモジュールは、必要に応じ可能であれば、ハードウェア的に実現されても良いし、コンピュータ及びプログラムで実現されても良いし、ハードウェア的なモジュールとコンピュータ及びプログラムとの混在により実現されても良い。 In the embodiments described above, each component and a module that realizes each component may be realized by hardware as long as necessary, or may be realized by a computer and a program. It may be realized by mixing hardware modules, computers, and programs.
 また、以上説明した各実施形態では、複数の動作をフローチャートの形式で順番に記載してあるが、その記載の順番は複数の動作を実行する順番を限定するものではない。このため、各実施形態を実施するときには、その複数の動作の順番は内容的に支障のない範囲で変更することができる。 In each of the embodiments described above, a plurality of operations are described in order in the form of a flowchart. However, the order of description does not limit the order in which the plurality of operations are executed. For this reason, when each embodiment is implemented, the order of the plurality of operations can be changed within a range that does not hinder the contents.
 更に、以上説明した各実施形態では、複数の動作は個々に相違するタイミングで実行されることに限定されない。例えば、ある動作の実行中に他の動作が発生したり、ある動作と他の動作との実行タイミングが部分的に乃至全部において重複していたりしていてもよい。 Furthermore, in each embodiment described above, a plurality of operations are not limited to being executed at different timings. For example, another operation may occur during the execution of a certain operation, or the execution timing of a certain operation and another operation may partially or entirely overlap.
 更に、以上説明した各実施形態では、ある動作が他の動作の契機になるように記載しているが、その記載はある動作と他の動作の全ての関係を限定するものではない。このため、各実施形態を実施するときには、その複数の動作の関係は内容的に支障のない範囲で変更することができる。また各構成要素の各動作の具体的な記載は、各構成要素の各動作を限定するものではない。このため、各構成要素の具体的な各動作は、各実施形態を実施する上で機能的、性能的、その他の特性に対して支障をきたさない範囲内で変更されて良い。 Furthermore, although each of the embodiments described above describes a certain operation as a trigger for another operation, such a description does not limit every relationship between the operations. Therefore, when each embodiment is implemented, the relationship among the plurality of operations can be changed within a range that does not affect the substance. Likewise, the specific description of each operation of each component does not limit that operation; the specific operations of each component may be changed within a range that does not impair the functional, performance, or other characteristics of implementing each embodiment.
 以上、各実施形態及び実施例を参照して本発明を説明したが、本発明は上記実施形態及び実施例に限定されるものではない。本発明の構成や詳細には、本発明のスコープ内で当業者が理解しえるさまざまな変更をすることができる。 As mentioned above, although this invention was demonstrated with reference to each embodiment and an Example, this invention is not limited to the said embodiment and Example. Various changes that can be understood by those skilled in the art can be made to the configuration and details of the present invention within the scope of the present invention.
 上記の実施形態の一部又は全部は、以下の付記のようにも記載されうるが、以下には限られない。 Some or all of the above embodiments can be described as in the following supplementary notes, but are not limited thereto.
 (付記1)
 被撮像体の放射に対応する被撮像体画像に、複数の撮像素子の感度のばらつきに対応するノイズ画像を重ねられた、複数の入力画像に基づいて、前記複数の入力画像に共通して存在する高周波成分であると仮定したオフセットに対応する補正値を算出し、出力する補正値算出手段を含む
 画像補正装置。
(Appendix 1)
Common to the plurality of input images based on a plurality of input images in which noise images corresponding to variations in sensitivity of the plurality of image sensors are superimposed on the image of the body to be captured corresponding to the radiation of the body to be imaged An image correction apparatus including a correction value calculation means for calculating and outputting a correction value corresponding to an offset assumed to be a high-frequency component.
 (付記2)
 前記補正値算出手段は、前記入力画像のそれぞれと前記オフセットとの差分に存在する、高周波成分に対応するエネルギー値をターゲットエネルギー値とした場合に、前記ターゲットエネルギー値が最小となるような、前記補正値を算出する
ことを特徴とする付記1記載の画像補正装置。
(Appendix 2)
The correction value calculation means, when the energy value corresponding to the high frequency component, which exists in the difference between each of the input images and the offset, is the target energy value, the target energy value is minimized, The image correction apparatus according to appendix 1, wherein a correction value is calculated.
 (付記3)
 前記ターゲットエネルギー値は、前記エネルギー値に、周辺温度に対応する仮オフセットを前記オフセットから減算した場合の減算値に対応するエネルギー値を更に加算した場合の加算値である
 ことを特徴とする付記2記載の画像補正装置。
(Appendix 3)
The target energy value is an addition value obtained by further adding an energy value corresponding to a subtraction value when a temporary offset corresponding to an ambient temperature is subtracted from the offset to the energy value. The image correction apparatus described.
 (付記4)
 前記補正値算出手段は、前記ターゲットエネルギー値が所定値以下になるような前記補正値を算出できない場合、前記場合の前記複数の入力画像に替えて、新たな前記複数の入力画像に基づいて前記補正値を算出し、出力する
 ことを特徴とする付記2または3に記載の画像補正装置。
(Appendix 4)
When the correction value calculation unit cannot calculate the correction value such that the target energy value is equal to or less than a predetermined value, the correction value calculation unit replaces the plurality of input images in the case with the new input images. The image correction apparatus according to appendix 2 or 3, wherein a correction value is calculated and output.
 (付記5)
 補正値算出の契機を、前記補正値算出手段に通知する補正値算出契機発生手段を含む
 ことを特徴とする付記1乃至4のいずれか1項に記載の画像補正装置。
(Appendix 5)
The image correction apparatus according to any one of appendices 1 to 4, further comprising: a correction value calculation trigger generation unit that notifies the correction value calculation trigger to the correction value calculation unit.
 (付記6)
 前記複数の入力画像は、動画データの連続する複数のフレームの内の、任意の間隔の複数の前記フレームである
 ことを特徴とする付記1乃至5のいずれか1項に記載の画像補正装置。
(Appendix 6)
The image correction apparatus according to any one of appendices 1 to 5, wherein the plurality of input images are a plurality of frames at arbitrary intervals among a plurality of continuous frames of moving image data.
 (付記7)
 前記補正値算出手段が出力する前記補正値を利用して前記入力画像を補正し、出力する画像補正手段を含む
 ことを特徴とする付記1乃至6のいずれか1項に記載の画像補正装置。
(Appendix 7)
7. The image correction apparatus according to claim 1, further comprising an image correction unit that corrects and outputs the input image using the correction value output from the correction value calculation unit.
 (付記8)
 前記画像補正手段は、前記補正値算出手段から前記補正値が新たに出力されない場合、前記補正値算出手段から既に出力された前記補正値を利用して前記入力画像を補正する
 ことを特徴とする付記7記載の画像補正装置。
(Appendix 8)
The image correction unit corrects the input image using the correction value already output from the correction value calculation unit when the correction value is not newly output from the correction value calculation unit. The image correction apparatus according to appendix 7.
 (付記9)
 付記1乃至8のいずれか1項に記載の画像補正装置を備える
 撮像装置。
(Appendix 9)
An imaging apparatus comprising the image correction apparatus according to any one of appendices 1 to 8.
 (付記10)
 被撮像体の放射に対応する被撮像体画像に、複数の撮像素子の感度のばらつきに対応するノイズ画像を重ねられた、複数の入力画像に基づいて、前記複数の入力画像に共通して存在する高周波成分であると仮定したオフセットに対応する補正値を算出し、出力する
 画像補正方法。
(Appendix 10)
Common to the plurality of input images based on a plurality of input images in which noise images corresponding to variations in sensitivity of the plurality of image sensors are superimposed on the image of the body to be captured corresponding to the radiation of the body to be imaged An image correction method that calculates and outputs a correction value corresponding to an offset assumed to be a high-frequency component.
 (付記11)
 前記入力画像のそれぞれと前記オフセットとの差分に存在する、高周波成分に対応するエネルギー値をターゲットエネルギー値とした場合に、前記ターゲットエネルギー値が最小となるような、前記補正値を算出する
ことを特徴とする付記10記載の画像補正方法。
(Appendix 11)
Calculating the correction value such that the target energy value is minimized when an energy value corresponding to a high frequency component existing in a difference between each of the input images and the offset is set as a target energy value; The image correction method according to supplementary note 10, which is a feature.
 (付記12)
 前記ターゲットエネルギー値は、前記エネルギー値に、周辺温度に対応する仮オフセットを前記オフセットから減算した場合の減算値に対応するエネルギー値を更に加算した場合の加算値である
 ことを特徴とする付記11記載の画像補正方法。
(Appendix 12)
The target energy value is an addition value obtained by further adding an energy value corresponding to a subtraction value when a temporary offset corresponding to an ambient temperature is subtracted from the offset to the energy value. The image correction method as described.
 (付記13)
 前記ターゲットエネルギー値が所定値以下になるような前記補正値を算出できない場合、前記場合の前記複数の入力画像に替えて、新たな前記複数の入力画像に基づいて前記補正値を算出し、出力する
 ことを特徴とする付記11または12に記載の画像補正方法。
(Appendix 13)
When the correction value such that the target energy value is equal to or less than a predetermined value cannot be calculated, the correction value is calculated based on the plurality of new input images instead of the plurality of input images in the case, and output. The image correction method according to appendix 11 or 12, characterized in that:
 (付記14)
 補正値算出の契機を発生し、前記契機に基づいて前記補正値を算出する
 ことを特徴とする付記10乃至13のいずれか1項に記載の画像補正方法。
(Appendix 14)
The image correction method according to any one of appendices 10 to 13, wherein an opportunity for calculating a correction value is generated and the correction value is calculated based on the opportunity.
(Appendix 15)
The image correction method according to any one of appendices 10 to 14, wherein the plurality of input images are a plurality of frames taken at arbitrary intervals from among consecutive frames of moving image data.
(Appendix 16)
The image correction method according to any one of appendices 10 to 15, wherein the input image is corrected using the correction value and the corrected image is output.
(Appendix 17)
The image correction method according to appendix 16, wherein, when the correction value cannot be newly calculated, the input image is corrected using the already calculated correction value.
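Appendices 16 and 17 amount to applying the current correction value to each input image and, when no new value has been produced, reusing the most recently calculated one. A minimal sketch, assuming the correction is applied by simple addition:

```python
class ImageCorrector:
    def __init__(self):
        self.last_correction = None          # most recently calculated correction value

    def update(self, correction):
        if correction is not None:           # a correction value was newly calculated
            self.last_correction = correction

    def correct(self, frame):
        if self.last_correction is None:
            return frame                     # nothing calculated yet: pass through
        return frame + self.last_correction  # reuse the last value otherwise
```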
(Appendix 18)
A program that causes a computer to execute a process of calculating and outputting, based on a plurality of input images in each of which a noise image corresponding to variations in sensitivity among a plurality of imaging elements is superimposed on an imaged-object image corresponding to radiation from the imaged object, a correction value corresponding to an offset assumed to be a high-frequency component present in common in the plurality of input images.
(Appendix 19)
The program according to appendix 18, wherein the process of calculating the correction value calculates the correction value such that, with the energy value corresponding to the high-frequency component present in the difference between each of the input images and the offset taken as a target energy value, the target energy value is minimized.
(Appendix 20)
The program according to appendix 19, wherein the target energy value is a sum obtained by further adding, to the energy value, an energy value corresponding to the value obtained by subtracting, from the offset, a temporary offset corresponding to the ambient temperature.
(Appendix 21)
The program according to appendix 19 or 20, wherein, when a correction value that brings the target energy value to or below a predetermined value cannot be calculated, the process of calculating the correction value calculates and outputs the correction value based on a new plurality of input images in place of the plurality of input images used in that case.
(Appendix 22)
The program according to any one of appendices 18 to 21, further causing the computer to execute a process of generating a trigger for correction value calculation, wherein the computer executes the process of calculating the correction value based on the trigger.
(Appendix 23)
The program according to any one of appendices 18 to 22, wherein the plurality of input images are a plurality of frames taken at arbitrary intervals from among consecutive frames of moving image data.
(Appendix 24)
The program according to any one of appendices 18 to 23, further causing the computer to execute a process of correcting the input image using the correction value and outputting the corrected image.
(Appendix 25)
The program according to appendix 24, causing the computer to execute, when the correction value cannot be newly calculated, a process of correcting the input image using the already calculated correction value.
(Appendix 26)
An image correction apparatus comprising: a processor; and a storage unit that holds instructions executed by the processor in order for the processor to operate as a correction value calculation unit, wherein the correction value calculation unit calculates and outputs, based on a plurality of input images in each of which a noise image corresponding to variations in sensitivity among a plurality of imaging elements is superimposed on an imaged-object image corresponding to radiation from the imaged object, a correction value corresponding to an offset assumed to be a high-frequency component present in common in the plurality of input images.
The present invention has been described above with reference to the embodiments; however, the present invention is not limited to these embodiments. Various changes that those skilled in the art can understand may be made to the configuration and details of the present invention within the scope of the present invention.
This application claims priority based on Japanese Patent Application No. 2012-251145 filed on November 15, 2012, the entire disclosure of which is incorporated herein.
The present invention is applicable to an apparatus that receives and processes information from a group of sensors in which each sensor generates its own noise due to variations in sensitivity among the sensors.
DESCRIPTION OF REFERENCE SIGNS
100 Image correction apparatus
101 Image correction apparatus
102 Infrared imaging apparatus
105 Infrared imaging system
110 Correction value calculation unit
120 Correction processing unit
200 Image correction apparatus
201 Image correction apparatus
210 Correction value calculation unit
220 Correction processing unit
300 Image correction apparatus
330 Correction value calculation trigger generation unit
400 Imaging unit
410 Lens
420 Infrared imaging element unit
430 Amplifier circuit
440 A/D conversion circuit
450 Image recording unit
460 Communication circuit
504 Imaging unit
505 Image processing unit
510 Communication unit
700 Computer
701 CPU
702 Storage unit
703 Storage device
704 Input unit
705 Output unit
706 Communication unit
707 Recording medium

Claims (11)

  1.  An image correction apparatus comprising correction value calculation means that calculates and outputs, based on a plurality of input images in each of which a noise image corresponding to variations in sensitivity among a plurality of imaging elements is superimposed on an imaged-object image corresponding to radiation from the imaged object, a correction value corresponding to an offset assumed to be a high-frequency component present in common in the plurality of input images.
  2.  The image correction apparatus according to claim 1, wherein, with the energy value corresponding to the high-frequency component present in the difference between each of the input images and the offset taken as a target energy value, the correction value calculation means calculates the correction value such that the target energy value is minimized.
  3.  The image correction apparatus according to claim 2, wherein the target energy value is a sum obtained by further adding, to the energy value, an energy value corresponding to the value obtained by subtracting, from the offset, a temporary offset corresponding to the ambient temperature.
  4.  The image correction apparatus according to claim 2 or 3, wherein, when the correction value calculation means cannot calculate a correction value that brings the target energy value to or below a predetermined value, the correction value calculation means calculates and outputs the correction value based on a new plurality of input images in place of the plurality of input images used in that case.
  5.  The image correction apparatus according to any one of claims 1 to 4, further comprising correction value calculation trigger generation means that notifies the correction value calculation means of a trigger for correction value calculation.
  6.  The image correction apparatus according to any one of claims 1 to 5, wherein the plurality of input images are a plurality of frames taken at arbitrary intervals from among consecutive frames of moving image data.
  7.  The image correction apparatus according to any one of claims 1 to 6, further comprising image correction means that corrects the input image using the correction value output by the correction value calculation means and outputs the corrected image.
  8.  The image correction apparatus according to claim 7, wherein, when no new correction value is output from the correction value calculation means, the image correction means corrects the input image using the correction value already output from the correction value calculation means.
  9.  An imaging apparatus comprising the image correction apparatus according to any one of claims 1 to 8.
  10.  An image correction method comprising calculating and outputting, based on a plurality of input images in each of which a noise image corresponding to variations in sensitivity among a plurality of imaging elements is superimposed on an imaged-object image corresponding to radiation from the imaged object, a correction value corresponding to an offset assumed to be a high-frequency component present in common in the plurality of input images.
  11.  A computer-readable non-transitory recording medium storing a program that causes a computer to execute a process of calculating and outputting, based on a plurality of input images in each of which a noise image corresponding to variations in sensitivity among a plurality of imaging elements is superimposed on an imaged-object image corresponding to radiation from the imaged object, a correction value corresponding to an offset assumed to be a high-frequency component present in common in the plurality of input images.
PCT/JP2013/006595 2012-11-15 2013-11-08 Image correction device and image correction method WO2014076915A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2012251145 2012-11-15
JP2012-251145 2012-11-15

Publications (1)

Publication Number Publication Date
WO2014076915A1 true WO2014076915A1 (en) 2014-05-22

Family

ID=50730850

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2013/006595 WO2014076915A1 (en) 2012-11-15 2013-11-08 Image correction device and image correction method

Country Status (1)

Country Link
WO (1) WO2014076915A1 (en)

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2000125206A (en) * 1998-10-16 2000-04-28 Nec Corp Fixed pattern noise correcting device and its correcting method
JP2006140982A (en) * 2004-10-14 2006-06-01 Nissan Motor Co Ltd Vehicle-mounted image-processing device and method


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 13854357

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 13854357

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP