WO2016185699A1 - Infrared imaging device - Google Patents
Infrared imaging device
- Publication number
- WO2016185699A1 — PCT/JP2016/002352 (JP2016002352W)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- region
- pixel
- infrared
- value
- signal
- Prior art date
Classifications
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N25/00—Circuitry of solid-state image sensors [SSIS]; Control thereof
- H04N25/60—Noise processing, e.g. detecting, correcting, reducing or removing noise
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J1/00—Photometry, e.g. photographic exposure meter
- G01J1/02—Details
- G—PHYSICS
- G01—MEASURING; TESTING
- G01J—MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
- G01J1/00—Photometry, e.g. photographic exposure meter
- G01J1/42—Photometry, e.g. photographic exposure meter using electric radiation detectors
- G01J1/44—Electric circuits
- H—ELECTRICITY
- H04—ELECTRIC COMMUNICATION TECHNIQUE
- H04N—PICTORIAL COMMUNICATION, e.g. TELEVISION
- H04N23/00—Cameras or camera modules comprising electronic image sensors; Control thereof
- H04N23/20—Cameras or camera modules comprising electronic image sensors; Control thereof for generating image signals from infrared radiation only
Definitions
- the present invention relates to an imaging device that captures an infrared image, and more particularly to an infrared imaging device that corrects fluctuations in the value of a pixel signal output from an infrared sensor.
- An infrared imaging device that captures an infrared image by detecting infrared rays emitted from a subject, such as an object or a person, with an infrared sensor is known. It is generally known that any subject whose temperature is above absolute zero emits infrared light, and that the higher the temperature of the subject, the more strongly it emits and the shorter the wavelengths at which it emits, while the lower the temperature, the weaker the emission and the longer the wavelengths. When a subject is imaged by an infrared imaging device, high-temperature parts of the captured image are displayed white and low-temperature parts black. However, when a temperature change occurs in the vicinity of the infrared imaging device, the detection signal of the infrared sensor changes with that temperature change, and noise appears in the captured subject image.
- In Patent Document 1, among the detection region in which pixels serving as thermal infrared imaging elements are arranged two-dimensionally, pixels in a region where infrared rays are incident serve as effective pixels, and pixels in a region where infrared rays are not incident serve as reference pixels.
- An infrared imaging device is disclosed in which, when a temperature rise occurs in the region where the reference pixels are located, the fluctuation of the detection signal of the infrared sensor due to the temperature rise is suppressed by reducing the bias current flowing to the effective pixels.
- In Patent Document 1, a ridge covering the periphery of the detection region, that is, an infrared shielding body, is provided inside the infrared sensor, and this infrared shielding body blocks infrared light incident on the infrared sensor, thereby creating a region where no infrared light is incident.
- There is also disclosed an infrared sensor in which an infrared shielding body is provided in the housing accommodating the infrared sensor so that infrared rays from the subject do not enter some of the pixels, which serve as reference pixels.
- Using the reference pixels as a reference, the electrical signals from the other pixels are read so as to cancel variations in the infrared rays radiated from the housing and to detect the infrared rays incident from the subject.
- In an infrared imaging device, both the infrared rays emitted from the subject and the infrared rays emitted from the infrared imaging device main body are incident on the infrared sensor.
- When the temperature of the infrared imaging device main body rises, the amount of infrared rays emitted from the main body increases, so the entire captured image becomes whitish.
- When the temperature rise is localized, for example on the right side of the main body, the captured infrared image becomes whitish mainly on the right side. It is therefore desirable to correct the pixel signals output from the infrared sensor so as to reduce the fluctuation contributed by infrared rays emitted from the infrared imaging device main body.
- FIG. 20 is a diagram illustrating an example of the amount of infrared rays in an infrared sensor provided with the infrared shielding body 99.
- The effective area where infrared rays from the imaging optical system 2 are incident is indicated by A, and the reference area where infrared rays from the imaging optical system 2 are not incident is indicated by B.
- The incident amount of infrared rays unrelated to the infrared rays from the imaging optical system 2 differs between area A and area B. For this reason, when the reference pixels in reference area B are used, it is difficult to accurately perform a correction that reduces the contribution of infrared rays unrelated to the infrared rays from the imaging optical system 2.
- The present invention has been made in view of such problems, and aims to provide an infrared imaging device capable of accurately correcting fluctuations, caused by temperature changes, in the values of the pixel signals output from an infrared sensor, without increasing the number of components.
- An infrared imaging device of the present invention includes an imaging optical system that forms an infrared image, and an infrared sensor that has a detection region located on the imaging surface of the imaging optical system, in which a plurality of pixels serving as thermoelectric conversion elements are arranged, and that outputs, for each pixel, a pixel signal based on the infrared rays incident from the imaging optical system.
- In the infrared sensor, the optical axis of the imaging optical system coincides with the center of the detection region, and in one direction passing through the center the length of the detection region is made larger than the length of the imaging region of the imaging optical system.
- The detection region is thereby provided with an effective region, where the detection region and the imaging region overlap, and a reference region, where they do not. The device further includes a signal correction unit that corrects changes in the values of the pixel signals due to a temperature change by correcting the value of the pixel signal of each effective pixel, that is, a pixel in the effective region, using the values of the pixel signals of the reference pixels, that is, the pixels in the reference region.
- Here, "infrared rays" includes all of near-infrared, mid-infrared, and far-infrared rays.
- the “effective region where the detection region and the imaging region overlap” means a region where infrared rays incident from the imaging optical system reach in the detection region.
- the “reference region where the detection region and the imaging region do not overlap” means a region where infrared rays incident from the imaging optical system do not reach in the detection region.
- the “reference area” means an area that is not an effective area among the detection areas.
- The signal correction unit can perform the offset correction by subtracting the average value of the pixel signal values of the reference pixels from the value of the pixel signal of each effective pixel.
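As a minimal sketch of this offset correction (plain Python; the function name and the flat-list pixel layout are illustrative assumptions, not taken from the patent):

```python
def offset_correct(effective, reference):
    # Offset correction: subtract the mean of the reference-pixel
    # values from each effective-pixel value.
    mean_ref = sum(reference) / len(reference)
    return [value - mean_ref for value in effective]

# Example: a uniform drift of 12 counts seen by the reference pixels
# is removed from the effective pixels.
corrected = offset_correct([100, 102, 98], [10, 12, 14])
```

If the main body warms up uniformly, the reference pixels see the same drift as the effective pixels, so the subtraction cancels it.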
- In the infrared imaging device of the present invention, two or more reference regions may be provided in the detection region, and the signal correction unit may perform offset correction by subtracting the average value of the pixel signal values of the reference pixels in at least one of the two or more reference regions from the value of the pixel signal of each effective pixel.
- In the infrared imaging device of the present invention, the detection region may be rectangular, with reference regions provided at the four corners of the detection region, and the signal correction unit may perform offset correction by subtracting the average value of the pixel signal values of the reference pixels in at least one of the four corner reference regions from the value of the pixel signal of each effective pixel.
- In the infrared imaging device of the present invention, two or more reference regions may be provided in the detection region, and the signal correction unit may calculate the average value of the pixel signals of the reference pixels in at least two of the reference regions and perform shading correction on the values of the pixel signals of the effective pixels using the at least two calculated average values.
- Here, shading correction means correction that reduces non-uniformity of the incident infrared rays arising on the imaging surface of the infrared sensor, such as the decrease in the amount of infrared rays at the periphery of the imaging region caused by the imaging optical system, non-uniformity of the infrared rays generated from the circuit board when it is energized, and non-uniformity of the infrared rays caused by uneven external heat from the optical system or the infrared imaging device main body.
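The patent leaves the interpolation method open; as one hedged sketch, the corner reference averages can be bilinearly interpolated across the image and the interpolated offset subtracted per pixel (plain Python; the function name, argument order, and flat-list layout are assumptions):

```python
def shading_correct(pixels, width, height, m_tl, m_tr, m_bl, m_br):
    # Bilinearly interpolate the four corner reference averages
    # (top-left, top-right, bottom-left, bottom-right) over a
    # width x height image and subtract the interpolated offset
    # from each pixel value.
    corrected = []
    for y in range(height):
        fy = y / (height - 1) if height > 1 else 0.0
        for x in range(width):
            fx = x / (width - 1) if width > 1 else 0.0
            top = m_tl * (1 - fx) + m_tr * fx
            bottom = m_bl * (1 - fx) + m_br * fx
            corrected.append(pixels[y * width + x] - (top * (1 - fy) + bottom * fy))
    return corrected
```

A pure gradient across the frame (e.g. one side of the housing hotter than the other) is flattened to zero by this correction, while subject detail riding on top of the gradient is preserved.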
- In the infrared imaging device of the present invention, the detection region may be rectangular, with reference regions provided at the four corners, and the signal correction unit may calculate the average value of the pixel signal values of the reference pixels for at least two of the four corner reference regions and perform shading correction on the values of the pixel signals of the effective pixels using the calculated average values.
- In the infrared imaging device of the present invention, a reference region may be provided at the edge of the detection region, with a frame-shaped reference region portion, in which a plurality of reference pixels are arranged, divided into a plurality of regions. The signal correction unit calculates, for each of the divided regions, the average value of the pixel signal values of the reference pixels located in that region, and may perform shading correction on the values of the pixel signals of the effective pixels using each of the calculated average values.
- In the infrared imaging device of the present invention, at least one temperature sensor may be provided in the infrared imaging device main body, and the signal correction unit can perform a further offset correction by applying, to the value of the pixel signal of each effective pixel, a value corresponding to the value of the output signal from the temperature sensor.
- Here, the “value corresponding to the value of the output signal from the temperature sensor” is a value set in advance for each type of infrared imaging device; for example, a table of values indexed by the value of the output signal is created in advance, and a value taken from this table can be used.
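A hedged sketch of such a table-based correction (the table entries and sensor scale below are invented placeholders; a real device would use values calibrated in advance for its device type):

```python
from bisect import bisect_right

# Hypothetical pre-built table: (temperature-sensor output, offset value).
CORRECTION_TABLE = [(0, 0.0), (100, 1.5), (200, 3.2), (300, 5.0)]

def lookup_offset(sensor_value):
    # Pick the entry with the largest key not exceeding sensor_value.
    keys = [key for key, _ in CORRECTION_TABLE]
    index = bisect_right(keys, sensor_value) - 1
    return CORRECTION_TABLE[max(index, 0)][1]
```

The looked-up value would then be applied to each effective-pixel value as the further offset correction.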
- In the infrared imaging device of the present invention, it is preferable to provide a temperature sensor inside the main body of the infrared imaging device.
- It is also preferable to provide a temperature sensor at a position facing the imaging optical system.
- According to the present invention, the optical axis of the imaging optical system coincides with the center of the detection region of the infrared sensor, in which a plurality of pixels serving as thermoelectric conversion elements are arranged, and in one direction passing through the center the length of the detection region is made larger than the length of the imaging region of the imaging optical system; as a result, within the detection region there exist an effective region where the detection region and the imaging region overlap and a reference region where they do not overlap.
- 1 is a schematic cross-sectional view illustrating a configuration of an infrared imaging device according to an embodiment of the present invention.
- 1 is a schematic block diagram illustrating a configuration of an infrared imaging device according to an embodiment of the present invention.
- Diagram showing an example of a second detection region of an infrared sensor
- Diagram showing an example of a fourth detection region of an infrared sensor
- Diagram explaining the correction method when the reference region is frame-shaped
- Diagram explaining how the correction values for shading correction are calculated when the reference region is frame-shaped
- Diagram showing an example of the correction values for shading correction when the reference region is frame-shaped
- Flowchart of the third correction process of the signal correction unit
- Diagram showing the relationship between the value of the output signal from the temperature sensor and the value calculated according to it
- Flowchart of the fourth correction process of the signal correction unit
- Schematic cross-sectional view illustrating the configuration of a conventional infrared imaging device
- FIG. 1 is a schematic cross-sectional view illustrating the configuration of an infrared imaging device according to an embodiment of the present invention.
- FIG. 2 is a schematic block diagram illustrating the configuration of an infrared imaging device 1 according to an embodiment of the present invention.
- The infrared imaging device 1 according to the present embodiment includes an infrared imaging device main body 12 composed of a first main body portion 10 and a second main body portion 11; an imaging optical system 2, installed in the first main body portion 10, capable of forming an image of the infrared rays emitted from a subject on the imaging plane 30; and an infrared sensor 3, installed in the second main body portion 11 and located on the imaging plane 30 of the imaging optical system 2, which has a detection region 31 in which a plurality of pixels serving as thermoelectric conversion elements are arranged and which outputs, for each pixel, a pixel signal based on the infrared rays incident from the imaging optical system 2.
- The infrared imaging device main body 12 is made of a metal material such as aluminum or stainless steel, or a resin material such as plastic; the infrared imaging device main body 12 of this embodiment is made of stainless steel. The internal structure of the infrared imaging device main body 12 will be described later in detail.
- the imaging optical system 2 is a lens group composed of one or more lenses.
- the lenses are held by a holding frame, and the holding frame is fixed to the first main body 10.
- the imaging optical system 2 is described as a fixed focus optical system, but the present invention is not limited to this, and may be a variable focus optical system.
- the infrared sensor 3 is driven by a drive control unit (not shown), takes a subject image formed in the detection area 31 as an infrared image, converts it into a pixel signal, and outputs it.
- the infrared sensor 3 outputs a pixel signal by sequentially transferring charges accumulated in each pixel and converting them into an electrical signal.
- FIG. 3 shows a diagram illustrating a first embodiment of the detection region 31 of the infrared sensor 3. It is assumed that the region C of the detection region 31 of the infrared sensor 3 of the present embodiment has a rectangular shape, and the imaging region D of the imaging optical system 2 has a circular shape.
- The optical axis of the imaging optical system 2, that is, the center O of the imaging region D, is made to coincide with the center 31o of the region C of the detection region 31 of the infrared sensor 3. A straight line passing through the centers O and 31o is defined as a straight line d, and the length of the region C of the detection region 31 is configured to be greater than the length of the imaging region D on the straight line d.
- In this embodiment, the straight line d is a diagonal of the region C of the detection region 31.
- The region where infrared light that has passed through the imaging optical system 2 is incident, that is, the region where the region C of the detection region 31 and the imaging region D of the imaging optical system 2 overlap, is the effective region AR, and the region where the transmitted infrared light does not enter, that is, where region C and imaging region D do not overlap, is the reference region BR.
- In FIG. 3, the length of the region C of the detection region 31 is larger than the length of the imaging region D also on the straight line orthogonal to the straight line d, so reference regions BR are formed at the four corners of the region C.
- A plurality of pixels serving as thermoelectric conversion elements are arranged in the effective region AR and the reference region BR; pixels located in the effective region AR serve as effective pixels, and pixels located in the reference region BR serve as reference pixels.
- The largest rectangular area within the effective region AR, in which effective pixels are arranged in a matrix, is set as the use effective area A; the pixel data output from the effective pixels in the use effective area A form the infrared image.
- A signal correction unit 61, described later, performs infrared image signal correction processing using the pixel data output from the reference pixels in the reference regions B1 to B4.
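The size of such a use effective area follows from elementary geometry: the largest rectangle of a given aspect ratio inscribed in the circular imaging region has the circle's diameter as its diagonal. A sketch of this sizing (plain Python; the function and parameter names are illustrative, not from the patent):

```python
from math import hypot

def use_effective_area(radius, aspect_w, aspect_h):
    # Largest axis-aligned rectangle of aspect ratio aspect_w:aspect_h
    # inscribed in a circle of the given radius: its diagonal equals
    # the circle's diameter, 2 * radius.
    diagonal = hypot(aspect_w, aspect_h)
    width = 2 * radius * aspect_w / diagonal
    height = 2 * radius * aspect_h / diagonal
    return width, height

# Example: a 4:3 use effective area in an imaging circle of radius 5
# has width 8 and height 6 (a 6-8-10 right triangle).
w, h = use_effective_area(5, 4, 3)
```

Any detector area outside the imaging circle, such as the four corners here, then remains available as reference regions.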
- Because no infrared shielding body needs to be built into the sensor, the infrared sensor 3 can be manufactured easily and at low cost. It is also advantageous that the reference regions B1 to B4 are provided in the area excluding the imaging region D while suitably securing their positions and areas relative to the effective region AR. For example, compared with the case shown in FIG. 19, in which the imaging region D covers the entire region C of the detection region 31, that is, in which all the pixels in region C become effective pixels, the lens of the imaging optical system 2 can be made smaller, so the cost of the imaging optical system 2 can be reduced.
- the infrared sensor 3 is a rectangular sensor, and the four corners of the infrared sensor 3 are reference areas BR that are areas that do not overlap with the imaging area D.
- the reference pixel used for the infrared image signal correction process can be arbitrarily selected from the reference pixels in the reference region BR. That is, the region formed by the reference pixels to be used can be selectively provided in an arbitrary shape at an arbitrary position in the reference region BR.
- the position information of the effective pixel and the reference pixel is stored in the storage unit 7 and is referred to by the signal correction unit 61 as appropriate.
- A use effective area A and a reference area B having shapes different from those of the above-described embodiment may also be provided by appropriately changing their sizes.
- FIGS. 4 to 6 each show an example of another embodiment of the detection region 31 of the infrared sensor 3. FIGS. 4 to 6 are schematic diagrams for explaining the configuration of the detection region 31, and the size of each region differs from the actual one.
- In the second detection region 31 of the infrared sensor 3, the optical axis of the imaging optical system 2, that is, the center O of the imaging region D, coincides with the center 31o of the region C of the detection region 31. The vertical line passing through the centers O and 31o and connecting the upper and lower sides of the detection region 31 is defined as a straight line d, and the length of the region C of the detection region 31 is configured to be larger than the length of the imaging region D on the straight line d.
- The region where infrared rays that have passed through the imaging optical system 2 are incident, that is, the region where the region C of the detection region 31 and the imaging region D of the imaging optical system overlap, is the effective region AR, and the outer portion surrounding the imaging region D, where infrared rays that have passed through the imaging optical system 2 do not enter, that is, where region C and imaging region D do not overlap, is the reference region BR.
- As in the above-described detection region 31, the largest rectangular area within the effective region AR, in which effective pixels are arranged in a matrix, is the use effective area A; the pixel data output from the effective pixels in the use effective area A form the infrared image.
- A signal correction unit 61, described later, takes as reference regions B1 and B2 the regions bounded by tangents to the imaging region D and containing the upper and lower edges of the detection region 31, respectively, and performs infrared image signal correction processing using the pixel data output from the reference pixels in the reference regions B1 and B2.
- In the third detection region 31 of the infrared sensor 3, the optical axis of the imaging optical system 2, that is, the center O of the imaging region D, coincides with the center 31o of the region C of the detection region 31. The horizontal line passing through the centers O and 31o and connecting the left and right sides of the detection region 31 is defined as a straight line d, and the length of the region C of the detection region 31 is configured to be larger than the length of the imaging region D on the straight line d. Accordingly, in the detection region 31 of FIG. 5,
- the region where infrared rays that have passed through the imaging optical system 2 are incident, that is, the region where the region C of the detection region 31 and the imaging region D of the imaging optical system overlap, is the effective region AR, and the left and right regions adjacent to the imaging region D, where infrared rays that have passed through the imaging optical system 2 are not incident, that is, where region C and imaging region D do not overlap, form the reference region BR.
- As in the above-described detection regions 31, the largest rectangular area within the effective region AR, in which effective pixels are arranged in a matrix, is the use effective area A; the pixel data output from the effective pixels in the use effective area A form the infrared image.
- A signal correction unit 61, described later, takes as reference regions B3 and B4 the regions bounded by tangents to the imaging region D and containing the left and right edges of the detection region 31, respectively, and performs infrared image signal correction processing using the pixel data output from the reference pixels in the reference regions B3 and B4.
- The third detection region 31 is configured so that the length of region C is larger than the length of the imaging region D on the straight line d, that is, in the horizontal direction of the detection region 31, but not in the vertical direction; in the vertical direction, the length of region C is shorter than the length of the imaging region D. Therefore, when the second detection region 31 and the third detection region 31 are the same size, the third detection region 31 has a wider overlap between region C and the imaging region D, that is, a wider effective region AR, so its use effective area A can be made wider.
- In the fourth detection region 31 of the infrared sensor 3, as shown in FIG. 6, the optical axis of the imaging optical system 2, that is, the center O of the imaging region D, coincides with the center 31o of the region C of the detection region 31. The vertical line passing through the centers O and 31o and connecting the upper and lower sides of the detection region 31 is defined as a straight line d1, and the horizontal line passing through the centers O and 31o and connecting the left and right sides is defined as a straight line d2; the length of the region C of the detection region 31 is configured to be larger than the length of the imaging region D on both straight lines d1 and d2.
- The region where infrared rays that have passed through the imaging optical system 2 are incident, that is, the region where the region C of the detection region 31 and the imaging region D of the imaging optical system overlap, is the effective region AR, and the outer peripheral portion surrounding the imaging region D, where infrared rays that have passed through the imaging optical system 2 do not enter, that is, where region C and imaging region D do not overlap, is the reference region BR.
- In the fourth detection region 31 as well, as in the detection regions 31 of the above-described embodiments, the largest rectangular area within the effective region AR, in which effective pixels are arranged in a matrix, is the use effective area A; the pixel data output from the effective pixels in the use effective area A form the infrared image.
- Within the reference region BR, the portions bounded by tangents to the imaging region D and containing the upper and lower edges of the detection region 31 define the width of a frame-shaped region of uniform width surrounding the region C of the detection region 31; this frame-shaped region is used as the reference region B, and a signal correction unit 61, described later, performs infrared image signal correction processing using the pixel data output from the reference pixels in the reference region B.
- The effective pixels in the use effective area A and the reference pixels in the reference region B are infrared detection elements capable of detecting infrared rays (wavelengths of 0.7 μm to 1 mm), and in this embodiment are infrared detection elements capable of detecting far-infrared rays (wavelengths of 8 μm to 15 μm).
- a microbolometer-type or SOI (Silicon-on-Insulator) diode-type infrared detection element can be used as the infrared detection element used as the effective pixel and the reference pixel.
- The infrared imaging device 1 includes the imaging optical system 2; the infrared sensor 3; an analog signal processing unit 4 that performs various kinds of analog signal processing, such as amplifying the output signal from the infrared sensor 3; an A/D conversion (analog-to-digital conversion) unit 5 that converts the analog image signal processed by the analog signal processing unit 4 into digital image data; a digital signal processing unit 6 that performs various kinds of signal processing on the digital image data converted by the A/D conversion unit 5; a storage unit 7 that stores information associated with the various kinds of digital signal processing; and an output unit 8 that outputs the digitally processed infrared image to the storage unit 7 or to a display unit (not shown).
- The storage unit 7 stores, as necessary, various kinds of information used in the digital signal processing unit 6, infrared images subjected to the various kinds of digital signal processing, and the like, and includes a volatile memory such as a DRAM (Dynamic Random Access Memory) and a non-volatile memory such as a flash memory.
- In this embodiment, the storage unit 7 is provided separately from the digital signal processing unit 6, but the present invention is not limited to this; the storage unit 7 may be provided within the digital signal processing unit 6.
- the output unit 8 outputs an infrared image subjected to various digital signal processing to the storage unit 7, a display unit (not shown), or a storage unit outside the apparatus by wireless or wired communication.
- the digital signal processing unit 6 includes a signal correction unit 61 that performs correction processing on the value of the pixel signal output from the infrared sensor 3. Normally, when the temperature of the infrared imaging device main body 12 rises, the amount of infrared radiation emitted from the infrared imaging device main body 12 increases, and the entire captured infrared image becomes whitish.
- The signal correction unit 61 of the present embodiment therefore corrects the values of the pixel signals of the effective pixels, that is, the pixels in the use effective area A, using the values of the pixel signals of the reference pixels in the reference region B, in order to offset the component of the pixel signals output from the infrared sensor 3 that is attributable to infrared rays emitted from the infrared imaging device main body 12.
- FIG. 7 shows a flowchart of a series of processes of the infrared imaging apparatus 1 of the present embodiment.
- In the infrared imaging device 1 of the present embodiment, a subject is first imaged (S1), and the infrared sensor 3 outputs, for each pixel, an analog pixel signal based on the infrared rays incident from the imaging optical system 2.
- the analog signal processing unit 4 performs various analog signal processing such as amplifying the detection signal from the infrared sensor 3, and the A / D conversion unit 5 converts the processed analog image signal into digital image data.
- the digital signal processing unit 6 performs various signal processing on the digital image data converted by the A / D conversion unit 5, and stores the image data subjected to the various signal processing in the storage unit 7 (S2).
- FIG. 8 is a flowchart of the first correction process of the signal correction unit 61 of this embodiment.
- the signal correction unit 61 calculates an average value of pixel signal values of reference pixels that are pixels in the reference region B (S11).
- when the reference regions B are formed at the four corners of the detection region C as shown in FIG. 3, the average value M of all the pixel signal values of the reference pixels in the reference regions B1 to B4 at the four corners is calculated.
- if the number of reference pixels in the upper left reference region B1 is i, the number of reference pixels in the upper right reference region B2 is j, the number of reference pixels in the lower left reference region B3 is k, and the number of reference pixels in the lower right reference region B4 is m, the average value M can be calculated by the following equation (1).
- M = (B1 1 + B1 2 + … + B1 i + B2 1 + B2 2 + … + B2 j + B3 1 + B3 2 + … + B3 k + B4 1 + B4 2 + … + B4 m ) / (i + j + k + m) (1)
- B1 1 , …, B1 i represent the pixel signal values of the respective reference pixels in the reference region B1, B2 1 , …, B2 j represent those in the reference region B2, B3 1 , …, B3 k represent those in the reference region B3, and B4 1 , …, B4 m represent those in the reference region B4.
- the average value M may be obtained by adding together the pixel signal values of all the reference pixels in the reference regions B1 to B4 and then dividing by the total number of pixels, as in the above equation (1). Alternatively, an average value may be calculated for each of the reference regions B1 to B4, and the value obtained by adding the four calculated average values and dividing by four may be used as the average value M. That is, the average value M may be calculated by the following equation (2): M = (M1 + M2 + M3 + M4) / 4 (2), where M1 to M4 are the average values of the pixel signal values of the reference pixels in the reference regions B1 to B4, respectively.
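The two averaging strategies above can be sketched as follows. This is an illustrative sketch, not code from the patent; the region arrays and their sizes are hypothetical, and numpy is assumed. Note that equations (1) and (2) generally give different results when the reference regions contain different numbers of pixels.

```python
import numpy as np

def average_all_pixels(regions):
    """Equation (1): pool every reference pixel, then divide once by the total count."""
    all_values = np.concatenate([r.ravel() for r in regions])
    return all_values.mean()

def average_of_averages(regions):
    """Equation (2): average each region first, then average the four results."""
    return np.mean([r.mean() for r in regions])

# Hypothetical corner reference regions with unequal sizes.
B1 = np.full((2, 2), 10.0)   # upper-left reference region
B2 = np.full((2, 2), 12.0)   # upper-right
B3 = np.full((2, 2), 14.0)   # lower-left
B4 = np.full((4, 4), 20.0)   # lower-right, deliberately larger
print(average_all_pixels([B1, B2, B3, B4]))   # each pixel weighted equally
print(average_of_averages([B1, B2, B3, B4]))  # each region weighted equally
```

With equal-sized regions the two formulas coincide; with the unequal sizes above they differ, which is why the patent presents them as alternatives.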
- when the reference regions B are provided at the upper and lower ends of the detection region C as shown in FIG. 4, the average value M of the pixel signal values of both the reference pixels in the reference region B1 at the upper end and the reference pixels in the reference region B2 at the lower end is calculated. If the number of reference pixels in the reference region B1 at the upper end is i and the number of reference pixels in the reference region B2 at the lower end is j, the average value M can be calculated by the following equation (3).
- M = (B1 1 + B1 2 + … + B1 i + B2 1 + B2 2 + … + B2 j ) / (i + j) (3)
- B1 1 , …, B1 i represent the pixel signal values of the respective reference pixels in the reference region B1 at the upper end, and B2 1 , …, B2 j represent those in the reference region B2 at the lower end.
- similarly, when the reference regions B are provided at the left and right ends of the detection region C, the average value M is calculated in the same manner, where B3 1 , …, B3 i represent the pixel signal values of the respective reference pixels in the reference region B3 at the left end, and B4 1 , …, B4 j represent those in the reference region B4 at the right end.
- when the reference region B is frame-shaped as shown in FIG. 6, the average value of the pixel signal values of the reference pixels in the frame-shaped reference region B is calculated. Here, B 1 , …, B i represent the pixel signal values of the respective reference pixels in the reference region B.
- next, the signal correction unit 61 performs offset correction by subtracting the calculated average value M from each of the pixel signal values of the effective pixels in the use effective area A (S12).
- the signal correction unit 61 stores the value of the pixel signal subjected to the correction process in the storage unit 7 (S4).
- the image data stored in the storage unit 7 is appropriately output by the output unit 8 to an external storage unit or a display unit (not shown).
- the corrected image data may be appropriately subjected to other necessary correction processing by the digital signal processing unit 6 of the infrared imaging device 1.
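The offset correction of steps S11 and S12 amounts to subtracting the reference-pixel average M from every effective-pixel value. A minimal sketch, assuming numpy arrays for the effective area A and reference region B (the array shapes and values are hypothetical):

```python
import numpy as np

def offset_correct(effective, reference):
    """Subtract the reference-region average M (S11) from each effective pixel (S12)."""
    M = reference.mean()   # average of the reference-pixel values
    return effective - M   # offset-corrected effective-area values

# Hypothetical data: effective pixels see subject + body radiation,
# reference pixels see only the body radiation (here, 100.0).
effective = np.array([[105.0, 110.0], [120.0, 115.0]])
reference = np.array([100.0, 100.0, 100.0, 100.0])
corrected = offset_correct(effective, reference)
print(corrected)  # the component attributable to the subject alone
```

Because the reference area BR receives only radiation from the imaging device body, subtracting M removes that common component from every effective pixel.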
- the reference region B can capture only the infrared radiation from the infrared imaging device body 12 at the same timing as the use effective region A captures the infrared radiation from the subject. Therefore, the average value M calculated above is a value resulting from the infrared rays incident from the infrared imaging device main body 12.
- in the conventional infrared imaging device 100 shown in FIG. 19, a region where infrared rays are not incident is created by blocking the infrared rays incident on the infrared sensor with an infrared shielding body, so an infrared shielding plate is required; simplification of the structure and cost savings are therefore desired. Moreover, in the conventional infrared imaging device 100, even if the temperature of the imaging device main body 12 rises, the temperature of the infrared shielding body 99 does not rise immediately, so infrared rays radiated from the infrared shielding body 99, whose temperature is lower than that of the imaging device main body 12, are added in the reference region B.
- as a result, the amount of incident infrared rays differs between the use effective area A and the reference area B, and the correction amount to be offset differs between them, so it is difficult to perform highly accurate correction by the offset correction in which the average value M is subtracted from each of the pixel signal values of the effective pixels in the use effective area A.
- in contrast, in the present embodiment, the optical axis O of the imaging optical system 2 and the center 31o of the detection region 31 of the infrared sensor 3 are made to coincide, and along the straight line d in one direction passing through the centers O and 31o, the length of the detection region C of the detection region 31 is made larger than the length of the imaging region D of the imaging optical system 2. The detection region 31 of the infrared sensor 3 is thereby provided with an effective area AR where the imaging region D overlaps the detection region C and a reference area BR where the imaging region D does not overlap. Therefore, even without using an additional component to avoid the incidence of infrared rays, an effective area AR on which infrared rays are incident and a reference area BR on which no infrared rays are incident can be formed in the detection region 31.
- since an additional component for blocking the incidence of infrared rays is unnecessary, the infrared sensor 3 can be manufactured simply and at low cost.
- moreover, the amount of infrared rays incident on each of the effective area AR and the reference area BR is not affected by a temperature difference between the infrared imaging apparatus main body 12 and an additional component for avoiding the incidence of infrared rays, so fluctuations in the signal values can be accurately corrected.
- when the offset correction is performed using the average value M of the pixel signal values of the reference pixels in the reference regions at the left and right ends, the influence of the temperature difference that occurs in the left-right direction of the imaging device main body 12 can be reduced. Further, when the offset correction is performed using the average value M of the pixel signal values of the reference pixels in the frame-like reference region B as shown in FIG. 6, the influence of the temperature difference generated around the infrared imaging device main body 12 can be reduced.
- in the present embodiment, the average value of the pixel signal values of all the reference pixels is used for the offset correction, but the present invention is not limited to this; an average value of the pixel signal values of only some of the reference pixels may be used. For example, the average value of the pixel signals of the reference pixels only in the reference region B1 at the upper end may be used, or the average value of the pixel signals of the reference pixels only in the reference region B2 at the lower end may be used. Likewise, the average value of the pixel signals of the reference pixels only in the reference region B3 at the left end may be used, or the average value of the pixel signals of the reference pixels only in the reference region B4 at the right end may be used; this can be changed as appropriate.
- FIG. 9 is a flowchart of the second correction process of the signal correction unit 61, FIG. 10 is a diagram for explaining the correction method when the reference regions are at the four corners, FIG. 11 is a diagram illustrating a method of calculating correction values for shading correction when the reference regions are at the four corners, FIG. 12 is a diagram for explaining the correction method when the reference region is frame-shaped, FIG. 13 is a diagram illustrating a method of calculating correction values for shading correction when the reference region is frame-shaped, and FIG. 14 is a diagram illustrating an example of correction values for shading correction when the reference region is frame-shaped.
- the signal correction unit 61 first calculates an average value of pixel signal values of reference pixels that are pixels in the reference region B (S21).
- when the reference regions B are provided at the four corners of the detection region C as shown in FIG. 3, the average value M1 of the pixel signal values of the reference pixels in the upper left reference region B1, the average value M2 in the upper right reference region B2, the average value M3 in the lower left reference region B3, and the average value M4 in the lower right reference region B4 are calculated.
- M1 = (B1 1 + B1 2 + … + B1 i ) / i (6-1)
- M2 = (B2 1 + B2 2 + … + B2 j ) / j (6-2)
- M3 = (B3 1 + B3 2 + … + B3 k ) / k (6-3)
- M4 = (B4 1 + B4 2 + … + B4 m ) / m (6-4)
- B1 1 , …, B1 i represent the pixel signal values of the respective reference pixels in the reference region B1, B2 1 , …, B2 j represent those in the reference region B2, B3 1 , …, B3 k represent those in the reference region B3, and B4 1 , …, B4 m represent those in the reference region B4.
- when the reference regions B are provided at the upper and lower ends of the detection region C as shown in FIG. 4, the average values M1 and M2 are calculated in the same manner, where B1 1 , …, B1 i represent the pixel signal values of the respective reference pixels in the reference region B1 at the upper end and B2 1 , …, B2 j represent those in the reference region B2 at the lower end.
- when the reference regions B are provided at the left and right ends of the detection region C, the average values M3 and M4 are calculated by the following equations:
- M3 = (B3 1 + B3 2 + … + B3 i ) / i (8-1)
- M4 = (B4 1 + B4 2 + … + B4 j ) / j (8-2)
- B3 1 , …, B3 i represent the pixel signal values of the respective reference pixels in the reference region B3 at the left end, and B4 1 , …, B4 j represent those in the reference region B4 at the right end.
- the frame-shaped reference area B is divided into a plurality of reference areas.
- the upper row and the lower row are each divided into five divided reference regions B11 to B15 and B51 to B55, and the remaining reference regions, that is, the left column and the right column excluding the divided reference regions B11 to B15 and B51 to B55, are divided into three divided reference regions B21 to B41 and B25 to B45, respectively, creating a total of 16 divided reference regions B11 to B55.
- the average values M12 to M55 of the remaining divided reference regions B12 to B55 can be calculated in the same manner as the average value M11 of the divided reference region B11.
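The division of the frame-shaped reference region into 16 divided reference regions and the computation of their averages M11 to M55 might be sketched as follows. The helper `divide_frame` and the edge arrays are hypothetical illustrations, not part of the patent; numpy is assumed.

```python
import numpy as np

def divide_frame(frame_top, frame_bottom, frame_left, frame_right):
    """Split the four edges of a frame-shaped reference region into 16 divided
    regions (5 along the top, 5 along the bottom, 3 on each side) and return
    the average pixel-signal value of each region."""
    top = np.array_split(frame_top, 5)        # B11 .. B15
    bottom = np.array_split(frame_bottom, 5)  # B51 .. B55
    left = np.array_split(frame_left, 3)      # B21, B31, B41
    right = np.array_split(frame_right, 3)    # B25, B35, B45
    regions = top + bottom + left + right     # 16 divided reference regions
    return [r.mean() for r in regions]        # averages M11 .. M55

# Hypothetical edge data: constant values per edge so the averages are obvious.
means = divide_frame(np.ones(10), np.full(10, 2.0), np.full(6, 3.0), np.full(6, 4.0))
print(len(means))  # 16 divided-region averages
```

The ordering of the returned averages here is one possible convention; the patent only requires that an average be available for each of the 16 divided regions.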
- the signal correction unit 61 performs shading correction on the value of the pixel signal of the effective pixel in the use effective area A (S22).
- when the reference regions B are provided at the four corners of the detection region C as shown in FIG. 3, the shading amount S is calculated using the average values M1 to M4 of the pixel signal values of the reference pixels in the four reference regions B1 to B4 calculated above.
- a method for calculating the shading amount S will be described in detail.
- the row direction, which is the direction in which the pixel columns of the use effective area A are arranged, is taken as the x direction, the column direction, which is the direction in which the pixel rows are arranged, is taken as the y direction, and the position of each effective pixel in the use effective area A is expressed in the xy coordinate system. The x coordinate of the endmost column on one end side in the row direction is x1, the x coordinate of the endmost column on the other end side is x2, the y coordinate of the endmost row on one end side in the column direction is y1, and the y coordinate of the endmost row on the other end side in the column direction is y2.
- the average values of the pixel signal values of the reference pixels in the reference regions B1 to B4 are set as Z11, Z12, Z21, and Z22, respectively, and the shading amount S at coordinates (x, y) is calculated as follows.
- S(x, y1) = a1 x + b1 (10-1)
- a1 is the inclination in the row direction along the endmost row y1 and is represented by the following equation (10-2):
- a1 = (Z12 - Z11) / (x2 - x1) (10-2)
- S(x, y2) = a2 x + b2 (11-1)
- a2 is the inclination in the row direction along the endmost row y2 and is represented by the following equation (11-2):
- a2 = (Z22 - Z21) / (x2 - x1) (11-2)
- in this way, the values of the shading amount S shown in FIG. 11 can be calculated. Then, the shading correction is performed by applying the calculated shading amount S to the value of the pixel signal of each effective pixel in the use effective area A.
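Equations (10-1) through (11-2), combined with interpolation between the two endmost rows, amount to bilinear interpolation of the shading amount from the four corner averages. A hedged sketch under that reading; the function name and the numeric test values are hypothetical:

```python
def shading_amount(x, y, x1, x2, y1, y2, Z11, Z12, Z21, Z22):
    """Bilinear interpolation of the shading amount S(x, y) from the four
    corner reference-region averages Z11 (upper left), Z12 (upper right),
    Z21 (lower left), Z22 (lower right)."""
    s_top = Z11 + (Z12 - Z11) * (x - x1) / (x2 - x1)     # along row y1, eq. (10)
    s_bottom = Z21 + (Z22 - Z21) * (x - x1) / (x2 - x1)  # along row y2, eq. (11)
    return s_top + (s_bottom - s_top) * (y - y1) / (y2 - y1)

# At a corner, S reduces to that corner's reference average.
print(shading_amount(0, 0, 0, 10, 0, 10, 1.0, 3.0, 5.0, 7.0))   # 1.0
# At the center, S is the mean of the four corner averages.
print(shading_amount(5, 5, 0, 10, 0, 10, 1.0, 3.0, 5.0, 7.0))   # 4.0
```

Shading correction then applies S(x, y) to each effective pixel's signal value, removing the smoothly varying component attributable to the temperature gradient of the device body.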
- when the reference regions B are provided at the upper and lower ends of the detection region C as shown in FIG. 4, linear interpolation is performed using the difference between the calculated average value M1 of the pixel signal values of the reference pixels in the reference region B1 at the upper end and the average value M2 of those in the reference region B2 at the lower end to calculate the shading amount S of each effective pixel in the use effective region A, and the shading correction is performed by applying the calculated shading amount S to the value of the pixel signal of each effective pixel in the use effective area A.
- the captured infrared image becomes generally whitish on the upper side, for example; that is, the density of the captured infrared image becomes uneven due to the temperature difference around the infrared imaging device main body 12.
- the shading amount S in the vertical direction of the image can be acquired in detail, the unevenness in the vertical direction of the infrared image can be accurately corrected.
- similarly, when the reference regions B are provided at the left and right ends, linear interpolation is performed using the difference between the calculated average value M3 of the pixel signal values of the reference pixels in the reference region B3 at the left end and the average value M4 of those in the reference region B4 at the right end to calculate the shading amount S of each effective pixel in the use effective region A, and the shading correction is performed by applying the calculated shading amount S to the value of the pixel signal of each effective pixel in the use effective area A.
- when the reference region B is frame-shaped, the shading amount S is calculated using the average values M11 to M55 of the pixel signal values of the reference pixels in the 16 divided reference regions B11 to B55 created above.
- the shading amount S can be calculated using, for example, pixel signal values of surrounding pixels.
- a method for calculating the shading amount S will be described in detail.
- the shading amount at the upper right effective pixel Z24 in the use effective area A can be calculated by the following equation (14).
- the use effective area A is divided into four areas at the center in the horizontal and vertical directions. For an effective pixel T1 located in the upper left area with respect to the center of the use effective area A, the average values M or effective pixel values of the divided reference regions B adjacent above and to the left of the effective pixel T1 are added, and the shading amount S at the effective pixel T1 is calculated by subtracting from the added value the average value M or effective pixel value of the divided reference region B adjacent diagonally to the upper left of the effective pixel T1.
- similarly, for an effective pixel T2 located in the lower left area, the average values M or effective pixel values of the divided reference regions B adjacent below and to the left of the effective pixel T2 are added, and the shading amount S at the effective pixel T2 is calculated by subtracting from the added value the average value M or effective pixel value of the divided reference region B adjacent diagonally to the lower left of the effective pixel T2.
- for an effective pixel T3 located in the upper right area, the average values M or effective pixel values of the divided reference regions B adjacent above and to the right of the effective pixel T3 are added, and the shading amount S at the effective pixel T3 is calculated by subtracting from the added value the average value M or effective pixel value of the divided reference region B adjacent diagonally to the upper right of the effective pixel T3.
- for an effective pixel T4 located in the lower right area, the average values M or effective pixel values of the divided reference regions B adjacent below and to the right of the effective pixel T4 are added, and the shading amount S at the effective pixel T4 is calculated by subtracting from the added value the average value M or effective pixel value of the divided reference region B adjacent diagonally to the lower right of the effective pixel T4.
- the shading amount is calculated for each effective pixel in the use effective region A in order, starting from the effective pixel farthest from the center of the use effective region A.
- the effective pixel Z23 in FIG. 13 is a pixel located at the center in the horizontal direction of the use effective area A. For this pixel, the shading amount may be calculated using the value Z13 above the effective pixel Z23, the value of the effective pixel Z22 to its left, and the value Z12 diagonally to the upper left, or the shading amount may be calculated using the value Z13 above the effective pixel Z23, the value of the effective pixel Z24 to its right, and the value Z14 diagonally to the upper right.
- in this case, the values of the shading amounts previously calculated for the surrounding pixels can be used.
- the shading amount of a pixel located at the center in the vertical direction of the use effective area A can be calculated in the same manner. Further, when an effective pixel is located at the center of the use effective area A, the value of the shading amount calculated for any of the surrounding pixels may be used, selected as appropriate.
- the signal correction unit 61 then applies the shading amount calculated for each effective pixel to the corresponding pixel signal value of the effective pixels in the use effective area A, thereby performing shading correction on the pixel signal values.
- the value of the shading amount calculated for each pixel may include a positive value and a negative value.
- when the shading amount is a negative value, the absolute value of the shading amount is added to the value of the pixel signal of the effective pixel in the use effective area A; when the shading amount is a positive value, the value of the shading amount is subtracted from the value of the pixel signal of the effective pixel in the use effective area A.
- shading correction is performed using average values M11 to M55 of pixel signal values of reference pixels in 16 divided reference regions B11 to B55 provided in a frame shape at the edge of the detection region C.
- the shading amount S in the vertical direction and the horizontal direction of the infrared image due to the temperature difference around the infrared imaging device main body 12 can be acquired in detail, the unevenness of the entire infrared image can be accurately corrected.
- as described above, in the present embodiment, the optical axis O of the imaging optical system 2 and the center 31o of the detection region 31 of the infrared sensor 3 are made to coincide, and the length of the detection region 31 in one direction passing through the centers O and 31o is made larger than the length of the imaging region D of the imaging optical system 2, whereby the detection region 31 of the infrared sensor 3 is provided with an effective area AR where the imaging region D overlaps the detection region 31 and a reference area BR where the imaging region D does not overlap. Therefore, even without using an additional component to avoid the incidence of infrared rays, an effective area AR on which infrared rays are incident and a reference area BR on which no infrared rays are incident can be formed in the detection region 31.
- since an additional component for blocking the incidence of infrared rays is unnecessary, the infrared sensor 3 can be manufactured simply and at low cost.
- moreover, the amount of infrared rays incident on each of the effective area AR and the reference area BR is not affected by a temperature difference between the infrared imaging apparatus main body 12 and an additional component for avoiding the incidence of infrared rays, so fluctuations in the signal values can be accurately corrected.
- in the present embodiment, the above-described interpolation method is used to calculate the shading amount S, but the present invention is not limited to this, and a known method such as linear interpolation or nonlinear interpolation can be used.
- FIG. 15 is a schematic cross-sectional view illustrating the configuration of the infrared imaging device according to the second embodiment of the present invention.
- the infrared imaging apparatus of the second embodiment has a configuration in which a temperature sensor described later is added to the infrared imaging apparatus 1 of the above-described embodiment; therefore, description of the configuration other than the temperature sensor is omitted.
- the temperature sensor 13 and the correction method using the output signal from the temperature sensor 13 by the signal correction unit 61 will be described in detail.
- a temperature sensor 13 is provided at a position facing the imaging optical system 2 in the first main body portion 10.
- as the temperature sensor 13, a known sensor such as a thermistor, a thermocouple, or a resistance temperature detector can be used.
- the output signal from the temperature sensor 13 is sent to the signal correction unit 61 by wired or wireless communication (not shown).
- FIG. 16 is a flowchart of the third correction process of the signal correction unit 61.
- since the series of processing of the infrared imaging apparatus of the present embodiment is the same as the processing of the flowchart of FIG. 7 of the above-described embodiment, its description is omitted here, and only the correction processing by the signal correction unit 61 is described.
- the same processing as in FIG. 8 is denoted by the same step number, and detailed description thereof is omitted.
- first, the signal correction unit 61 calculates the average value M of the pixel signal values of the reference pixels, which are the pixels in the reference region B, in step S11, and performs offset correction in step S12 by subtracting the average value M from the value of the pixel signal of each effective pixel in the use effective region A.
- the signal correction unit 61 detects an output signal from the temperature sensor 13 and calculates a value corresponding to the detected value of the output signal (step S13).
- FIG. 17 shows a relationship between the value of the output signal from the temperature sensor 13 and the calculated value corresponding to this value.
- the calculated value corresponding to the value of the output signal from the temperature sensor 13 is set in advance for each type of infrared imaging device 1; for example, as shown in FIG. 17, a table of calculated values corresponding to the values of the output signal from the temperature sensor 13 is stored in the storage unit 7. In this table, the amount of infrared rays radiated at each temperature of the infrared imaging device main body 12 is measured, and a value based on that amount of infrared rays is set as the calculated value.
- specifically, with the infrared imaging device 1 placed in a constant temperature bath so that its temperature is kept constant, a reference heat source whose absolute temperature is known is photographed by the infrared imaging device 1, and the calculated values of FIG. 17 are obtained from the difference between the temperature data of the infrared imaging device 1 and the temperature of the reference heat source.
- This table is set in advance at the design stage or manufacturing stage of the infrared imaging device 1.
- the signal correction unit 61 refers to the table stored in the storage unit 7 to acquire the calculated value corresponding to the value of the output signal from the temperature sensor 13.
- next, the signal correction unit 61 applies the calculated value corresponding to the value of the output signal of the temperature sensor 13 obtained in step S13 to the value after the offset correction in step S12, thereby performing a further offset correction (step S14).
- the infrared image represented by the values after the offset correction is then stored in the storage unit 7.
- for example, when the value of the output signal from the temperature sensor 13 is any of N1 to N3, the corresponding calculated values M1 to M3 are positive values, so the value of the pixel signal is further offset-corrected by subtracting the corresponding calculated value from the value after the offset correction in step S12. When the value of the output signal from the temperature sensor 13 is N5 or greater, the corresponding calculated values from M5 onward are negative values, so the value of the pixel signal is further offset-corrected by adding the absolute value of the corresponding calculated value to the value after the offset correction in step S12.
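Steps S13 and S14 can be sketched as a table lookup followed by a signed subtraction. The table entries below are hypothetical, since the patent only specifies that the table is measured in advance for each device type; note that subtracting a signed calculated value covers both cases in the description (a positive value is subtracted, a negative value adds its absolute value).

```python
# Hypothetical pre-measured table: temperature-sensor output -> calculated
# correction value (positive entries are subtracted, negative entries added).
temperature_table = {
    20.0: 1.5,   # like N1 -> M1 (positive: subtract)
    25.0: 0.0,
    30.0: -2.0,  # like N5 -> M5 (negative: add the absolute value)
}

def apply_temperature_offset(pixel_value, sensor_output):
    """Further offset correction (S14): subtract the signed calculated value
    looked up for the temperature-sensor output (S13)."""
    m = temperature_table[sensor_output]
    return pixel_value - m

print(apply_temperature_offset(100.0, 20.0))  # positive entry: value decreases
print(apply_temperature_offset(100.0, 30.0))  # negative entry: value increases
```

A real implementation would also interpolate between table entries for sensor outputs that fall between the measured points; that detail is omitted here.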
- since the calculated value is a value based on the amount of infrared rays radiated from the infrared imaging device main body 12 corresponding to the value of the output signal from the temperature sensor 13, applying this calculated value to the pixel signal values after the offset correction makes it possible to offset the component due to the infrared rays radiated from the infrared imaging device main body 12 out of the infrared rays incident on the infrared sensor 3. Therefore, highly accurate image signal values based on the infrared rays emitted from the subject can be acquired, and a highly accurate infrared image based on the absolute temperature of the subject can be acquired.
- FIG. 18 is a flowchart of the fourth correction process of the signal correction unit 61.
- since the series of processing of the infrared imaging apparatus of the present embodiment is the same as the processing of the flowchart of FIG. 7 of the above-described embodiment, its description is omitted here, and only the correction processing by the signal correction unit 61 is described.
- in FIG. 18, the same processing as in FIG. 9 is indicated by the same step number, and detailed description thereof is omitted.
- first, the signal correction unit 61 calculates the average value M of the pixel signal values of the reference pixels, which are the pixels in each reference region B, in step S21, and performs shading correction on the values of the pixel signals of the effective pixels in the use effective region A in step S22.
- the signal correction unit 61 detects an output signal from the temperature sensor 13 and calculates a value corresponding to the detected value of the output signal (step S23).
- since the process of step S23 is the same as the process of step S13 of FIG. 16, its description is omitted here.
- next, the signal correction unit 61 performs offset correction by applying the calculated value corresponding to the value of the output signal of the temperature sensor 13 obtained in step S23 to the value after the shading correction in step S22 (step S24).
- for example, when the value of the output signal from the temperature sensor 13 is any of N1 to N3, the corresponding calculated values M1 to M3 are positive values, so the value of the pixel signal is offset-corrected by subtracting the corresponding calculated value from the value after the shading correction in step S22. When the value of the output signal from the temperature sensor 13 is N5 or greater, the corresponding calculated values from M5 onward are negative values, so the value of the pixel signal is offset-corrected by adding the absolute value of the corresponding calculated value to the value after the shading correction in step S22.
- infrared rays emitted from the subject and infrared rays emitted from the infrared imaging device main body 12 are incident on the infrared sensor 3.
- since the calculated value is a value based on the amount of infrared rays radiated from the infrared imaging device main body 12 corresponding to the value of the output signal from the temperature sensor 13, applying this calculated value to the pixel signal values after the shading correction makes it possible to offset the component due to the infrared rays radiated from the infrared imaging device main body 12 out of the infrared rays incident on the infrared sensor 3. Therefore, highly accurate image signal values based on the infrared rays emitted from the subject can be acquired, and a highly accurate infrared image based on the absolute temperature of the subject can be acquired.
- the temperature sensor 13 is provided at a position facing the imaging optical system 2 in the infrared imaging apparatus main body 12, but the present invention is not limited to this.
- the temperature sensor 13 may be provided anywhere on the infrared imaging device main body 12; for example, it may be provided on a wall surface constituting the second main body portion 11 inside the infrared imaging device main body 12 of FIG. 15, or on a wall surface constituting the first main body portion 10 or the second main body portion 11 outside the infrared imaging device main body 12, and can be changed as appropriate.
- when the temperature sensor 13 is provided inside the infrared imaging device main body 12, the temperature of a wall located in the space where the infrared sensor 3 is installed, that is, a wall that radiates infrared rays to the infrared sensor 3, is measured, so the correction accuracy can be improved. Further, when the temperature sensor 13 is provided at a position facing the imaging optical system 2 in the infrared imaging apparatus main body 12, the temperature of the wall that radiates infrared rays to the infrared sensor 3 can be measured, so the correction accuracy can be further improved. In the present embodiment, one temperature sensor 13 is provided, but a plurality of temperature sensors 13 may be provided at different positions; in that case, the average value of the output signals from the temperature sensors 13 can be used.
- in the above embodiments, an infrared sensor that detects far-infrared rays has been described, but the present invention is not limited to this, and an infrared sensor that detects mid-infrared rays or near-infrared rays may be used.
- the infrared imaging device 1 can be suitably applied to a security imaging device, an in-vehicle imaging device, and the like, and may be configured as a single imaging device that captures an infrared image, or may be incorporated in an imaging system having an infrared image capturing function.
- the infrared imaging device of the present invention is not limited to the above embodiment, and can be appropriately changed without departing from the spirit of the invention.
Landscapes
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Spectroscopy & Molecular Physics (AREA)
- Engineering & Computer Science (AREA)
- Multimedia (AREA)
- Signal Processing (AREA)
- Transforming Light Signals Into Electric Signals (AREA)
Abstract
[Problem] To correct, with a high degree of accuracy, for fluctuations in the values for pixel signals output by an infrared sensor (3) caused by temperature variations, without causing an increase in the number of components of an infrared imaging device (1). [Solution] This infrared imaging device (1) comprises: an imaging optical system (2); an infrared sensor (3) wherein, by matching the center (31o) of the detection region (31) and the optical axis (O) of the imaging optical system, and making the length of the detection region (31) longer than the length of the imaging region (D) of the imaging optical system in one direction passing through the center (31o), an effective region (A) in which the detection region (31) and the imaging region (D) overlap, and reference regions (B) in which the detection region (31) and the imaging region (D) do not overlap, are provided within the detection region (31); and a signal correction unit (61) that corrects fluctuations in the values of pixel signals caused by temperature variations, by correcting values of pixel signals for effective pixels, which are pixels within the effective region (A), using the values of pixel signals for reference pixels, which are pixels within the reference region (B).
Description
The present invention relates to an imaging device that captures infrared images, and more particularly to an infrared imaging device that corrects fluctuations in the values of the pixel signals output by an infrared sensor.
An infrared imaging device is known that captures an infrared image by using an infrared sensor to detect infrared rays radiated from a subject such as an object or a person. Any subject warmer than absolute zero emits infrared radiation; the higher the subject's temperature, the more short-wavelength infrared it emits, and the lower its temperature, the less long-wavelength infrared it emits. When a subject is imaged with an infrared imaging device, high-temperature areas of the captured image appear white and low-temperature areas appear black. However, when the temperature changes in the vicinity of the infrared imaging device, the detection signal of the infrared sensor fluctuates with the temperature change, producing noise in the captured image of the subject.
Patent Document 1 therefore discloses an infrared imaging element in which, among the pixels two-dimensionally arranged in the detection region of a thermal infrared imaging element, the pixels in a region where infrared rays are incident are treated as effective pixels and the pixels in a region where no infrared rays are incident are treated as reference pixels; when a temperature rise occurs in the region where the reference pixels are located, the bias current supplied to the effective pixels is reduced, thereby suppressing the fluctuation of the detection signal of the infrared sensor caused by the temperature rise. In Patent Document 1, an eave-like infrared shield covering the periphery of the detection region is provided inside the infrared sensor, and this shield blocks infrared rays from reaching part of the sensor, creating the region where no infrared rays are incident.
Patent Document 2 discloses an infrared sensor in which an infrared shield is provided on the housing that accommodates the sensor so that infrared rays from the subject do not reach some of the pixels, namely the reference pixels; by reading out the electrical signals of the other pixels with the reference pixels as a baseline, fluctuations in the infrared radiation emitted by the housing are cancelled and the infrared rays incident from the subject are detected.
In general, both the infrared rays radiated from the subject and those radiated from the infrared imaging device body are incident on the infrared sensor. When the temperature of the device body rises, the amount of infrared radiation it emits increases, so the entire captured image becomes whitish. For example, if a hot object is placed to the right of the device body, the right side of the captured infrared image becomes whitish overall. It is therefore desirable to correct the pixel signals output by the infrared sensor so as to reduce the fluctuation caused by infrared radiation from the device body.
The infrared sensors described in Patent Documents 1 and 2 are provided with an infrared shield 99, as shown for example in FIG. 19. Because these sensors require an infrared shielding plate, a simpler structure and lower cost are desired. Moreover, it has been found that using such an infrared shield 99 causes the following problem, since infrared radiation attributable to the shield 99 itself is incident on the reference pixels. FIG. 20 shows an example of the infrared quantities in an infrared sensor equipped with the infrared shield 99. In FIG. 19, within the detection region of the infrared sensor, the effective region on which infrared rays from the imaging optical system 2 are incident is denoted A, and the reference region on which they are not incident is denoted B. When the infrared shield 99 is provided, the temperature of the shield 99 does not rise immediately even if the temperature of the imaging device body 12 rises. Consequently, while the effective region A receives infrared radiation from the imaging device body 12, the reference region B receives infrared radiation from the shield 99, which is cooler than the body 12; as shown in FIG. 20, the amount of incident infrared radiation unrelated to the rays from the imaging optical system 2 therefore differs between the effective region A and the reference region B. For this reason, when the reference pixels of the reference region B are used, it is difficult to accurately perform a correction that reduces the incident infrared radiation unrelated to the rays from the imaging optical system 2.
The present invention has been made in view of the above problem, and an object thereof is to provide an infrared imaging device capable of accurately correcting fluctuations in the values of the pixel signals output by an infrared sensor due to temperature changes, without increasing the number of components.
An infrared imaging device of the present invention comprises: an imaging optical system that forms an image of infrared rays;
an infrared sensor that is located on the imaging plane of the imaging optical system, has a detection region in which a plurality of pixels that are thermoelectric conversion elements are arranged, and outputs, for each pixel, a pixel signal based on the infrared rays incident from the imaging optical system, wherein the optical axis of the imaging optical system is aligned with the center of the detection region and, in one direction passing through this center, the length of the detection region is made greater than the length of the imaging region of the imaging optical system, so that an effective region where the detection region and the imaging region overlap and a reference region where the detection region and the imaging region do not overlap are provided within the detection region; and
a signal correction unit that corrects fluctuations in pixel-signal values caused by temperature changes by correcting the pixel-signal values of the effective pixels, which are the pixels in the effective region, using the pixel-signal values of the reference pixels, which are the pixels in the reference region.
Here, in the present invention, "infrared" includes all of near-infrared, mid-infrared, and far-infrared.
In the present invention, the "effective region where the detection region and the imaging region overlap" means the portion of the detection region reached by the infrared rays incident from the imaging optical system. The "reference region where the detection region and the imaging region do not overlap" means the portion of the detection region not reached by the infrared rays incident from the imaging optical system; that is, the "reference region" is the part of the detection region that is not the effective region.
In the infrared imaging device of the present invention, the signal correction unit can perform offset correction by subtracting the average of the pixel-signal values of the reference pixels from the pixel-signal value of each effective pixel.
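This offset correction can be sketched as follows; the NumPy array layout, the boolean `ref_mask`, and the toy pixel values are illustrative assumptions, not taken from the patent:

```python
import numpy as np

def offset_correct(frame, ref_mask):
    """Subtract the mean of the reference-pixel values from every
    effective (non-reference) pixel, cancelling a uniform drift that
    reaches reference and effective pixels alike."""
    offset = frame[ref_mask].mean()
    corrected = frame.astype(float).copy()
    corrected[~ref_mask] -= offset
    return corrected

# Toy 4x4 frame: the four corner pixels act as reference pixels.
ref_mask = np.zeros((4, 4), dtype=bool)
ref_mask[[0, 0, -1, -1], [0, -1, 0, -1]] = True
frame = np.full((4, 4), 5.0)      # drift component, seen by every pixel
frame[~ref_mask] += 100.0         # scene infrared, effective pixels only
out = offset_correct(frame, ref_mask)
# out[~ref_mask] recovers the scene level of 100.0
```

Because the reference pixels see only the drift, subtracting their mean removes exactly the body-temperature component from the effective pixels.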
In the infrared imaging device of the present invention, two or more reference regions may be provided in the detection region, and the signal correction unit may perform offset correction by subtracting the average of the pixel-signal values of the reference pixels of at least one of the two or more reference regions from the pixel-signal value of each effective pixel.
In the infrared imaging device of the present invention, the detection region may be rectangular with a reference region provided at each of its four corners, and the signal correction unit may perform offset correction by subtracting the average of the pixel-signal values of the reference pixels in at least one of the four corner reference regions from the pixel-signal value of each effective pixel.
In the infrared imaging device of the present invention, two or more reference regions may be provided in the detection region, and the signal correction unit may calculate the average of the pixel-signal values of the reference pixels for each of at least two of the reference regions and perform shading correction on the pixel-signal values of the effective pixels using the calculated averages.
In the present invention, "shading correction" means a correction that reduces pixel-position-dependent non-uniformity of infrared radiation, such as non-uniformity of the incident infrared rays arising on the imaging plane of the infrared sensor (for example, the fall-off in infrared quantity at the periphery of the imaging region caused by the imaging optical system), non-uniformity of the infrared radiation generated by the circuit board when it is energized, or non-uniformity of external heat from the optical system, the infrared imaging device body, and the like.
In the infrared imaging device of the present invention, the detection region may be rectangular with a reference region provided at each of its four corners, and the signal correction unit may calculate the average of the pixel-signal values of the reference pixels for each of at least two of the four corner reference regions and perform shading correction on the pixel-signal values of the effective pixels using the calculated averages.
In the infrared imaging device of the present invention, a reference region may be provided along the edge of the detection region to form a frame-shaped reference region portion in which a plurality of reference pixels are arranged; this frame-shaped portion may be divided into a plurality of areas, and the signal correction unit may calculate the average of the pixel-signal values of the reference pixels located in each of the divided areas and perform shading correction on the pixel-signal values of the effective pixels using the calculated averages.
In the infrared imaging device of the present invention, at least one temperature sensor may be provided in the main body of the device, and the signal correction unit can perform a further offset correction by applying a value corresponding to the output-signal value of the temperature sensor to the pixel-signal values of the effective pixels.
Here, in the present invention, the "value corresponding to the output-signal value of the temperature sensor" is a value set in advance for each type of infrared imaging device. For example, a table of values corresponding to the output-signal values of the temperature sensor can be prepared in advance for each device type, and a value based on this table can be used.
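The table-based further offset correction might look like the following sketch; the calibration numbers, the use of linear interpolation between table entries, and all names are hypothetical:

```python
import numpy as np

# Hypothetical per-device-type calibration table: temperature-sensor
# reading (counts) versus the extra offset to subtract from each
# effective pixel. The numbers are invented for illustration.
TEMP_COUNTS = np.array([1000.0, 1200.0, 1400.0, 1600.0])
OFFSETS = np.array([0.0, 3.0, 8.0, 15.0])

def temperature_offset(sensor_counts):
    """Look up the offset for the current sensor reading, interpolating
    linearly between table entries."""
    return float(np.interp(sensor_counts, TEMP_COUNTS, OFFSETS))

def apply_temp_correction(frame, eff_mask, sensor_counts):
    """Apply the table-derived offset to the effective pixels only."""
    corrected = frame.astype(float).copy()
    corrected[eff_mask] -= temperature_offset(sensor_counts)
    return corrected

frame = np.full((2, 3), 40.0)
eff_mask = np.ones((2, 3), dtype=bool)
out = apply_temp_correction(frame, eff_mask, sensor_counts=1300.0)
# a reading of 1300 falls midway between 1200 and 1400, so 5.5 counts
# are subtracted, leaving 34.5
```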
In the infrared imaging device of the present invention, the temperature sensor is preferably provided inside the main body of the device.
In the infrared imaging device of the present invention, the temperature sensor is preferably provided at a position facing the imaging optical system.
According to the infrared imaging device of the present invention, the optical axis of the imaging optical system is aligned with the center of the detection region of the infrared sensor, in which a plurality of pixels that are thermoelectric conversion elements are arranged, and, in one direction passing through this center, the length of the detection region is made greater than the length of the imaging region of the imaging optical system; the device thus comprises an infrared sensor in which an effective region where the detection region and the imaging region overlap and a reference region where they do not overlap are provided within the detection region, and a signal correction unit that corrects fluctuations in pixel-signal values caused by temperature changes by correcting the pixel-signal values of the effective pixels, which are the pixels in the effective region, using the pixel-signal values of the reference pixels, which are the pixels in the reference region. An effective region on which infrared rays are incident and a reference region on which they are not can therefore be formed within the detection region without any additional component for blocking the incidence of infrared rays. Because no such additional component is needed, the infrared sensor can be manufactured simply and at low cost. Furthermore, since the amounts of infrared radiation incident on the effective region and the reference region are not affected by a temperature difference between the imaging device body and an additional infrared-blocking component, fluctuations in pixel-signal values caused by temperature changes can be corrected accurately.
An embodiment of the infrared imaging device according to the present invention will now be described in detail with reference to the drawings. FIG. 1 is a schematic sectional view illustrating the configuration of an infrared imaging device according to one embodiment of the present invention, and FIG. 2 is a schematic block diagram illustrating the configuration of the infrared imaging device 1 according to this embodiment. As shown in FIG. 1, the infrared imaging device 1 of this embodiment comprises: an infrared imaging device body 12 consisting of a first body portion 10 and a second body portion 11; an imaging optical system 2 installed in the first body portion 10 and capable of forming an image of the infrared rays radiated from a subject on an imaging plane 30; and an infrared sensor 3 installed in the second body portion 11, located on the imaging plane 30 of the imaging optical system 2, having a detection region 31 in which a plurality of pixels that are thermoelectric conversion elements are arranged, and outputting, for each pixel, a pixel signal based on the infrared rays incident from the imaging optical system 2.
The infrared imaging device body 12 may be made of a metal material such as aluminum or stainless steel, or a resin material such as plastic; in this embodiment, the body 12 is formed of stainless steel. The internal structure of the infrared imaging device body 12 will be described later in detail.
The imaging optical system 2 is a lens group composed of one or more lenses; the lenses are held in a holding frame, and this frame is fixed to the first body portion 10. Although the imaging optical system 2 is described in this embodiment as a fixed-focus optical system, the present invention is not limited to this, and a variable-focus optical system may be used.
The infrared sensor 3 is driven by a drive control unit (not shown), captures the subject image formed on the detection region 31 as an infrared image, converts it into pixel signals, and outputs them. The infrared sensor 3 outputs the pixel signals by sequentially transferring the charges accumulated in each pixel and converting them into electrical signals.
A characteristic feature of the present invention is that, as shown in FIG. 1, the area C of the detection region 31 is made larger than the imaging region D of the imaging optical system 2. FIG. 3 shows a first embodiment of the detection region 31 of the infrared sensor 3. In this embodiment, the area C of the detection region 31 of the infrared sensor 3 is rectangular and the imaging region D of the imaging optical system 2 is circular. The optical axis of the imaging optical system 2, that is, the center O of the imaging region D, is aligned with the center 31o of the area C of the detection region 31; taking a straight line d passing through the centers O and 31o, the length of the area C of the detection region 31 along the line d is made greater than the length of the imaging region D. In this embodiment, the line d is a diagonal of the area C of the detection region 31.
Within the detection region 31, the portion on which the infrared rays that have passed through the imaging optical system 2 are incident, that is, where the area C of the detection region 31 and the imaging region D of the imaging optical system 2 overlap, is the effective region AR; the portion on which those rays are not incident, that is, where the area C and the imaging region D do not overlap, is the reference region BR. In this embodiment, the length of the area C of the detection region 31 also exceeds the length of the imaging region D along the straight line orthogonal to the line d in FIG. 3, that is, along the other diagonal, so reference regions BR are formed at all four corners of the area C. Pixels, which are thermoelectric conversion elements, are arranged in both the effective region AR and the reference region BR; the pixels located in the effective region AR are the effective pixels, and those located in the reference region BR are the reference pixels. In this embodiment, the largest rectangular area within the effective region AR in which the effective pixels are arranged in a matrix is taken as the use effective region A, and the pixel data output by the effective pixels within this region A form the infrared image.
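The split of the detection region into an effective region inside the imaging circle and corner reference regions outside it can be illustrated with a simple mask computation; the sensor size, radius value, and names are illustrative assumptions:

```python
import numpy as np

def region_masks(h, w, radius):
    """Split an h x w detector into the effective region AR (inside the
    imaging circle, centred on the detector because the optical axis is
    aligned with the detection-region centre) and the reference region
    BR (outside the circle)."""
    yy, xx = np.mgrid[0:h, 0:w]
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    inside = (yy - cy) ** 2 + (xx - cx) ** 2 <= radius ** 2
    return inside, ~inside

# 8x8 detector whose diagonal exceeds the imaging-circle diameter, so
# the four corners fall outside the circle, as in FIG. 3.
eff, ref = region_masks(8, 8, radius=3.9)
# eff[0, 0] is False (corner reference pixel); eff[4, 4] is True
```

Because the circle's diameter exceeds the side of the detector but not its diagonal, only the corner pixels end up in the reference region, matching the four corner reference regions B1 to B4 described below.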
In this embodiment, of the reference regions BR formed at the four corners, the upper left is the reference region B1, the upper right the reference region B2, the lower left the reference region B3, and the lower right the reference region B4; the signal correction unit 61, described later, performs the signal correction processing of the infrared image using the pixel data output by the reference pixels within the reference regions B1 to B4.
When the reference regions B1 to B4 are thus provided at positions outside the effective region AR, no additional component is needed to block infrared rays from the imaging optical system 2 from reaching the reference regions B1 to B4, so the infrared sensor 3 can be manufactured simply and at low cost. This arrangement is also advantageous for providing the reference regions B1 to B4 outside the imaging region D while suitably securing their positions and areas relative to the effective region AR. Furthermore, compared with the case where the imaging region D covers the entire area C of the detection region 31, that is, where all the pixels in the area C are effective pixels as shown for example in FIG. 19, the lenses of the imaging optical system 2 can be smaller, so the cost of the imaging optical system 2 can be reduced.
In the example of FIG. 3, the infrared sensor 3 is a rectangular sensor and its four corners constitute the reference regions BR, the areas that do not overlap the imaging region D; however, the reference pixels that the signal correction unit 61, described later, uses for the signal correction processing of the infrared image can be selected arbitrarily from the reference pixels within the reference regions BR. In other words, the area formed by the reference pixels actually used can be given any shape at any position within the reference regions BR. The position information of the effective pixels and the reference pixels is stored in the storage unit 7 and referred to by the signal correction unit 61 as needed.
For example, on at least one straight line that lies in the imaging plane 30 and passes through the optical axis as described above, the length of the imaging region of the imaging optical system 2 relative to the length of the detection region 31 of the infrared sensor 3 may be varied as appropriate to provide a use effective region A and reference regions B with shapes different from those of the above embodiment. FIGS. 4 to 6 each show an example of another embodiment of the detection region 31 of the infrared sensor 3. Note that FIGS. 4 to 6 are schematic diagrams for explaining the configuration of the detection region 31, and the sizes of the regions differ from the actual ones.
As shown in FIG. 4, the second detection region 31 of the infrared sensor 3 is configured so that the optical axis of the imaging optical system 2, i.e., the center O of the imaging region D, coincides with the center 31o of the region C of the detection region 31; the vertical line passing through the centers O and 31o and connecting the top and bottom of the detection region 31 is defined as a straight line d, and on this straight line d the length of the region C of the detection region 31 is larger than the length of the imaging region D. As a result, in the detection region 31 of FIG. 4, the entire imaging region D becomes the effective area AR, i.e., the area on which infrared rays that have passed through the imaging optical system 2 are incident and in which the region C of the detection region 31 and the imaging region D of the imaging optical system overlap; and the outer portion surrounding the imaging region D becomes the reference region BR, i.e., the area on which infrared rays that have passed through the imaging optical system 2 are not incident and in which the region C of the detection region 31 and the imaging region D do not overlap.
In the second detection region 31 as well, as in the detection region 31 of the embodiment described above, the area that forms the largest rectangle within the effective area AR and in which effective pixels are arranged in a matrix is used as the use effective area A, and the pixel data output from the effective pixels in this use effective area A form the infrared image. In the reference region BR, the upper and lower end portions bounded by lines that are tangent to the imaging region D and horizontal with respect to the detection region 31 are used as reference areas B1 and B2, respectively, and the signal correction unit 61 described later performs the infrared image signal correction process using the pixel data output from the reference pixels in the reference areas B1 and B2.
Further, as shown in FIG. 5, the third detection region 31 of the infrared sensor 3 is configured so that the optical axis of the imaging optical system 2, i.e., the center O of the imaging region D, coincides with the center 31o of the region C of the detection region 31; the horizontal line passing through the centers O and 31o and connecting the left and right sides of the detection region 31 is defined as a straight line d, and on this straight line d the length of the region C of the detection region 31 is larger than the length of the imaging region D. As a result, in the detection region 31 of FIG. 5, the entire imaging region D becomes the effective area AR, i.e., the area on which infrared rays that have passed through the imaging optical system 2 are incident and in which the region C of the detection region 31 and the imaging region D of the imaging optical system overlap; and the left and right areas adjacent to the imaging region D become the reference region BR, i.e., the area on which infrared rays that have passed through the imaging optical system 2 are not incident and in which the region C of the detection region 31 and the imaging region D do not overlap.
In the third detection region 31 as well, as in the detection region 31 of the embodiment described above, the area that forms the largest rectangle within the effective area AR and in which effective pixels are arranged in a matrix is used as the use effective area A, and the pixel data output from the effective pixels in this use effective area A form the infrared image. In the reference region BR, the left and right end portions bounded by lines that are tangent to the imaging region D and vertical with respect to the detection region 31 are used as reference areas B3 and B4, respectively, and the signal correction unit 61 described later performs the infrared image signal correction process using the pixel data output from the reference pixels in the reference areas B3 and B4. Note that in the third detection region 31, the length of the region C of the detection region 31 is larger than the length of the imaging region D on the straight line d, that is, in the horizontal direction of the detection region 31, but not in the vertical direction: in the vertical direction, the length of the region C of the detection region 31 is shorter than the length of the imaging region D. Accordingly, when the second detection region 31 and the third detection region 31 have the same size, the area in which the region C of the detection region 31 and the imaging region D of the imaging optical system overlap, i.e., the effective area AR, is wider in the third detection region 31 than in the second, so the use effective area A can be made wider.
Further, as shown in FIG. 6, the fourth detection region 31 of the infrared sensor 3 is configured so that the optical axis of the imaging optical system 2, i.e., the center O of the imaging region D, coincides with the center 31o of the region C of the detection region 31; when the vertical line passing through the centers O and 31o and connecting the top and bottom of the detection region 31 is defined as a straight line d1, and the horizontal line passing through the centers O and 31o and connecting the left and right sides of the detection region 31 is defined as a straight line d2, the length of the region C of the detection region 31 is larger than the length of the imaging region D on both the straight line d1 and the straight line d2. As a result, in the detection region 31 of FIG. 6, the entire imaging region D becomes the effective area AR, i.e., the area on which infrared rays that have passed through the imaging optical system 2 are incident and in which the region C of the detection region 31 and the imaging region D of the imaging optical system overlap; and the outer portion surrounding the imaging region D becomes the reference region BR, i.e., the area on which infrared rays that have passed through the imaging optical system 2 are not incident and in which the region C of the detection region 31 and the imaging region D do not overlap.
In the fourth detection region 31 as well, as in the detection region 31 of the embodiment described above, the area that forms the largest rectangle within the effective area AR and in which effective pixels are arranged in a matrix is used as the use effective area A, and the pixel data output from the effective pixels in this use effective area A form the infrared image. In the reference region BR, a frame-shaped area around the region C of the detection region 31, which includes the upper and lower end portions bounded by lines that are tangent to the imaging region D and horizontal with respect to the detection region 31 and which is formed with the same width as those end portions, is used as the reference area B, and the signal correction unit 61 described later performs the infrared image signal correction process using the pixel data output from the reference pixels in the reference area B.
The effective pixels in the use effective area A and the reference pixels in the reference area B are infrared detection elements (infrared detectors) capable of detecting infrared rays (wavelengths of 0.7 μm to 1 mm), and in particular far-infrared rays (wavelengths of 8 μm to 15 μm). For example, microbolometer-type or SOI (Silicon on Insulator) diode-type infrared detection elements can be used as the effective pixels and the reference pixels.
Next, the configuration of the infrared imaging device 1 will be described. As shown in FIG. 2, the infrared imaging device 1 includes the imaging optical system 2; the infrared sensor 3; an analog signal processing unit 4 that performs various kinds of analog signal processing, such as amplifying the output signal from the infrared sensor 3; an A/D (analog-to-digital) conversion unit 5 that converts the analog image signal processed by the analog signal processing unit 4 into digital image data; a digital signal processing unit 6 that performs various kinds of signal processing on the digital image data converted by the A/D conversion unit 5; a storage unit 7 that stores information associated with the various kinds of digital signal processing; and an output unit 8 that outputs the digitally processed infrared image to the storage unit 7 or to a display unit (not shown).
The storage unit 7 stores, as needed, various kinds of information used by the digital signal processing unit 6, infrared images that have undergone various kinds of digital signal processing, and the like, and includes a volatile memory such as a DRAM (Dynamic Random Access Memory) and a non-volatile memory such as a flash memory. In the present embodiment the storage unit 7 is provided separately from the digital signal processing unit 6, but the present invention is not limited to this; the storage unit 7 may instead be provided within the digital signal processing unit 6.
The output unit 8 outputs, by wireless or wired communication, the infrared image that has undergone the various kinds of digital signal processing to the storage unit 7, to a display unit (not shown), or to a storage unit outside the device.
The digital signal processing unit 6 includes a signal correction unit 61 that performs correction processing on the values of the pixel signals output by the infrared sensor 3. Normally, when the temperature of the infrared imaging device main body 12 rises, the amount of infrared radiation emitted from the infrared imaging device main body 12 increases, so the entire captured infrared image becomes whitish. The signal correction unit 61 of the present embodiment therefore corrects the pixel signal values of the effective pixels, i.e., the pixels in the use effective area A, using the pixel signal values of the reference pixels, i.e., the pixels in the reference area B, in order to offset the component of the pixel signals output by the infrared sensor 3 that is attributable to infrared rays emitted from the infrared imaging device main body 12.
The correction method used by the signal correction unit 61 will now be described with reference to a flowchart. FIG. 7 shows a flowchart of a series of processes of the infrared imaging device 1 of the present embodiment.
As shown in FIG. 7, the infrared imaging device 1 of the present embodiment first images a subject (S1); the infrared sensor 3 outputs, for each pixel, a pixel signal based on the infrared rays incident from the imaging optical system 2 to the analog signal processing unit 4; the analog signal processing unit 4 performs various kinds of analog signal processing, such as amplifying the detection signal from the infrared sensor 3; the A/D conversion unit 5 converts the processed analog image signal into digital image data; the digital signal processing unit 6 performs various kinds of signal processing on the digital image data converted by the A/D conversion unit 5; and the image data that has undergone the various kinds of signal processing is stored in the storage unit 7 (S2).
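The capture chain of steps S1 and S2 (sensor readout, analog processing, A/D conversion, and storage) can be sketched as follows. The gain value and the 14-bit converter range are illustrative assumptions, not parameters stated in the embodiment.

```python
# Hedged sketch of steps S1-S2: analog processing, A/D conversion, storage.
GAIN = 2.0                 # assumed analog amplification (unit 4)
ADC_MAX = 2 ** 14 - 1      # assumed 14-bit A/D converter range (unit 5)

def analog_process(signal):
    """Analog signal processing unit 4: amplify the sensor output."""
    return [GAIN * s for s in signal]

def a_d_convert(signal):
    """A/D conversion unit 5: quantize and clamp to the converter range."""
    return [min(int(s), ADC_MAX) for s in signal]

storage = {}               # stands in for the storage unit 7

def capture(raw_pixels):
    """S1-S2: process a frame of raw pixel signals and store the result."""
    data = a_d_convert(analog_process(raw_pixels))
    storage["frame"] = data
    return data
```

The stored frame is what the signal correction unit later reads back for correction in step S3.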
Next, the signal correction unit 61 reads the image data stored in the storage unit 7 and performs correction processing on it (S3). The first correction process performed by the signal correction unit 61 will now be described in detail. FIG. 8 is a flowchart of the first correction process of the signal correction unit 61 of the present embodiment.
The signal correction unit 61 calculates the average of the pixel signal values of the reference pixels, i.e., the pixels in the reference region B (S11). When the reference region B is formed at the four corners of the detection region C as shown in FIG. 3, the average M is taken over all the pixel signal values of the reference pixels in the four corner reference areas B1 to B4. If the upper-left reference area B1 contains i reference pixels, the upper-right reference area B2 contains j, the lower-left reference area B3 contains k, and the lower-right reference area B4 contains m, the average M can be calculated by the following equation (1):

M = (B1_1 + B1_2 + … + B1_i + B2_1 + B2_2 + … + B2_j + B3_1 + B3_2 + … + B3_k + B4_1 + B4_2 + … + B4_m) / (i + j + k + m)   … (1)

Here B1_1, …, B1_i are the pixel signal values of the reference pixels in the reference area B1; B2_1, …, B2_j those in the reference area B2; B3_1, …, B3_k those in the reference area B3; and B4_1, …, B4_m those in the reference area B4.
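Equation (1) is a plain average over all corner reference pixels; a minimal sketch, with made-up pixel values, is:

```python
# Equation (1): M = (sum of all reference-pixel values in B1..B4) / (i+j+k+m).
def corner_average(b1, b2, b3, b4):
    values = b1 + b2 + b3 + b4          # concatenate the four pixel lists
    return sum(values) / len(values)

# Illustrative values: i = j = k = m = 2 reference pixels per corner.
M = corner_average([10, 12], [11, 13], [9, 11], [10, 14])  # 90 / 8 = 11.25
```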
As a method of calculating the average, the pixel signal values of all the reference pixels in the reference areas B1 to B4 may be summed and then divided by the number of pixels, as in equation (1); alternatively, an average may be calculated for each of the reference areas B1 to B4 separately, and the four averages summed and divided by 4. That is, the average M may instead be calculated by the following equation (2):

M = ((B1_1 + B1_2 + … + B1_i)/i + (B2_1 + B2_2 + … + B2_j)/j + (B3_1 + B3_2 + … + B3_k)/k + (B4_1 + B4_2 + … + B4_m)/m) / 4   … (2)
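Equation (2) instead averages the four per-region means. A sketch with illustrative values, showing that (1) and (2) give the same result when the four regions hold equal pixel counts, while (2) weights each region equally rather than each pixel when the counts differ:

```python
# Equation (2): M = (mean(B1) + mean(B2) + mean(B3) + mean(B4)) / 4.
def region_mean_average(b1, b2, b3, b4):
    means = [sum(region) / len(region) for region in (b1, b2, b3, b4)]
    return sum(means) / 4

# With equal pixel counts per region, (1) and (2) give the same M:
M_eq = region_mean_average([10, 12], [11, 13], [9, 11], [10, 14])  # 11.25
# With unequal counts, (2) weights each region, not each pixel, equally:
M_uneq = region_mean_average([0, 0], [4], [4], [4])  # (0 + 4 + 4 + 4) / 4 = 3.0
```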
When the reference region B is provided at the upper and lower ends of the detection region C as shown in FIG. 4, the average M is taken over the pixel signal values of the reference pixels in both the upper reference area B1 and the lower reference area B2. If the upper reference area B1 contains i reference pixels and the lower reference area B2 contains j, the average M can be calculated by the following equation (3):

M = (B1_1 + B1_2 + … + B1_i + B2_1 + B2_2 + … + B2_j) / (i + j)   … (3)

Here B1_1, …, B1_i are the pixel signal values of the reference pixels in the reference area B1, and B2_1, …, B2_j those in the reference area B2.
Similarly, when the reference region B is provided at the left and right ends of the detection region C as shown in FIG. 5, the average M is taken over the pixel signal values of the reference pixels in both the left reference area B3 and the right reference area B4, as in the case of the upper and lower ends. If the left reference area B3 contains i reference pixels and the right reference area B4 contains j, the average M can be calculated by the following equation (4):

M = (B3_1 + B3_2 + … + B3_i + B4_1 + B4_2 + … + B4_j) / (i + j)   … (4)

Here B3_1, …, B3_i are the pixel signal values of the reference pixels in the reference area B3, and B4_1, …, B4_j those in the reference area B4.
When the reference region B is provided as a frame along the edge of the detection region C as shown in FIG. 6, the average of the pixel signal values of the reference pixels in the frame-shaped reference region B is calculated. If the reference region B contains i reference pixels, the average M can be calculated by the following equation (5):

M = (B_1 + B_2 + … + B_i) / i   … (5)

Here B_1, …, B_i are the pixel signal values of the reference pixels in the reference region B.
Then, as shown in FIG. 8, the signal correction unit 61 offset-corrects the image signal by subtracting the calculated average M from each of the pixel signal values of the effective pixels in the use effective area A (S12).
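Step S12 can be sketched as follows: subtract the reference-pixel average M from every effective-pixel value (all pixel values below are illustrative, not from the embodiment):

```python
# Offset correction of step S12: subtract the reference-pixel average M
# from each effective-pixel value in the use effective area A.
def offset_correct(effective_values, reference_values):
    M = sum(reference_values) / len(reference_values)
    return [v - M for v in effective_values]

corrected = offset_correct([120.0, 130.0, 125.0], [20.0, 22.0, 18.0])
# M = 20.0, so corrected == [100.0, 110.0, 105.0]
```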
Next, as shown in FIG. 7, the signal correction unit 61 stores the corrected pixel signal values in the storage unit 7 (S4). The image data stored in the storage unit 7 is output as appropriate by the output unit 8 to an external storage unit, a display unit, or the like (not shown). The corrected image data may also undergo any other necessary correction processing in the digital signal processing unit 6 of the infrared imaging device 1.
In the infrared imaging device 1 of the present embodiment, the reference region B captures only the infrared radiation from the infrared imaging device main body 12 at the same time as the use effective area A captures the infrared radiation from the subject. The average M calculated above is therefore a value attributable to the infrared rays incident from the infrared imaging device main body 12.
In the conventional infrared imaging device 100 shown in FIG. 19, a region on which no infrared rays are incident is created by blocking the infrared rays incident on the infrared sensor with an infrared shield; since an infrared shielding plate is required, a simpler structure and lower cost are desired. Moreover, in the conventional infrared imaging device 100, even when the temperature of the imaging device main body 12 rises, the temperature of the infrared shield 99 does not rise immediately, so the reference region B receives infrared rays radiated from the infrared shield 99, which is at a lower temperature than the imaging device main body 12. The amount of incident infrared radiation therefore differs between the use effective area A and the reference area B, and so does the correction amount that should be offset; consequently, it was difficult to correct accurately with an offset correction that subtracts the average M from each of the pixel signal values of the effective pixels in the use effective area A.
In the infrared imaging device 1 of the present embodiment, however, as described above, the optical axis O of the imaging optical system 2 is made to coincide with the center 31o of the detection region 31 of the infrared sensor 3, and on a straight line d in one direction passing through the centers O and 31o, the length of the region C of the detection region 31 is made larger than the length of the imaging region D of the imaging optical system 2. This provides, within the detection region 31 of the infrared sensor 3, an effective area AR in which the region C of the detection region 31 and the imaging region D overlap and a reference region BR in which they do not; an area on which infrared rays are incident (the effective area AR) and an area on which they are not (the reference region BR) can thus be formed in the detection region 31 without using any additional parts to block the incidence of infrared rays. Because no such additional parts are needed, the infrared sensor 3 can be manufactured easily and at low cost. Furthermore, since the amounts of infrared radiation incident on the effective area AR and on the reference region BR are not affected by a temperature difference between the infrared imaging device main body 12 and such additional parts, variations in the pixel signal values caused by temperature changes can be corrected accurately.
When the offset correction uses the average M of the pixel signal values of all the reference pixels in the four corner reference areas B1 to B4 as in FIG. 3, the influence of temperature differences arising around the infrared imaging device main body 12 can be reduced. When it uses the average M of the reference pixels in both the upper reference area B1 and the lower reference area B2 as in FIG. 4, the influence of temperature differences arising in the vertical direction of the infrared imaging device main body 12 can be reduced. When it uses the average M of the reference pixels in both the left reference area B3 and the right reference area B4 as in FIG. 5, the influence of temperature differences arising in the horizontal direction of the infrared imaging device main body 12 can be reduced. And when it uses the average M of the reference pixels in the frame-shaped reference region B as in FIG. 6, the influence of temperature differences arising around the infrared imaging device main body 12 can be reduced.
In this embodiment, the averages of the pixel signal values of the reference pixels in the reference regions B1 to B4 at the four corners (FIG. 3), in the reference regions B1 and B2 at the upper and lower ends (FIG. 4), and in the reference regions B3 and B4 at the left and right ends (FIG. 5) are each used for the offset correction; however, the present invention is not limited to this. For example, in FIG. 3, the average of the pixel signal values of the reference pixels in only one reference region selected from the four corner reference regions B1 to B4 may be used. In FIG. 4, the average of the pixel signal values of the reference pixels in only the upper reference region B1, or in only the lower reference region B2, may be used. Similarly, in FIG. 5, the average of the pixel signal values of the reference pixels in only the left reference region B3, or in only the right reference region B4, may be used; the configuration can be changed as appropriate.
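To make the flow of this offset correction concrete, here is a minimal sketch in Python/NumPy. It is an illustrative reconstruction, not the patented implementation: the frame size, the 2x2 corner reference regions, and the names `offset_correct` and `reference_masks` are assumptions of this sketch.

```python
import numpy as np

def offset_correct(frame: np.ndarray, reference_masks) -> np.ndarray:
    """Offset correction: subtract the average value M of the pixel
    signals of the reference pixels from every pixel of the frame.

    frame           -- raw pixel signal values of the detection region
    reference_masks -- boolean masks selecting the reference regions
    """
    # Average value M over all reference pixels in the chosen regions
    ref_values = np.concatenate([frame[mask] for mask in reference_masks])
    M = ref_values.mean()
    # The reference pixels receive no infrared from the scene, so they
    # carry only the common temperature drift; subtracting M removes
    # that drift from the effective pixels as well.
    return frame - M

# Example: a 6x8 detection region with 2x2 reference regions in the
# four corners (FIG. 3 layout)
scene = np.zeros((6, 8))
scene[2:4, 3:5] = 40.0   # a warm object imaged onto the effective area
drift = 5.0              # body-temperature offset seen by all pixels
frame = scene + drift
corner_masks = []
for rows in (slice(0, 2), slice(-2, None)):
    for cols in (slice(0, 2), slice(-2, None)):
        mask = np.zeros(frame.shape, dtype=bool)
        mask[rows, cols] = True
        corner_masks.append(mask)
corrected = offset_correct(frame, corner_masks)
print(corrected[2, 3], corrected[0, 7])  # → 40.0 0.0
```

Because the drift is common to reference and effective pixels, the corrected frame recovers the scene signal alone.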
Next, the second correction processing by the signal correction unit 61 of this embodiment will be described in detail. FIG. 9 is a flowchart of the second correction processing of the signal correction unit 61; FIG. 10 illustrates the correction method when the reference regions are at the four corners; FIG. 11 illustrates how the correction values for shading correction are calculated when the reference regions are at the four corners; FIG. 12 illustrates the correction method when the reference region is frame-shaped; FIG. 13 illustrates how the correction values for shading correction are calculated when the reference region is frame-shaped; and FIG. 14 shows an example of the correction values for shading correction when the reference region is frame-shaped.
As shown in FIG. 9, the signal correction unit 61 first calculates the average value of the pixel signal values of the reference pixels, that is, the pixels in the reference region B (S21). When the reference regions B are provided at the four corners of the detection region C as shown in FIG. 3, the unit calculates the average value M1 of the pixel signal values of the reference pixels in the upper-left reference region B1, the average value M2 for the upper-right reference region B2, the average value M3 for the lower-left reference region B3, and the average value M4 for the lower-right reference region B4. When the numbers of reference pixels in the upper-left, upper-right, lower-left, and lower-right reference regions B1 to B4 are i, j, k, and m, respectively, the average values M1 to M4 can be calculated by the following equations (6-1) to (6-4).
M1 = (B1₁ + B1₂ + … + B1ᵢ)/i … (6-1)
M2 = (B2₁ + B2₂ + … + B2ⱼ)/j … (6-2)
M3 = (B3₁ + B3₂ + … + B3ₖ)/k … (6-3)
M4 = (B4₁ + B4₂ + … + B4ₘ)/m … (6-4)
Here B1₁, …, B1ᵢ are the pixel signal values of the reference pixels in the reference region B1; B2₁, …, B2ⱼ those in the reference region B2; B3₁, …, B3ₖ those in the reference region B3; and B4₁, …, B4ₘ those in the reference region B4.
When the reference regions B are provided at the upper and lower ends of the detection region C as shown in FIG. 4, the unit calculates the average value M1 of the pixel signal values of the reference pixels in the upper reference region B1 and the average value M2 of those in the lower reference region B2. When the number of reference pixels in the upper reference region B1 is i and that in the lower reference region B2 is j, the average values M1 and M2 can be calculated by the following equations (7-1) and (7-2).
M1 = (B1₁ + B1₂ + … + B1ᵢ)/i … (7-1)
M2 = (B2₁ + B2₂ + … + B2ⱼ)/j … (7-2)
Here B1₁, …, B1ᵢ are the pixel signal values of the reference pixels in the reference region B1, and B2₁, …, B2ⱼ are those in the reference region B2.
When the reference regions B are provided at the left and right ends of the detection region C as shown in FIG. 5, the unit similarly calculates the average value M3 of the pixel signal values of the reference pixels in the left reference region B3 and the average value M4 of those in the right reference region B4. When the number of reference pixels in the left reference region B3 is i and that in the right reference region B4 is j, the average values M3 and M4 can be calculated by the following equations (8-1) and (8-2).
M3 = (B3₁ + B3₂ + … + B3ᵢ)/i … (8-1)
M4 = (B4₁ + B4₂ + … + B4ⱼ)/j … (8-2)
Here B3₁, …, B3ᵢ are the pixel signal values of the reference pixels in the reference region B3, and B4₁, …, B4ⱼ are those in the reference region B4.
When the reference region B is provided in a frame shape along the edge of the detection region C as shown in FIG. 6, the frame-shaped reference region B is divided into a plurality of reference regions. In the present embodiment, as shown in FIG. 12, the upper and lower rows are each divided into five divided reference regions, B11 to B15 and B51 to B55 respectively, and the remaining portions of the reference region, namely the left and right columns excluding B11 to B15 and B51 to B55, are each divided into three divided reference regions, B21 to B41 and B25 to B45 respectively, creating a total of 16 divided reference regions B11 to B55.
Next, for each of the 16 divided reference regions B11 to B55 thus created, the average values M11 to M55 of the pixel signal values of the reference pixels in the divided reference regions are calculated. When the number of reference pixels in the divided reference region B11 is i, the average value M11 can be calculated by the following equation (9).
M11 = (B₁ + B₂ + … + Bᵢ)/i … (9)
Here B₁, …, Bᵢ are the pixel signal values of the reference pixels in the divided reference region B11. The average values M12 to M55 of the remaining divided reference regions B12 to B55 can be calculated in the same manner as for B11.
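The division of the frame-shaped reference region and the averaging of equation (9) can be sketched as follows. This is an illustrative reconstruction: a reference band one pixel thick and equal-sized splits are assumptions of this sketch, chosen only to mirror the FIG. 12 naming B11 to B55.

```python
import numpy as np

def frame_region_averages(ref: np.ndarray) -> dict:
    """Split a frame-shaped reference band into 16 divided regions as in
    FIG. 12 (five along the top and bottom rows, three down each side
    column) and return the average M of each region, keyed by its name."""
    averages = {}
    # Top row -> B11..B15, bottom row -> B51..B55 (five splits each)
    for row_idx, row in (("1", ref[0, :]), ("5", ref[-1, :])):
        for k, part in enumerate(np.array_split(row, 5), start=1):
            averages[f"B{row_idx}{k}"] = part.mean()
    # Left column -> B21, B31, B41; right column -> B25, B35, B45
    for col_idx, col in (("1", ref[1:-1, 0]), ("5", ref[1:-1, -1])):
        for k, part in enumerate(np.array_split(col, 3), start=2):
            averages[f"B{k}{col_idx}"] = part.mean()
    return averages

avgs = frame_region_averages(np.ones((10, 15)))
print(len(avgs))  # → 16
```

`np.array_split` tolerates lengths that do not divide evenly, so the band need not be a multiple of five or three pixels long.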
Next, as shown in FIG. 9, the signal correction unit 61 performs shading correction on the pixel signal values of the effective pixels in the use effective area A (S22). When the reference regions B are provided at the four corners of the detection region C as shown in FIG. 3, the shading amount S is calculated using the average values M1 to M4 of the pixel signal values of the reference pixels calculated for each of the four reference regions B1 to B4. The method of calculating the shading amount S is described in detail here.
In the following description, the row direction of the use effective area A is taken as the x direction and the column direction as the y direction, and the position of each effective pixel in the use effective area A is expressed in the xy coordinate system. As shown in FIG. 10, the x coordinate of the endmost column on one end side in the row direction is x1 and that on the other end side is x2, and the y coordinate of the endmost row on one end side in the column direction is y1 and that on the other end side is y2, where x1 < x2 and y1 < y2. The average values of the pixel signal values of the reference pixels in the reference regions B1 to B4 are denoted Z11, Z12, Z21, and Z22, respectively, and the shading amount S at coordinates (x, y) is calculated.
As shown in FIG. 10, at y = y1 the shading amount S(x, y) is expressed by the following equation (10-1).
S(x, y1) = a1·x + b1 … (10-1)
Here a1 is the slope along the row y = y1 and is expressed by the following equation (10-2).
a1 = (Z12 − Z11)/(x2 − x1) … (10-2)
b1 is the corresponding intercept; since S(x1, y1) = Z11, it is expressed by the following equation (10-3).
b1 = S(x1, y1) − a1·x1 = Z11 − (Z12 − Z11)/(x2 − x1) × x1 … (10-3)
Similarly, at y = y2 the shading amount S(x, y) is expressed by the following equation (11-1).
S(x, y2) = a2·x + b2 … (11-1)
Here a2 is the slope along the row y = y2 and is expressed by the following equation (11-2).
a2 = (Z22 − Z21)/(x2 − x1) … (11-2)
b2 is the corresponding intercept; since S(x1, y2) = Z21, it is expressed by the following equation (11-3).
b2 = S(x1, y2) − a2·x1 = Z21 − (Z22 − Z21)/(x2 − x1) × x1 … (11-3)
Using the above equations (10-1) to (11-3), the shading amount S(x, y) at coordinates (x, y) is obtained by linearly interpolating between the two rows, as expressed by the following equation (12).
S(x, y) = ((a2·x + b2) − (a1·x + b1))/(y2 − y1) × (y − y1) + (a1·x + b1) … (12)
Next, an example of the shading correction values calculated by the above method will be described. Let the average values Z11, Z12, Z21, and Z22 of the pixel signal values of the reference pixels in the reference regions B1 to B4 calculated in step S21 of FIG. 9 be Z11 = 10, Z12 = 20, Z21 = 60, and Z22 = 300, as shown in FIG. 11. First, calculating the slope in each of the columns x1 to x2 in FIG. 11 gives the values shown at the bottom of the table in FIG. 11; for example, at x = x1 the slope is (60 − 10)/(10 − 2) = 6.25.
Similarly, calculating the slope in each of the rows y1 to y2 in FIG. 11 gives the values shown on the right side of the table in FIG. 11; for example, at y = y1 the slope is (20 − 10)/(5 − 1) = 2.5.
In this example, with y = y1 = 2, equation (10-2) gives the slope a1 = 2.5 and equation (10-3) gives the intercept b1 = 7.5. With y = y2 = 10, equation (11-2) gives the slope a2 = 60 and equation (11-3) gives the intercept b2 = 0.
Substituting these values into equation (12), the shading amount S at, for example, coordinates (x, y) = (2, 2) is S = 12.5. Substituting the coordinate values of the other pixels into equation (12) in the same way yields the shading amount S values shown in FIG. 11. The shading correction is then performed by applying the calculated shading amount S to the pixel signal value of each effective pixel in the use effective area A.
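Equations (10-1) through (12) and the worked example above can be reproduced with a short function. The function name and argument order are illustrative; the arithmetic follows the equations as written.

```python
def shading_amount(x, y, x1, x2, y1, y2, Z11, Z12, Z21, Z22):
    """Shading amount S(x, y) from the four corner averages:
    linear in x along the rows y1 and y2, then linear in y between them."""
    a1 = (Z12 - Z11) / (x2 - x1)   # slope along the row y = y1, eq. (10-2)
    b1 = Z11 - a1 * x1             # intercept, eq. (10-3)
    a2 = (Z22 - Z21) / (x2 - x1)   # slope along the row y = y2, eq. (11-2)
    b2 = Z21 - a2 * x1             # intercept, eq. (11-3)
    s_top = a1 * x + b1            # S(x, y1), eq. (10-1)
    s_bottom = a2 * x + b2         # S(x, y2), eq. (11-1)
    return (s_bottom - s_top) / (y2 - y1) * (y - y1) + s_top   # eq. (12)

# Worked example of FIG. 11: x1..x2 = 1..5, y1..y2 = 2..10
S = shading_amount(2, 2, 1, 5, 2, 10, Z11=10, Z12=20, Z21=60, Z22=300)
print(S)  # → 12.5
```

At the four corners the function reduces to Z11, Z12, Z21, and Z22 exactly, which is a quick sanity check on the interpolation.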
When the shading correction is performed using the average values M1 to M4 of the pixel signal values of the reference pixels in the four reference regions B1 to B4 provided at the four corners of the detection region C as shown in FIG. 3, the shading amount S of the infrared image in both the vertical and horizontal directions caused by temperature differences around the infrared imaging device main body 12 can be obtained in detail, so unevenness across the entire infrared image can be corrected accurately.
When the reference regions B are provided at the upper and lower ends of the detection region C as shown in FIG. 4, linear interpolation is performed using the calculated average value M1 of the pixel signal values of the reference pixels in the upper reference region B1 and the average value M2 of those in the lower reference region B2 to calculate the shading amount S of each effective pixel in the use effective area A, and the shading correction is performed by applying the calculated shading amount S to the pixel signal value of each effective pixel in the use effective area A.
For example, when a high-temperature object is placed above the infrared imaging device main body 12, the upper part of the captured infrared image becomes whitish overall; that is, temperature differences around the infrared imaging device main body 12 cause density unevenness in the captured infrared image. When the shading correction is performed using the average values M1 and M2 of the pixel signal values of the reference pixels in the upper reference region B1 and the lower reference region B2 as shown in FIG. 4, the vertical shading amount S of the infrared image can be obtained in detail, so vertical unevenness of the infrared image can be corrected accurately.
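This top-to-bottom linear interpolation can be sketched in a few lines. The row indexing, and the assumption that the first row abuts the reference region B1 and the last row abuts B2, are illustrative choices of this sketch.

```python
import numpy as np

def vertical_shading(M1: float, M2: float, n_rows: int) -> np.ndarray:
    """Linearly interpolate the shading amount S from the top reference
    average M1 to the bottom reference average M2, one value per row."""
    y = np.arange(n_rows)
    return M1 + (M2 - M1) * y / (n_rows - 1)

S = vertical_shading(M1=10.0, M2=50.0, n_rows=5)
print(S)  # → [10. 20. 30. 40. 50.]
```

Each row of the use effective area then has its row's shading value applied to all of its pixels; the left-right case of FIG. 5 is the same with columns in place of rows.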
When the reference regions B are provided at the left and right ends of the detection region C as shown in FIG. 5, as in the case of the upper and lower ends, linear interpolation is performed using the difference between the calculated average value M3 of the pixel signal values of the reference pixels in the left reference region B3 and the average value M4 of those in the right reference region B4 to calculate the shading amount S of each effective pixel in the use effective area A, and the shading correction is performed by applying the calculated shading amount S to the pixel signal value of each effective pixel.
When the shading correction is performed using the average values M3 and M4 of the pixel signal values of the reference pixels in the left reference region B3 and the right reference region B4 as shown in FIG. 5, the horizontal shading amount S of the infrared image can be obtained in detail, so horizontal unevenness of the infrared image can be corrected accurately.
When the reference region B is provided in a frame shape along the edge of the detection region C as shown in FIG. 6, the shading amount S is calculated for each of the 16 divided reference regions B11 to B55 created above, using the calculated average values M11 to M55 of the pixel signal values of the reference pixels in the divided reference regions B11 to B55. The shading amount S can be calculated using, for example, the pixel signal values of the surrounding pixels. The method of calculating the shading amount S is described in detail here.
As shown in FIG. 13, when the average values of the pixel signal values of the reference pixels in the divided reference regions B11 to B55 are Z11 to Z55, respectively, the shading amount at the upper-left effective pixel Z22 in the use effective area A can be calculated by the following equation (13).
Z22 = (Z12 − Z11) + (Z21 − Z11) + Z11 = Z12 + Z21 − Z11 … (13)
Similarly, the shading amount at the upper-right effective pixel Z24 in the use effective area A can be calculated by the following equation (14).
Z24 = (Z14 − Z15) + (Z25 − Z15) + Z15 = Z14 + Z25 − Z15 … (14)
That is, the use effective area A is partitioned into four areas divided at its horizontal and vertical centers. For an effective pixel T1 located in the area to the upper left of the center of the use effective area A, the average value M of the divided reference region B (or the value of the effective pixel) adjacent above T1 and that adjacent to its left are added, and the average value M of the divided reference region B (or the value of the effective pixel) diagonally adjacent to the upper left of T1 is subtracted from the sum to obtain the shading amount S at T1. For an effective pixel T2 located in the area to the lower left of the center, the values adjacent below and to the left of T2 are added, and the value diagonally adjacent to the lower left of T2 is subtracted from the sum to obtain the shading amount S at T2.
Similarly, for an effective pixel T3 located in the area to the upper right of the center of the use effective area A, the values adjacent above and to the right of T3 are added, and the value diagonally adjacent to the upper right of T3 is subtracted from the sum to obtain the shading amount S at T3. For an effective pixel T4 located in the area to the lower right of the center, the values adjacent below and to the right of T4 are added, and the value diagonally adjacent to the lower right of T4 is subtracted from the sum to obtain the shading amount S at T4.
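The quadrant rule of equations (13) and (14) can be sketched for the 5x5 grid of FIG. 13 as follows. The grid size and the tie-breaking at the central row and column (which the text leaves open to choice) are assumptions of this sketch.

```python
import numpy as np

def frame_shading(Z: np.ndarray) -> np.ndarray:
    """Fill the interior of a grid whose border holds the divided-region
    averages. Each interior cell is estimated from the neighbour toward
    the nearer horizontal edge, the neighbour toward the nearer vertical
    edge, and the diagonal between them, e.g. for the upper-left quadrant
    S = above + left - upper-left diagonal, as in equation (13).
    Written for the 5x5 layout of FIG. 13 (3x3 interior)."""
    S = Z.astype(float).copy()
    n_rows, n_cols = S.shape
    mid_r, mid_c = (n_rows - 1) / 2, (n_cols - 1) / 2
    for r in range(1, n_rows - 1):
        for c in range(1, n_cols - 1):
            dr = -1 if r <= mid_r else 1   # toward top or bottom edge
            dc = -1 if c <= mid_c else 1   # toward left or right edge
            S[r, c] = S[r + dr, c] + S[r, c + dc] - S[r + dr, c + dc]
    return S

# Border carries averages sampled from an affine field 10*r + c; the
# add-two-neighbours-minus-diagonal rule reconstructs such a field exactly.
Z = np.array([[10.0 * r + c for c in range(5)] for r in range(5)])
Z[1:-1, 1:-1] = 0.0
S = frame_shading(Z)
print(S[1, 1])  # → 11.0, i.e. Z12 + Z21 - Z11 = 1 + 10 - 0
```

The affine test field shows why the rule works: it extrapolates the locally linear drift measured on the frame into the interior.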
As described above, by calculating the shading amount starting from the effective pixels farthest from the center of the use effective area A, the shading amount is obtained for each effective pixel in the use effective area A. Note that, for example, the effective pixel Z23 in FIG. 13 is located at the horizontal center of the use effective area A; in the present embodiment its shading amount is calculated using the average value Z13 above it, the effective pixel Z22 to its left, and the average value Z12 diagonally to its upper left, but it may instead be calculated using the average value Z13 above it, the effective pixel Z24 to its right, and the average value Z14 diagonally to its upper right, or using shading amount values already calculated for surrounding pixels. Pixels located at the vertical center of the use effective area A can be handled in the same way. When an effective pixel is located at the center of the use effective area A, the shading amount value calculated for any of the surrounding pixels may be used, selected as appropriate.
Next, an example of the shading correction values calculated by the above method will be described. Substituting the average values Z11 to Z55 of the pixel signal values of the reference pixels in the divided reference regions B11 to B55 calculated in step S21 of FIG. 9 into the expressions shown in the use effective area A of FIG. 13 yields the shading amounts shown in FIG. 14 for each effective pixel in the use effective area A.
The signal correction unit 61 then performs shading correction by applying the shading amount calculated for each effective pixel to the corresponding pixel signal value in the use effective region A. As shown in FIG. 14, the shading amount calculated for a pixel may be either positive or negative. When the shading amount is negative, its absolute value is added to the pixel signal value of the effective pixel in the use effective region A; when the shading amount is positive, its value is subtracted from the pixel signal value of the effective pixel in the use effective region A.
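As an illustrative sketch only, and not part of the disclosed embodiments, the signed application of the shading amounts described above can be expressed as follows; the function name, array shapes, and values are assumptions introduced for illustration:

```python
# Sketch: apply signed per-pixel shading amounts to the pixel signals of
# the use effective region A. Subtracting a positive shading amount and
# adding the absolute value of a negative one are both equivalent to a
# plain subtraction, as the np.where form makes explicit.
import numpy as np

def apply_shading_correction(effective_pixels: np.ndarray,
                             shading: np.ndarray) -> np.ndarray:
    corrected = np.where(shading >= 0,
                         effective_pixels - shading,        # positive: subtract
                         effective_pixels + np.abs(shading))  # negative: add |S|
    return corrected

signals = np.array([[100.0, 102.0], [98.0, 101.0]])  # illustrative pixel values
shading = np.array([[2.0, -1.0], [0.5, 0.0]])        # illustrative shading amounts
print(apply_shading_correction(signals, shading))
```

Note that the two branches collapse to `effective_pixels - shading`; the branching form simply mirrors the sign handling described in the text.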
As shown in FIG. 12, when shading correction is performed using the average values M11 to M55 of the pixel signal values of the reference pixels in the 16 divided reference regions B11 to B55 arranged in a frame shape along the edge of the detection region C, the vertical and horizontal shading amounts S of the infrared image caused by temperature differences around the infrared imaging device main body 12 can be obtained in detail, so that unevenness across the entire infrared image can be corrected with high accuracy.
In the second correction method as well, as in the first correction method described above, the optical axis O of the imaging optical system 2 is aligned with the center 31o of the detection region 31 of the infrared sensor 3, and on the straight line d passing in one direction through the centers O and 31o, the length of the region C of the detection region 31 is made larger than the length of the imaging region D of the imaging optical system 2. The detection region 31 of the infrared sensor 3 is thereby provided with an effective region AR, where the region C of the detection region 31 and the imaging region D overlap, and a reference region BR, where they do not overlap. An effective region AR on which infrared rays are incident and a reference region BR on which infrared rays are not incident can therefore be formed within the detection region 31 without using additional components to block the incidence of infrared rays. Since no such additional components are required, the infrared sensor 3 can be manufactured simply and at low cost. Furthermore, the amounts of infrared radiation incident on the effective region AR and the reference region BR are not affected by any temperature difference between the infrared imaging device main body 12 and such additional components, so that variations in the pixel signal values due to temperature changes can be corrected with high accuracy.
In the present embodiment, the interpolation method described above is used to calculate the shading amount S; however, the present invention is not limited to this, and a known method such as linear or nonlinear interpolation may be used.
Next, an infrared imaging device according to a second embodiment of the present invention will be described. FIG. 15 is a schematic cross-sectional view illustrating the configuration of the infrared imaging device according to the second embodiment. Since the infrared imaging device of the second embodiment is the infrared imaging device 1 of the above-described embodiment with a temperature sensor, described later, added to it, the components in FIG. 15 are given the same reference numerals as in FIG. 1 and their description is omitted. Only the temperature sensor 13 and the correction method by the signal correction unit 61 using the output signal from the temperature sensor 13 are described in detail below.
In the infrared imaging device main body 12 of the present embodiment, a temperature sensor 13 is provided in the first main body portion 10 at a position facing the imaging optical system 2. A known device such as a thermistor, a thermocouple, or a resistance temperature detector can be used as the temperature sensor 13. The output signal from the temperature sensor 13 is sent to the signal correction unit 61 by wired or wireless communication (not shown).
Next, a correction method performed by the signal correction unit 61 of the infrared imaging device of the present embodiment will be described with reference to a flowchart. FIG. 16 is a flowchart of the third correction process of the signal correction unit 61. Since the overall processing of the infrared imaging device of the present embodiment is the same as that of the flowchart of FIG. 7 in the above-described embodiment, its description is omitted here and only the correction process by the signal correction unit 61 is described. In FIG. 16, the same processes as in FIG. 8 are denoted by the same step numbers and their detailed description is omitted.
As shown in FIG. 16, the signal correction unit 61 calculates, in step S11, the average value M of the pixel signal values of the reference pixels, which are the pixels in the reference region B, and in step S12 performs offset correction by subtracting the average value M from each of the pixel signal values of the effective pixels in the use effective region A.
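As an illustrative sketch only, and not part of the disclosed embodiments, the offset correction of steps S11 and S12 can be expressed as follows; the function name, array shapes, and values are assumptions introduced for illustration:

```python
# Sketch: offset correction. The mean of the reference-pixel signal values
# (step S11) is subtracted from every effective-pixel signal value (step S12).
import numpy as np

def offset_correct(effective_pixels: np.ndarray,
                   reference_pixels: np.ndarray) -> np.ndarray:
    m = reference_pixels.mean()   # step S11: average value M of the reference pixels
    return effective_pixels - m   # step S12: subtract M from each effective pixel

effective = np.array([[210.0, 212.0], [208.0, 211.0]])  # illustrative values
reference = np.array([10.0, 12.0, 11.0, 11.0])          # illustrative values
print(offset_correct(effective, reference))
```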
Next, the signal correction unit 61 detects the output signal from the temperature sensor 13 and obtains a value corresponding to the detected value of the output signal (step S13). FIG. 17 shows the relationship between the value of the output signal from the temperature sensor 13 and the calculated value corresponding to it.
The calculated value corresponding to the value of the output signal from the temperature sensor 13 is set in advance for each model of the infrared imaging device 1; for example, as shown in FIG. 17, a table associating output signal values from the temperature sensor 13 with calculated values is stored in the storage unit 7. To build this table, the amount of infrared radiation emitted at each temperature of the infrared imaging device main body 12 is measured, and a value based on this amount of radiation is set as the calculated value. Specifically, for example, with the infrared imaging device 1 placed in a thermostatic chamber and held at a constant temperature, a reference heat source of known absolute temperature is imaged by the infrared imaging device 1, and the calculated values of FIG. 17 are derived from the difference between the temperature data of the infrared imaging device 1 and the temperature of the reference heat source. This table is prepared in advance at the design or manufacturing stage of the infrared imaging device 1. The signal correction unit 61 obtains the calculated value corresponding to the value of the output signal from the temperature sensor 13 by referring to the table stored in the storage unit 7.
Next, the signal correction unit 61 applies the calculated value corresponding to the output signal value of the temperature sensor 13, obtained in step S13, to the values after the offset correction in step S12, thereby performing a further offset correction (step S14). In FIG. 17, for example, when the calculated value M4 for an output signal value N4 from the temperature sensor 13 is 0, the infrared image represented by the values after the offset correction in step S12 is stored in the storage unit 7 as-is.
On the other hand, when the output signal value from the temperature sensor 13 is one of N1 to N3 and the corresponding calculated value M1 to M3 is positive, that calculated value is subtracted from the value after the offset correction in step S12 to further offset-correct the pixel signal value. When the output signal value from the temperature sensor 13 is N5 or beyond and the corresponding calculated value M5 or beyond is negative, the absolute value of that calculated value is added to the value after the offset correction in step S12 to further offset-correct the pixel signal value.
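As an illustrative sketch only, and not part of the disclosed embodiments, steps S13 and S14 can be expressed as a table lookup followed by a signed correction; the table contents, function name, and values below are assumptions introduced for illustration and do not reproduce the data of FIG. 17:

```python
# Sketch: look up the calculated value for a temperature-sensor reading in a
# prestored table (cf. FIG. 17), then apply it to the offset-corrected pixel
# signals. Positive calculated values are subtracted; the absolute value of
# negative ones is added -- again equivalent to a plain subtraction.
import numpy as np

# Illustrative table: sensor output value -> calculated value.
CALC_TABLE = {1: 3.0, 2: 2.0, 3: 1.0, 4: 0.0, 5: -1.0, 6: -2.0}

def temperature_offset_correct(pixels: np.ndarray, sensor_value: int) -> np.ndarray:
    c = CALC_TABLE[sensor_value]  # step S13: calculated value from the table
    return pixels - c             # step S14: further offset correction

pixels = np.array([100.0, 101.0])
print(temperature_offset_correct(pixels, 3))  # positive value: subtracted
print(temperature_offset_correct(pixels, 5))  # negative value: |c| added
```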
In general, both the infrared rays emitted by the subject and the infrared rays emitted by the infrared imaging device main body 12 are incident on the infrared sensor 3. As described above, the calculated value is based on the amount of infrared radiation emitted by the infrared imaging device main body 12 as indicated by the output signal value of the temperature sensor 13. By applying this calculated value to the offset-corrected pixel signal values, the contribution of the infrared rays emitted by the infrared imaging device main body 12 to the infrared rays incident on the infrared sensor 3 can be offset. Highly accurate image signal values based on the infrared rays emitted by the subject can therefore be obtained, and a highly accurate infrared image based on the absolute temperature of the subject can be acquired.
Next, a fourth correction method performed by the signal correction unit 61 of the infrared imaging device of the present embodiment will be described with reference to a flowchart. FIG. 18 is a flowchart of the fourth correction process of the signal correction unit 61. Since the overall processing of the infrared imaging device of the present embodiment is the same as that of the flowchart of FIG. 7 in the above-described embodiment, its description is omitted here and only the correction process by the signal correction unit 61 is described. In FIG. 18, the same processes as in FIG. 9 are denoted by the same step numbers and their detailed description is omitted.
As shown in FIG. 18, the signal correction unit 61 calculates, in step S21, the average value M of the pixel signal values of the reference pixels, which are the pixels in each reference region B, and in step S22 performs shading correction on the pixel signal values of the effective pixels in the use effective region A.
Next, the signal correction unit 61 detects the output signal from the temperature sensor 13 and obtains a value corresponding to the detected value of the output signal (step S23). Since the process of step S23 is the same as that of step S13 in FIG. 16, its description is omitted here.
Next, the signal correction unit 61 applies the calculated value corresponding to the output signal value of the temperature sensor 13, obtained in step S23, to the values after the shading correction in step S22, thereby performing offset correction (step S24). In FIG. 17, for example, when the calculated value M4 for an output signal value N4 from the temperature sensor 13 is 0, the infrared image represented by the values after the shading correction in step S22 is stored in the storage unit 7 as-is.
On the other hand, when the output signal value from the temperature sensor 13 is one of N1 to N3 and the corresponding calculated value M1 to M3 is positive, that calculated value is subtracted from the value after the shading correction in step S22 to offset-correct the pixel signal value. When the output signal value from the temperature sensor 13 is N5 or beyond and the corresponding calculated value M5 or beyond is negative, the absolute value of that calculated value is added to the value after the shading correction in step S22 to offset-correct the pixel signal value.
In general, both the infrared rays emitted by the subject and the infrared rays emitted by the infrared imaging device main body 12 are incident on the infrared sensor 3. As described above, the calculated value is based on the amount of infrared radiation emitted by the infrared imaging device main body 12 as indicated by the output signal value of the temperature sensor 13. By applying this calculated value to the shading-corrected pixel signal values, the contribution of the infrared rays emitted by the infrared imaging device main body 12 to the infrared rays incident on the infrared sensor 3 can be offset. Highly accurate image signal values based on the infrared rays emitted by the subject can therefore be obtained, and a highly accurate infrared image based on the absolute temperature of the subject can be acquired.
In the embodiment described above, the temperature sensor 13 is provided in the infrared imaging device main body 12 at a position facing the imaging optical system 2, but the present invention is not limited to this. In the present invention, the temperature sensor 13 may be provided anywhere on the infrared imaging device main body 12; for example, it may be provided on a wall surface constituting the second main body portion 11 inside the infrared imaging device main body 12 of FIG. 15, or on a wall surface constituting the first main body portion 10 or the second main body portion 11 outside the infrared imaging device main body 12, and its position can be changed as appropriate. When the temperature sensor 13 is provided inside the infrared imaging device main body 12, it measures the temperature of a wall located in the space housing the infrared sensor 3, and can therefore measure a temperature closer to that of the walls radiating infrared rays onto the infrared sensor 3, improving the accuracy of the correction. Further, when the temperature sensor 13 is provided in the infrared imaging device main body 12 at a position facing the imaging optical system 2, the temperature of the wall that radiates infrared rays onto the infrared sensor 3 can itself be measured, further improving the accuracy of the correction. Although one temperature sensor 13 is provided in the present embodiment, a plurality of temperature sensors 13 may be provided at different positions. When a plurality of temperature sensors 13 are provided, the average of the values of the output signals from the temperature sensors 13 can be used.
According to each embodiment of the present invention, the effects described above are favorably obtained against the offset fluctuations that occur in thermal sensors (such as microbolometer and SOI diode types), and an infrared sensor that detects far-infrared rays is therefore used; however, the present invention is not limited to this, and an infrared sensor that detects mid-infrared or near-infrared rays may be used.
The infrared imaging device 1 according to each embodiment of the present invention is suitably applicable to security imaging devices, in-vehicle imaging devices, and the like, and may be configured as a standalone imaging device that captures infrared images, or may be incorporated into an imaging system having an infrared image capturing function.
The infrared imaging device of the present invention is not limited to the above embodiments and can be modified as appropriate without departing from the spirit of the invention.
DESCRIPTION OF SYMBOLS
1 Infrared imaging device
12 Infrared imaging device main body
13 Temperature sensor
2 Imaging optical system
3 Infrared sensor
4 Analog signal processing unit
5 A/D conversion unit
6 Digital signal processing unit
61 Signal correction unit
7 Storage unit
8 Output unit
30 Imaging surface
31 Detection region
31o Center of detection region
AR Effective region
A Use effective region
B Reference region
C Region of detection region
O Optical axis
Claims (10)
- An infrared imaging device comprising: an imaging optical system that forms an image of infrared rays; an infrared sensor that is located at the imaging surface of the imaging optical system, that has a detection region in which a plurality of pixels, each being a thermoelectric conversion element, are arrayed, and that outputs, for each pixel, a pixel signal based on infrared rays incident from the imaging optical system, wherein the optical axis of the imaging optical system coincides with the center of the detection region and, in one direction passing through the center, the length of the detection region is made larger than the length of the imaging region of the imaging optical system, whereby an effective region in which the detection region and the imaging region overlap and a reference region in which the detection region and the imaging region do not overlap are provided within the detection region; and a signal correction unit that corrects variations in pixel signal values due to temperature changes by correcting the pixel signal values of effective pixels, which are the pixels in the effective region, using the pixel signal values of reference pixels, which are the pixels in the reference region.
- The infrared imaging device according to claim 1, wherein the signal correction unit performs offset correction by subtracting an average value of the pixel signal values of the reference pixels from each of the pixel signal values of the effective pixels.
- The infrared imaging device according to claim 2, wherein two or more of the reference regions are provided in the detection region, and the signal correction unit performs the offset correction by subtracting an average value of the pixel signal values of the reference pixels of at least one of the two or more reference regions from each of the pixel signal values of the effective pixels.
- The infrared imaging device according to claim 2, wherein the detection region is rectangular and the reference regions are provided at the four corners of the detection region, and the signal correction unit performs the offset correction by subtracting, from each of the pixel signal values of the effective pixels, an average value of the pixel signal values of the reference pixels in at least one of the reference regions at the four corners.
- The infrared imaging device according to claim 1, wherein two or more of the reference regions are provided in the detection region, and the signal correction unit calculates average values of the pixel signal values of the reference pixels for at least two of the two or more reference regions and performs shading correction on the pixel signal values of the effective pixels using the calculated at least two average values.
- The infrared imaging device according to claim 5, wherein the detection region is rectangular and the reference regions are provided at the four corners of the detection region, and the signal correction unit calculates, for at least two of the reference regions at the four corners, average values of the pixel signal values of the reference pixels in those reference regions and performs the shading correction on the pixel signal values of the effective pixels using the calculated at least two average values.
- The infrared imaging device according to claim 1, wherein the reference region is provided at an edge within the detection region as a frame-shaped reference region portion in which a plurality of the reference pixels are arrayed, the frame-shaped reference region portion is divided into a plurality of regions, and the signal correction unit calculates, for each of the plurality of regions, an average value of the pixel signal values of the reference pixels located in that region and performs shading correction on the pixel signal values of the effective pixels using the calculated average values.
- The infrared imaging device according to any one of claims 1 to 7, wherein at least one temperature sensor is provided in a main body of the infrared imaging device, and the signal correction unit performs a further offset correction by applying a value corresponding to a value of an output signal from the temperature sensor to the pixel signal values of the effective pixels.
- The infrared imaging device according to claim 8, wherein the temperature sensor is provided inside the main body of the infrared imaging device.
- The infrared imaging device according to claim 9, wherein the temperature sensor is provided at a position facing the imaging optical system.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
JP2015103564 | 2015-05-21 | ||
JP2015-103564 | 2015-05-21 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2016185699A1 true WO2016185699A1 (en) | 2016-11-24 |
Family ID: 57319741
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2016/002352 WO2016185699A1 (en) | 2015-05-21 | 2016-05-13 | Infrared imaging device |
Country Status (1)
Country | Link |
---|---|
WO (1) | WO2016185699A1 (en) |
Citations (5)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH0342531A (en) * | 1989-07-11 | 1991-02-22 | Mitsubishi Electric Corp | Infrared measuring instrument |
JP2006292594A (en) * | 2005-04-12 | 2006-10-26 | Nec Electronics Corp | Infrared detector |
JP2008113141A (en) * | 2006-10-30 | 2008-05-15 | Fujifilm Corp | Imaging device and signal processing method |
JP2008160561A (en) * | 2006-12-25 | 2008-07-10 | Auto Network Gijutsu Kenkyusho:Kk | Imaging system and imaging apparatus |
WO2014018948A2 (en) * | 2012-07-26 | 2014-01-30 | Olive Medical Corporation | Camera system with minimal area monolithic cmos image sensor |
Legal Events

Date | Code | Title | Description
---|---|---|---
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 16796095; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: DE
| 122 | Ep: pct application non-entry in european phase | Ref document number: 16796095; Country of ref document: EP; Kind code of ref document: A1
| NENP | Non-entry into the national phase | Ref country code: JP