WO2016185699A1 - Infrared imaging device - Google Patents


Info

Publication number
WO2016185699A1
Authority
WO
WIPO (PCT)
Prior art keywords
region, pixel, infrared, value, signal
Application number
PCT/JP2016/002352
Other languages
English (en)
Japanese (ja)
Inventor
小林 誠
Original Assignee
富士フイルム株式会社
Application filed by 富士フイルム株式会社
Publication of WO2016185699A1

Classifications

    • G — PHYSICS
    • G01 — MEASURING; TESTING
    • G01J — MEASUREMENT OF INTENSITY, VELOCITY, SPECTRAL CONTENT, POLARISATION, PHASE OR PULSE CHARACTERISTICS OF INFRARED, VISIBLE OR ULTRAVIOLET LIGHT; COLORIMETRY; RADIATION PYROMETRY
    • G01J 1/02 — Photometry, e.g. photographic exposure meter; Details
    • G01J 1/44 — Photometry using electric radiation detectors; Electric circuits
    • H — ELECTRICITY
    • H04 — ELECTRIC COMMUNICATION TECHNIQUE
    • H04N — PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 25/60 — Circuitry of solid-state image sensors [SSIS]; Noise processing, e.g. detecting, correcting, reducing or removing noise
    • H04N 5/33 — Transforming infrared radiation

Definitions

  • the present invention relates to an imaging device that captures an infrared image, and more particularly to an infrared imaging device that corrects fluctuations in the value of a pixel signal output from an infrared sensor.
  • An infrared imaging device that captures an infrared image by detecting, with an infrared sensor, infrared rays emitted from a subject such as an object or a person is known. Any subject whose temperature is above absolute zero emits infrared light: the higher the subject's temperature, the more infrared light it emits and the shorter the wavelengths at which it emits; the lower the temperature, the less it emits and the longer the wavelengths. When a subject is imaged by an infrared imaging device, high-temperature parts of the captured image are displayed as white and low-temperature parts as black. However, when a temperature change occurs in or near the infrared imaging device, the detection signal of the infrared sensor changes with it, and noise appears in the captured subject image.
  • In Patent Document 1, among the detection region in which pixels that are thermal infrared imaging elements are two-dimensionally arranged, the pixels in a region where infrared rays are incident are treated as effective pixels, and the pixels in a region where infrared rays are not incident are treated as reference pixels. An infrared imaging device is disclosed in which, when a temperature rise occurs in the region where the reference pixels are located, the fluctuation of the detection signal of the infrared sensor due to the temperature rise is suppressed by reducing the bias current flowing to the effective pixels. In Patent Document 1, a shield covering the periphery of the detection region, that is, an infrared shielding body, is provided inside the infrared sensor, and this shielding body blocks part of the infrared light incident on the infrared sensor, thereby creating a region where no infrared light is incident.
  • There is also disclosed an infrared sensor in which an infrared shielding body is provided in the housing that accommodates the infrared sensor so that infrared rays from the subject do not enter some of the pixels, that is, the reference pixels; with the reference pixels serving as a reference, the electrical signals from the other pixels are read so as to cancel variations in the infrared rays radiated from the casing and to detect the infrared rays incident from the subject.
  • In an infrared imaging device, both the infrared rays emitted from the subject and the infrared rays emitted from the infrared imaging device main body are incident on the infrared sensor. When the temperature of the infrared imaging device main body rises, the amount of infrared rays emitted from the main body increases, so the entire captured image becomes whitish. When the temperature rises locally, for example on the right side of the device, the captured infrared image becomes whitish mainly on that side. It is therefore desirable to correct the pixel signals output from the infrared sensor so as to reduce the fluctuation caused by the infrared rays emitted from the infrared imaging device body.
  • FIG. 20 is a diagram illustrating an example of the amount of infrared rays in a conventional infrared sensor provided with the infrared shielding body 99. The effective region where infrared rays from the imaging optical system 2 are incident is indicated by A, and the reference region where infrared rays from the imaging optical system 2 are not incident is indicated by B. In the region A, the incident amount of infrared rays unrelated to the infrared rays from the imaging optical system 2 differs from that in the region B. For this reason, when the reference pixels in the reference region B are used, it is difficult to accurately perform a correction that removes this unrelated incident infrared component.
  • The present invention has been made in view of such problems, and an object thereof is to provide an infrared imaging device capable of accurately correcting fluctuations, caused by temperature changes, in the values of the pixel signals output from an infrared sensor, without increasing the number of components.
  • An infrared imaging device of the present invention includes: an imaging optical system that forms an infrared image; an infrared sensor that has a detection region located on the imaging surface of the imaging optical system, in which a plurality of pixels that are thermoelectric conversion elements are arranged, and that outputs, for each pixel, a pixel signal based on infrared rays incident from the imaging optical system, wherein the optical axis of the imaging optical system coincides with the center of the detection region and the length of the detection region is made larger than the length of the imaging region of the imaging optical system in one direction passing through the center, so that the detection region includes an effective region where the detection region and the imaging region overlap and a reference region where they do not overlap; and a signal correction unit that corrects a change in the value of the pixel signal due to a temperature change by correcting the value of the pixel signal of an effective pixel, that is, a pixel in the effective region, using the value of the pixel signal of a reference pixel, that is, a pixel in the reference region.
  • Here, “infrared rays” includes all of near-infrared, mid-infrared, and far-infrared rays.
  • the “effective region where the detection region and the imaging region overlap” means a region where infrared rays incident from the imaging optical system reach in the detection region.
  • the “reference region where the detection region and the imaging region do not overlap” means a region where infrared rays incident from the imaging optical system do not reach in the detection region.
  • the “reference area” means an area that is not an effective area among the detection areas.
  • In the infrared imaging device of the present invention, the signal correction unit can perform offset correction by subtracting the average value of the pixel signal values of the reference pixels from the value of the pixel signal of each effective pixel.
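  As a concrete illustration, this offset correction can be sketched in a few lines; this is a minimal sketch, not code from the patent, and the array names, region layout, and pixel values are assumptions for the example:

```python
import numpy as np

def offset_correct(frame, ref_mask):
    """Subtract the mean of the reference pixels from the whole frame.

    frame    -- 2D array of raw pixel-signal values from the detection region
    ref_mask -- boolean 2D array, True where a pixel is a reference pixel
                (a pixel the imaging region of the optics does not reach)
    """
    # The reference pixels see only the infrared emitted by the device
    # body, so their average approximates the drift to remove.
    offset = frame[ref_mask].mean()
    return frame - offset

# Toy example: a 4x4 detection region whose four corner pixels are
# reference pixels (as in the four-corner layout of FIG. 3).
ref_mask = np.zeros((4, 4), dtype=bool)
ref_mask[[0, 0, -1, -1], [0, -1, 0, -1]] = True
scene = np.where(ref_mask, 0.0, 50.0)   # subject IR reaches only effective pixels
frame = scene + 5.0                     # uniform drift from body temperature
corrected = offset_correct(frame, ref_mask)
print(corrected[1, 1])                  # 50.0 -- drift removed
```

  Subtracting a single scalar from every effective pixel matches the offset correction above; the shading corrections described next replace the scalar with a spatially varying estimate.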
  • In the infrared imaging device of the present invention, two or more reference regions may be provided in the detection region, and the signal correction unit may perform offset correction by subtracting the average value of the pixel signal values of the reference pixels in at least one of the two or more reference regions from the value of the pixel signal of each effective pixel.
  • In the infrared imaging device of the present invention, the detection region may be rectangular, with reference regions provided at its four corners, and the signal correction unit may perform offset correction by subtracting the average value of the pixel signal values of the reference pixels in at least one of the four corner reference regions from the pixel signal value of each effective pixel.
  • In the infrared imaging device of the present invention, two or more reference regions may be provided in the detection region, and the signal correction unit may calculate the average value of the pixel signals of the reference pixels in at least two of the two or more reference regions and perform shading correction on the values of the pixel signals of the effective pixels using the calculated average values.
  • Here, shading correction means correction that reduces non-uniformity of the incident infrared rays arising on the imaging surface of the infrared sensor, such as a decrease in the amount of infrared rays at the periphery of the imaging region caused by the imaging optical system, non-uniformity of the infrared rays generated from the circuit board when it is energized, and non-uniformity of the infrared rays caused by uneven external heat from the optical system or the infrared imaging device main body.
  • In the infrared imaging device of the present invention, the detection region may be rectangular, with reference regions provided at its four corners, and the signal correction unit may calculate the average value of the pixel signal values of the reference pixels for at least two of the four corner reference regions and perform shading correction on the values of the pixel signals of the effective pixels using the calculated average values.
  • In the infrared imaging device of the present invention, a reference region may be provided at the edge of the detection region, a frame-shaped reference region portion in which a plurality of reference pixels are arranged may be provided in the reference region, and the frame-shaped reference region portion may be divided into a plurality of regions. The signal correction unit may calculate the average value of the pixel signal values of the reference pixels located in each of the divided regions and perform shading correction on the values of the pixel signals of the effective pixels using each of the calculated average values.
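  One possible reading of this shading correction is to interpolate between the reference-region averages across the image; the patent does not fix the interpolation scheme, so the linear model below, like the function and variable names, is an assumption for illustration:

```python
import numpy as np

def shading_correct(frame, mean_top, mean_bottom):
    """Subtract a row-wise linear gradient interpolated between the
    average reference values measured above and below the effective area."""
    rows = frame.shape[0]
    # Per-row offset: mean_top at the first row, mean_bottom at the last.
    ramp = np.linspace(mean_top, mean_bottom, rows).reshape(-1, 1)
    return frame - ramp

# Toy example: a flat scene of 50 with a vertical shading ramp of 2..8
# added, as if the lower part of the body were warmer than the upper part.
rows, cols = 4, 3
shade = np.linspace(2.0, 8.0, rows).reshape(-1, 1)
frame = 50.0 + shade * np.ones((rows, cols))
corrected = shading_correct(frame, mean_top=2.0, mean_bottom=8.0)
print(corrected)   # approximately 50 everywhere
```

  With four corner averages instead of two edge averages, the same idea extends to a bilinear ramp over both axes.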
  • In the infrared imaging device of the present invention, at least one temperature sensor may be provided in the main body of the infrared imaging device, and the signal correction unit may perform a further offset correction by applying a value corresponding to the value of the output signal from the temperature sensor to the value of the pixel signal of each effective pixel.
  • Here, the “value corresponding to the value of the output signal from the temperature sensor” is a value set in advance for each model of infrared imaging device. For example, a table mapping output-signal values to correction values can be created in advance, and a value drawn from this table can be used.
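  A minimal sketch of such a table-based further offset correction follows; the table breakpoints and correction values are invented for illustration, and linear interpolation between entries is an assumption (the patent only requires a pre-set table):

```python
import numpy as np

# Hypothetical calibration table for one device model:
# temperature-sensor output (ADC counts) -> offset to subtract (counts).
SENSOR_OUT = [100, 200, 300, 400]
OFFSET     = [0.0, 1.5, 4.0, 9.0]

def temperature_offset(sensor_value):
    """Look up, with linear interpolation, the offset for a sensor reading."""
    return float(np.interp(sensor_value, SENSOR_OUT, OFFSET))

def correct_with_temperature(frame, sensor_value):
    """Apply the further offset correction to the effective-pixel values."""
    return frame - temperature_offset(sensor_value)

print(temperature_offset(250))  # midway between 1.5 and 4.0 -> 2.75
```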
  • In the infrared imaging device of the present invention, it is preferable to provide the temperature sensor inside the main body of the infrared imaging device.
  • In the infrared imaging device of the present invention, it is preferable to provide the temperature sensor at a position facing the imaging optical system.
  • According to the present invention, the optical axis of the imaging optical system coincides with the center of the detection region of the infrared sensor in which a plurality of pixels as thermoelectric conversion elements are arranged, and, in one direction passing through the center, the length of the detection region is made larger than the length of the imaging region of the imaging optical system, so that an effective region where the detection region and the imaging region overlap and a reference region where they do not overlap both exist within the detection region.
  • FIG. 1 is a schematic cross-sectional view illustrating the configuration of an infrared imaging device according to an embodiment of the present invention.
  • FIG. 2 is a schematic block diagram illustrating the configuration of the infrared imaging device.
  • Further figures show: examples of the second and fourth detection regions of the infrared sensor; the correction method when the reference region is frame-shaped; the calculation of the shading-correction values when the reference region is frame-shaped; an example of those shading-correction values; the flowchart of the third correction process of the signal correction unit; the relationship between the value of the output signal from the temperature sensor and the value applied according to it; the flowchart of the fourth correction process of the signal correction unit; and a schematic cross-sectional view illustrating the configuration of a conventional infrared imaging device.
  • FIG. 1 is a schematic cross-sectional view illustrating the configuration of an infrared imaging device according to an embodiment of the present invention.
  • FIG. 2 is a schematic block diagram illustrating the configuration of an infrared imaging device 1 according to an embodiment of the present invention.
  • As illustrated, the infrared imaging device 1 according to the present embodiment includes an infrared imaging device main body 12 composed of a first main body portion 10 and a second main body portion 11; an imaging optical system 2, installed in the first main body portion 10, capable of imaging infrared rays emitted from a subject onto an imaging plane 30; and an infrared sensor 3, installed in the second main body portion 11 and located on the imaging plane 30 of the imaging optical system 2, which has a detection region 31 in which a plurality of pixels as thermoelectric conversion elements are arranged and which outputs, for each pixel, a pixel signal based on the infrared rays incident from the imaging optical system 2.
  • The infrared imaging device main body 12 is made of a metal material such as aluminum or stainless steel, or a resin material such as plastic; the infrared imaging device main body 12 of this embodiment is made of stainless steel. The internal structure of the infrared imaging device main body 12 will be described in detail later.
  • the imaging optical system 2 is a lens group composed of one or more lenses.
  • the lenses are held by a holding frame, and the holding frame is fixed to the first main body 10.
  • the imaging optical system 2 is described as a fixed focus optical system, but the present invention is not limited to this, and may be a variable focus optical system.
  • the infrared sensor 3 is driven by a drive control unit (not shown), takes a subject image formed in the detection area 31 as an infrared image, converts it into a pixel signal, and outputs it.
  • the infrared sensor 3 outputs a pixel signal by sequentially transferring charges accumulated in each pixel and converting them into an electrical signal.
  • FIG. 3 shows a diagram illustrating a first embodiment of the detection region 31 of the infrared sensor 3. It is assumed that the region C of the detection region 31 of the infrared sensor 3 of the present embodiment has a rectangular shape, and the imaging region D of the imaging optical system 2 has a circular shape.
  • The optical axis of the imaging optical system 2, that is, the center O of the imaging region D, coincides with the center 31o of the region C of the detection region 31 of the infrared sensor 3. A straight line passing through the centers O and 31o is denoted as a straight line d; here, the straight line d lies along a diagonal of the region C of the detection region 31. On the straight line d, the length of the region C of the detection region 31 is configured to be greater than the length of the imaging region D.
  • A region where the infrared light that has passed through the imaging optical system 2 is incident, that is, a region where the region C of the detection region 31 and the imaging region D of the imaging optical system 2 overlap, is an effective region AR, and a region where the transmitted infrared light does not enter, that is, a region where the region C of the detection region 31 and the imaging region D of the imaging optical system 2 do not overlap, is a reference region BR.
  • In FIG. 3, the length of the region C of the detection region 31 is also larger than the length of the imaging region D on the straight line orthogonal to the straight line d. Reference regions BR are therefore formed at the four corners of the region C.
  • A plurality of pixels that are thermoelectric conversion elements are arranged in the effective region AR and the reference region BR; pixels located in the effective region AR are effective pixels, and pixels located in the reference region BR are reference pixels.
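  To make the geometry concrete, the classification of pixels into the effective region AR and the reference region BR for a circular imaging region centred on the detection region can be sketched as follows; this is not from the patent, and the grid size and radius are invented for the example:

```python
import numpy as np

def region_masks(rows, cols, radius):
    """Classify detection-region pixels against a circular imaging region
    centred on the detection region's centre (the FIG. 3 geometry)."""
    y, x = np.mgrid[0:rows, 0:cols]
    cy, cx = (rows - 1) / 2.0, (cols - 1) / 2.0
    inside = (x - cx) ** 2 + (y - cy) ** 2 <= radius ** 2
    # Pixels the imaging circle reaches are effective; the rest (the
    # four corners of a square grid) form the reference region.
    return inside, ~inside   # effective region AR, reference region BR

effective, reference = region_masks(8, 8, radius=3.9)
print(reference[0, 0], effective[4, 4])  # corners fall outside the circle
```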
  • The largest rectangular area within the effective region AR, in which effective pixels are arranged in a matrix, is set as the use effective area A, and the pixel data output from the effective pixels in the use effective area A forms the infrared image. A signal correction unit 61, described later, performs the infrared-image signal correction processing using the pixel data output from the reference pixels in the reference regions B1 to B4.
  • With this configuration, the infrared sensor 3 can be manufactured easily and at low cost, and the reference regions B1 to B4 can be provided in the area excluding the imaging region D while suitably securing their positions and areas relative to the effective region AR. For example, compared with the case shown in FIG. 19, in which the imaging region D covers the entire region C of the detection region 31, that is, all the pixels in the region C become effective pixels, the lens of the imaging optical system 2 can be made smaller, so the cost of the imaging optical system 2 can be reduced.
  • the infrared sensor 3 is a rectangular sensor, and the four corners of the infrared sensor 3 are reference areas BR that are areas that do not overlap with the imaging area D.
  • the reference pixel used for the infrared image signal correction process can be arbitrarily selected from the reference pixels in the reference region BR. That is, the region formed by the reference pixels to be used can be selectively provided in an arbitrary shape at an arbitrary position in the reference region BR.
  • the position information of the effective pixel and the reference pixel is stored in the storage unit 7 and is referred to by the signal correction unit 61 as appropriate.
  • A use effective area A and reference areas B having shapes different from those of the above-described embodiment may also be provided by appropriately changing their sizes.
  • FIGS. 4 to 6 show further examples of embodiments of the detection region 31 of the infrared sensor 3. FIGS. 4 to 6 are schematic diagrams for explaining the configuration of the detection region 31, and the sizes of the regions shown differ from the actual ones.
  • In the second detection region 31 of the infrared sensor 3, the optical axis of the imaging optical system 2, that is, the center O of the imaging region D, coincides with the center 31o of the region C of the detection region 31. The vertical line passing through the centers O and 31o and connecting the upper and lower sides of the detection region 31 is defined as a straight line d, and on this line the length of the region C of the detection region 31 is configured to be larger than the length of the imaging region D. The region where the infrared rays that have passed through the imaging optical system 2 are incident, that is, where the region C and the imaging region D overlap, is the effective region AR, and the outer peripheral portion surrounding the imaging region D, where the transmitted infrared rays do not enter, that is, where the region C and the imaging region D do not overlap, is the reference region BR. The largest rectangular area within the effective region AR, in which effective pixels are arranged in a matrix, is the use effective area A, and the pixel data output from the effective pixels in the use effective area A forms the infrared image. The regions tangent to the imaging region D at the upper and lower ends of the detection region 31 are designated as reference regions B1 and B2, respectively, and the signal correction unit 61, described later, performs the infrared-image signal correction processing using the pixel data output from the reference pixels in the reference regions B1 and B2.
  • In the third detection region 31 of the infrared sensor 3, the optical axis of the imaging optical system 2, that is, the center O of the imaging region D, coincides with the center 31o of the region C of the detection region 31. The horizontal line passing through the centers O and 31o and connecting the left and right sides of the detection region 31 is defined as a straight line d, and on this line the length of the region C of the detection region 31 is configured to be larger than the length of the imaging region D. Accordingly, in this detection region 31, the region where the infrared rays that have passed through the imaging optical system 2 are incident, that is, where the region C and the imaging region D overlap, is the effective region AR, and the left and right regions adjacent to the imaging region D, where the transmitted infrared rays do not enter, that is, where the region C and the imaging region D do not overlap, form the reference region BR.
  • The largest rectangular area within the effective region AR, in which effective pixels are arranged in a matrix, is the use effective area A, and the pixel data output from the effective pixels in the use effective area A forms the infrared image. The regions tangent to the imaging region D at the left and right ends of the detection region 31 are designated as reference regions B3 and B4, respectively, and the signal correction unit 61, described later, performs the infrared-image signal correction processing using the pixel data output from the reference pixels in the reference regions B3 and B4.
  • The third detection region 31 is configured such that the length of the region C of the detection region 31 is larger than the length of the imaging region D only on the straight line d, that is, in the horizontal direction of the detection region 31; in the vertical direction, the length of the region C is not made larger than the length of the imaging region D, and is in fact shorter. Therefore, when the second detection region 31 and the third detection region 31 are the same size, the third detection region 31 has a wider overlap between the region C and the imaging region D, that is, a wider effective region AR, than the second detection region 31, so the use effective area A can be made wider.
  • In the fourth detection region 31 of the infrared sensor 3, as shown in FIG. 6, the optical axis of the imaging optical system 2, that is, the center O of the imaging region D, coincides with the center 31o of the region C of the detection region 31. The vertical line passing through the centers O and 31o and connecting the upper and lower sides of the detection region 31 is a straight line d1, and the horizontal line passing through the centers O and 31o and connecting the left and right sides is a straight line d2. On both straight lines d1 and d2, the length of the region C of the detection region 31 is configured to be larger than the length of the imaging region D. The region where the infrared rays that have passed through the imaging optical system 2 are incident, that is, where the region C and the imaging region D overlap, is the effective region AR, and the outer peripheral portion surrounding the imaging region D, where the transmitted infrared rays do not enter, that is, where the region C and the imaging region D do not overlap, is the reference region BR.
  • In the fourth detection region 31 as well, as in the above-described embodiments, the largest rectangular area within the effective region AR, in which effective pixels are arranged in a matrix, is the use effective area A, and the pixel data output from the effective pixels in the use effective area A forms the infrared image. Within the reference region BR, a frame-shaped region of uniform width formed around the region C of the detection region 31, tangent to the imaging region D at the upper and lower ends, is used as the reference region B, and the signal correction unit 61, described later, performs the infrared-image signal correction processing using the pixel data output from the reference pixels in this reference region B.
  • The effective pixels in the use effective area A and the reference pixels in the reference region B are infrared detecting elements (infrared detectors) that can detect infrared rays (wavelengths of 0.7 μm to 1 mm); in this embodiment, they are infrared detecting elements capable of detecting wavelengths of 8 μm to 15 μm.
  • a microbolometer-type or SOI (Silicon-on-Insulator) diode-type infrared detection element can be used as the infrared detection element used as the effective pixel and the reference pixel.
  • As shown in FIG. 2, the infrared imaging device 1 includes the imaging optical system 2, the infrared sensor 3, an analog signal processing unit 4 that performs various analog signal processing such as amplifying the output signal from the infrared sensor 3, an A/D conversion (analog-to-digital conversion) unit 5 that converts the analog image signal processed by the analog signal processing unit 4 into digital image data, a digital signal processing unit 6 that performs various signal processing on the digital image data converted by the A/D conversion unit 5, a storage unit 7 that stores information associated with the various digital signal processing, and an output unit 8 that outputs the digitally processed infrared image to the storage unit 7 or a display unit (not shown).
  • The storage unit 7 stores, as necessary, the various types of information used in the digital signal processing unit 6, the infrared images subjected to the various digital signal processing, and the like, and includes a volatile memory such as a DRAM (Dynamic Random Access Memory) and a non-volatile memory such as a flash memory.
  • In the present embodiment, the storage unit 7 is provided separately from the digital signal processing unit 6, but the present invention is not limited to this; the storage unit 7 may be provided within the digital signal processing unit 6.
  • the output unit 8 outputs an infrared image subjected to various digital signal processing to the storage unit 7, a display unit (not shown), or a storage unit outside the apparatus by wireless or wired communication.
  • the digital signal processing unit 6 includes a signal correction unit 61 that performs correction processing on the value of the pixel signal output from the infrared sensor 3. Normally, when the temperature of the infrared imaging device main body 12 rises, the amount of infrared radiation emitted from the infrared imaging device main body 12 increases, and the entire captured infrared image becomes whitish.
  • The signal correction unit 61 of the present embodiment corrects the value of the pixel signal of each effective pixel, that is, each pixel in the use effective area A, using the values of the pixel signals of the reference pixels, that is, the pixels in the reference region B, in order to offset the contribution of the infrared rays emitted from the infrared imaging device main body 12 from the pixel signals output from the infrared sensor 3.
  • FIG. 7 shows a flowchart of a series of processes of the infrared imaging apparatus 1 of the present embodiment.
  • In the infrared imaging apparatus 1 of the present embodiment, a subject is first imaged (S1), and the infrared sensor 3 outputs, for each pixel, an analog pixel signal based on the infrared rays incident from the imaging optical system 2. The analog signal processing unit 4 performs various analog signal processing such as amplifying the detection signal from the infrared sensor 3, and the A/D conversion unit 5 converts the processed analog image signal into digital image data. The digital signal processing unit 6 then performs various signal processing on the converted digital image data and stores the processed image data in the storage unit 7 (S2).
  • FIG. 8 is a flowchart of the first correction process of the signal correction unit 61 of this embodiment.
  • the signal correction unit 61 calculates an average value of pixel signal values of reference pixels that are pixels in the reference region B (S11).
  • When the reference regions B are formed at the four corners of the detection region C as shown in FIG. 3, the average value M of the pixel signal values of all the reference pixels in the four corner reference regions B1 to B4 is calculated. If the number of reference pixels in the upper-left reference region B1 is i, the number in the upper-right reference region B2 is j, the number in the lower-left reference region B3 is k, and the number in the lower-right reference region B4 is m, the average value M can be calculated by the following equation (1).
  • M = (B1_1 + B1_2 + … + B1_i + B2_1 + B2_2 + … + B2_j + B3_1 + B3_2 + … + B3_k + B4_1 + B4_2 + … + B4_m) / (i + j + k + m) … (1)
  • Here, B1_1, …, B1_i are the pixel signal values of the respective reference pixels in the reference region B1; B2_1, …, B2_j are those in the reference region B2; B3_1, …, B3_k are those in the reference region B3; and B4_1, …, B4_m are those in the reference region B4.
  • As in equation (1), the pixel signal values of all the reference pixels in the reference regions B1 to B4 may be summed and then divided by the total number of reference pixels.
  • Alternatively, an average value may be calculated for each of the reference regions B1 to B4, and the four calculated averages may be added and divided by four. That is, the average value M may be calculated by the following equation (2): M = (M1 + M2 + M3 + M4) / 4 (2), where M1 to M4 are the average values of the pixel signal values in the reference regions B1 to B4, respectively.
  • When the reference regions B are provided at the upper and lower ends of the detection region C as shown in FIG. 4, the average value M of the pixel signal values of the reference pixels in both the upper reference region B1 and the lower reference region B2 is calculated. If the number of reference pixels in the upper reference region B1 is i and the number in the lower reference region B2 is j, the average value M can be calculated by the following equation (3).
  • M = (B1_1 + B1_2 + … + B1_i + B2_1 + B2_2 + … + B2_j) / (i + j) (3)
  • Here, B1_1, …, B1_i denote the pixel signal values of the respective reference pixels in the reference region B1, and B2_1, …, B2_j those in the reference region B2.
  • Similarly, when the reference regions B are provided at the left and right ends of the detection region C, the average value M is calculated from the pixel signal values of the reference pixels in the left reference region B3 and the right reference region B4, where B3_1, …, B3_i denote the pixel signal values of the respective reference pixels in the reference region B3 and B4_1, …, B4_j those in the reference region B4.
  • When the reference region B is frame-shaped as shown in FIG. 6, the average value M of the pixel signal values of the reference pixels in the frame-shaped reference region B is calculated.
  • Here, B_1, …, B_i denote the pixel signal values of the respective reference pixels in the reference region B.
  • Next, the signal correction unit 61 performs offset correction by subtracting the calculated average value M from each of the pixel signal values of the effective pixels in the use effective region A (S12).
  • the signal correction unit 61 stores the value of the pixel signal subjected to the correction process in the storage unit 7 (S4).
  • the image data stored in the storage unit 7 is appropriately output by the output unit 8 to an external storage unit or a display unit (not shown).
  • the corrected image data may be appropriately subjected to other necessary correction processing by the digital signal processing unit 6 of the infrared imaging device 1.
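The first correction process described above (S11: average the reference pixels per equation (1); S12: subtract the average M from every effective pixel) can be sketched as follows. This is a minimal sketch assuming a NumPy array for the detection region and boolean masks for the four corner reference regions; the function name and data layout are illustrative, not taken from the patent.

```python
import numpy as np

def offset_correct(frame, ref_masks):
    """First correction process (FIG. 8).

    S11: average value M of all reference-pixel signal values, equation (1).
    S12: offset correction by subtracting M from every pixel signal value.
    """
    ref_values = np.concatenate([frame[mask] for mask in ref_masks])
    M = ref_values.mean()          # equation (1)
    return frame - M, M            # offset-corrected frame and the average M

# Toy 6x6 detection region: the device body radiates a uniform offset of 5.0,
# and the subject contributes 100.0 only near the center (effective pixels).
frame = np.full((6, 6), 5.0)
frame[2:4, 2:4] += 100.0

# 2x2 reference regions B1 to B4 at the four corners (FIG. 3 layout).
corners = [(slice(0, 2), slice(0, 2)), (slice(0, 2), slice(4, 6)),
           (slice(4, 6), slice(0, 2)), (slice(4, 6), slice(4, 6))]
ref_masks = []
for ys, xs in corners:
    mask = np.zeros((6, 6), dtype=bool)
    mask[ys, xs] = True
    ref_masks.append(mask)

corrected, M = offset_correct(frame, ref_masks)
```

In this toy case M comes out as the body's uniform contribution, and subtracting it restores the subject signal in the effective pixels.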
  • The reference region B receives only the infrared radiation from the infrared imaging device main body 12 at the same timing as the use effective region A receives the infrared radiation from the subject. Therefore, the average value M calculated above represents the signal component caused by the infrared rays incident from the infrared imaging device main body 12.
  • In the conventional infrared imaging device 100 shown in FIG. 19, a region on which no infrared rays are incident is created by blocking the infrared rays incident on the infrared sensor with an infrared shielding plate; since the shielding plate is an extra component, simplification of the structure and reduction of cost are desired. Moreover, in the conventional infrared imaging device 100, even if the temperature of the imaging device main body 12 rises, the temperature of the infrared shielding body 99 does not rise immediately, so the pixel signals of the reference region B include infrared rays radiated from the infrared shielding body 99, whose temperature is lower than that of the imaging device main body 12.
  • As a result, the amount of incident infrared rays differs between the use effective region A and the reference region B, and so does the offset correction amount that should be applied to each; it is therefore difficult to perform highly accurate correction by simply subtracting the average value M from each of the pixel signal values of the effective pixels in the use effective region A.
  • In contrast, in the infrared imaging device 1 of the present embodiment, the optical axis O of the imaging optical system 2 is made to coincide with the center 31o of the detection region 31 of the infrared sensor 3, and, in the direction of the straight line d passing through the centers O and 31o, the length of the detection region 31 is made larger than the length of the imaging region D of the imaging optical system 2. The detection region 31 of the infrared sensor 3 is thereby provided with an effective region AR where the imaging region D overlaps the detection region 31 and a reference region BR where the imaging region D does not overlap. Thus, even without using an additional component to block the incidence of infrared rays, an effective region AR on which infrared rays are incident and a reference region BR on which no infrared rays are incident can be formed in the detection region 31. Since no component for blocking the incidence of infrared rays is needed, the infrared sensor 3 can be manufactured simply and at low cost.
  • Furthermore, since the amount of infrared radiation incident on each of the effective region AR and the reference region BR is not affected by a temperature difference between the infrared imaging device main body 12 and an additional component for blocking the incidence of infrared rays, the fluctuation of the signal values can be accurately corrected.
  • In this way, the influence of the temperature difference that occurs in the left-right direction of the imaging device main body 12 can be reduced. Further, when the offset correction is performed using the average value M of the pixel signal values of the reference pixels in the frame-shaped reference region B as shown in FIG. 6, the influence of the temperature difference generated around the infrared imaging device main body 12 can be reduced.
  • In the present embodiment, the average value of the pixel signal values of all the reference pixels is used for the offset correction, but the present invention is not limited to this.
  • An average value of the pixel signal values of the reference pixels in only some of the reference regions may be used.
  • For example, in the case of FIG. 4, the average value of the pixel signals of the reference pixels in only the upper reference region B1 may be used, or the average value of the pixel signals of the reference pixels in only the lower reference region B2 may be used.
  • Likewise, the average value of the pixel signals of the reference pixels in only the left reference region B3 may be used, or the average value of the pixel signals of the reference pixels in only the right reference region B4 may be used; this can be changed as appropriate.
  • FIG. 9 is a flowchart of the second correction process of the signal correction unit 61.
  • FIG. 10 is a diagram illustrating the correction method when the reference regions are at the four corners.
  • FIG. 11 is a diagram illustrating a method for calculating the correction value for shading correction when the reference regions are at the four corners.
  • FIG. 12 is a diagram illustrating the correction method when the reference region is frame-shaped.
  • FIG. 13 is a diagram illustrating a method for calculating the correction value for shading correction when the reference region is frame-shaped.
  • FIG. 14 is a diagram illustrating an example of the correction value for shading correction when the reference region is frame-shaped.
  • the signal correction unit 61 first calculates an average value of pixel signal values of reference pixels that are pixels in the reference region B (S21).
  • When the reference regions B are provided at the four corners of the detection region C as shown in FIG. 3, the average value M1 of the pixel signal values of the reference pixels in the upper-left reference region B1, the average value M2 in the upper-right reference region B2, the average value M3 in the lower-left reference region B3, and the average value M4 in the lower-right reference region B4 are calculated by the following equations (6-1) to (6-4).
  • M1 = (B1_1 + B1_2 + … + B1_i) / i (6-1)
  • M2 = (B2_1 + B2_2 + … + B2_j) / j (6-2)
  • M3 = (B3_1 + B3_2 + … + B3_k) / k (6-3)
  • M4 = (B4_1 + B4_2 + … + B4_m) / m (6-4)
  • Here, B1_1, …, B1_i denote the pixel signal values of the respective reference pixels in the reference region B1; B2_1, …, B2_j those in the reference region B2; B3_1, …, B3_k those in the reference region B3; and B4_1, …, B4_m those in the reference region B4.
  • When the reference regions B are provided at the upper and lower ends of the detection region C as shown in FIG. 4, the average values M1 = (B1_1 + B1_2 + … + B1_i) / i (7-1) and M2 = (B2_1 + B2_2 + … + B2_j) / j (7-2) are calculated, where B1_1, …, B1_i denote the pixel signal values of the respective reference pixels in the reference region B1 and B2_1, …, B2_j those in the reference region B2.
  • Similarly, when the reference regions B are provided at the left and right ends, M3 = (B3_1 + B3_2 + … + B3_i) / i (8-1) and M4 = (B4_1 + B4_2 + … + B4_j) / j (8-2) are calculated.
  • Here, B3_1, …, B3_i denote the pixel signal values of the respective reference pixels in the reference region B3, and B4_1, …, B4_j those in the reference region B4.
  • the frame-shaped reference area B is divided into a plurality of reference areas.
  • For example, as shown in FIG. 12, the upper row and the lower row are each divided into five divided reference regions B11 to B15 and B51 to B55, and the remaining left and right columns, that is, the reference regions excluding the divided reference regions B11 to B15 and B51 to B55, are each divided into three divided reference regions B21 to B41 and B25 to B45, creating a total of 16 divided reference regions B11 to B55.
  • The average value M11 of the pixel signal values of the reference pixels in the divided reference region B11 is calculated, and the average values M12 to M55 of the remaining divided reference regions B12 to B55 can be calculated in the same manner.
  • the signal correction unit 61 performs shading correction on the value of the pixel signal of the effective pixel in the use effective area A (S22).
  • When the reference regions B are provided at the four corners of the detection region C as shown in FIG. 3, the shading amount S is calculated using the average values M1 to M4 of the pixel signal values of the reference pixels in the reference regions B1 to B4 calculated above.
  • a method for calculating the shading amount S will be described in detail.
  • The row direction in which the pixel rows of the use effective region A extend is taken as the x direction, the column direction in which the pixel columns extend as the y direction, and the position of each effective pixel in the use effective region A is expressed in the xy coordinate system.
  • The x coordinate of the endmost column on one end side in the row direction is x1, the x coordinate of the endmost column on the other end side is x2, the y coordinate of the endmost row on one end side in the column direction is y1, and the y coordinate of the endmost row on the other end side in the column direction is y2.
  • The average values of the pixel signal values of the reference pixels in the reference regions B1 to B4 are denoted Z11, Z12, Z21, and Z22, respectively, and the shading amount S at coordinates (x, y) is calculated.
  • On the row at one end in the column direction (y = y1), S(x, y1) = a1·x + b1 (10-1)
  • Here, a1 is the slope, represented by the following equation (10-2), and b1 is the corresponding intercept.
  • a1 = (Z12 − Z11) / (x2 − x1) (10-2)
  • On the row at the other end in the column direction (y = y2), S(x, y2) = a2·x + b2 (11-1)
  • Here, a2 is the slope, represented by the following equation (11-2).
  • a2 = (Z22 − Z21) / (x2 − x1) (11-2)
  • By performing linear interpolation in the column direction between these, the value of the shading amount S shown in FIG. 11 can be calculated for each effective pixel. Then, the shading correction is performed by applying the calculated shading amount S to the value of the pixel signal of each effective pixel in the use effective region A.
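The edge-wise linear fits of equations (10-1) to (11-2), followed by linear interpolation in the column direction, can be sketched as below. The function name, the (x1, y1)-anchored form of the fits, and the interpolation weight are assumptions for illustration; the intercepts b1 and b2 are chosen so that the fits pass through the corner averages.

```python
def shading_amount(x, y, x1, x2, y1, y2, Z11, Z12, Z21, Z22):
    """Shading amount S(x, y) from the four corner averages.

    Z11/Z12: averages at the corners of the row y = y1 (left/right);
    Z21/Z22: averages at the corners of the row y = y2.
    Equations (10-1)-(10-2) give the linear fit along y = y1,
    (11-1)-(11-2) along y = y2; the result is interpolated in y.
    """
    a1 = (Z12 - Z11) / (x2 - x1)       # slope, equation (10-2)
    s_top = Z11 + a1 * (x - x1)        # S(x, y1), equation (10-1)
    a2 = (Z22 - Z21) / (x2 - x1)       # slope, equation (11-2)
    s_bottom = Z21 + a2 * (x - x1)     # S(x, y2), equation (11-1)
    t = (y - y1) / (y2 - y1)           # linear interpolation weight in y
    return (1.0 - t) * s_top + t * s_bottom

# At each corner, S reproduces that corner's average exactly.
S_mid = shading_amount(5, 5, 0, 10, 0, 10, 1.0, 2.0, 3.0, 4.0)
```

With corner averages 1, 2, 3, 4 on a 10 x 10 grid, the center value is the mean of the four corners, as expected for a bilinear surface.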
  • When the reference regions B are provided at the upper and lower ends of the detection region C as shown in FIG. 4, linear interpolation is performed using the difference between the calculated average value M1 of the pixel signal values of the reference pixels in the upper reference region B1 and the average value M2 in the lower reference region B2 to calculate the shading amount S of each effective pixel in the use effective region A, and the shading correction is performed by applying the calculated shading amount S to the value of the pixel signal of each effective pixel in the use effective region A.
  • In such a case, the captured infrared image becomes generally whitish on the upper side; that is, the density of the captured infrared image becomes uneven due to the temperature difference around the infrared imaging device main body 12.
  • the shading amount S in the vertical direction of the image can be acquired in detail, the unevenness in the vertical direction of the infrared image can be accurately corrected.
  • Similarly, when the reference regions B are provided at the left and right ends, linear interpolation is performed using the calculated difference between the average value M3 of the pixel signal values of the reference pixels in the left reference region B3 and the average value M4 of the pixel signal values of the reference pixels in the right reference region B4 to calculate the shading amount S of each effective pixel in the use effective region A, and the shading correction is performed by applying the calculated shading amount S to the value of the pixel signal of each effective pixel in the use effective region A.
  • When the reference region B is frame-shaped as shown in FIG. 6, the shading amount S is calculated using the average values M11 to M55 of the pixel signal values of the reference pixels in each of the 16 divided reference regions B11 to B55 created above.
  • the shading amount S can be calculated using, for example, pixel signal values of surrounding pixels.
  • a method for calculating the shading amount S will be described in detail.
  • the shading amount at the upper right effective pixel Z24 in the use effective area A can be calculated by the following equation (14).
  • The use effective region A is divided into four areas at its center in the horizontal and vertical directions. For the effective pixel T1 located in the upper-left area with respect to the center of the use effective region A, the average values M or effective pixel values of the divided reference regions B adjacent above and to the left of the effective pixel T1 are added, and the shading amount S at the effective pixel T1 is calculated by subtracting from this sum the average value M or effective pixel value of the divided reference region B adjacent diagonally to the upper left of the effective pixel T1.
  • For the effective pixel T2 located in the lower-left area, the average values M or effective pixel values of the divided reference regions B adjacent below and to the left of the effective pixel T2 are added, and the shading amount S at the effective pixel T2 is calculated by subtracting from this sum the average value M or effective pixel value of the divided reference region B adjacent diagonally to the lower left of the effective pixel T2.
  • For the effective pixel T3 located in the upper-right area, the average values M or effective pixel values of the divided reference regions B adjacent above and to the right of the effective pixel T3 are added, and the shading amount S at the effective pixel T3 is calculated by subtracting from this sum the average value M or effective pixel value of the divided reference region B adjacent diagonally to the upper right of the effective pixel T3.
  • For the effective pixel T4 located in the lower-right area, the average values M or effective pixel values of the divided reference regions B adjacent below and to the right of the effective pixel T4 are added, and the shading amount S at the effective pixel T4 is calculated by subtracting from this sum the average value M or effective pixel value of the divided reference region B adjacent diagonally to the lower right of the effective pixel T4.
  • In this way, the shading amount is calculated for each effective pixel in the use effective region A, proceeding in order from the effective pixels farthest from the center of the use effective region A.
  • For example, the effective pixel Z23 in FIG. 13 is a pixel located at the horizontal center of the use effective region A. Its shading amount may be calculated using the value Z13 above the effective pixel Z23, the left value Z22, and the diagonally upper-left value Z12, or using the value Z13 above the effective pixel Z23, the right value Z24, and the diagonally upper-right value Z14. In either case, the values of the shading amount previously calculated at the surrounding pixels can be used.
  • The shading amount of a pixel located at the vertical center of the use effective region A can be calculated in the same manner. Further, when an effective pixel is located at the center of the use effective region A, the value of the shading amount calculated at any of the surrounding pixels may be used, selected as appropriate.
  • Then, the signal correction unit 61 applies the value of the shading amount calculated for each effective pixel to each of the pixel signal values of the effective pixels in the use effective region A, thereby performing shading correction on the pixel signal values.
  • the value of the shading amount calculated for each pixel may include a positive value and a negative value.
  • When the shading amount is a negative value, the absolute value of the shading amount is added to the value of the pixel signal of the effective pixel in the use effective region A; when the shading amount is a positive value, the value of the shading amount is subtracted from the value of the pixel signal of the effective pixel in the use effective region A.
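The propagation rule described above for the frame-shaped case (add the two neighbors lying toward the frame, subtract the diagonal neighbor between them, reusing previously computed shading amounts inward) can be sketched for the upper-left quadrant as follows. The function name and the seeding of the first row and column from the adjacent divided-reference averages are assumptions for illustration.

```python
def shading_upper_left(top, left, corner, rows, cols):
    """Shading amounts S for effective pixels in the upper-left quadrant.

    top[j]:  average M of the divided reference region above column j
    left[i]: average M of the divided reference region left of row i
    corner:  average M of the divided reference region diagonally up-left
    Rule: S = (value above) + (value to the left) - (diagonal value),
    where already-computed shading amounts are reused moving inward.
    """
    S = [[0.0] * cols for _ in range(rows)]
    for i in range(rows):
        for j in range(cols):
            above = top[j] if i == 0 else S[i - 1][j]
            left_v = left[i] if j == 0 else S[i][j - 1]
            if i == 0 and j == 0:
                diag = corner
            elif i == 0:
                diag = top[j - 1]
            elif j == 0:
                diag = left[i - 1]
            else:
                diag = S[i - 1][j - 1]
            S[i][j] = above + left_v - diag
    return S

# With a uniform frame (every reference average equal), every shading
# amount equals that same value, as expected for flat shading.
S_flat = shading_upper_left([2.0, 2.0], [2.0, 2.0], 2.0, 2, 2)
```

The recurrence extrapolates the frame values inward: a constant frame yields a constant shading amount, while a gradient along the frame is propagated across the quadrant.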
  • shading correction is performed using average values M11 to M55 of pixel signal values of reference pixels in 16 divided reference regions B11 to B55 provided in a frame shape at the edge of the detection region C.
  • the shading amount S in the vertical direction and the horizontal direction of the infrared image due to the temperature difference around the infrared imaging device main body 12 can be acquired in detail, the unevenness of the entire infrared image can be accurately corrected.
  • Also in the present embodiment, the optical axis O of the imaging optical system 2 is made to coincide with the center 31o of the detection region 31 of the infrared sensor 3, and, in the direction passing through the centers O and 31o, the length of the detection region 31 is made larger than the length of the imaging region D of the imaging optical system 2, so that the detection region 31 of the infrared sensor 3 is provided with an effective region AR where the imaging region D overlaps and a reference region BR where the imaging region D does not overlap. Therefore, without using any additional component for avoiding the incidence of infrared rays, an effective region AR on which infrared rays are incident and a reference region BR on which no infrared rays are incident can be formed in the detection region 31. Since no such component is required, the infrared sensor 3 can be manufactured simply and at low cost.
  • Furthermore, since the amount of infrared radiation incident on each of the effective region AR and the reference region BR is not affected by a temperature difference between the infrared imaging device main body 12 and an additional component for avoiding the incidence of infrared rays, the fluctuation of the signal values can be accurately corrected.
  • In the present embodiment, the above-described interpolation method is used to calculate the shading amount S, but the present invention is not limited to this; a known method such as linear interpolation or nonlinear interpolation can be used.
  • FIG. 15 is a schematic cross-sectional view illustrating the configuration of the infrared imaging device according to the second embodiment of the present invention.
  • The infrared imaging device of the second embodiment has a configuration in which a temperature sensor, described later, is added to the infrared imaging device 1 of the above-described embodiment; the description of the common configuration is therefore omitted, and only the temperature sensor 13 and the correction method using the output signal from the temperature sensor 13 in the signal correction unit 61 will be described in detail.
  • a temperature sensor 13 is provided at a position facing the imaging optical system 2 in the first main body portion 10.
  • As the temperature sensor 13, a known sensor such as a thermistor, a thermocouple, or a resistance temperature detector can be used.
  • the output signal from the temperature sensor 13 is sent to the signal correction unit 61 by wired or wireless communication (not shown).
  • FIG. 16 is a flowchart of the third correction process of the signal correction unit 61.
  • A series of processes of the infrared imaging device of the present embodiment is the same as that of the flowchart of FIG. 7 of the above-described embodiment, so the description thereof is omitted here, and only the correction process performed by the signal correction unit 61 will be described.
  • the same processing as in FIG. 8 is denoted by the same step number, and detailed description thereof is omitted.
  • the signal correction unit 61 calculates the average value M of the pixel signal values of the reference pixels that are the pixels in the reference region B in step S11, and in the use effective region A in step S12. Offset correction is performed by subtracting the average value M from the value of the pixel signal of the effective pixel.
  • the signal correction unit 61 detects an output signal from the temperature sensor 13 and calculates a value corresponding to the detected value of the output signal (step S13).
  • FIG. 17 shows a relationship between the value of the output signal from the temperature sensor 13 and the calculated value corresponding to this value.
  • The calculation value corresponding to the value of the output signal from the temperature sensor 13 is set in advance for each model of the infrared imaging device 1; for example, a table of calculation values corresponding to the values of the output signal from the temperature sensor 13, as shown in FIG. 17, is stored in the storage unit 7. In this table, the amount of infrared rays radiated at each temperature of the infrared imaging device main body 12 is measured, and a value based on that amount of infrared rays is set as the calculation value.
  • Specifically, with the infrared imaging device 1 placed in a thermostatic chamber and held at a constant temperature, a reference heat source whose absolute temperature is known is photographed by the infrared imaging device 1, and the calculation values of FIG. 17 are derived from the difference between the temperature data of the infrared imaging device 1 and the temperature of the reference heat source.
  • This table is set in advance at the design stage or manufacturing stage of the infrared imaging device 1.
  • the signal correction unit 61 refers to the table stored in the storage unit 7 to calculate a calculation value corresponding to the value of the output signal from the temperature sensor 13.
  • Next, the signal correction unit 61 applies the calculation value corresponding to the value of the output signal of the temperature sensor 13 obtained in step S13 to the values after the offset correction of step S12, thereby performing a further offset correction (S14).
  • That is, the further offset correction is applied to the infrared image represented by the pixel signal values after the offset correction of step S12.
  • Specifically, when the value of the output signal from the temperature sensor 13 is any of N1 to N3, the corresponding calculation values M1 to M3 are positive, so the pixel signal values are further offset-corrected by subtracting the calculation values M1 to M3 from the values after the offset correction of step S12. When the value of the output signal from the temperature sensor 13 is N5 or later, the corresponding calculation values from M5 onward are negative, so the pixel signal values are further offset-corrected by adding the absolute values of the calculation values from M5 onward to the values after the offset correction of step S12.
  • Since the calculation value is a value based on the amount of infrared rays radiated from the infrared imaging device main body 12, applying the calculation value corresponding to the output signal of the temperature sensor 13 to the pixel signal values after offset correction cancels, from the infrared rays incident on the infrared sensor 3, the component due to the infrared rays radiated from the infrared imaging device main body 12. Therefore, highly accurate image signal values based on the infrared rays emitted from the subject can be acquired, and a highly accurate infrared image based on the absolute temperature of the subject can be obtained.
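Steps S13 and S14 can be sketched as a table lookup followed by a further offset. The table contents, sensor values, and nearest-entry lookup below are hypothetical illustrations; note that subtracting a positive calculation value and adding the absolute value of a negative one are both equivalent to subtracting the calculation value.

```python
def further_offset_correction(pixel_values, sensor_output, calc_table):
    """S13: look up the calculation value for the temperature-sensor
    output (nearest table entry, an assumed lookup policy); S14: apply
    it to the offset-corrected pixel signal values (positive -> subtract,
    negative -> add |value|, i.e. always subtract the calculation value).
    """
    nearest = min(calc_table, key=lambda n: abs(n - sensor_output))
    calc = calc_table[nearest]
    return [v - calc for v in pixel_values]

# Hypothetical table: sensor output N -> calculation value M (FIG. 17 style).
calc_table = {10.0: 2.0, 20.0: 0.0, 30.0: -1.5}
after_hot = further_offset_correction([100.0, 101.0], 11.0, calc_table)  # M positive
after_cold = further_offset_correction([100.0], 29.0, calc_table)        # M negative
```

A positive calculation value lowers the corrected pixel values (the body was radiating extra infrared), and a negative one raises them.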
  • FIG. 18 is a flowchart of the fourth correction process of the signal correction unit 61.
  • A series of processes of the infrared imaging device of the present embodiment is the same as that of the flowchart of FIG. 7 of the above-described embodiment, so the description thereof is omitted here, and only the correction process performed by the signal correction unit 61 will be described.
  • FIG. 18 the same processing as in FIG. 9 is indicated by the same step number, and detailed description thereof is omitted.
  • As shown in FIG. 18, the signal correction unit 61 calculates the average value M of the pixel signal values of the reference pixels, which are the pixels in each reference region B, in step S21, and performs shading correction on the pixel signal values of the effective pixels in the use effective region A in step S22.
  • the signal correction unit 61 detects an output signal from the temperature sensor 13 and calculates a value corresponding to the detected value of the output signal (step S23).
  • Since the process of step S23 is the same as the process of step S13 in FIG. 16, description thereof is omitted here.
  • Next, the signal correction unit 61 performs offset correction by applying the calculation value corresponding to the value of the output signal of the temperature sensor 13 obtained in step S23 to the values after the shading correction of step S22 (step S24).
  • Specifically, when the value of the output signal from the temperature sensor 13 is any of N1 to N3, the corresponding calculation values M1 to M3 are positive, so the pixel signal values are offset-corrected by subtracting the calculation values M1 to M3 from the values after the shading correction of step S22. When the value of the output signal from the temperature sensor 13 is N5 or later, the corresponding calculation values from M5 onward are negative, so the pixel signal values are offset-corrected by adding the absolute values of the calculation values from M5 onward to the values after the shading correction of step S22.
  • infrared rays emitted from the subject and infrared rays emitted from the infrared imaging device main body 12 are incident on the infrared sensor 3.
  • Since the calculation value is a value based on the amount of infrared rays radiated from the infrared imaging device main body 12, applying the calculation value corresponding to the output signal of the temperature sensor 13 to the pixel signal values after shading correction cancels, from the infrared rays incident on the infrared sensor 3, the component due to the infrared rays radiated from the infrared imaging device main body 12. Therefore, highly accurate image signal values based on the infrared rays emitted from the subject can be acquired, and a highly accurate infrared image based on the absolute temperature of the subject can be obtained.
  • the temperature sensor 13 is provided at a position facing the imaging optical system 2 in the infrared imaging apparatus main body 12, but the present invention is not limited to this.
  • The temperature sensor 13 may be provided anywhere on the infrared imaging device main body 12; for example, it may be provided on a wall surface constituting the second main body portion 11 inside the infrared imaging device main body 12 of FIG. 15, or on a wall surface constituting the first main body portion 10 or the second main body portion 11 outside the infrared imaging device main body 12, and this can be changed as appropriate.
  • When the temperature sensor 13 is provided inside the infrared imaging device main body 12, it measures the temperature of the walls located in the space where the infrared sensor 3 is installed, that is, the temperature of the walls that radiate infrared rays onto the infrared sensor 3, so the correction accuracy can be improved. Further, when the temperature sensor 13 is provided at a position facing the imaging optical system 2 in the infrared imaging device main body 12, the temperature of the wall that radiates infrared rays onto the infrared sensor 3 can be measured, so the correction accuracy can be improved further. In the present embodiment, one temperature sensor 13 is provided, but a plurality of temperature sensors 13 may be provided at different positions; when a plurality of temperature sensors 13 are provided, the average value of the output signals from the temperature sensors 13 can be used.
  • In the above embodiments, an infrared sensor that detects far-infrared rays is used, but the present invention is not limited to this; an infrared sensor that detects mid-infrared rays or near-infrared rays may be used.
  • The infrared imaging device 1 can be suitably applied to a security imaging device, an in-vehicle imaging device, and the like; it may be configured as a standalone imaging device that captures infrared images, or incorporated in an imaging system having an infrared image capturing function.
  • the infrared imaging device of the present invention is not limited to the above embodiment, and can be appropriately changed without departing from the spirit of the invention.


Abstract

[Problem] To correct, with a high degree of accuracy, fluctuations in the values of pixel signals output by an infrared sensor (3) caused by temperature changes, without increasing the number of components of an infrared imaging device (1). [Solution] This infrared imaging device (1) comprises: an imaging optical system (2); an infrared sensor (3) in which, by making the center (31o) of the detection region (31) coincide with the optical axis (O) of the imaging optical system and making the length of the detection region (31) greater than the length of the imaging region (D) of the imaging optical system in the direction passing through the center (31o), an effective region (A) where the detection region (31) and the imaging region (D) overlap and a reference region (B) where the detection region (31) and the imaging region (D) do not overlap are formed in the detection region (31); and a signal correction unit (61) that corrects fluctuations in pixel signal values caused by temperature changes by correcting the pixel signal values of effective pixels, which are pixels within the effective region (A), using the pixel signal values of reference pixels, which are pixels within the reference region (B).
PCT/JP2016/002352 2015-05-21 2016-05-13 Dispositif d'imagerie infrarouge WO2016185699A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2015-103564 2015-05-21
JP2015103564 2015-05-21

Publications (1)

Publication Number Publication Date
WO2016185699A1 true WO2016185699A1 (fr) 2016-11-24

Family

ID=57319741

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2016/002352 WO2016185699A1 (fr) 2015-05-21 2016-05-13 Infrared imaging device

Country Status (1)

Country Link
WO (1) WO2016185699A1 (fr)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH0342531A (ja) * 1989-07-11 1991-02-22 Mitsubishi Electric Corp Infrared measuring device
JP2006292594A (ja) * 2005-04-12 2006-10-26 Nec Electronics Corp Infrared detector
JP2008113141A (ja) * 2006-10-30 2008-05-15 Fujifilm Corp Imaging device and signal processing method
JP2008160561A (ja) * 2006-12-25 2008-07-10 Auto Network Gijutsu Kenkyusho:Kk Imaging system and imaging device
WO2014018948A2 (fr) * 2012-07-26 2014-01-30 Olive Medical Corporation Camera system with minimal area monolithic CMOS image sensor

Similar Documents

Publication Publication Date Title
US9497397B1 (en) Image sensor with auto-focus and color ratio cross-talk comparison
US9584743B1 (en) Image sensor with auto-focus and pixel cross-talk compensation
US10110833B2 (en) Hybrid infrared sensor array having heterogeneous infrared sensors
EP1727359B1 (fr) Procédé de reduction du motif de bruit fixe dans une caméra infrarouge
US20200351461A1 (en) Shutterless calibration
KR102088401B1 (ko) Image sensor and imaging device including the same
US10419696B2 (en) Infrared imaging device and signal correction method using infrared imaging device
EP2923187B1 (fr) Réseau hybride de capteurs infrarouges présentant des capteurs infrarouges hétérogènes
US20140160299A1 (en) Infrared Sensor Amplification Techniques for Thermal Imaging
KR20110016438A (ko) Camera sensor calibration
JP2013200137A (ja) Infrared temperature measuring device, infrared temperature measuring method, and control program for infrared temperature measuring device
JP2008219613A (ja) Uncooled infrared camera
US20170370775A1 (en) Infrared detection apparatus
KR20130104756A (ko) Image capturing apparatus and image sensor included therein
JP5493900B2 (ja) Imaging device
CA3101388A1 (fr) Device and method for parasitic heat compensation in an infrared camera
JP2011038838A (ja) Thermal infrared output measuring device and thermal infrared output measuring method
JP2005249723A (ja) Output device for images including temperature distribution and control method therefor
JP2011044813A (ja) Imaging device, correction value calculation method, and imaging method
US20160219231A1 (en) Dark current gradient estimation using optically black pixels
WO2016185699A1 (fr) Infrared imaging device
EP3803780B1 (fr) Device and method for parasitic heat compensation in an infrared camera
WO2016185698A1 (fr) Infrared imaging device
JP2012049947A5 (fr)
JP2012049947A (ja) Image processing device

Legal Events

Date Code Title Description
121  Ep: the EPO has been informed by WIPO that EP was designated in this application
     Ref document number: 16796095; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
     Ref country code: DE
122  Ep: PCT application non-entry in European phase
     Ref document number: 16796095; Country of ref document: EP; Kind code of ref document: A1
NENP Non-entry into the national phase
     Ref country code: JP