WO2020183565A1 - Image processing apparatus, thermal image generation system, program, and recording medium - Google Patents
Image processing apparatus, thermal image generation system, program, and recording medium
- Publication number
- WO2020183565A1 (PCT/JP2019/009680)
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- image
- thermal
- thermal image
- skeleton
- processing apparatus
- Prior art date
Classifications
- G06T5/73 — Image enhancement or restoration; Deblurring; Sharpening
- H04N23/80 — Camera processing pipelines; Components thereof
- G06T7/194 — Segmentation; Edge detection involving foreground-background segmentation
- G06T5/50 — Image enhancement or restoration using two or more images, e.g. averaging or subtraction
- G06T7/536 — Depth or shape recovery from perspective effects, e.g. by using vanishing points
- G06T7/55 — Depth or shape recovery from multiple images
- H04N23/20 — Cameras or camera modules for generating image signals from infrared radiation only
- H04N23/60 — Control of cameras or camera modules
- H04N7/18 — Closed-circuit television [CCTV] systems
- G06T2207/10024 — Color image
- G06T2207/10048 — Infrared image
- G06T2207/20212 — Image combination
- G06T2207/20216 — Image averaging
Definitions
- the present invention relates to an image processing device and a thermal image generation system.
- the present invention also relates to programs and recording media.
- a general thermal infrared solid-state image sensor (hereinafter referred to as a thermal image sensor) visualizes the infrared rays emitted by a subject: the infrared rays are collected by a lens and focused onto the image sensor.
- differences in the temperature rise caused by the image sensor absorbing the infrared rays appear as the shading of the image.
- a thermal image sensor that can acquire thermal information can acquire information that cannot be acquired by a visible camera.
- however, in thermal images the resolution, contrast, contour sharpness, and SN ratio tend to be low.
- large, high-resolution thermal image sensors are expensive, so it is desired to improve image quality through image processing while using an inexpensive thermal image sensor.
- in the case of an inexpensive small thermal image sensor, the method of Patent Document 1 has the problem that, because the SN ratio is insufficient, noise is amplified and visibility deteriorates when sharpening is performed.
- the present invention has been made to solve the above problems, and an object of the present invention is to make it possible to generate a thermal image that is clear and has a high SN ratio.
- the image processing apparatus of one aspect of the present invention has a background image generation unit and an image correction unit.
- the background image generation unit identifies, as an intermediate image, the image located at the center when first thermal images of a plurality of frames acquired by a thermal image sensor imaging the same field of view, or sorted images of a plurality of frames generated from the first thermal images, are arranged in order of brightness; calculates, for each of the first thermal images or the sorted images, a feature amount serving as an index of brightness; and generates an average image by averaging, in the frame direction, those first thermal images or sorted images whose feature-amount difference from the intermediate image is smaller than a predetermined difference threshold.
- a skeleton image obtained by extracting the skeleton component from the average image is stored in a storage device.
- the image correction unit generates a corrected thermal image by correcting a second thermal image, acquired by the thermal image sensor imaging the same field of view as the first thermal images, using the skeleton image stored in the storage device.
- the image processing apparatus of another aspect of the present invention has a background image generation unit and an image correction unit.
- the background image generation unit identifies, as an intermediate image, the image located at the center when first thermal images of a plurality of frames acquired by a thermal image sensor imaging the same field of view, or sorted images of a plurality of frames generated from the first thermal images, are arranged in order of brightness; calculates, for each of the first thermal images or the sorted images, a feature amount serving as an index of brightness; and generates an average image by averaging, in the frame direction, those first thermal images or sorted images whose feature-amount difference from the intermediate image is smaller than a predetermined difference threshold.
- a sharpened image obtained by sharpening the average image is stored in a storage device.
- the image correction unit generates a corrected thermal image by correcting a second thermal image, acquired by the thermal image sensor imaging the same field of view as the first thermal images, using a skeleton image obtained by extracting a skeleton component from the sharpened image stored in the storage device.
- FIG. 1 is a diagram showing the schematic structure of the thermal image generation system that includes the image processing apparatus of Embodiment 1 of the present invention. FIG. 2 is a functional block diagram of that image processing apparatus.
- (a) and (b) of FIG. 3 are diagrams showing different configuration examples of the sharpening unit of FIG. 2.
- (a) to (d) of FIG. 5 are diagrams showing different examples of weight tables used in the image processing apparatus of FIG. 4.
- FIG. 1 shows a schematic configuration of a thermal image generation system including the image processing apparatus according to the first embodiment of the present invention.
- the thermal image generation system shown in FIG. 1 includes a thermal image sensor 1, an image processing device 2, a storage device 3, and a display terminal 4.
- the thermal image sensor 1 detects infrared rays radiated from the subject and generates a thermal image showing the temperature distribution of the subject.
- the infrared ray referred to here is, for example, an electromagnetic wave having a wavelength of 8 ⁇ m to 12 ⁇ m.
- the thermal image sensor 1 has a plurality of infrared detection elements arranged one-dimensionally or two-dimensionally. The signal output from each infrared detection element represents a pixel value of the thermal image.
- as the infrared detection element, for example, a pyroelectric element can be used.
- a thermopile-type infrared detection element, in which thermocouples exploiting the Seebeck effect are connected, or a bolometer-type infrared detection element, which utilizes the change in resistance caused by a temperature rise, can also be used.
- the infrared detection element is not limited to these; any type of element can be used as long as it can detect infrared rays.
- FIG. 2 is a functional block diagram of the image processing device 2 of the first embodiment.
- the illustrated image processing device 2 has a background image generation unit 21 and an image correction unit 22.
- the background image generation unit 21 generates a background image based on a plurality of frames of thermal images output from the thermal image sensor 1.
- the multi-frame thermal image used to generate the background image is acquired by the thermal image sensor 1 repeating imaging in the same field of view.
- the background image generation unit 21 ranks the pixels at each position across the thermal images of the plurality of frames by pixel value, and generates sorted images of a plurality of frames, each composed of the set of pixels having the same rank.
- the background image generation unit 21 further identifies, as an intermediate image, the sorted image composed of the pixels located at the center when arranged in order of pixel value, that is, the set of pixels having the intermediate rank. Because it is composed of the pixels of intermediate rank, this intermediate image is located at the center when the sorted images Dc of the plurality of frames are arranged in order of brightness.
- the background image generation unit 21 further calculates a feature amount for each of the sorted images of the plurality of frames, and generates an average image by averaging, in the frame direction, those sorted images whose feature-amount difference from the intermediate image is smaller than a predetermined threshold (difference threshold) FDt.
- the background image generation unit 21 generates a skeleton image Dg by extracting skeleton components after sharpening the average image, and stores the generated skeleton image Dg in the storage device 3 as a background image.
- Each of the multiple frames of thermal images used to generate the background image is called the first thermal image and is represented by the reference numeral Din1.
- the image correction unit 22 generates a corrected thermal image Dout by superimposing the skeleton image Dg stored in the storage device 3 on the second thermal image Din2, which is output from the thermal image sensor 1 and obtained by imaging the same field of view as the first thermal images Din1.
- the corrected thermal image Dout has an improved SN ratio relative to the second thermal image Din2.
- the background image generation unit 21 has a temperature sorting unit 211, a feature amount calculation unit 212, an analysis unit 213, an average image generation unit 214, a sharpening unit 215, and a skeleton component extraction unit 216.
- the temperature sorting unit 211 compares the pixels at the same position in the first thermal images Din1 of a plurality of frames, for example N frames (N being an integer of 2 or more), with one another, and ranks them by pixel value. The pixel values may be arranged in descending order or in ascending order.
- the temperature sorting unit 211 also generates a plurality of frames of sorted image Dc composed of a set of pixels having the same rank. That is, the nth sorted image Dc is composed of a set of pixels having a rank of n (n is any of 1 to N).
- the temperature sorting unit 211 further specifies, as the intermediate image Dd, the sorted image Dc composed of the pixels located at the center when arranged in order of pixel value, that is, the set of pixels having the intermediate rank.
- the temperature sorting unit 211 outputs the generated plurality of sorted images Dc together with the information Sdc indicating the order of each.
- the temperature sorting unit 211 further outputs the information IDd that identifies the intermediate image Dd.
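The pixel-wise sorting performed by the temperature sorting unit 211 can be sketched as follows. This is a minimal NumPy sketch, not the specification's implementation; the function name and the choice of ascending order are assumptions.

```python
import numpy as np

def temperature_sort(frames):
    """Pixel-wise sort of N thermal frames, as in the temperature
    sorting unit 211 (here in ascending order).

    frames: array of shape (N, H, W).
    Returns the N sorted images Dc (the rank-n image holds, at each
    pixel position, the n-th smallest value observed there) and the
    intermediate image Dd (the middle rank).
    """
    dc = np.sort(frames, axis=0)   # sort along the frame axis only
    dd = dc[dc.shape[0] // 2]      # middle-rank image = intermediate image
    return dc, dd
```

Note that each sorted image Dc generally mixes pixels from different input frames: the rank is assigned independently at every pixel position.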
- the feature amount calculation unit 212 calculates the feature amount Qf, an index of brightness, for each of the sorted images Dc of the plurality of frames. As the feature amount Qf, the average, median, maximum, or minimum of the pixel values of the sorted image of each frame is calculated.
- the analysis unit 213 receives the feature amount Qf of each sorted image from the feature amount calculation unit 212 and the information IDd specifying the intermediate image Dd from the temperature sorting unit 211, and specifies a high temperature boundary frame Fu and a low temperature boundary frame Fl.
- the analysis unit 213 specifies, as the high temperature boundary frame Fu, the sorted image having the largest feature amount among the sorted images whose feature amount is larger than that of the intermediate image Dd and whose feature-amount difference (absolute value) from the intermediate image Dd is smaller than the difference threshold FDt. When no sorted image has a feature amount exceeding that of the intermediate image Dd by the difference threshold FDt or more, the sorted image with the largest feature amount is specified as the high temperature boundary frame Fu.
- similarly, the analysis unit 213 specifies, as the low temperature boundary frame Fl, the sorted image having the smallest feature amount among the sorted images whose feature amount is smaller than that of the intermediate image Dd and whose feature-amount difference (absolute value) from the intermediate image Dd is smaller than the difference threshold FDt. When no sorted image has a feature amount below that of the intermediate image Dd by the difference threshold FDt or more, the sorted image with the smallest feature amount is specified as the low temperature boundary frame Fl.
- the analysis unit 213 outputs the information IFu that specifies the high temperature boundary frame Fu and the information IFl that specifies the low temperature boundary frame Fl.
- the difference threshold value FDt may be stored in the storage device 3 or may be stored in a parameter memory (not shown).
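The boundary-frame selection of the analysis unit 213 can be sketched as below. The feature amounts are assumed to be given as a 1-D array in the ascending-brightness order of the sorted images (so the array is non-decreasing); the function name is an assumption.

```python
import numpy as np

def boundary_frames(qf, mid, fdt):
    """Pick the high/low temperature boundary frame indices Fu and Fl.

    qf:  1-D array of feature amounts Qf of the sorted images, which are
         in ascending order of brightness, so qf is non-decreasing.
    mid: index of the intermediate image Dd.
    fdt: the difference threshold FDt.
    """
    qf = np.asarray(qf, dtype=float)
    idx = np.arange(len(qf))
    close = np.abs(qf - qf[mid]) < fdt      # within FDt of Dd
    hot = idx[close & (idx >= mid)]
    cold = idx[close & (idx <= mid)]
    # If every warmer (colder) frame is within FDt of Dd, the extreme
    # frame itself becomes the boundary, as the specification describes.
    fu = hot.max() if hot.size else len(qf) - 1
    fl = cold.min() if cold.size else 0
    return fu, fl
```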
- the average image generation unit 214 receives the sorted images Dc together with the information Sdc indicating the rank of each from the temperature sorting unit 211, receives from the analysis unit 213 the information IFu specifying the high temperature boundary frame Fu and the information IFl specifying the low temperature boundary frame Fl, and generates an average image De.
- the average image generation unit 214 generates the average image De by averaging, in the frame direction, the pixel values of the images of the frames from the high temperature boundary frame Fu to the low temperature boundary frame Fl (both inclusive) among the sorted images Dc of the plurality of frames. "Averaging in the frame direction" means averaging the pixel values of the pixels at the same position in the images of a plurality of frames.
- by excluding the frames whose feature amount is larger than that of the high temperature boundary frame Fu and the frames whose feature amount is smaller than that of the low temperature boundary frame Fl, the average image De is prevented from being affected by subjects that appear non-stationarily, in particular heat sources (high temperature objects) or low temperature objects.
- the subjects that appear non-stationarily here include people.
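Given the boundary indices into the ascending-ordered sorted images, the frame-direction averaging of the average image generation unit 214 reduces to a per-pixel mean over those ranks (a sketch; the function name is an assumption, and `fl <= fu` is assumed):

```python
import numpy as np

def average_image(dc, fl, fu):
    """Average the sorted images from rank Fl to rank Fu (inclusive)
    in the frame direction, i.e. a per-pixel mean over those frames.
    Ranks outside [Fl, Fu] -- unsteady hot or cold outliers -- are
    excluded from the average.

    dc: sorted images of shape (N, H, W), ascending in brightness.
    """
    return dc[fl:fu + 1].mean(axis=0)
```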
- the sharpening unit 215 sharpens the average image De to generate a sharpened image Df.
- Examples of the sharpening method in the sharpening unit 215 include histogram equalization and the Retinex method.
- FIG. 3A shows a configuration example of the sharpening unit 215 that performs sharpening by histogram equalization.
- the sharpening section 215 shown in FIG. 3A is composed of a histogram equalizing section 2151.
- Histogram equalization unit 2151 performs histogram equalization on the average image De.
- Histogram equalization is a process of calculating the distribution of pixel values in the entire image and converting the pixel values so that the distribution of pixel values becomes the distribution of a desired shape.
- the histogram equalization may be contrast limited adaptive histogram equalization (CLAHE).
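Global histogram equalization, one of the sharpening options named for the sharpening unit 215, can be sketched for an 8-bit image as follows. This is the plain, non-contrast-limited form; CLAHE would additionally tile the image and clip the histogram.

```python
import numpy as np

def equalize_histogram(img, levels=256):
    """Map pixel values through the normalized cumulative histogram so
    that the output levels are spread over the full available range."""
    hist, _ = np.histogram(img, bins=levels, range=(0, levels))
    cdf = hist.cumsum().astype(float)
    span = max(cdf[-1] - cdf.min(), 1.0)       # guard against flat images
    lut = (cdf - cdf.min()) * (levels - 1) / span
    return lut[img.astype(np.int64)].astype(np.uint8)
```

A uniformly populated histogram (e.g. a linear ramp) is already equalized and passes through unchanged, which makes a convenient sanity check.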
- FIG. 3 (b) shows a configuration example of the sharpening unit 215 that sharpens by the Retinex method.
- the sharpening unit 215 shown in FIG. 3(b) has a filter separation unit 2152, adjustment units 2153 and 2154, and a synthesis unit 2155.
- the filter separation unit 2152 separates the input average image De into a low frequency component Del and a high frequency component Deh.
- the adjusting unit 2153 adjusts the magnitude of the pixel value by multiplying the low frequency component Del by the first gain.
- the adjusting unit 2154 adjusts the magnitude of the pixel value by multiplying the high frequency component Deh by a second gain, which is larger than the first gain.
- the synthesis unit 2155 synthesizes the outputs of the adjustment units 2153 and 2154. The resulting image has enhanced high frequency components.
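The Retinex-style path of FIG. 3(b) -- separate, re-weight, recombine -- can be sketched with a simple separable box filter standing in for the filter separation unit 2152. The filter choice and the gain values are assumptions, not taken from the specification.

```python
import numpy as np

def retinex_sharpen(de, g_low=1.0, g_high=2.0, k=5):
    """Split the average image De into low/high frequency components,
    apply a larger gain to the high frequency part (the second gain
    exceeds the first), and recombine, enhancing fine structure."""
    kernel = np.ones(k) / k
    # separable box blur: rows, then columns -> low frequency component Del
    low = np.apply_along_axis(lambda r: np.convolve(r, kernel, 'same'), 1, de)
    low = np.apply_along_axis(lambda c: np.convolve(c, kernel, 'same'), 0, low)
    high = de - low                      # high frequency component Deh
    return g_low * low + g_high * high   # synthesis (unit 2155)
```

With equal gains the split is lossless and the input is recovered; sharpening comes entirely from setting `g_high > g_low`.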
- the skeleton component extraction unit 216 extracts the skeleton component from the sharpened image Df output from the sharpening unit 215, and generates a skeleton image Dg composed of the extracted skeleton component.
- the skeleton component is a component representing the global structure of an image, and includes an edge component and a flat component (a slowly changing component) in the image.
- the total variation norm minimization method, for example, can be used for the extraction of skeleton components.
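As a rough illustration of skeleton extraction by total-variation norm minimization, a few gradient-descent iterations of ROF-style smoothing can stand in for the skeleton component extraction unit 216. This toy solver, its border handling, and all parameter values are assumptions; a production implementation would use a proper TV solver such as Chambolle's projection algorithm.

```python
import numpy as np

def skeleton_component(img, lam=0.1, iters=50, step=0.2):
    """Toy ROF-style total-variation smoothing. The result keeps edges
    and slowly varying flat regions (the skeleton component); the
    residual img - result is the texture/noise component."""
    u = img.astype(float).copy()
    for _ in range(iters):
        dx = np.diff(u, axis=1, append=u[:, -1:])   # forward differences,
        dy = np.diff(u, axis=0, append=u[-1:, :])   # replicated border
        mag = np.sqrt(dx ** 2 + dy ** 2 + 1e-8)     # avoid division by zero
        px, py = dx / mag, dy / mag                 # normalized gradient
        # divergence of the normalized gradient field (wrap-around borders)
        div = (px - np.roll(px, 1, axis=1)) + (py - np.roll(py, 1, axis=0))
        u += step * (div - lam * (u - img))         # descend TV + fidelity
    return u
```

A flat image has zero total variation and is already its own skeleton, so it should pass through unchanged.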
- the background image generation unit 21 transmits the skeleton image Dg as a background image to the storage device 3 and stores it.
- the image correction unit 22 corrects the second thermal image Din2 output from the thermal image sensor 1 using the skeleton image Dg stored in the storage device 3, and generates and outputs a corrected thermal image Dout.
- the second thermal image Din2 is obtained by imaging in the same field of view as the first thermal image Din1.
- the second thermal image Din2 may be obtained by imaging at a different time from the first thermal images Din1, or an image of one frame of the first thermal images Din1 may be used as the second thermal image.
- the image correction unit 22 has a superimposing unit 221.
- the superimposing unit 221 generates a corrected thermal image Dout by superimposing the skeleton image Dg on the second thermal image Din2.
- Superposition is performed, for example, by weighted addition.
- the skeleton image Dg may be multiplied by a gain so that the components of the skeleton image Dg become clearer.
- P_Dout = P_Din2 + P_Dg × g … Equation (1)
- where P_Din2 is the pixel value of the second thermal image Din2, P_Dg is the pixel value of the skeleton image Dg, g is the gain applied to the skeleton image Dg, and P_Dout is the pixel value of the corrected thermal image Dout.
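Equation (1) is a single per-pixel array operation (a sketch; the function name and default gain value are assumptions):

```python
import numpy as np

def superimpose(din2, dg, g=0.5):
    """Equation (1): P_Dout = P_Din2 + P_Dg * g, applied per pixel by
    the superimposing unit 221."""
    return np.asarray(din2, dtype=float) + g * np.asarray(dg, dtype=float)
```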
- as described above, a skeleton image with little noise and high contrast is generated from the first thermal images of a plurality of frames and stored as a background image, and the stored skeleton image is combined with the second thermal image; a thermal image having a high SN ratio and high time resolution can therefore be generated.
- instead of using the average image as the background image as it is, only the skeleton component of the average image is extracted, stored as the background image, and added to the second thermal image; this preserves the temperature information of the second thermal image while adding information on the background structure.
- in addition, the background image can be prevented from being affected by subjects that appear non-stationarily, in particular heat sources (high temperature objects) or low temperature objects.
- FIG. 4 is a functional block diagram of the image processing device 2a according to the second embodiment of the present invention.
- the image processing device 2a shown in FIG. 4 is generally the same as the image processing device 2 of FIG. 2, but includes a background image generation unit 21b and an image correction unit 22b instead of the background image generation unit 21 and the image correction unit 22.
- the background image generation unit 21b is generally the same as the background image generation unit 21, but includes a threshold value generation unit 217.
- the image correction unit 22b is generally the same as the image correction unit 22, but includes a weight determination unit 222.
- the threshold value generation unit 217 generates a weight determination threshold Th and transmits it to the storage device 3 for storage. For example, the threshold value generation unit 217 obtains the average or median of the pixel values of the average image De output from the average image generation unit 214, and determines the threshold Th based on that value.
- the average or median of the pixel values of the average image De means the average or median of the pixel values of the pixels located in the whole or the main part of the average image De. The relationship of the threshold Th to this average or median is determined empirically or by experiment (simulation).
- the threshold Th may be set to a value higher than the above average or median.
- alternatively, the value obtained by adding the difference threshold FDt to the above average or median may be used as the threshold Th.
- the difference threshold FDt is also notified to the threshold generation unit 217.
- the threshold value generation unit 217 transmits the generated threshold value Th to the storage device 3 and stores it.
- the weight determination unit 222 creates a weight table based on the threshold Th stored in the storage device 3, and, referring to the created weight table, generates a composite weight w based on the pixel value of the second thermal image Din2. Examples of the weight table created by the weight determination unit 222 based on the threshold Th are shown in FIGS. 5(a) and 5(b).
- in these weight tables, the composite weight w is kept at 1 in the range where the pixel value P_Din2 is equal to or less than the threshold Th; in the range where P_Din2 is larger than Th, the composite weight w gradually decreases as P_Din2 increases.
- with such a weight table, the rate of weighted addition of the skeleton image Dg can be reduced only when the pixel value of the second thermal image Din2 is higher than the threshold Th.
- in the above example, the weight determination unit 222 creates the weight table using the threshold Th, but the weight table may be created without using the threshold Th. Examples of weight tables created without using the threshold Th are shown in FIGS. 5(c) and 5(d).
- in these weight tables, the composite weight w is 1 when the pixel value P_Din2 is small, and gradually becomes smaller as P_Din2 increases. Even with such a weight table, the addition ratio of the skeleton image Dg can be reduced in the range where P_Din2 is large.
- the weight table may be such that the composite weight w becomes smaller as the pixel value of the second thermal image Din2 becomes larger.
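A weight table of the FIG. 5(a)/(b) kind -- w = 1 up to the threshold Th, then falling off -- and its use in the weighted addition can be sketched as follows. The linear ramp shape, its width, and the gain are assumed parameters, not values from the specification.

```python
import numpy as np

def composite_weight(p_din2, th, ramp=32.0):
    """w = 1 where P_Din2 <= Th; above Th, w decreases linearly to 0
    over `ramp` as P_Din2 increases."""
    p = np.asarray(p_din2, dtype=float)
    return np.clip(1.0 - (p - th) / ramp, 0.0, 1.0)

def weighted_superimpose(din2, dg, th, g=0.5):
    """Weighted form of Equation (1): the skeleton image's contribution
    shrinks in hot regions of the second thermal image."""
    w = composite_weight(din2, th)
    return np.asarray(din2, dtype=float) + w * g * np.asarray(dg, dtype=float)
```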
- in this case, the background image generation unit 21a does not need to include the threshold value generation unit 217 (and is thus the same as the background image generation unit 21 of FIG. 2).
- the weight determination unit 222 does not need to read the threshold value Th from the storage device 3.
- FIG. 6 is a functional block diagram of the image processing device 2c according to the third embodiment of the present invention.
- the image processing device 2c shown in FIG. 6 is generally the same as the image processing device 2 of FIG. 2, but includes a background image generation unit 21c and an image correction unit 22c instead of the background image generation unit 21 and the image correction unit 22.
- the background image generation unit 21c is generally the same as the background image generation unit 21 of FIG. 2, but does not include the skeleton component extraction unit 216 of FIG. 2; the sharpened image Df output from the sharpening unit 215 is stored in the storage device 3 as the background image.
- the image correction unit 22c reads the sharpened image Df stored in the storage device 3, extracts the skeleton component to generate the skeleton image Dg, and corrects the second thermal image Din2 using the skeleton image Dg. That is, in the image processing apparatus 2c shown in FIG. 6, the skeleton component is extracted not by the background image generation unit but by the image correction unit.
- the image correction unit 22c has a skeleton component extraction unit 223 and a superimposition unit 221.
- the skeleton component extraction unit 223 reads the sharpened image Df stored in the storage device 3 and extracts the skeleton component to generate the skeleton image Dg.
- the superimposing unit 221 corrects the second thermal image Din2 by superimposing the skeleton image Dg on the second thermal image Din2, and generates the corrected thermal image Dout.
- FIG. 7 is a functional block diagram of the image processing device 2d according to the fourth embodiment of the present invention.
- the image processing device 2d shown in FIG. 7 is generally the same as the image processing device 2b of FIG. 4, but includes a background image generation unit 21d and an image correction unit 22d instead of the background image generation unit 21b and the image correction unit 22b.
- the background image generation unit 21d is generally the same as the background image generation unit 21b, but includes a threshold value generation unit 217d instead of the threshold value generation unit 217.
- the threshold value generation unit 217d obtains the average or median of the pixel values of the average image De output from the average image generation unit 214 and, based on that value, generates, in addition to the weight determination threshold Th, a high temperature threshold Tu and a low temperature threshold Tl for image division; the generated thresholds Th, Tu, and Tl are transmitted to and stored in the storage device 3.
- the high temperature threshold Tu and the low temperature threshold Tl are used for image division.
- for example, the high temperature threshold Tu is obtained by adding the difference threshold FDt to the average or median of the pixel values of the average image De.
- the low temperature threshold Tl is obtained by subtracting the difference threshold FDt from the average or median of the pixel values of the average image De.
- the weight determination threshold value Th may be the same as the high temperature threshold value Tu.
- the image correction unit 22d divides the second thermal image Din2 into a high temperature region, an intermediate temperature region, and a low temperature region using the high temperature threshold Tu and the low temperature threshold Tl read from the storage device 3, colors each region, and synthesizes the results to generate a color image Dh; it then synthesizes the color image Dh with the skeleton image Dg read from the storage device 3 to generate and output the corrected thermal image Dout.
- the corrected thermal image Dout in this case is a color image colored according to the temperature of each part.
- the image correction unit 22d has a weight determination unit 222, a coloring unit 224, and a superimposition unit 221d.
- the weight determination unit 222 creates the weight table and determines the weight as described with respect to the configuration of FIG. 4. When creating the weight table shown in FIG. 5(a) or 5(b), the threshold Th is needed. As described above, the threshold Th may be the same as the high temperature threshold Tu; in that case, the high temperature threshold Tu stored in the storage device 3 can be read out and used as the threshold Th for creating the weight table.
- the rate of weighted addition of the skeleton image Dg can be reduced only when the pixel value of the second thermal image Din2 is higher than the threshold value Th. it can.
- the threshold value Th is the same as the high temperature threshold value Tu
- the rate of addition of the skeleton image Dg can be reduced only when the pixel value P Din2 of the second thermal image Din2 belongs to the high temperature region.
- the weight table may be that shown in FIG. 5(c) or FIG. 5(d).
- the weight table may be such that the composite weight w becomes smaller as the pixel value of the second thermal image Din2 becomes larger.
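A minimal sketch of such a weight table, in the FIG. 5(a)/(b) style: the composite weight stays at its maximum up to Th and then decreases as the pixel value grows. The function name, the minimum weight, and the pixel-value ceiling are illustrative assumptions, not values from the patent:

```python
import numpy as np

def composite_weight(p_din2, th, w_max=1.0, w_min=0.2, p_max=255.0):
    """Composite weight w as a function of the Din2 pixel value.

    w equals w_max for pixel values up to the threshold Th, then
    falls off linearly toward w_min at p_max, so the skeleton image
    contributes less where a heat source makes the pixel value high.
    w_min and p_max are illustrative choices.
    """
    p = np.asarray(p_din2, dtype=float)
    slope = (w_max - w_min) / max(p_max - th, 1e-9)
    w = np.where(p <= th, w_max, w_max - slope * (p - th))
    return np.clip(w, w_min, w_max)

w = composite_weight(np.array([0.0, 100.0, 255.0]), th=100.0)
print(w)  # weight is w_max up to Th = 100 and reaches w_min at 255
```

A FIG. 5(c)/(d)-style table would instead start decreasing from pixel value 0; only the monotone decrease matters to the scheme.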
- the coloring unit 224 divides the second thermal image Din2 into a high temperature region, an intermediate temperature region, and a low temperature region using the high temperature threshold Tu and the low temperature threshold Tl, colors each region, and combines the colored images to generate a color image Dh.
- the color image Dh is represented by, for example, red (R), green (G), and blue (B) signals.
- each pixel of the second thermal image Din2 is determined to belong to the high temperature region if its pixel value is larger than the high temperature threshold value Tu, to the intermediate temperature region if the pixel value is equal to or lower than the high temperature threshold value Tu and equal to or higher than the low temperature threshold value Tl, and to the low temperature region if the pixel value is smaller than the low temperature threshold value Tl.
- in the example of FIG. 8, the pixels constituting the light emitting portion 101 of a street light, an automobile 103, and a person 105 are determined to belong to the high temperature region, the pixels constituting a road marking 107 are determined to belong to the intermediate temperature region, and the pixels constituting the pillar 109 of the street light are determined to belong to the low temperature region.
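The three-way decision above can be sketched as a labeling function (the function name and label values are assumptions for illustration):

```python
import numpy as np

def divide_into_regions(din2, tu, tl):
    """Label each pixel of the second thermal image Din2.

    Returns 2 for the high temperature region (pixel value > Tu),
    1 for the intermediate temperature region (Tl <= value <= Tu),
    and 0 for the low temperature region (value < Tl).
    """
    din2 = np.asarray(din2, dtype=float)
    labels = np.ones(din2.shape, dtype=int)  # intermediate by default
    labels[din2 > tu] = 2                    # high temperature region
    labels[din2 < tl] = 0                    # low temperature region
    return labels

regions = divide_into_regions([[50.0, 120.0], [200.0, 90.0]], tu=150.0, tl=80.0)
print(regions)  # [[0 1] [2 1]]
```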
- the coloring unit 224 assigns colors in mutually different ranges, that is, first, second, and third ranges, to the high temperature region, the intermediate temperature region, and the low temperature region, and within each region assigns to each pixel a color, from the range assigned to that region, according to the pixel value. At the boundaries between the high temperature region, the intermediate temperature region, and the low temperature region, it is desirable to perform the region-wise color assignment and the pixel-value-dependent color assignment so that the color change is continuous.
- for example, as shown in FIG. 9, a hue range centered on red (from the center of magenta, in the hue direction, to the center of yellow) is assigned to the high temperature region, a hue range centered on green (from the center of yellow to the center of cyan) is assigned to the intermediate temperature region, and a hue range centered on blue (from the center of cyan to the center of magenta) is assigned to the low temperature region; within each region, each pixel is assigned a color in the assigned hue range according to its pixel value.
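A sketch of this FIG. 9-style assignment, using the standard library `colorsys` module. The hue endpoints follow the ranges named above (magenta center 300°, cyan center 180°, yellow center 60°); the direction of the mapping inside each range and the normalization bounds `p_min`/`p_max` are assumptions made so that the hue runs continuously from cold to hot:

```python
import colorsys
import numpy as np

def colorize(din2, tu, tl, p_min=0.0, p_max=255.0):
    """Color Din2 region by region with contiguous hue ranges.

    Low region: hue 300 deg (magenta center) down to 180 deg (cyan
    center); intermediate: 180 down to 60 deg (yellow center); high:
    60 deg down through red to -60 deg (magenta center again).
    Because the end of each range meets the start of the next, the
    color change stays continuous across the region boundaries.
    """
    spans = [  # (low bound, high bound, hue at low bound, hue at high bound)
        (p_min, tl, 300.0, 180.0),   # low temperature region
        (tl, tu, 180.0, 60.0),       # intermediate temperature region
        (tu, p_max, 60.0, -60.0),    # high temperature region
    ]
    din2 = np.asarray(din2, dtype=float)
    out = np.zeros(din2.shape + (3,))
    for idx, p in np.ndenumerate(din2):
        p = float(np.clip(p, p_min, p_max))
        for lo, hi, h_lo, h_hi in spans:
            if lo <= p <= hi:
                t = (p - lo) / (hi - lo) if hi > lo else 0.0
                hue = (h_lo + (h_hi - h_lo) * t) % 360.0
                out[idx] = colorsys.hsv_to_rgb(hue / 360.0, 1.0, 1.0)
                break
    return out

img = colorize(np.array([[0.0, 128.0, 255.0]]), tu=170.0, tl=85.0)
```

The coldest and hottest extremes both land at the magenta center, which is exactly the wrap-around the hue-circle description above implies.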
- the superimposition unit 221d weights and adds the color image Dh and the skeleton image Dg read from the storage device 3 by using the composite weight w.
- the color image Dh is represented by R, G, and B signals, that is, signals of 3 channels, while the skeleton image Dg is represented by signals of 1 gray channel.
- the skeleton image Dg is added to the luminance component Dhy of the color image Dh.
- that is, the color image Dh is converted into a luminance component Dhy and color components, for example color difference components Dhcb and Dhcr; the skeleton image Dg is added to the luminance component Dhy as P Djy = P Dhy + P Dg × g × w (Equation (2)); and the values of the R, G, and B components of the corrected thermal image are obtained by inverse conversion from the post-addition luminance component Djy and the color difference components Dhcb and Dhcr back to R, G, and B.
- expressed in terms of the R, G, and B components, this addition is given by
P Rout = P Rin + P Dg × g × w  Equation (3a)
P Gout = P Gin + P Dg × g × w  Equation (3b)
P Bout = P Bin + P Dg × g × w  Equation (3c)
where
- P Rin is the value of the signal Rin (value of the red component) of the R channel of the color image Dh
- P Gin is the value of the signal Gin of the G channel of the color image Dh (value of the green component)
- P Bin is the value of the signal Bin of the B channel of the color image Dh (the value of the blue component)
- P Dg is the pixel value of the skeleton image Dg
- g is the gain with respect to the skeleton image Dg
- w is the composite weight
- P Rout is the value of the signal Rout (red component value) of the R channel obtained as a result of addition
- P Gout is the value of the signal Gout (green component value) of the G channel obtained as a result of addition.
- P Bout is a value (blue component value) of the signal Bout of the B channel obtained as a result of the addition.
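Equations (3a) to (3c) add the same skeleton contribution to every channel, which can be sketched directly (the helper name is hypothetical; NumPy is assumed):

```python
import numpy as np

def add_skeleton_rgb(color_dh, skeleton_dg, g, w):
    """Weighted addition of the skeleton image Dg to a color image Dh.

    Implements Equations (3a)-(3c): each of the R, G, and B channels
    receives the same contribution P_Dg * g * w.  w may be a scalar
    or a per-pixel weight map taken from the weight table.
    """
    color_dh = np.asarray(color_dh, dtype=float)     # shape (H, W, 3)
    skeleton = np.asarray(skeleton_dg, dtype=float)  # shape (H, W)
    contribution = skeleton * g * np.asarray(w, dtype=float)
    return color_dh + contribution[..., np.newaxis]

dh = np.zeros((1, 2, 3))                 # toy color image Dh
dg = np.array([[10.0, 20.0]])            # toy skeleton image Dg
dout = add_skeleton_rgb(dh, dg, g=2.0, w=0.5)
print(dout)  # every channel gains P_Dg * 2.0 * 0.5
```

Adding the same offset to R, G, and B changes only the luminance, which is why this is equivalent to the Equation (2) addition on Dhy.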
- since the luminance component of the color image generated by coloring the second thermal image is combined with the skeleton image Dg, the information indicating the heat source and the skeleton image are visually separated, which improves the visibility of the heat source. That is, if the second thermal image Din2 and the skeleton image Dg were combined without coloring, the information indicating the heat source contained in the second thermal image Din2 could be buried in the skeleton image Dg; coloring the second thermal image avoids this.
- increasing the composite weight allows the skeleton image Dg to sufficiently correct the second thermal image Din2, while reducing the composite weight when the pixel value of the second thermal image Din2 is higher than the threshold Th keeps the skeleton image Dg from being added at a large rate to regions where a heat source exists, improving visibility.
- since the color assignment to pixel values is not fixed but the second thermal image Din2 is first divided into the high temperature region, the intermediate temperature region, and the low temperature region, relatively high-temperature parts of the image are always expressed in high-temperature colors (colors assigned to high temperatures), and relatively low-temperature parts are always expressed in low-temperature colors (colors assigned to low temperatures). For example, if the thermal image contains a temperature offset and the color assignment to pixel values were fixed, cold regions might be colored as if they represented intermediate temperatures; the region-wise assignment prevents this.
- in a color thermal image, for example, a high-temperature subject is displayed in red, a low-temperature subject in blue, and an intermediate-temperature subject in green. If an offset is included, low temperatures might, for example, be displayed in green. Dividing the thermal image into the high temperature region, the intermediate temperature region, and the low temperature region and then coloring each region prevents such a situation.
- in the above description, the background image generation unit 21d performs the sharpening and the extraction of the skeleton component and stores the skeleton image in the storage device 3, and the image correction unit 22d reads out the stored skeleton image and uses it for correction of the second thermal image.
- alternatively, the background image generation unit 21d may store the sharpened image Df obtained by the sharpening in the storage device 3, and the image correction unit 22d may read out the sharpened image Df from the storage device 3 and extract the skeleton component to generate a skeleton image, which is then used for correction of the second thermal image.
- the weight determination unit 222 may be omitted, and the weighted addition performed using a composite weight of a constant value.
- FIG. 10 is a functional block diagram of the image processing device 2e according to the fifth embodiment.
- the image processing device 2e shown in FIG. 10 is generally the same as the image processing device 2 of FIG. 2, but includes a background image generation unit 21e instead of the background image generation unit 21.
- the background image generation unit 21e is generally the same as the background image generation unit 21, but includes a temperature sorting unit 211e, a feature amount calculation unit 212e, an analysis unit 213e, and an average image generation unit 214e instead of the temperature sorting unit 211, the feature amount calculation unit 212, the analysis unit 213, and the average image generation unit 214.
- the feature amount calculation unit 212e calculates the feature amount Qe, which is an index of brightness, for each of the first thermal images Din1 of the plurality of frames, that is, for the first thermal image of each frame.
- as the feature amount Qe, the average value of the pixel values of each frame, the median of the pixel values, the maximum pixel value, or the minimum pixel value is calculated.
- the temperature sorting unit 211e receives the feature amount Qe calculated by the feature amount calculation unit 212e, and specifies the order of the magnitude of the feature amount Qe for the first thermal image Din1 of a plurality of frames.
- the feature amounts may be arranged in descending order or in ascending order.
- the temperature sorting unit 211e further identifies, as the intermediate image Dd, the first thermal image Din1 located at the center, that is, having the intermediate rank, when the images are arranged in order of the magnitude of the feature amount Qe.
- the temperature sorting unit 211e outputs information Sdin indicating the order of each of the first thermal images Din1 of a plurality of frames.
- the temperature sorting unit 211e also outputs the information IDd that identifies the intermediate image Dd.
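The feature calculation and sorting above can be sketched as follows (the function name is hypothetical; NumPy is assumed; the returned index pair plays the role of the information Sdin and IDd):

```python
import numpy as np

def sort_by_feature(frames, feature="mean"):
    """Order first thermal images Din1 by a brightness feature Qe.

    Returns (order, intermediate_index): `order` lists the frame
    indices in ascending order of Qe (the information Sdin), and
    `intermediate_index` identifies the centrally located
    intermediate image Dd (the information IDd).
    """
    frames = np.asarray(frames, dtype=float)  # shape (N, H, W)
    reducers = {"mean": np.mean, "median": np.median,
                "max": np.max, "min": np.min}
    qe = np.array([reducers[feature](f) for f in frames])
    order = np.argsort(qe, kind="stable")
    intermediate_index = int(order[len(order) // 2])
    return order, intermediate_index

frames = [np.full((2, 2), v) for v in (30.0, 10.0, 20.0)]
order, idd = sort_by_feature(frames)
print(order, idd)  # order [1 2 0]; frame 2 (Qe = 20.0) is the intermediate image
```

Descending order would work equally well, as the text notes; only the central position matters.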
- the analysis unit 213e receives the feature amount Qe of each first thermal image from the feature amount calculation unit 212e, receives the information IDd identifying the intermediate image Dd from the temperature sorting unit 211e, and identifies the high temperature boundary frame Fu and the low temperature boundary frame Fl.
- the analysis unit 213e identifies, as the high temperature boundary frame Fu, the image with the largest feature amount among the first thermal images whose feature amount is larger than that of the intermediate image Dd and whose difference (absolute value) in feature amount from the intermediate image Dd is smaller than the difference threshold value FDt. When there is no first thermal image whose feature amount is larger than that of the intermediate image Dd and whose difference (absolute value) in feature amount is equal to or larger than the difference threshold FDt, the image with the largest feature amount among the first thermal images is identified as the high temperature boundary frame Fu.
- the analysis unit 213e further identifies, as the low temperature boundary frame Fl, the image with the smallest feature amount among the first thermal images whose feature amount is smaller than that of the intermediate image Dd and whose difference (absolute value) in feature amount from the intermediate image Dd is smaller than the difference threshold value FDt.
- when there is no first thermal image whose feature amount is smaller than that of the intermediate image Dd and whose difference (absolute value) in feature amount is equal to or larger than the difference threshold FDt, the image with the smallest feature amount among the first thermal images is identified as the low temperature boundary frame Fl.
- the analysis unit 213e outputs the information IFu that specifies the high temperature boundary frame Fu and the information IFl that specifies the low temperature boundary frame Fl.
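The boundary-frame selection can be sketched as below. The function name is hypothetical, and the final `else` branches cover a combination the text leaves open (brighter/darker frames exist but all differ by FDt or more), so they are an explicit assumption:

```python
import numpy as np

def find_boundary_frames(qe, idd, fdt):
    """Identify the high/low temperature boundary frames Fu and Fl.

    qe: per-frame feature amounts Qe; idd: index of the intermediate
    image Dd; fdt: difference threshold FDt.  Fu is the frame with
    the largest Qe among frames brighter than Dd by less than FDt;
    if no frame is brighter by FDt or more, the brightest frame
    overall is used, per the fallback in the text.  Fl is symmetric.
    """
    qe = np.asarray(qe, dtype=float)
    center = qe[idd]
    near_above = [i for i, q in enumerate(qe) if center < q < center + fdt]
    near_below = [i for i, q in enumerate(qe) if center - fdt < q < center]
    if near_above:
        fu = max(near_above, key=lambda i: qe[i])
    elif not np.any(qe >= center + fdt):
        fu = int(np.argmax(qe))   # fallback from the text
    else:
        fu = idd                  # assumption: no valid brighter frame
    if near_below:
        fl = min(near_below, key=lambda i: qe[i])
    elif not np.any(qe <= center - fdt):
        fl = int(np.argmin(qe))   # fallback from the text
    else:
        fl = idd                  # assumption: no valid darker frame
    return fu, fl

fu, fl = find_boundary_frames([10.0, 18.0, 20.0, 22.0, 40.0], idd=2, fdt=5.0)
print(fu, fl)  # frame 3 (Qe = 22.0) and frame 1 (Qe = 18.0)
```

In the example the outlier frames (Qe 10.0 and 40.0) fall outside the FDt window and so are excluded from the boundaries.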
- the difference threshold value FDt may be stored in the storage device 3 or may be stored in a parameter memory (not shown).
- the average image generation unit 214e receives the input first thermal images Din1, receives the information Sdin indicating the order of the first thermal image of each frame from the temperature sorting unit 211e, receives the information IFu specifying the high temperature boundary frame Fu and the information IFl specifying the low temperature boundary frame Fl from the analysis unit 213e, and generates the average image De.
- the average image generation unit 214e generates the average image De by averaging, in the frame direction, the pixel values of the images of the frames from the high temperature boundary frame Fu to the low temperature boundary frame Fl (both inclusive) among the first thermal images Din1 of the plurality of frames.
- in generating the average image De, excluding the frames whose feature amount is larger than that of the high temperature boundary frame Fu and the frames whose feature amount is smaller than that of the low temperature boundary frame Fl prevents the influence of subjects that appear non-stationarily, in particular heat sources (high temperature objects) and low temperature objects.
- the subjects that appear non-stationarily here include people.
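The frame-direction averaging between the boundary frames can be sketched as (hypothetical helper name; NumPy assumed):

```python
import numpy as np

def average_between_boundaries(frames, qe, fu, fl):
    """Generate the average image De from the frames between Fl and Fu.

    Frames whose feature amount lies outside [Qe(Fl), Qe(Fu)], i.e.
    frames dominated by non-stationary heat sources or cold objects,
    are excluded; the remaining frames (boundary frames included) are
    averaged in the frame direction.
    """
    frames = np.asarray(frames, dtype=float)  # shape (N, H, W)
    qe = np.asarray(qe, dtype=float)
    keep = (qe >= qe[fl]) & (qe <= qe[fu])
    return frames[keep].mean(axis=0)

frames = np.array([[[0.0]], [[10.0]], [[20.0]], [[100.0]]])
qe = np.array([0.0, 10.0, 20.0, 100.0])
de = average_between_boundaries(frames, qe, fu=2, fl=1)
print(de)  # only the 10.0 and 20.0 frames are kept -> [[15.0]]
```

Here the frame containing a bright transient (Qe = 100.0) is excluded, so it does not pull the background estimate upward.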
- the processing in the sharpening unit 215 and the skeletal component extraction unit 216 is the same as that described in the first embodiment.
- as described above, the background image generation unit 21e calculates the feature amount Qe for each of the first thermal images Din1 of the plurality of frames, generates an average image De by averaging in the frame direction the thermal images whose difference in feature amount from the thermal image located at the center, when the thermal images of the plurality of frames are arranged in order of the magnitude of the feature amount Qe, is smaller than a predetermined threshold (difference threshold) FDt, sharpens the average image and extracts the skeleton component to generate a skeleton image Dg, and stores the generated skeleton image Dg in the storage device 3 as a background image.
- the image correction unit 22 is the same as the image correction unit 22 of the first embodiment, and operates in the same manner.
- in the fifth embodiment, the sorting is performed according to the feature amount of each frame, so the processing is relatively simple.
- the image processing devices 2, 2b, 2c, 2d or 2e described in the first to fifth embodiments may be partially or wholly composed of a processing circuit.
- the functions of each part of the image processing apparatus may be realized by separate processing circuits, or the functions of a plurality of parts may be collectively realized by one processing circuit.
- the processing circuit may be composed of dedicated hardware or software, that is, a programmed computer. Of the functions of each part of the image processing device, a part may be realized by dedicated hardware and a part may be realized by software.
- FIG. 11 shows an example of a configuration in which a computer 300 including a single processor realizes all the functions of the image processing device 2, 2b, 2c, 2d, or 2e according to the above embodiments, shown together with the thermal image sensor 1, the storage device 3, and the display terminal 4.
- the computer 300 has a processor 310 and a memory 320.
- the memory 320 or the storage device 3 stores a program for realizing the functions of each part of the image processing device.
- the processor 310 is, for example, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), a microprocessor, a microcontroller, or a DSP (Digital Signal Processor).
- the memory 320 is, for example, a RAM (Random Access Memory), a ROM (Read Only Memory), a flash memory, an EPROM (Erasable Programmable Read Only Memory), an EEPROM (Electrically Erasable Programmable Read Only Memory), a magnetic disk, an optical disk, a magneto-optical disk, or the like.
- the processor 310 realizes the function of the image processing device by executing the program stored in the memory 320 or the storage device 3. When the program is stored in the storage device 3, it may be executed after being loaded into the memory 320 once.
- the functions of the image processing device include, as described above, control of display on the display terminal 4, writing of information to the storage device 3, and reading of information from the storage device 3.
- the above processing circuit may be attached to the thermal image sensor 1. That is, the image processing devices 2, 2b, 2c, 2d or 2e can also be mounted in a processing circuit attached to the thermal image sensor. Alternatively, the image processing devices 2, 2b, 2c, 2d or 2e can be mounted on a cloud server that can be connected to the thermal image sensor 1 via a communication network. Further, the storage device 3 may be a storage area on a server on the cloud.
- At least one of the image processing device and the storage device may be mounted in a communication mobile terminal, for example, a smartphone or a remote controller.
- the thermal image generation system including the image processing device may be applied to home appliances, in which case at least one of the image processing device and the storage device may be implemented in a HEMS (Home Energy Management System) controller.
- the display terminal may also be a communication terminal, for example, a smartphone, or may be mounted on a HEMS controller.
- the image processing device of the present invention and the thermal image generation system including the image processing device have been described above.
- the image processing method performed by the above image processing apparatus also forms a part of the present invention.
- a program for causing a computer to perform the processing in the above image processing apparatus or image processing method, and a computer-readable recording medium on which the program is recorded, also form a part of the present invention.
- 1 thermal image sensor, 2, 2b, 2c, 2d, 2e image processing device, 3 storage device, 4 display terminal, 21, 21b, 21c, 21d, 21e background image generation unit, 22, 22b, 22c, 22d image correction unit, 211, 211e temperature sorting unit, 212, 212e feature amount calculation unit, 213, 213e analysis unit, 214, 214e average image generation unit, 215 sharpening unit, 216 skeleton component extraction unit, 217, 217d threshold generation unit, 221, 221d superimposition unit, 222 weight determination unit, 223 skeleton component extraction unit, 224 coloring unit.
Abstract
Description
An image processing apparatus has a background image generation unit and an image correction unit.
The background image generation unit
identifies, as an intermediate image, the image located at the center when first thermal images of a plurality of frames acquired by a thermal image sensor by imaging in the same field of view, or sorted images of a plurality of frames generated from the first thermal images, are arranged in order of brightness,
calculates, for each of the first thermal images or the sorted images, a feature amount serving as an index of brightness,
generates an average image by averaging, in the frame direction, the first thermal images or sorted images of the plurality of frames whose difference in the feature amount from the intermediate image is smaller than a predetermined difference threshold, and
stores, in a storage device, a skeleton image obtained by sharpening the average image and then extracting a skeleton component.
The image correction unit
generates a corrected thermal image by correcting a second thermal image, acquired by the thermal image sensor by imaging in the same field of view as the first thermal images, using the skeleton image stored in the storage device.
Alternatively, an image processing apparatus has a background image generation unit and an image correction unit.
The background image generation unit
identifies, as an intermediate image, the image located at the center when first thermal images of a plurality of frames acquired by a thermal image sensor by imaging in the same field of view, or sorted images of a plurality of frames generated from the first thermal images, are arranged in order of brightness,
calculates, for each of the first thermal images or the sorted images, a feature amount serving as an index of brightness,
generates an average image by averaging, in the frame direction, the first thermal images or sorted images of the plurality of frames whose difference in the feature amount from the intermediate image is smaller than a predetermined difference threshold, and
stores, in a storage device, a sharpened image obtained by sharpening the average image.
The image correction unit
generates a corrected thermal image by correcting a second thermal image, acquired by the thermal image sensor by imaging in the same field of view as the first thermal images, using a skeleton image obtained by extracting a skeleton component from the sharpened image stored in the storage device.
Embodiments of the present invention will now be described with reference to the drawings.
FIG. 1 shows the schematic configuration of a thermal image generation system including the image processing device according to the first embodiment of the present invention.
The illustrated image processing device 2 has a background image generation unit 21 and an image correction unit 22.
The thermal images of the plurality of frames used for generating the background image are acquired by the thermal image sensor 1 by repeatedly imaging the same field of view.
The background image generation unit 21 further identifies, as the intermediate image, the sorted image composed of the set of pixels located at the center when arranged in order of pixel value magnitude, that is, the pixels of the intermediate rank.
Since the intermediate image identified in this way is composed of the set of pixels of the intermediate rank, it is the image located at the center when the sorted images Dc of the plurality of frames are arranged in order of brightness.
After sharpening the average image, the background image generation unit 21 extracts the skeleton component to generate a skeleton image Dg, and stores the generated skeleton image Dg in the storage device 3 as a background image.
The temperature sorting unit 211 further generates sorted images Dc of a plurality of frames, each composed of a set of pixels of the same rank.
That is, the n-th sorted image Dc is composed of the set of pixels of rank n (n being any of 1 to N).
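The per-pixel rank sorting that produces the sorted images Dc can be sketched as follows (hypothetical helper name; NumPy assumed):

```python
import numpy as np

def generate_sorted_images(first_images):
    """Generate the sorted images Dc from first thermal images Din1.

    At each pixel position, the pixel values of the N frames are
    sorted, and the n-th sorted image is composed of the pixels of
    rank n.  The centrally ranked one is the intermediate image Dd,
    i.e. the per-pixel median image for odd N.
    """
    stack = np.asarray(first_images, dtype=float)  # shape (N, H, W)
    sorted_stack = np.sort(stack, axis=0)          # per-pixel rank along frames
    intermediate = sorted_stack[stack.shape[0] // 2]
    return sorted_stack, intermediate

din1 = np.array([[[3.0, 9.0]], [[1.0, 7.0]], [[2.0, 8.0]]])
dc, dd = generate_sorted_images(din1)
print(dd)  # per-pixel medians: [[2.0, 8.0]]
```

Because each Dc frame collects pixels of one rank regardless of which input frame they came from, a transient hot pixel in a single frame ends up in a high-rank Dc frame rather than in Dd.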
The temperature sorting unit 211 further outputs information IDd identifying the intermediate image Dd.
When there is no sorted image whose feature amount is larger than that of the intermediate image Dd and whose difference (absolute value) in feature amount is equal to or larger than the difference threshold FDt, the sorted image with the largest feature amount is identified as the high temperature boundary frame Fu.
When there is no sorted image whose feature amount is smaller than that of the intermediate image Dd and whose difference (absolute value) in feature amount is equal to or larger than the difference threshold FDt, the sorted image with the smallest feature amount is identified as the low temperature boundary frame Fl.
Sharpening methods usable in the sharpening unit 215 include histogram equalization and the Retinex method.
The histogram equalization unit 2151 performs histogram equalization on the average image De. Histogram equalization is a process of calculating the distribution of the pixel values of the entire image and converting the pixel values so that the distribution takes a desired shape.
The histogram equalization may be contrast limited adaptive histogram equalization (CLAHE).
The sharpening unit 215 shown in FIG. 3(b) has a filter separation unit 2152, adjustment units 2153 and 2154, and a combining unit 2155.
The adjustment unit 2153 multiplies the low frequency component Del by a first gain to adjust the magnitude of the pixel values.
The adjustment unit 2154 multiplies the high frequency component Deh by a second gain to adjust the magnitude of the pixel values. The second gain is larger than the first gain.
The combining unit 2155 combines the outputs of the adjustment units 2153 and 2154. The image obtained as a result of the combination has enhanced high frequency components.
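A minimal sketch of this frequency-separated sharpening: a small box filter stands in for the filter separation unit 2152, its output playing the role of the low frequency component Del and the residual that of the high frequency component Deh. The box filter, the kernel size, and the gain values are illustrative assumptions:

```python
import numpy as np

def sharpen(average_image, g_low=1.0, g_high=3.0, k=3):
    """Sharpen the average image De by frequency separation.

    A k x k box filter yields the low frequency component Del; the
    residual is the high frequency component Deh.  Recombining with
    a larger gain on Deh enhances the high frequencies, as the
    combining unit 2155 does.
    """
    img = np.asarray(average_image, dtype=float)
    pad = k // 2
    padded = np.pad(img, pad, mode="edge")
    low = np.zeros_like(img)
    for dy in range(k):           # accumulate the k x k box sum
        for dx in range(k):
            low += padded[dy:dy + img.shape[0], dx:dx + img.shape[1]]
    low /= k * k                  # low frequency component Del
    high = img - low              # high frequency component Deh
    return g_low * low + g_high * high

de = np.array([[0.0, 0.0, 0.0],
               [0.0, 9.0, 0.0],
               [0.0, 0.0, 0.0]])
df = sharpen(de)
print(df[1, 1])  # the isolated peak 9.0 is amplified to 25.0
```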
The skeleton component is a component representing the global structure of an image, and includes the edge components and the flat components (gently varying components) of the image. For extraction of the skeleton component, for example, the total variation norm minimization method can be used.
As described above, the second thermal image Din2 is obtained by imaging in the same field of view as the first thermal images Din1. The second thermal image Din2 may be obtained at an imaging time different from that of the first thermal images Din1, or one frame of the first thermal images Din1 may be used as the second thermal image.
The superimposition unit 221 generates the corrected thermal image Dout by superimposing the skeleton image Dg on the second thermal image Din2. The superimposition is performed, for example, by weighted addition.
When the skeleton image Dg is added, it may be multiplied by a gain so that the components of the skeleton image Dg become clearer.
PDout = PDin2 + PDg × g  Equation (1)
In Equation (1),
PDin2 is the pixel value of the second thermal image Din2,
PDg is the pixel value of the skeleton image Dg,
g is the gain applied to the skeleton image Dg, and
PDout is the pixel value of the corrected thermal image Dout.
In the image processing device of the first embodiment, a skeleton image with little noise and high contrast is generated from the first thermal images of a plurality of frames and stored as a background image, and the stored skeleton image is combined with the second thermal image; a thermal image with a high SN ratio and high temporal resolution can therefore be generated.
FIG. 4 is a functional block diagram of the image processing device 2b according to the second embodiment of the present invention.
The image processing device 2b shown in FIG. 4 is generally the same as the image processing device 2 of FIG. 2, but includes a background image generation unit 21b and an image correction unit 22b instead of the background image generation unit 21 and the image correction unit 22.
The background image generation unit 21b is generally the same as the background image generation unit 21, but further includes a threshold generation unit 217.
The image correction unit 22b is generally the same as the image correction unit 22, but further includes a weight determination unit 222.
What relation the threshold Th bears to the above average or median value is determined based on experience or experiment (simulation).
The threshold generation unit 217 transmits the generated threshold Th to the storage device 3 to be stored.
Examples of the weight table created by the weight determination unit 222 based on the threshold Th are shown in FIGS. 5(a) and 5(b).
In the examples shown in FIGS. 5(c) and 5(d), the composite weight w is 1 when the pixel value PDin2 of the second thermal image Din2 is 0, and gradually decreases as the pixel value PDin2 increases. Even with such a weight table, the addition rate of the skeleton image Dg can be reduced in the range where the pixel value PDin2 is large.
FIG. 6 is a functional block diagram of the image processing device 2c according to the third embodiment of the present invention.
The image processing device 2c shown in FIG. 6 is generally the same as the image processing device 2 of FIG. 2, but includes a background image generation unit 21c and an image correction unit 22c instead of the background image generation unit 21 and the image correction unit 22.
The background image generation unit 21c is generally the same as the background image generation unit 21 of FIG. 2, but does not include the skeleton component extraction unit 216 of FIG. 2, and stores the sharpened image Df output from the sharpening unit 215 in the storage device 3 as the background image.
That is, in the image processing device 2c shown in FIG. 6, the extraction of the skeleton component is performed not in the background image generation unit but in the image correction unit.
The skeleton component extraction unit 223 reads out the sharpened image Df stored in the storage device 3 and extracts the skeleton component to generate the skeleton image Dg.
The superimposition unit 221 corrects the second thermal image Din2 by superimposing the skeleton image Dg on it, generating the corrected thermal image Dout.
FIG. 7 is a functional block diagram of the image processing device 2d according to the fourth embodiment of the present invention.
The image processing device 2d shown in FIG. 7 is generally the same as the image processing device 2b of FIG. 4, but includes a background image generation unit 21d and an image correction unit 22d instead of the background image generation unit 21b and the image correction unit 22b.
The background image generation unit 21d is generally the same as the background image generation unit 21b, but includes a threshold generation unit 217d instead of the threshold generation unit 217.
The high temperature threshold Tu is obtained by adding the difference threshold FDt to the average or median of the pixel values of the average image De.
The low temperature threshold Tl is obtained by subtracting the difference threshold FDt from the average or median of the pixel values of the average image De.
The corrected thermal image Dout in this case is a color image colored according to the temperature of each part.
The image correction unit 22d has a weight determination unit 222, a coloring unit 224, and a superimposition unit 221d.
When the weight table shown in FIG. 5(a) or 5(b) is created, the threshold Th must be used. As described above, the threshold Th may be the same as the high temperature threshold Tu. In that case, the high temperature threshold Tu stored in the storage device 3 can be read out and used as the threshold Th for creating the weight table.
In short, any weight table may be used as long as the composite weight w becomes smaller as the pixel value of the second thermal image Din2 becomes larger.
In the example shown in FIG. 8, the pixels constituting the light emitting portion 101 of a street light, an automobile 103, and a person 105 are determined to belong to the high temperature region, the pixels constituting a road marking 107 are determined to belong to the intermediate temperature region, and the pixels constituting the pillar 109 of the street light are determined to belong to the low temperature region.
At the boundaries between the high temperature region, the intermediate temperature region, and the low temperature region, it is desirable to perform the above color assignment to the regions and the pixel-value-dependent color assignment so that the color change is continuous.
The color image Dh is represented by R, G, and B signals, that is, signals of three channels, whereas the skeleton image Dg is represented by a signal of one gray channel.
The skeleton image Dg is added to the luminance component Dhy of the color image Dh.
PDjy = PDhy + PDg × g × w  Equation (2)
In Equation (2),
PDhy is the value of the luminance component Dhy of the color image Dh,
PDg is the pixel value of the skeleton image Dg,
g is the gain applied to the skeleton image Dg,
w is the composite weight, and
PDjy is the value of the luminance component Djy obtained as a result of the addition.
The addition in this case is expressed by Equations (3a) to (3c) below.
PRout = PRin + PDg × g × w  Equation (3a)
PGout = PGin + PDg × g × w  Equation (3b)
PBout = PBin + PDg × g × w  Equation (3c)
PRin is the value of the R channel signal Rin (red component) of the color image Dh,
PGin is the value of the G channel signal Gin (green component) of the color image Dh,
PBin is the value of the B channel signal Bin (blue component) of the color image Dh,
PDg is the pixel value of the skeleton image Dg,
g is the gain applied to the skeleton image Dg,
w is the composite weight,
PRout is the value of the R channel signal Rout (red component) obtained as a result of the addition,
PGout is the value of the G channel signal Gout (green component) obtained as a result of the addition, and
PBout is the value of the B channel signal Bout (blue component) obtained as a result of the addition.
The fourth embodiment also provides the same effects as the first embodiment.
By dividing the thermal image into the high temperature region, the intermediate temperature region, and the low temperature region and then coloring each region, the occurrence of such a situation can be prevented.
FIG. 10 is a functional block diagram of the image processing device 2e according to the fifth embodiment.
The image processing device 2e shown in FIG. 10 is generally the same as the image processing device 2 of FIG. 2, but includes a background image generation unit 21e instead of the background image generation unit 21.
As the feature amount Qe, the average value of the pixel values of each frame, the median of the pixel values, the maximum pixel value, or the minimum pixel value is calculated.
The temperature sorting unit 211e also outputs information IDd identifying the intermediate image Dd.
When there is no first thermal image whose feature amount is larger than that of the intermediate image Dd and whose difference (absolute value) in feature amount is equal to or larger than the difference threshold FDt, the first thermal image with the largest feature amount is identified as the high temperature boundary frame Fu.
When there is no first thermal image whose feature amount is smaller than that of the intermediate image Dd and whose difference (absolute value) in feature amount is equal to or larger than the difference threshold FDt, the first thermal image with the smallest feature amount is identified as the low temperature boundary frame Fl.
The features of each embodiment can also be combined with the features of the other embodiments.
For example, although the second embodiment has been described as a modification of the first embodiment, the same modification can also be applied to the third embodiment.
Likewise, although the fifth embodiment has been described as a modification of the first embodiment, the same modification can also be applied to the second to fourth embodiments.
For example, the functions of the respective parts of the image processing device may be realized by separate processing circuits, or the functions of a plurality of parts may be realized collectively by a single processing circuit.
The processing circuit may be composed of dedicated hardware, or of software, that is, a programmed computer.
Of the functions of the respective parts of the image processing device, a part may be realized by dedicated hardware and a part by software.
The memory 320 or the storage device 3 stores programs for realizing the functions of the respective parts of the image processing device.
The functions of the image processing device include, as described above, control of the display on the display terminal 4, writing of information to the storage device 3, and reading of information from the storage device 3.
The storage device 3 may also be a storage area on a server on the cloud.
The thermal image generation system including the image processing device may be applied to home appliances, in which case at least one of the image processing device and the storage device may be implemented in a HEMS (Home Energy Management System) controller.
Claims (16)
- An image processing apparatus comprising a background image generation unit and an image correction unit, wherein
the background image generation unit
identifies, as an intermediate image, the image located at the center when first thermal images of a plurality of frames acquired by a thermal image sensor by imaging in the same field of view, or sorted images of a plurality of frames generated from the first thermal images, are arranged in order of brightness,
calculates, for each of the first thermal images or the sorted images, a feature amount serving as an index of brightness,
generates an average image by averaging, in a frame direction, the first thermal images or sorted images of the plurality of frames whose difference in the feature amount from the intermediate image is smaller than a predetermined difference threshold, and
stores, in a storage device, a skeleton image obtained by sharpening the average image and then extracting a skeleton component; and
the image correction unit
generates a corrected thermal image by correcting a second thermal image, acquired by the thermal image sensor by imaging in the same field of view as the first thermal images, using the skeleton image stored in the storage device.
- An image processing apparatus comprising a background image generation unit and an image correction unit, wherein
the background image generation unit
identifies, as an intermediate image, the image located at the center when first thermal images of a plurality of frames acquired by a thermal image sensor by imaging in the same field of view, or sorted images of a plurality of frames generated from the first thermal images, are arranged in order of brightness,
calculates, for each of the first thermal images or the sorted images, a feature amount serving as an index of brightness,
generates an average image by averaging, in a frame direction, the first thermal images or sorted images of the plurality of frames whose difference in the feature amount from the intermediate image is smaller than a predetermined difference threshold, and
stores, in a storage device, a sharpened image obtained by sharpening the average image; and
the image correction unit
generates a corrected thermal image by correcting a second thermal image, acquired by the thermal image sensor by imaging in the same field of view as the first thermal images, using a skeleton image obtained by extracting a skeleton component from the sharpened image stored in the storage device.
- The image processing apparatus according to claim 1 or 2, wherein the background image generation unit
identifies the rank, in order of pixel value magnitude, of the pixels at the same position in the first thermal images of the plurality of frames,
generates, as the sorted images, images of a plurality of frames each composed of a set of pixels of the same rank,
identifies, among the sorted images, the sorted image composed of the set of pixels of the intermediate rank as the intermediate image, and
calculates the feature amount for each of the sorted images of the plurality of frames.
- The image processing apparatus according to claim 1 or 2, wherein the background image generation unit
calculates the feature amount for each of the first thermal images of the plurality of frames, and
identifies, as the intermediate image, the first thermal image located at the center when the first thermal images of the plurality of frames are arranged in order of the magnitude of the feature amount.
- The image processing apparatus according to any one of claims 1 to 4, wherein the image correction unit generates the corrected thermal image by weighted addition of the second thermal image and the skeleton image.
- The image processing apparatus according to claim 5, wherein, when correcting the second thermal image using the skeleton image, the image correction unit performs the weighted addition such that, when a pixel value of the second thermal image is equal to or larger than a weight determination threshold, the weight applied to the skeleton image is made smaller as the pixel value of the second thermal image is larger.
- The image processing apparatus according to any one of claims 1 to 4, wherein the image correction unit generates a color image by coloring the second thermal image, and generates the corrected thermal image by correcting the color image using the skeleton image.
- The image processing apparatus according to claim 7, wherein the image correction unit
converts the color image into a luminance component image and a color component image,
generates a corrected luminance component image by weighted addition of the luminance component image and the skeleton image, and
generates the corrected thermal image by converting the corrected luminance component image and the color component image into a color image.
- The image processing apparatus according to claim 7 or 8, wherein the image correction unit generates the color image by dividing the second thermal image into a high temperature region, an intermediate temperature region, and a low temperature region using a high temperature threshold and a low temperature threshold for image division, coloring each region, and combining the colored regions.
- The image processing apparatus according to claim 9, wherein the image correction unit
assigns mutually different ranges of colors to the high temperature region, the intermediate temperature region, and the low temperature region,
in each region, assigns to each pixel a color, from the range assigned to that region, according to the pixel value, and
at the boundaries between the high temperature region, the intermediate temperature region, and the low temperature region, performs the region-wise assignment and the pixel-value-dependent assignment so that the color change is continuous.
- The image processing apparatus according to claim 9 or 10, wherein the background image generation unit
uses, as the high temperature threshold, a value obtained by adding the difference threshold to the average or median of the pixel values of the average image, and
uses, as the low temperature threshold, a value obtained by subtracting the difference threshold from the average or median of the pixel values of the average image.
- The image processing apparatus according to any one of claims 1 to 11, implemented in a processing circuit attached to the thermal image sensor.
- The image processing apparatus according to any one of claims 1 to 11, implemented on a cloud server connectable to the thermal image sensor via a communication network.
- A thermal image generation system comprising the image processing apparatus according to any one of claims 1 to 13, the thermal image sensor, and the storage device.
- A program for causing a computer to execute the processing in the image processing apparatus according to any one of claims 1 to 13.
- A computer-readable recording medium on which the program according to claim 15 is recorded.
Priority Applications (4)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2019/009680 WO2020183565A1 (ja) | 2019-03-11 | 2019-03-11 | 画像処理装置及び熱画像生成システム、並びにプログラム及び記録媒体 |
US17/435,396 US11861847B2 (en) | 2019-03-11 | 2019-03-11 | Image processing device, thermal image generation system, and recording medium |
SE2151025A SE2151025A2 (en) | 2019-03-11 | 2019-03-11 | Image processing device, thermal image generation system, program, and recording medium |
JP2021504639A JP7076627B2 (ja) | 2019-03-11 | 2019-03-11 | 画像処理装置及び熱画像生成システム、並びにプログラム及び記録媒体 |
Applications Claiming Priority (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
PCT/JP2019/009680 WO2020183565A1 (ja) | 2019-03-11 | 2019-03-11 | 画像処理装置及び熱画像生成システム、並びにプログラム及び記録媒体 |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2020183565A1 true WO2020183565A1 (ja) | 2020-09-17 |
Family
ID=72426176
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/JP2019/009680 WO2020183565A1 (ja) | 2019-03-11 | 2019-03-11 | 画像処理装置及び熱画像生成システム、並びにプログラム及び記録媒体 |
Country Status (4)
Country | Link |
---|---|
US (1) | US11861847B2 (ja) |
JP (1) | JP7076627B2 (ja) |
SE (1) | SE2151025A2 (ja) |
WO (1) | WO2020183565A1 (ja) |
Cited By (1)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023089723A1 (ja) * | 2021-11-18 | 2023-05-25 | 三菱電機株式会社 | 検温システム、検温装置、検温方法及びプログラム |
Citations (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JPH04192896A (ja) * | 1990-11-27 | 1992-07-13 | Mitsubishi Electric Corp | 赤外線撮像装置 |
JP2012000311A (ja) * | 2010-06-18 | 2012-01-05 | Hoya Corp | 動画像強調処理システムおよび方法 |
JP2013229706A (ja) * | 2012-04-25 | 2013-11-07 | Sony Corp | 画像取得装置、画像取得方法、および画像取得プログラム |
JP2018129672A (ja) * | 2017-02-08 | 2018-08-16 | パナソニックIpマネジメント株式会社 | 動体監視装置および動体監視システム |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2004320077A (ja) | 2002-12-03 | 2004-11-11 | Make Softwear:Kk | 写真プリント提供装置および方法ならびにプリント用紙ユニット |
JP4192896B2 (ja) | 2005-01-27 | 2008-12-10 | トヨタ自動車株式会社 | ハイブリッド車両 |
JP5446847B2 (ja) | 2009-12-25 | 2014-03-19 | カシオ計算機株式会社 | 画像処理装置及び方法、並びにプログラム |
US10129490B2 (en) * | 2015-04-05 | 2018-11-13 | Hema Imaging Llc | Systems and approaches for thermal image corrections |
WO2017120384A1 (en) * | 2016-01-08 | 2017-07-13 | Flir Systems, Inc. | Thermal-image based object detection and heat map generation systems and methods |
CN111445487B (zh) * | 2020-03-26 | 2023-06-02 | 深圳数联天下智能科技有限公司 | 图像分割方法、装置、计算机设备和存储介质 |
CN112379231B (zh) * | 2020-11-12 | 2022-06-03 | 国网浙江省电力有限公司信息通信分公司 | 一种基于多光谱图像的设备检测方法及装置 |
-
2019
- 2019-03-11 JP JP2021504639A patent/JP7076627B2/ja active Active
- 2019-03-11 WO PCT/JP2019/009680 patent/WO2020183565A1/ja active Application Filing
- 2019-03-11 SE SE2151025A patent/SE2151025A2/en unknown
- 2019-03-11 US US17/435,396 patent/US11861847B2/en active Active
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2023089723A1 (ja) * | 2021-11-18 | 2023-05-25 | 三菱電機株式会社 | 検温システム、検温装置、検温方法及びプログラム |
JP7307296B1 (ja) | 2021-11-18 | 2023-07-11 | 三菱電機株式会社 | 検温システム、検温装置、検温方法及びプログラム |
Also Published As
Publication number | Publication date |
---|---|
JPWO2020183565A1 (ja) | 2021-10-28 |
SE2151025A1 (en) | 2021-08-27 |
US20220148192A1 (en) | 2022-05-12 |
SE2151025A2 (en) | 2023-04-18 |
SE544734C2 (en) | 2022-10-25 |
US11861847B2 (en) | 2024-01-02 |
JP7076627B2 (ja) | 2022-05-27 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
US10530995B2 (en) | Global tone mapping | |
US9852499B2 (en) | Automatic selection of optimum algorithms for high dynamic range image processing based on scene classification | |
JP4208909B2 (ja) | Image processing device and imaging device | |
US9996913B2 (en) | Contrast based image fusion | |
EP2624204B1 (en) | Image processing apparatus and method of controlling the same | |
JP6351903B1 (ja) | Image processing apparatus, image processing method, and imaging apparatus | |
CN107680056B (zh) | Image processing method and apparatus | |
CN101690169B (zh) | Nonlinear tone mapping device and method | |
US20070183657A1 (en) | Color-image reproduction apparatus | |
TW448411B (en) | Global white point detection and white balance for color images | |
JP5648849B2 (ja) | Image processing apparatus and image processing method | |
JP7076627B2 (ja) | Image processing apparatus, thermal image generation system, program, and recording medium | |
CN114841904A (zh) | Image fusion method, electronic device, and storage device | |
KR101349968B1 (ko) | Image processing apparatus and method for automatic image correction | |
JP2005025448A (ja) | Signal processing device, signal processing program, and electronic camera | |
JP4359662B2 (ja) | Exposure correction method for color images | |
KR101180409B1 (ko) | Method and apparatus for enhancing low-illuminance images by combining histogram normalization and gamma correction | |
KR20160001582A (ko) | Image processing apparatus and image processing method | |
JP6276564B2 (ja) | Image processing apparatus and image processing method | |
JP5050141B2 (ja) | Exposure evaluation method for color images | |
Kang et al. | Bayer patterned high dynamic range image reconstruction using adaptive weighting function | |
Lee et al. | Complex adaptation-based LDR image rendering for 3D image reconstruction | |
JP2019040382A (ja) | Image processing apparatus | |
Dehesa et al. | Chromatic Improvement of Backgrounds Images Captured with Environmental Pollution Using Retinex Model. | |
JPWO2020017638A1 (ja) | Image generation device and imaging device |
Legal Events
Code | Title | Description |
---|---|---|
121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 19919503; Country of ref document: EP; Kind code of ref document: A1 |
ENP | Entry into the national phase | Ref document number: 2021504639; Country of ref document: JP; Kind code of ref document: A |
WWE | Wipo information: entry into national phase | Ref document number: 2151025-0; Country of ref document: SE |
NENP | Non-entry into the national phase | Ref country code: DE |
122 | Ep: pct application non-entry in european phase | Ref document number: 19919503; Country of ref document: EP; Kind code of ref document: A1 |