WO2023053769A1 - Image processing device, imaging device, camera system, image processing method, and program - Google Patents


Info

Publication number
WO2023053769A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
image processing
evaluation value
regions
brightness
Prior art date
Application number
PCT/JP2022/031320
Other languages
French (fr)
Japanese (ja)
Inventor
Tetsuya Fujikawa
Tomohiro Shimada
Shinichi Shimotsu
Tomoyuki Kawai
Original Assignee
FUJIFILM Corporation
Priority date
Filing date
Publication date
Application filed by FUJIFILM Corporation
Publication of WO2023053769A1 publication Critical patent/WO2023053769A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/17 Systems in which incident light is modified in accordance with the properties of the material investigated
    • G01N21/25 Colour; Spectral properties, i.e. comparison of effect of material on the light at two or more different wavelengths or wavelength bands
    • G01N21/31 Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry
    • G01N21/35 Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light
    • G01N21/3554 Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light for determining moisture content
    • G01N21/359 Investigating relative effect of material at wavelengths characteristic of specific elements or molecules, e.g. atomic absorption spectrometry using infrared light using near infrared light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination

Definitions

  • the present invention relates to an image processing device, an imaging device, a camera system, an image processing method, and a program, and more particularly to an image processing device, an imaging device, a camera system, an image processing method, and a program that identify defective areas.
  • Patent Document 1 describes a technique for detecting water leaks by comparing the intensity distribution of infrared light reflected from an inspection site when there is no water leakage with the intensity distribution of infrared light reflected from the inspection site when there is water leakage, both received by a light receiving means.
  • An embodiment according to the technology of the present disclosure provides an image processing device, an imaging device, a camera system, an image processing method, and a program that can accurately identify a defective area.
  • An image processing apparatus is an image processing apparatus including a processor for identifying a defective portion of a structure, wherein the processor calculates, for at least two first images of the structure captured at a first wavelength having a water absorption band and acquired at different times, a first evaluation value regarding changes in brightness in a plurality of regions in the images, and identifies, based on the first evaluation value, a defective region including the defective portion among the plurality of regions.
  • the first evaluation value is the brightness change rate in the plurality of areas.
  • At least one of the brightness change rates in the plurality of regions changes in an increasing direction over time.
  • the brightness change rate is a weighted average brightness change rate calculated by weighted averaging the brightness change rates in a plurality of regions.
  • the processor calculates an image average brightness change rate by averaging the brightness change rates over the first image, and identifies, as defective regions, regions whose weighted average brightness change rate remains lower than the image average brightness change rate by a first threshold or more for a first period.
  • the processor changes the first threshold over time.
  • the processor ends the identification of the defective portion when all the first evaluation values continue to be less than a second threshold.
  • the processor calculates, in at least two second images of the structure captured at a second wavelength different from the first wavelength and corresponding to the first images, a second evaluation value regarding changes in brightness in a plurality of regions corresponding to those of the first images, and identifies the defective region based on the first evaluation value and the second evaluation value.
  • the second image is acquired under the same shooting conditions as the corresponding first image, and the second image is taken so that the brightness is lower than that of the first image.
  • the processor divides the first image into a plurality of regions, and changes the division number and division size of the plurality of regions.
  • the processor identifies the defect area based on the first evaluation values of the plurality of areas.
  • An imaging device includes the image processing device described above.
  • a camera system includes the image processing device described above.
  • a light projector for irradiating the structure with light is provided.
  • An image processing method, which is another aspect of the present invention, is an image processing method for an image processing apparatus having a processor for identifying a defective portion of a structure, the method comprising: a step of calculating a first evaluation value regarding changes in brightness in a plurality of regions in at least two first images of the structure, captured at a first wavelength having a water absorption band and acquired at different times; and a step of identifying, based on the first evaluation value, a defective region including the defective portion among the plurality of regions.
  • A program, which is another aspect of the present invention, causes an image processing device having a processor to execute an image processing method for identifying a defective portion of a structure, the method comprising: a step of calculating a first evaluation value regarding changes in brightness in a plurality of regions in at least two first images of the structure, captured at a first wavelength having a water absorption band and acquired at different times; and a step of identifying, based on the first evaluation value, a defective region including the defective portion among the plurality of regions.
  • FIG. 1 is a block diagram showing an embodiment of the internal configuration of an imaging device.
  • FIG. 2 is a diagram showing main functional blocks of the defect identification processing unit.
  • FIG. 3 is a diagram illustrating calculation of the first evaluation value.
  • FIG. 4 is a diagram for explaining identification of a defective area performed by the defective area identification unit.
  • FIG. 5 is a diagram showing the reflectance relationship between the defective area and the normal area.
  • FIG. 6 is a flow chart showing an image processing method using the defect identification processor.
  • FIG. 7 is a diagram conceptually showing the first image and the second image.
  • FIG. 8 is a diagram showing temporal changes in the reflectance of one region of the first image and the change rate of the reflectance.
  • FIG. 9 is a flow chart showing an imaging method.
  • FIG. 10 is a diagram illustrating division of the first image into a plurality of regions.
  • FIG. 11 is a diagram illustrating division of the first image into a plurality of regions.
  • FIG. 12 is a diagram showing, as an example of the first threshold that fluctuates, the first threshold that fluctuates according to the rate of change.
  • FIG. 13 is a diagram for explaining calculation of the rate of change.
  • FIG. 14 is a flowchart when correction processing is performed.
  • FIG. 15 is a diagram for explaining correction processing.
  • FIG. 16 is a flow chart for starting use of the projector.
  • FIG. 17 is a diagram explaining the timing when the projector is used.
  • FIG. 18 is a diagram for explaining a case in which brightness changes within an image.
  • FIG. 19 is a flow chart for starting use of the projector.
  • the surface of the object to be investigated is not necessarily a smooth surface with a uniform reflectance.
  • If the surface of the wall being investigated has unevenness, if the light strikes the wall unevenly, or if the wall is made of different materials, the brightness (or reflectance) may vary between regions even though there is no defect. In such cases, it is difficult to accurately identify the defect location based on changes in brightness alone.
  • the present invention proposes a technique that can accurately identify a defective location based on changes in brightness.
  • FIG. 1 is a block diagram showing an embodiment of an internal configuration of an imaging device equipped with an image processing device (defect identification processing section) of the present invention. Note that the image processing device functions as the defect identification processing unit 30 within the imaging device.
  • the imaging device 10 captures an image of a structure to be investigated as a subject. Then, by processing the obtained image with the defect identification processing unit 30, a defect area including the defect location is specified.
  • the defective location in the present disclosure includes a location where water leakage (or rain leakage) occurs and a failure location that may cause water leakage (or rain leakage).
  • the imaging device 10 is centrally controlled by a CPU (Central Processing Unit; processor) 40.
  • the imaging device 10 is provided with an operation unit 38 such as a power switch.
  • a signal from the operation unit 38 is input to the CPU 40, and the CPU 40 controls each circuit of the imaging device 10 based on the input signal.
  • Examples of this control include drive control of the mechanical shutter 15, drive control of the diaphragm 14 by the diaphragm driving unit 34, drive control of the imaging lens 12 by the lens driving unit 36, imaging operation control, image processing control, and image data recording/playback control.
  • the external device connection terminal 20 also includes a terminal for wireless connection.
  • the external device connection terminal 20 includes terminals for wireless LAN (Local Area Network), Bluetooth (registered trademark), UWB (Ultra Wide Band), and the like.
  • the luminous flux that has passed through the imaging lens 12, the diaphragm 14, the mechanical shutter 15, and the like forms an image on the imaging element 16, which is a known image sensor.
  • the imaging element 16 can receive a first wavelength having an absorption band of water to obtain a first image.
  • the first wavelength consists of infrared wavelength bands including wavelengths of 1450 nm, 1940 nm, and 2900 nm that are absorbed by water.
  • the imaging element 16 can receive a second wavelength, which will be described later, to obtain a second image.
  • a band-pass filter (not shown) that transmits the first wavelength is inserted into the optical path of the imaging device 10 when the first image is acquired, and a band-pass filter (not shown) that transmits the second wavelength is inserted into the optical path when the second image is acquired.
  • the imaging element 16 has a large number of light receiving elements (photodiodes) arranged two-dimensionally, and the subject image formed on the light receiving surface is converted by each photodiode into a signal charge corresponding to the amount of incident light. Each signal charge is converted into a voltage by an amplifier, converted into a digital signal by an A/D (Analog/Digital) converter in the imaging element 16, and output.
  • the image signal (image data) read from the imaging element 16 when shooting a moving image or still image is temporarily stored in the memory (SDRAM: Synchronous Dynamic Random Access Memory) 48 via the image input controller 22.
  • the SDRAM is an example, and any memory such as ReRAM (Resistive Random Access Memory), PCM (Phase-change memory), MRAM (Magnetoresistive Random Access Memory) can be selected as necessary.
  • the ROM 47 is a flash memory, a ROM (Read Only Memory), an EEPROM (Electrically Erasable Programmable Read-Only Memory), an MRAM (Magnetoresistive Random Access Memory), or the like.
  • the image processing unit 24 reads unprocessed image data (RAW data) acquired via the image input controller 22 when shooting a moving image or still image and temporarily stored in the memory 48 .
  • the image processing unit 24 performs offset processing, pixel interpolation processing (interpolation processing for phase difference detection pixels, defective pixels, etc.), white balance correction, gain control processing including sensitivity correction, gamma correction processing, demosaic processing, luminance and color difference signal generation processing, edge enhancement processing, color correction processing, and the like.
  • the image processed by the image processing unit 24 is encoded by the video encoder 28 and output to the display unit (not shown).
  • Image data processed by the image processing unit 24 and processed as still images or moving images for recording are stored in the memory 48 again.
  • the CPU 40 can also execute software processing using a program stored in the memory 48 (or ROM 47).
  • the imaging device 10 captures an image of an object to be investigated, and the image processed by the image processing unit 24 (a still image or a frame image constituting a moving image) is passed to the defect identification processing unit 30, where processing is performed to identify the defective area of the investigation target.
  • the imaging apparatus 10 described above is a specific example, and the present invention is not limited to this. Any imaging device capable of capturing the first image and/or the second image can be used with the present invention. A camera system may also be used instead of the imaging device described above.
  • FIG. 2 is a diagram showing main functional blocks of the defect identification processing unit 30. Note that each function of the defect identification processing unit 30 may be realized by the CPU 40 of the imaging device 10 executing a program, or by one or more CPUs (processors) (not shown) installed in the defect identification processing unit 30.
  • the defect identification processing unit 30 is composed of an image acquisition unit 101 , an evaluation value calculation unit 103 , and a defect area identification unit 105 .
  • the image acquisition unit 101 acquires at least two first images captured at different times. Further, when a temporal average of the brightness change rate (for example, the weighted average brightness change rate) is calculated, the image acquisition unit 101 acquires at least three first images captured at different times.
  • the first image is an image captured using a first wavelength having a water absorption band with a structure to be investigated as an object. Structures include constructions such as buildings, houses, bridges, etc., and are provided indoors or outdoors. Note that the number of first images acquired by the image acquisition unit 101 is appropriately set depending on the investigation target and the investigation environment.
  • the evaluation value calculation unit 103 calculates a first evaluation value regarding changes in brightness in a plurality of regions in the plurality of first images acquired by the image acquisition unit 101 .
  • FIG. 3 is a diagram explaining calculation of the first evaluation value in the evaluation value calculation unit 103.
  • the first image P1 is an image acquired by the imaging device 10 at time t1.
  • the first image P2 is an image captured by the imaging device 10 at time t2.
  • the first image P3 is an image captured by the imaging device 10 at time t3. Times t1, t2, and t3 are in chronological order.
  • the evaluation value calculation unit 103 divides the first image P1 into a plurality of areas (areas S1 to S9). Also, the evaluation value calculation unit 103 divides the first image P2 and the first image P3 into a plurality of regions (regions S1 to S9) so as to correspond to the first image P1.
  • the evaluation value calculation unit 103 calculates the brightness of the regions S1 to S9 of the first images P1 to P3.
  • the evaluation value calculation unit 103 uses luminance as one index of brightness. For example, the evaluation value calculation unit 103 calculates the brightness of region S1 by averaging the luminance within region S1 of the first image P1 (average luminance). Similarly, it calculates the average luminance of regions S2 to S9 of the first image P1, and of regions S1 to S9 of the first images P2 and P3.
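The region division and average-brightness calculation described above can be sketched in Python as follows. This is a minimal illustration, not the patent's implementation; the 3x3 grid, image size, and brightness values are invented for the example.

```python
import numpy as np

def region_means(image, rows=3, cols=3):
    """Divide an image into rows x cols regions (S1..S9 for a 3x3 grid)
    and return the average brightness of each region, row-major."""
    h, w = image.shape
    return [float(image[r * h // rows:(r + 1) * h // rows,
                        c * w // cols:(c + 1) * w // cols].mean())
            for r in range(rows) for c in range(cols)]

# Hypothetical 6x6 first image: brightness 100 everywhere except the
# top-left region S1, which is darker (residual water absorbs the light).
p1 = np.full((6, 6), 100.0)
p1[:2, :2] = 40.0
m = region_means(p1)   # m[0] is region S1, m[4] is region S5, ...
```

With a real first image, `image` would be the luminance plane of the captured frame.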
  • the evaluation value calculation unit 103 calculates a first evaluation value regarding changes in brightness in the regions S1 to S9 of the first images P1 to P3.
  • the evaluation value calculation unit 103 calculates, as the first evaluation value, the luminance change rate in the regions S1 to S9 of the first images P1 to P3.
  • For example, the evaluation value calculation unit 103 calculates the rate of change between the average luminance of region S1 in the first image P1 and the average luminance of region S1 in the first image P2, and the rate of change between the average luminance of region S1 in the first image P2 and the average luminance of region S1 in the first image P3, and then calculates their weighted average, the weighted average brightness change rate R1, as the first evaluation value. Similarly, the evaluation value calculation unit 103 calculates the weighted average brightness change rates R2 to R9 of regions S2 to S9 across the first images P1 to P3 as first evaluation values. Note that the weights used for calculating the weighted average brightness change rate are determined as appropriate.
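The frame-to-frame change rates and their weighted average can be sketched as follows. The relative-difference formula `(b - a) / a` and the equal default weights are assumptions; the patent only says the weights are determined as appropriate.

```python
def change_rates(means_over_time):
    """Frame-to-frame brightness change rates for one region, given its
    average luminance at times t1, t2, t3, ... (relative differences)."""
    return [(b - a) / a for a, b in zip(means_over_time, means_over_time[1:])]

def weighted_average(rates, weights=None):
    """Weighted average of the change rates; equal weights by default,
    since the weighting scheme is left unspecified."""
    if weights is None:
        weights = [1.0] * len(rates)
    return sum(r * w for r, w in zip(rates, weights)) / sum(weights)

# Region S1 brightens from 50 -> 60 -> 72 over t1..t3 as water evaporates.
rates_s1 = change_rates([50.0, 60.0, 72.0])   # two rates, both 0.2
r1 = weighted_average(rates_s1)               # weighted average rate R1
```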
  • the first image P1 to the first image P3 are, for example, images of the wall of the house captured by the imaging device 10.
  • the wall to be inspected is evenly sprinkled with water to investigate water leakage.
  • When the weather is fine, the water on the wall surface evaporates and decreases with the passage of time, and the reflectance of the wall surface gradually rises. Therefore, at least one of the weighted average brightness change rates R1 to R9 increases.
  • the defect area identifying unit 105 identifies the defective region including the defective portion among the plurality of regions. Specifically, the defective area identifying unit 105 identifies as a defective region any region whose weighted average brightness change rate (among R1 to R9) remains lower than the image average brightness change rate by the first threshold or more for the first period.
  • the image average brightness change rate is the rate obtained by averaging all the weighted average brightness change rates of regions S1 to S9 across the first images P1 to P3. Note that the first threshold and the first period are set as appropriate according to the investigation target and the investigation environment.
  • FIG. 4 is a diagram for explaining identification of a defective area performed by the defective area identification unit 105.
  • In the case shown in FIG. 4, water leakage occurs due to a crack C in region S5. Note that a crack is an example of a defective portion.
  • FIG. 4 shows a first image P1 captured at time t1, a first image P2 captured at time t2, and a first image P3 captured at time t3 acquired by the image acquisition unit 101.
  • the evaluation value calculation unit 103 calculates the weighted average brightness change rate R1 to the weighted average brightness change rate R9 of the regions S1 to S9 in the first image P1 to the first image P3 as the first evaluation values.
  • the defect area identifying unit 105 acquires the weighted average brightness change rate R1 to the weighted average brightness change rate R9 (see symbol H). In addition, the defect area specifying unit 105 acquires an image average brightness change rate R(ALL) by averaging the weighted average brightness change rate R1 to the weighted average brightness change rate R9.
  • the defective area identifying unit 105 detects regions whose weighted average brightness change rates R1 to R9 remain lower than the image average brightness change rate R(ALL) by the first threshold or more for the first period. Here, since the crack C exists in region S5 and water leakage occurs there, the weighted average brightness change rate R5 continues to be lower than the image average brightness change rate R(ALL) by the first threshold or more during the first period. Therefore, the defective area identifying unit 105 identifies region S5 as a defective region having a defective portion.
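The identification rule just described can be sketched as follows: a region is flagged when its weighted average brightness change rate stays below the image average R(ALL) by the first threshold or more for the first period, counted here as consecutive snapshots. The data, threshold, and period values are invented for illustration.

```python
def defective_regions(rate_history, threshold, period):
    """rate_history: one list per time step of per-region weighted average
    brightness change rates. A region is reported when its rate stays lower
    than the image average R(ALL) by `threshold` or more for `period`
    consecutive steps."""
    n = len(rate_history[0])
    streak = [0] * n
    defective = set()
    for rates in rate_history:
        r_all = sum(rates) / len(rates)
        for i, r in enumerate(rates):
            streak[i] = streak[i] + 1 if r <= r_all - threshold else 0
            if streak[i] >= period:
                defective.add(i)
    return defective

# Nine regions; index 4 (region S5) keeps a much lower change rate because
# leaked water remains on its surface.
history = [[0.20] * 9, [0.21] * 9, [0.22] * 9]
for step in history:
    step[4] = 0.02
found = defective_regions(history, threshold=0.1, period=3)
```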
  • FIG. 5 is a diagram showing the reflectance relationship between the defective area and the normal area.
  • the vertical axis indicates reflectance
  • the horizontal axis indicates time. Also shown are times t1 to t3 when the first images P1 to P3 shown in FIG. 4 were captured.
  • a line L1 indicates the reflectance in region S5 explained in FIG. 4, and a line L2 indicates the reflectance in regions S1 to S4 and regions S6 to S9 explained in FIG. 4.
  • on line L1, water seeps into the surface of the investigation target from the crack C and continues to remain on the surface, so the reflectance rises only gradually as time elapses.
  • on line L2, the water on the surface of the investigation target evaporates with the passage of time, and the reflectance rises sharply. When the surface under investigation becomes dry, the reflectance remains constant at a high value. As described above, the reflectance of region S5, which has the defective portion, increases gradually with the passage of time, while the reflectance of the normal regions S1 to S4 and S6 to S9 increases sharply.
  • Therefore, the weighted average brightness change rate R5 of the cracked region S5 is smaller than the weighted average brightness change rates R1 to R4 and R6 to R9 of the other regions. Furthermore, the weighted average brightness change rate R5 is lower than the image average brightness change rate R(ALL) by the first threshold or more.
  • FIG. 6 is a flow chart showing an image processing method using the defect identification processing unit 30 mounted on the imaging device 10.
  • each step is executed by the CPU 40, or by a CPU (not shown) installed in the defect identification processing unit 30, executing a program. The following description follows the specific example described with reference to FIG. 4.
  • the image acquisition unit 101 acquires the first images P1 to P3 of the first wavelength having the absorption band of water (step S10). Note that the first image P1 to first image P3 are acquired at time t1 to time t3, respectively.
  • the evaluation value calculation unit 103 divides the first images P1 to P3 into regions S1 to S9 and calculates the weighted average brightness change rates R1 to R9 of each region (first evaluation value calculation step: step S11).
  • the defective area identifying unit 105 determines whether there is a region in which the weighted average brightness change rate remains lower than the image average brightness change rate by the first threshold (simply called the threshold in the drawing) or more for the first period (defective region identification step: step S12).
  • the weighted average brightness change rate R5 of the region S5 is lower than the image average brightness change rate by the first threshold or more, and this state continues for the first period. Therefore, the defect area identifying unit 105 identifies the area S5 as a defect area (step S13).
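The steps S10 to S13 above can be composed into one toy pipeline. This sketch uses a simple per-region mean change rate and omits the persistence check over the first period for brevity; the synthetic images, 3x3 grid, and threshold value are assumptions, not the patent's implementation.

```python
import numpy as np

def split9(img):
    """3x3 grid of region mean brightness (regions S1..S9, row-major)."""
    h, w = img.shape
    return [float(img[r * h // 3:(r + 1) * h // 3,
                      c * w // 3:(c + 1) * w // 3].mean())
            for r in range(3) for c in range(3)]

def identify_defect(images, threshold):
    # Step S11: per-region mean brightness change rate across the sequence.
    means = [split9(img) for img in images]
    rates = [float(np.mean([(means[t + 1][i] - means[t][i]) / means[t][i]
                            for t in range(len(images) - 1)]))
             for i in range(9)]
    # Steps S12-S13: report regions whose rate is below the image average
    # change rate by `threshold` or more.
    r_all = sum(rates) / len(rates)
    return [i for i, r in enumerate(rates) if r <= r_all - threshold]

# Synthetic P1..P3: every region dries (brightens) except S5 (index 4),
# which stays nearly constant because water keeps seeping from the crack.
imgs = []
for t, gain in enumerate([1.0, 1.5, 2.0]):
    img = np.full((6, 6), 50.0) * gain
    img[2:4, 2:4] = 50.0 + t
    imgs.append(img)
defects = identify_defect(imgs, threshold=0.2)
```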
  • the defect identification processing unit 30 installed in the imaging device 10 calculates, in at least two first images acquired at different times, a first evaluation value regarding changes in brightness in a plurality of regions within the images, and identifies, based on the first evaluation value, the defective region including the defective portion among the plurality of regions. Therefore, the defective region can be accurately identified.
  • the second image is shot separately from the first image, and the exposure amount is adjusted for each shot. Even though the first images are captured at different times, this allows all the first images to be captured with uniform brightness.
  • when the rate of change in all regions of the first image continues to be less than a set second threshold, it is determined that the investigation target is dry, and the process of identifying the defective portion is terminated. As a result, when there is no defective region, the process can be ended automatically and an efficient investigation can be performed.
  • the second image is an image of the structure under investigation at a second wavelength.
  • the second wavelength is a wavelength different from the absorption band of the defect site under investigation.
  • the second image is preferably captured at a wavelength whose absorbance for the water leak to be detected is lower than that of the first wavelength.
  • FIG. 7 is a diagram conceptually showing the first image and the second image captured by the imaging device 10.
  • Although FIG. 7 schematically shows the case where the first image and the second image are acquired at time t1 and time t2, more first images and second images may be acquired.
  • at time t1, the imaging device 10 captures the second image Q1 and the first image P1. At time t2, the imaging device 10 captures the second image Q2 and the first image P2.
  • the first image P1 is captured immediately after the second image Q1 is captured or substantially immediately from the viewpoint of exposure adjustment. Then, the second image Q1 and the first image P1 are shot under the same shooting conditions.
  • the first image P2 is shot immediately after the second image Q2 is shot, or substantially immediately from the viewpoint of exposure adjustment. Then, the second image Q2 and the first image P2 are shot under the same shooting conditions. Note that it is preferable to adjust the exposure between the second image Q1 and the second image Q2 by adjusting the gain of the diaphragm 14 and the image sensor 16 while keeping the exposure time constant. As a result, it is possible to suppress the influence of variation in the sensitivity of the imaging element 16 due to the exposure time.
  • the second image Q1 and the second image Q2 are shot with uniform brightness after exposure adjustment. Also, the second image Q1 and the first image P1 are shot under the same shooting conditions, and the second image Q2 and the first image P2 are shot under the same shooting conditions. Therefore, since the first image P1 and the first image P2 are shot with the same brightness, the first evaluation value can be obtained with high accuracy, and the defect area can be specified more accurately.
  • the defective area identifying unit 105 determines the dry state based on the first evaluation value of each region, and ends the process of identifying the defective region when it determines that the structure to be inspected is dry. Specifically, if all the first evaluation values continue to be less than the second threshold, the defect area identification unit 105 terminates the defective region detection processing performed by the defect identification processing unit 30.
  • FIG. 8 is a diagram showing temporal changes in the reflectance of one region of the first image and the change rate of the reflectance.
  • the object of investigation is a structure installed outdoors, and the water on the surface of the structure gradually evaporates over time.
  • a line L11 indicates the reflectance
  • a line L12 indicates the rate of change.
  • the water on the surface of the structure evaporates over time and the reflectance increases.
  • the amount of water on the surface of the structure decreases due to evaporation, and the increase in reflectance also slows down.
  • when the rate of change continues to be less than the second threshold, the defective area identifying unit 105 determines that the region is dry.
  • the defect identification processing unit 30 terminates the defect identification processing when it determines that all of the plurality of regions of the first image are dry. As a result, when all regions are normal, it can be automatically determined that there is no defect, and an efficient investigation can be performed.
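This termination condition can be sketched as follows: processing stops once every region's change rate has stayed below the second threshold for a required number of consecutive snapshots. The snapshot data, the 0.05 threshold, and the two-snapshot persistence requirement are invented for the example.

```python
def is_dry(rate_snapshots, second_threshold, required_consecutive):
    """Return True when every region's change rate has stayed below the
    second threshold for `required_consecutive` consecutive snapshots,
    i.e. the whole surface is judged dry and processing can stop."""
    consecutive = 0
    for rates in rate_snapshots:
        if all(r < second_threshold for r in rates):
            consecutive += 1
            if consecutive >= required_consecutive:
                return True
        else:
            consecutive = 0
    return False

# Change rates fall as the sprinkled water evaporates; from the third
# snapshot on, every region stays below the assumed second threshold 0.05.
snapshots = [[0.30] * 9, [0.10] * 9, [0.04] * 9, [0.03] * 9]
done = is_dry(snapshots, second_threshold=0.05, required_consecutive=2)
```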
  • FIG. 9 is a flowchart showing an imaging method using the imaging device 10 of this embodiment.
  • each step is executed by the CPU 40, or by a CPU (not shown) installed in the defect identification processing section 30, running a program. Further, the following description follows the specific example described with reference to FIG.
  • the imaging device 10 captures the second image Q1 at time t1 (step S20). After that, the imaging device 10 shoots the first image P1 under the same shooting conditions as when the second image Q1 was shot (step S21). After that, the imaging device 10 captures the second image Q2 at time t2 (step S22). The imaging device 10 adjusts the exposure so that the second image Q2 has the same brightness as the second image Q1, and shoots the second image Q2. After that, the imaging device 10 shoots the first image P2 under the same shooting conditions as when the second image Q2 was shot (step S23).
  • the defect identification processing unit 30 acquires the first image P1 and the first image P2 (step S24). Further, the defect identification processing unit 30 acquires the second image Q1 and the second image Q2 (step S25).
  • the evaluation value calculation unit 103 divides the obtained first image P1, first image P2, second image Q1, and second image Q2 into a plurality of regions, and calculates the luminance change rate of each region (step S26). Specifically, the evaluation value calculation unit 103 calculates luminance change rates (first evaluation values) of a plurality of regions of the first image P1 and the first image P2. The evaluation value calculation unit 103 also calculates luminance change rates (second evaluation values) of a plurality of regions of the second image Q1 and the second image Q2. Then, the defect area identifying unit 105 identifies the defect area based on the rate of change of the first image and the rate of change of the second image (step S27).
  • the defect area identifying unit 105 calculates the difference or ratio between the rate of change of the first image and the rate of change of the second image, and determines whether there is an area in which this calculated rate (the corrected rate of change) remains lower than the image average brightness change rate by the first threshold or more for the first period. If such an area exists, it is identified as a defective area (step S28).
  • in step S29, it is determined whether the corrected rate of change has remained below the second threshold for the second period (the fixed period in the figure) in all regions. If so, the defect region identifying unit 105 determines that all regions are normal and terminates the process of identifying the defect region. On the other hand, if the corrected rate of change is at or above the second threshold in any region, another second image is taken and the investigation continues. Note that the second threshold and the second period are values set arbitrarily by the user.
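The core of steps S26 to S28 can be sketched as follows. This is an illustrative sketch, not the disclosed implementation: the function names, the snapshot data layout, and the handling of the difference/ratio choice are all assumptions.

```python
def corrected_rates(first_rates, second_rates, mode="difference"):
    """Per-region corrected change rate: the difference (or ratio) between the
    change rates from the first image (water-absorbing wavelength) and the
    second image (reference wavelength)."""
    if mode == "difference":
        return [f - s for f, s in zip(first_rates, second_rates)]
    return [f / s for f, s in zip(first_rates, second_rates)]  # mode == "ratio"

def defect_regions(corrected_history, first_threshold, first_period):
    """corrected_history: snapshots (oldest first) of per-region corrected rates.
    A region index is flagged when its rate stays at least `first_threshold`
    below the image-average rate for the last `first_period` snapshots."""
    recent = corrected_history[-first_period:]
    if len(recent) < first_period:
        return []
    n = len(recent[0])
    return [
        i for i in range(n)
        if all(snap[i] <= sum(snap) / n - first_threshold for snap in recent)
    ]

# Toy data for three regions: region 0 consistently lags the image average,
# so it would be flagged as a defective (slow-drying, i.e. wet) area.
history = [[0.1, 0.5, 0.6], [0.1, 0.6, 0.5]]
```

A defective (wet) area dries slowly, so its corrected change rate stays below the image average; the duration requirement filters out transient dips.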
  • the corrected change rate calculated from the first evaluation value and the second evaluation value is used as the change rate.
  • however, the present embodiment is not limited to this example; as described in the first embodiment, the defect area identification process may also be performed using the first evaluation value alone.
  • the first image can be captured with uniform brightness by capturing the second image and adjusting the brightness before capturing the first image.
  • the present embodiment can identify the defect area with higher accuracy.
  • when the rate of change of all regions of the first image remains below the second threshold, it is determined that the investigation target is dry, and the process of identifying the defective portion is terminated. As a result, when there is no defective area, the process can end automatically and the investigation can be performed efficiently.
  • Modification 1 of the present invention will be described.
  • the way water spreads when the survey target absorbs it may differ depending on the member. Therefore, in this example, when the evaluation value calculation unit 103 divides the first image to set a plurality of areas, the number of divisions and the division size are changed according to the member under investigation. As a result, the first evaluation value is calculated in a way that accounts for how differently water spreads in each member when flooded, improving the accuracy of identifying the defective area.
  • FIG. 10 is a diagram explaining how the evaluation value calculation unit 103 divides the first image into a plurality of regions.
  • the first image P11 is the first image when a concrete structure is the investigation target. Concrete is a material into which water does not spread easily, so even when water leakage occurs, the flooded area D1 is narrow. In such a case, the evaluation value calculation unit 103 divides the first image P11 using a small region S.
  • the first image P12 is the first image when a wooden structure is targeted for investigation.
  • a wooden structure is made of a material into which water spreads easily, so even when water leakage occurs, the flooded area D2 is wide. In such a case, the evaluation value calculation unit 103 divides the first image P12 using a large region S.
  • the number of divisions and the division size of the area are input and set in advance by the user according to the member to be investigated.
  • in this way, the number or size of the divided regions of the first image produced by the evaluation value calculation unit 103 can be changed according to the members constituting the investigation target.
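One possible sketch of the member-dependent division follows. The block sizes are toy stand-ins for the user-supplied settings, and the dictionary and function names are assumptions for illustration.

```python
# Illustrative region sizes; the disclosure leaves the actual values to user input.
BLOCK_SIZE_BY_MEMBER = {"concrete": 2, "wood": 4}  # region side length in pixels (toy values)

def divide_into_regions(image, member):
    """Split a grayscale image (list of rows) into square regions whose size
    depends on how widely water spreads in the member under investigation."""
    size = BLOCK_SIZE_BY_MEMBER[member]
    h, w = len(image), len(image[0])
    regions = []
    for top in range(0, h, size):
        for left in range(0, w, size):
            regions.append([row[left:left + size] for row in image[top:top + size]])
    return regions

# An 8x8 toy image whose pixel value encodes its position.
image = [[i * 8 + j for j in range(8)] for i in range(8)]
```

A concrete target (narrow flooding) yields many small regions; a wooden target (wide flooding) yields fewer, larger regions.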
  • Modification 2 of the present invention will be described.
  • the evaluation value calculation unit 103 changes the size of the area S according to the size of the defective portion, the subject distance, and the focal length.
  • FIG. 11 is a diagram explaining how the evaluation value calculation unit 103 divides the first image into a plurality of regions.
  • the first image P13 is the first image captured when the subject distance is long or the focal length is short, or when the crack to be detected is small. In such a case, the defective portion D3 appears small in the image, so the evaluation value calculation unit 103 sets the region S small. Specifically, the region S is set so that it circumscribes the defective portion D3 to be detected.
  • the first image P14 is the first image captured when the subject distance is short or the focal length is long, or when the crack to be detected is large. In such a case, the defective portion D4 appears large in the image, so the evaluation value calculation unit 103 sets the region S large. Specifically, the region S is set so as to include the defective portion D4 to be detected.
  • the number of divisions and the division size of the region are input and set in advance by the user according to the size of the defect location to be detected.
  • the evaluation value calculation unit 103 changes the size of the area S according to the size of the defect location, the subject distance, and the focal length.
  • the defect area identifying unit 105 can accurately identify the defect area.
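The dependence of region size on subject distance and focal length follows from the thin-lens projection, image size ≈ object size × focal length ÷ subject distance. The sketch below is an assumption for illustration (parameter names and the pixel-pitch conversion are not from this disclosure).

```python
def region_side_pixels(defect_size_m, subject_distance_m, focal_length_mm, pixel_pitch_um):
    """Approximate on-sensor size, in pixels, of a defect of the given physical
    size, using image_size = object_size * f / distance.  The region S would be
    set to at least this many pixels on a side."""
    image_size_mm = defect_size_m * 1000.0 * focal_length_mm / (subject_distance_m * 1000.0)
    return image_size_mm * 1000.0 / pixel_pitch_um
```

For example, a 10 cm defect at 10 m with a 50 mm lens and 5 µm pixels spans roughly 100 pixels, so the region S should shrink as distance grows or focal length shortens.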
  • Modification 3 of the present invention will be described.
  • the first threshold used when the defect area identifying unit 105 identifies the defect area is dynamically changed.
  • if the first threshold, which is the threshold applied to the rate of change in brightness, is held constant while the rate of change in each area fluctuates non-linearly, the defect area identification process may not be performed appropriately.
  • likewise, if the first threshold is constant, identifying the defect area may take a long time. Therefore, in this example, the first threshold is not fixed at a constant value but is changed dynamically so that the defective portion can be identified in a suitably short time.
  • FIG. 12 is a diagram showing, as an example of the first threshold that fluctuates, the first threshold that fluctuates according to the rate of change.
  • the left vertical axis indicates reflectance, the right vertical axis indicates rate of change, and the horizontal axis indicates time.
  • lines L110 and L111 show the reflectance of the region S.
  • a line L110 indicates the temporal change in reflectance of the region identified as the defective region
  • a line L111 indicates the temporal change of the reflectance of the normal region. As indicated by lines L110 and L111, the reflectance of the region identified as the defective region takes longer to increase than that of the normal region.
  • lines L120 and L121 show the change rate of the reflectance of the region S.
  • a line L120 indicates the temporal change in the rate of change in the area identified as the defective area
  • a line L121 indicates the temporal change in the rate of change in the normal area.
  • the rate of change in the reflectance of the area identified as the defect area gradually increases with the lapse of time, and then increases sharply.
  • a line L101 shows the first threshold.
  • the fluctuating first threshold may be the average of the rates of change of all areas of the first image (the image average brightness change rate) or the median of the rates of change of all areas of the first image.
  • alternatively, a value obtained by multiplying the image average brightness change rate, or the median of the rates of change of all regions of the first image, by a factor greater than 0 and smaller than 1 may be adopted as the first threshold.
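The fluctuating threshold can be sketched as follows. The 0.5 factor and the function name are assumed for illustration; the disclosure only requires a factor greater than 0 and smaller than 1.

```python
def dynamic_first_threshold(region_rates, factor=0.5, use_median=False):
    """First threshold derived from the current change rates of all regions:
    the image-average rate (or the median rate) scaled by a factor in (0, 1).
    The 0.5 default is an assumed value, not from the disclosure."""
    assert 0.0 < factor < 1.0
    if use_median:
        ordered = sorted(region_rates)
        n = len(ordered)
        base = ordered[n // 2] if n % 2 else (ordered[n // 2 - 1] + ordered[n // 2]) / 2
    else:
        base = sum(region_rates) / len(region_rates)
    return base * factor

rates = [0.2, 0.4, 0.6, 0.8]  # toy per-region change rates
```

Because the threshold tracks the current rates, it stays meaningful early on (when all rates are small) and later (when normal regions have sped up), rather than lagging a fixed setting.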
  • the evaluation value calculation unit 103 may calculate the rate of change using a plurality of adjacent areas instead of the rate of change of one area.
  • FIG. 13 is a diagram explaining the calculation of the rate of change in the evaluation value calculation unit 103 of this example.
  • the evaluation value calculation unit 103 calculates the rate of change in brightness over the areas S11, S12, and S13 to match the area of the defect location D. In this manner, the evaluation value calculation unit 103 may calculate the rate of change in brightness not only for one region but for a group of adjacent regions. Note that the evaluation value calculation unit 103 may use a vertically elongated group such as the adjacent regions S11 to S13, a horizontally elongated group, the five vertically and horizontally adjacent regions, or the nine regions obtained by adding the diagonally adjacent regions to those five.
  • in this way, the evaluation value calculation unit 103 calculates the rate of change in the brightness of adjacent regions; when the shape of the defective portion has a known tendency, setting the regions to match that shape allows robust identification of the defective area.
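The adjacent-region groupings described above can be sketched as follows. The neighborhood shapes, their names, and the grid layout are illustrative assumptions.

```python
# Neighborhood offsets (row, col) for the groupings described above (assumed names).
NEIGHBORHOODS = {
    "vertical3": [(-1, 0), (0, 0), (1, 0)],
    "horizontal3": [(0, -1), (0, 0), (0, 1)],
    "cross5": [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)],
    "square9": [(dr, dc) for dr in (-1, 0, 1) for dc in (-1, 0, 1)],
}

def neighborhood_change_rate(prev_grid, curr_grid, row, col, shape="cross5"):
    """Change rate of mean brightness over a group of adjacent regions rather
    than a single region.  Grids are 2-D lists of per-region brightness;
    offsets falling outside the grid are simply dropped."""
    cells = [(row + dr, col + dc) for dr, dc in NEIGHBORHOODS[shape]
             if 0 <= row + dr < len(curr_grid) and 0 <= col + dc < len(curr_grid[0])]
    prev_mean = sum(prev_grid[r][c] for r, c in cells) / len(cells)
    curr_mean = sum(curr_grid[r][c] for r, c in cells) / len(cells)
    return (curr_mean - prev_mean) / prev_mean

# Toy 3x3 grids of per-region brightness at two times.
prev = [[10, 10, 10], [10, 10, 10], [10, 10, 10]]
curr = [[12, 12, 12], [12, 12, 12], [12, 12, 12]]
```

Averaging over a neighborhood shaped like the expected defect suppresses per-region noise without diluting the signal.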
  • Modification 5 of the present invention will be described.
  • the evaluation value calculation unit 103 performs correction processing.
  • FIG. 14 is a flowchart when correction processing is performed.
  • the evaluation value calculation unit 103 determines whether the LOG mode is activated in the imaging device 10 (step S30). When the evaluation value calculation unit 103 determines that the LOG mode is activated, it executes the correction processing described below (step S31).
  • FIG. 15 is a diagram for explaining correction processing performed by the evaluation value calculation unit 103 of this example.
  • the imaging device 10 may have a LOG (logarithmic) mode and a LINEAR (linear) mode.
  • in the LOG mode, the output value is nonlinear with respect to the amount of incident light. If the process of identifying the defect area of the present invention is performed on such output values, the calculation of the first evaluation value by the evaluation value calculation unit 103 and the processing by the defect area identification unit 105 may become complicated. Therefore, when the LOG mode is set in the imaging apparatus 10, the evaluation value calculation unit 103 performs processing to cancel the LOG mode. Specifically, as shown in FIG. 15B, a correction is applied that maps the camera's output value to a corrected, linearized value.
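The LOG-cancelling correction can be illustrated with a toy logarithmic transfer curve and its exact inverse. The actual curve of the imaging device is not given in this disclosure, so both functions below are assumptions for illustration.

```python
import math

FULL_SCALE = 1023.0  # assumed 10-bit sensor output range

def log_encode(linear):
    """Toy LOG-mode transfer curve: compresses highlights logarithmically."""
    return FULL_SCALE * math.log1p(linear) / math.log1p(FULL_SCALE)

def log_decode(encoded):
    """Inverse curve applied by the correction step, restoring values that are
    linear in the amount of incident light before change rates are computed."""
    return math.expm1(encoded * math.log1p(FULL_SCALE) / FULL_SCALE)
```

Computing change rates on the decoded (linear) values keeps the first evaluation value proportional to actual reflectance changes, as in the LINEAR mode.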
  • Modification 6 of the present invention will be described.
  • in this modification, the process of identifying the defective area is performed using the light projector 21 provided in the imaging device 10.
  • under conditions such as high humidity or low temperature, drying may not progress. In such a case, drying is accelerated by irradiating the survey object with the light projector 21. By using the light projector 21 in this way, the defective area can be identified in a short time.
  • FIG. 16 is a flow chart for starting to use the projector 21.
  • the evaluation value calculation unit 103 determines whether the brightness change rate has remained below the third threshold for the third period (step S41). If the rate of change has remained below the third threshold, the evaluation value calculation unit 103 turns on the light projector 21 via the CPU 40 (step S42).
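Steps S41 and S42 can be sketched as follows; the function name and the rate-history layout are assumptions for illustration.

```python
def should_turn_on_projector(rate_history, third_threshold, third_period):
    """True when the brightness change rate has stayed below the third
    threshold for the last `third_period` samples — i.e. drying has stalled,
    so the projector should be switched on to accelerate it."""
    if len(rate_history) < third_period:
        return False
    return all(r < third_threshold for r in rate_history[-third_period:])
```

Once the projector is on, the reflectance and rate of change recover (lines L130 and L131), and the same condition stops firing.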
  • FIG. 17 is a diagram explaining the timing when the light projector 21 is used.
  • the left vertical axis is the reflectance, the right vertical axis is the rate of change, and the horizontal axis is time.
  • the projector 21 is started to be used.
  • the reflectance and rate of change abruptly improve (see lines L130 and L131). This makes it possible to identify the defective area in a short period of time.
  • the illustrated third threshold value and third period are appropriately set according to the investigation target and the investigation environment.
  • the evaluation value calculation unit 103 starts using the light projector 21 via the CPU 40 when the light amount in the acquired first image is lower than the fourth threshold.
  • the fourth threshold is a value appropriately set by the user.
  • the brightness may change depending on the location in the image as described below due to changes in the lighting conditions.
  • FIG. 18 is a diagram explaining a case where the brightness changes in the image.
  • the second image P31 is an image captured at time t1.
  • a second image P32 is an image captured at time t2.
  • the second image P32 is an image after exposure adjustment; however, because the shooting time differs, the sunshine conditions differ, and an area with at least a certain difference appears in the image. In such a case, the projector 21 is used.
  • FIG. 19 is a flow chart for starting to use the projector 21.
  • the evaluation value calculation unit 103 determines whether or not there is an area having a difference equal to or greater than a fifth threshold (referred to as a threshold in the figure) in the second image (step S50). If there is an area with a difference equal to or greater than the fifth threshold in the second image, the projector 21 is used (step S51). This makes it possible to maintain the accuracy of specifying the defect area even when the sunshine conditions change due to different shooting times.
  • the fifth threshold value described above is a value appropriately set by the user.
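The check in step S50 can be sketched as follows. A pixelwise comparison is used here for brevity, while the disclosure compares areas; the function name is an assumption.

```python
def illumination_mismatch(q_ref, q_now, fifth_threshold):
    """Compare the two second images element by element and report whether any
    location differs by the fifth threshold or more — the cue that sunshine
    conditions changed between the two shooting times."""
    return any(
        abs(a - b) >= fifth_threshold
        for row_a, row_b in zip(q_ref, q_now)
        for a, b in zip(row_a, row_b)
    )
```

When this returns True, the projector would be turned on (step S51) so that subsequent first images are lit consistently.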
  • the hardware structure of the processing units that execute the various processes (for example, the image acquisition unit 101, the evaluation value calculation unit 103, and the defect area identification unit 105) is realized by various processors.
  • the various processors include a CPU (Central Processing Unit), which is a general-purpose processor that executes software (a program) to function as the various processing units; a programmable logic device (PLD) such as an FPGA (Field Programmable Gate Array), whose circuit configuration can be changed after manufacture; and a dedicated electric circuit such as an ASIC (Application Specific Integrated Circuit), which is a processor having a circuit configuration designed exclusively for executing specific processing.
  • One processing unit may be composed of one of these various processors, or may be composed of two or more processors of the same type or different types (eg, multiple FPGAs, or combinations of CPUs and FPGAs).
  • a plurality of processing units may be configured by one processor.
  • first, as typified by computers such as clients and servers, one processor may be configured by a combination of one or more CPUs and software, and this processor may function as the multiple processing units.
  • second, as typified by a System On Chip (SoC), a processor that realizes the functions of an entire system including the multiple processing units with a single IC (Integrated Circuit) chip may be used.
  • the various processing units are configured by using one or more of the above various processors as a hardware structure.
  • the hardware structure of these various processors is, more specifically, an electrical circuit that combines circuit elements such as semiconductor elements.
  • 10: Imaging device 12: Imaging lens 14: Aperture 16: Imaging element 20: External device connection terminal 21: Projector 22: Image input controller 24: Image processing unit 28: Video encoder 30: Defect identification processing unit 32: Sensor driving unit 33: Shutter drive unit 34: Aperture drive unit 36: Lens drive unit 38: Operation unit 40: CPU 47: ROM 48: Memory 101: Image acquisition unit 103: Evaluation value calculation unit 105: Defect area identification unit

Abstract

Provided are an image processing device, an imaging device, a camera system, an image processing method, and a program that are capable of accurately identifying a defective region. An image processing device (30) is provided with a processor for identifying a defective part of a structure, wherein the processor: calculates a first evaluation value relating to a change in brightness of a plurality of regions in an image, for two or more first images of the structure at a first wavelength including an absorption band of water, acquired at different times; and identifies a defective region including the defective part from among the plurality of regions on the basis of the first evaluation value.

Description

画像処理装置、撮像装置、カメラシステム、画像処理方法、及びプログラムImage processing device, imaging device, camera system, image processing method, and program
 本発明は、画像処理装置、撮像装置、カメラシステム、画像処理方法、及びプログラムに関し、特に不具合領域を特定する画像処理装置、撮像装置、カメラシステム、画像処理方法、及びプログラムに関する。 The present invention relates to an image processing device, an imaging device, a camera system, an image processing method, and a program, and more particularly to an image processing device, an imaging device, a camera system, an image processing method, and a program that identify defective areas.
 従来、被写体に赤外光を照射し、反射した赤外光を撮影することにより、被写体における水漏れ箇所の検査が行われてきた。  Conventionally, water leaks have been inspected by irradiating the subject with infrared light and photographing the reflected infrared light.
 特許文献1では、水漏れが無い場合に被検査部位から反射される赤外光の強度分布と、水漏れが有る場合に被検査部位から反射された赤外光を受光手段が受光した強度分布とを比較して、水漏れを検出する技術が記載されている。 Patent Document 1 describes a technique for detecting water leakage by comparing the intensity distribution of infrared light reflected from the inspected site when there is no water leakage with the intensity distribution, received by a light receiving means, of infrared light reflected from the inspected site when there is water leakage.
特開平11-160185号公報JP-A-11-160185
 本開示の技術にかかる一つの実施形態は、正確に不具合領域を特定することができる画像処理装置、撮像装置、カメラシステム、画像処理方法、及びプログラムを提供する。 An embodiment according to the technology of the present disclosure provides an image processing device, an imaging device, a camera system, an image processing method, and a program that can accurately identify a defective area.
 本発明の一の態様である画像処理装置は、構造物の不具合箇所を特定する、プロセッサを備える画像処理装置であって、プロセッサは、水の吸収帯域を持つ第1波長の構造物の第1画像であって、異なる時刻で取得された少なくとも2枚以上の第1画像において、画像内の複数の領域における明るさの変化に関する第1評価値を算出し、第1評価値に基づいて、複数の領域のうち不具合箇所を含む不具合領域を特定する。 An image processing apparatus according to one aspect of the present invention is an image processing apparatus comprising a processor that identifies a defective portion of a structure, wherein the processor calculates, for at least two first images of the structure captured at a first wavelength that includes a water absorption band and acquired at different times, a first evaluation value relating to changes in brightness in a plurality of regions within the images, and identifies, based on the first evaluation value, a defective region including the defective portion from among the plurality of regions.
 好ましくは、第1評価値は、複数の領域における明るさの変化率である。 Preferably, the first evaluation value is the brightness change rate in the plurality of areas.
 好ましくは、明るさの変化率のうち少なくとも一つは、時間の経過に沿って増大する方向に変化する。 Preferably, at least one of the rate of change in brightness changes in a direction that increases over time.
 好ましくは、明るさの変化率は、複数の領域における明るさの変化率を加重平均して算出する加重平均明るさ変化率である。 Preferably, the brightness change rate is a weighted average brightness change rate calculated by weighted averaging the brightness change rates in a plurality of regions.
 好ましくは、プロセッサは、第1画像の明るさの変化率を平均した画像平均明るさ変化率を算出し、画像平均明るさ変化率よりも第1閾値以上低い加重平均明るさ変化率が第1期間続いている領域を不具合領域として特定する。 Preferably, the processor calculates an image average brightness change rate by averaging the brightness change rates of the first image, and identifies, as a defective region, a region whose weighted average brightness change rate has remained lower than the image average brightness change rate by the first threshold or more for a first period.
 好ましくは、プロセッサは、第1閾値を時間経過に応じて変更させる。 Preferably, the processor changes the first threshold over time.
 好ましくは、プロセッサは、全ての第1評価値が第2閾値未満の状態が継続している場合には、不具合箇所を特定することを終了する。 Preferably, the processor ends identifying the fault location when all the first evaluation values continue to be less than the second threshold.
 好ましくは、プロセッサは、第1波長とは異なる第2波長の構造物の第2画像であって、第1画像に対応した少なくとも2枚以上の第2画像において、第1画像に対応した複数の領域における明るさの変化に関する第2評価値を算出し、第1評価値及び第2評価値に基づいて、不具合領域を特定する。 Preferably, the processor calculates, for at least two second images of the structure captured at a second wavelength different from the first wavelength and corresponding to the first images, a second evaluation value relating to changes in brightness in a plurality of regions corresponding to those of the first images, and identifies the defective region based on the first evaluation value and the second evaluation value.
 好ましくは、第2画像は、対応する第1画像と同じ撮影条件で取得されており、第2画像は、第1画像との明るさが少なくなるように撮影されている。 Preferably, the second image is acquired under the same shooting conditions as the corresponding first image, and the second image is taken so that the brightness is lower than that of the first image.
 好ましくは、プロセッサは、第1画像を複数の領域に分割し、複数の領域の分割数や分割サイズを変更する。 Preferably, the processor divides the first image into a plurality of regions, and changes the division number and division size of the plurality of regions.
 好ましくは、プロセッサは、複数の領域の第1評価値に基づいて、不具合領域を特定する。 Preferably, the processor identifies the defect area based on the first evaluation values of the plurality of areas.
 本発明の他の態様である撮像装置は、上述の画像処理装置を備える。 An imaging device according to another aspect of the present invention includes the image processing device described above.
 本発明の他の態様であるカメラシステムは、上述の画像処理装置を備える。好ましくは、構造物に光を照射する投光器を備える。 A camera system according to another aspect of the present invention includes the image processing device described above. Preferably, a light projector for irradiating the structure with light is provided.
 本発明の他の態様である画像処理方法は、構造物の不具合箇所を特定する、プロセッサを備える画像処理装置の画像処理方法であって、プロセッサにより、水の吸収帯域を持つ第1波長の構造物の第1画像であって、異なる時刻で取得された少なくとも2枚以上の第1画像において、画像内の複数の領域における明るさの変化に関する第1評価値を算出する工程と、第1評価値に基づいて、複数の領域のうち不具合箇所を含む不具合領域を特定する工程と、が行われる。 An image processing method according to another aspect of the present invention is an image processing method for an image processing apparatus comprising a processor that identifies a defective portion of a structure, the method comprising, by the processor: a step of calculating, for at least two first images of the structure captured at a first wavelength that includes a water absorption band and acquired at different times, a first evaluation value relating to changes in brightness in a plurality of regions within the images; and a step of identifying, based on the first evaluation value, a defective region including the defective portion from among the plurality of regions.
 本発明の他の態様であるプログラムは、構造物の不具合箇所を特定する、プロセッサを備える画像処理装置に画像処理方法を行わせるプログラムであって、プロセッサに、水の吸収帯域を持つ第1波長の構造物の第1画像であって、異なる時刻で取得された少なくとも2枚以上の第1画像において、画像内の複数の領域における明るさの変化に関する第1評価値を算出する工程と、第1評価値に基づいて、複数の領域のうち不具合箇所を含む不具合領域を特定する工程と、を行わせる。 A program according to another aspect of the present invention is a program that causes an image processing apparatus comprising a processor that identifies a defective portion of a structure to perform an image processing method, the program causing the processor to perform: a step of calculating, for at least two first images of the structure captured at a first wavelength that includes a water absorption band and acquired at different times, a first evaluation value relating to changes in brightness in a plurality of regions within the images; and a step of identifying, based on the first evaluation value, a defective region including the defective portion from among the plurality of regions.
図1は、撮像装置の内部構成の実施形態を示すブロック図である。FIG. 1 is a block diagram showing an embodiment of the internal configuration of an imaging device. 図2は、不具合特定処理部の主な機能ブロックを示す図である。FIG. 2 is a diagram showing main functional blocks of the defect identification processing unit. 図3は、第1評価値の算出に関して説明する図である。FIG. 3 is a diagram illustrating calculation of the first evaluation value. 図4は、不具合領域特定部で行われる不具合領域の特定に関して説明する図である。FIG. 4 is a diagram for explaining identification of a defective area performed by the defective area identification unit. 図5は、不具合領域と正常領域との反射率の関係を示す図である。FIG. 5 is a diagram showing the reflectance relationship between the defective area and the normal area. 図6は、不具合特定処理部を用いた画像処理方法を示すフローチャートである。FIG. 6 is a flow chart showing an image processing method using the defect identification processor. 図7は、第1画像及び第2画像を概念的に示した図である。FIG. 7 is a diagram conceptually showing the first image and the second image. 図8は、第1画像の1つの領域の反射率と反射率の変化率との時間変化を示す図である。FIG. 8 is a diagram showing temporal changes in the reflectance of one region of the first image and the change rate of the reflectance. 図9は、撮像方法を示すフローチャートである。FIG. 9 is a flow chart showing an imaging method. 図10は、第1画像の複数の領域の分割に関して説明する図である。FIG. 10 is a diagram illustrating division of the first image into a plurality of regions. 図11は、第1画像の複数の領域の分割に関して説明する図である。FIG. 11 is a diagram illustrating division of the first image into a plurality of regions. 図12は、変動する第1閾値の例として、変化率に応じて変動する第1閾値を示す図である。FIG. 12 is a diagram showing, as an example of the first threshold that fluctuates, the first threshold that fluctuates according to the rate of change. 図13は、変化率の算出に関して説明する図である。FIG. 13 is a diagram for explaining calculation of the rate of change. 図14は、補正処理が実施される場合のフローチャートである。FIG. 14 is a flowchart when correction processing is performed. 図15は、補正処理に関して説明する図である。FIG. 15 is a diagram for explaining correction processing. 図16は、投光器の利用開始のフローチャートである。FIG. 16 is a flow chart for starting use of the projector. 図17は、投光器が使用される場合のタイミングに関して説明する図である。FIG. 17 is a diagram explaining the timing when the projector is used. 図18は、画像内において明るさが変化してしまう場合を説明する図である。FIG. 18 is a diagram for explaining a case in which brightness changes within an image. 図19は、投光器の利用開始のフローチャートである。FIG. 19 is a flow chart for starting use of the projector.
 以下、添付図面にしたがって本発明にかかる画像処理装置、撮像装置、カメラシステム、画像処理方法、及びプログラムの好ましい実施の形態について説明する。 Preferred embodiments of an image processing device, an imaging device, a camera system, an image processing method, and a program according to the present invention will be described below with reference to the accompanying drawings.
 従来、建築物などの雨漏りの原因となっている不具合箇所を調査する際に、水の吸収帯域の波長を有する光が利用されてきた。例えば、建造物の壁面の水漏れを調査する場合には、壁面の反射率(カメラで得られる明るさ)を測定することで水漏れ箇所の検出が行われてきた。 Conventionally, light with a wavelength in the absorption band of water has been used when investigating trouble spots that cause rain leaks in buildings. For example, when investigating water leaks on the wall surface of a building, the location of the water leak has been detected by measuring the reflectance of the wall surface (brightness obtained by a camera).
 しかしながら、調査対象の表面は滑らかで、反射率などが均一な表面であるとは限らない。例えば調査対象である壁面の表面に凹凸などが存在している場合、壁面への光の当たり方が不均一な場合、壁面が異なる部材で構成されている場合などでは、明るさ(または反射率)が、不具合箇所が存在しないにもかかわらず、ばらついてしまうことがある。このような場合には、明るさの変化に基づいて不具合箇所を正確に特定することが困難である。 However, the surface of the object to be investigated is not necessarily smooth or uniform in reflectance. For example, when the wall surface being investigated has unevenness, when light strikes the wall unevenly, or when the wall is made of different materials, the brightness (or reflectance) may vary even though no defect exists. In such cases, it is difficult to accurately identify the defect location based on the change in brightness.
 そこで、以下で説明する本発明では、明るさの変化に基づいて正確に不具合箇所を特定することができる技術を提案する。 Therefore, the present invention, which will be described below, proposes a technique that can accurately identify a defective location based on changes in brightness.
 <第1の実施形態>
 図1は、本発明の画像処理装置(不具合特定処理部)を搭載する撮像装置の内部構成の実施形態を示すブロック図である。なお、画像処理装置は、撮像装置内では不具合特定処理部30として機能する。
<First embodiment>
FIG. 1 is a block diagram showing an embodiment of an internal configuration of an imaging device equipped with an image processing device (defect identification processing section) of the present invention. Note that the image processing device functions as the defect identification processing unit 30 within the imaging device.
 撮像装置10は、調査対象である構造物を被写体として撮影し画像を取得する。そして、得られた画像を不具合特定処理部30で処理することにより、不具合箇所を含む不具合領域が特定される。ここで、本開示における不具合箇所とは、水漏れ(または雨漏り)が発生している箇所、及び水漏れ(または雨漏り)の発生原因となり得る故障箇所を含む。 The imaging device 10 captures an image of a structure to be investigated as a subject. Then, by processing the obtained image with the defect identification processing unit 30, a defect area including the defect location is specified. Here, the defective location in the present disclosure includes a location where water leakage (or rain leakage) occurs and a failure location that may cause water leakage (or rain leakage).
 撮像装置10は、CPU(Central Processing Unit)(プロセッサ)40により統括制御される。 The imaging device 10 is centrally controlled by a CPU (Central Processing Unit) (processor) 40 .
 撮像装置10には、電源スイッチなどの操作部38が設けられている。この操作部38からの信号はCPU40に入力され、CPU40は入力信号に基づいて撮像装置10の各回路を制御し、例えば、センサ駆動部32による撮像素子16の駆動制御、シャッタ駆動部33によるメカシャッタ(機械的シャッタ)15の駆動制御、絞り駆動部34による絞り14の駆動制御、及びレンズ駆動部36により撮像レンズ12の駆動制御を司る他、撮像動作制御、画像処理制御、画像データの記録/再生制御などを行う。 The imaging device 10 is provided with an operation unit 38 such as a power switch. A signal from the operation unit 38 is input to the CPU 40, and the CPU 40 controls each circuit of the imaging device 10 based on the input signal. In addition to driving control of the (mechanical shutter) 15, driving control of the diaphragm 14 by the diaphragm driving unit 34, and driving control of the imaging lens 12 by the lens driving unit 36, imaging operation control, image processing control, image data recording/ Playback control, etc.
 外部機器接続端子20には、撮像装置10に接続される外部機器が接続される。本例では外部機器接続端子20に接続される外部機器として投光器21が接続されている。投光器21は調査対象に対して光(例えば、後で説明を行う第1波長及び/または第2波長を含む光)を照射することにより、撮像装置10の撮影における光量の不足を補う。投光器21の制御はCPU40で行われる。なお、本発明において投光器21を利用した例は、変形例6で説明を行う。外部機器接続端子20は無線により接続する端子も含む。例えば、外部機器接続端子20は、無線通信を行う無線LAN(Local Area Network)、Bluetooth(登録商標)、UWB(Ultra Wide Band)などの端子を含む。 An external device connected to the imaging device 10 is connected to the external device connection terminal 20 . In this example, a projector 21 is connected as an external device to be connected to the external device connection terminal 20 . The light projector 21 irradiates the object to be investigated with light (for example, light containing a first wavelength and/or a second wavelength, which will be described later), thereby compensating for a shortage of light in photographing by the imaging device 10 . The control of the projector 21 is performed by the CPU 40 . An example using the light projector 21 in the present invention will be described in Modified Example 6. The external device connection terminal 20 also includes a terminal for wireless connection. For example, the external device connection terminal 20 includes terminals for wireless LAN (Local Area Network), Bluetooth (registered trademark), UWB (Ultra Wide Band), and the like.
 The luminous flux that has passed through the imaging lens 12, the diaphragm 14, the mechanical shutter 15, and so on forms an image on the imaging element 16, which is a known image sensor. The imaging element 16 can receive light of a first wavelength, which lies in an absorption band of water, to obtain a first image. The first wavelength is an infrared wavelength band containing wavelengths absorbed by water, namely 1450 nm, 1940 nm, and 2900 nm. The imaging element 16 can also receive light of a second wavelength, described later, to obtain a second image. When the first image is acquired, a bandpass filter (not shown) that transmits the first wavelength is inserted into the optical path of the imaging device 10; when the second image is acquired, a bandpass filter (not shown) that transmits the second wavelength is inserted into the optical path of the imaging device 10.
 In the imaging element 16, a large number of light receiving elements (photodiodes) are arranged two-dimensionally. For the subject image formed on the light receiving surface, the signal charge accumulated in each light receiving element in an amount corresponding to its incident light quantity is converted into a voltage by an amplifier, converted into a digital signal by an A/D (Analog/Digital) converter in the imaging element 16, and output.
 The image signal (image data) read from the imaging element 16 when shooting a moving image or a still image is temporarily stored via the image input controller 22 in a memory 48 (SDRAM: Synchronous Dynamic Random Access Memory). SDRAM is only an example; any memory, such as ReRAM (Resistive Random Access Memory), PCM (Phase-change Memory), or MRAM (Magnetoresistive Random Access Memory), can be selected as necessary.
 The ROM 47 is a flash memory, ROM (Read Only Memory), EEPROM (Electrically Erasable Programmable Read-Only Memory), MRAM (Magnetoresistive Random Access Memory), or the like in which the camera control program and the various parameters and tables used for image processing and so on are stored.
 The image processing unit 24 reads out unprocessed image data (RAW data) that was acquired via the image input controller 22 when shooting a moving image or a still image and temporarily stored in the memory 48. On the read RAW data, the image processing unit 24 performs offset processing, pixel interpolation processing (interpolation of phase difference detection pixels, defective pixels, and the like), white balance correction, gain control processing including sensitivity correction, gamma correction processing, demosaic processing, luminance and color difference signal generation processing, edge enhancement processing, color correction processing, and the like.
 The image processed by the image processing unit 24 is encoded by the video encoder 28 and output to a display unit (not shown). Image data processed by the image processing unit 24 as a still image or a moving image for recording is stored in the memory 48 again.
 Although the image input controller 22, the image processing unit 24, the video encoder 28, and so on have been described as independent processing units (circuits), their functions can also be executed as software processing by the CPU 40 using a program stored in the memory 48 (or the ROM 47).
 As described above, the imaging device 10 shoots the investigation target as a subject, and the images processed by the image processing unit 24 (still images and frame images constituting a moving image) are input to the defect identification processing unit 30, where processing to identify a defect area of the investigation target is performed. Note that the imaging device 10 described above is a specific example and is not limiting; any imaging device that can capture the first image and/or the second image is applicable to the present invention. A camera system may also be used instead of the imaging device described above.
 FIG. 2 is a diagram showing the main functional blocks of the defect identification processing unit 30. Each function of the defect identification processing unit 30 may be realized by the CPU 40 of the imaging device 10 executing a program, or by one or more CPUs (processors) (not shown) mounted in the defect identification processing unit 30.
 The defect identification processing unit 30 is composed of an image acquisition unit 101, an evaluation value calculation unit 103, and a defect area identification unit 105.
 The image acquisition unit 101 acquires at least two first images captured at different times. When a temporal average of the brightness change rate (for example, a weighted average brightness change rate), described later, is to be calculated, the image acquisition unit 101 acquires at least three first images captured at different times. Here, a first image is an image captured, with the structure under investigation as the subject, using the first wavelength, which lies in an absorption band of water. Structures include buildings, houses, bridges, and other constructions, whether indoors or outdoors. The number of first images acquired by the image acquisition unit 101 is set as appropriate according to the investigation target and the investigation environment.
 The evaluation value calculation unit 103 calculates, for the plurality of first images acquired by the image acquisition unit 101, a first evaluation value relating to changes in brightness in a plurality of regions within the images.
 FIG. 3 is a diagram explaining the calculation of the first evaluation value by the evaluation value calculation unit 103.
 The first image P1 is an image acquired by the imaging device 10 at time t1. The first image P2 is an image acquired by the imaging device 10 at time t2. The first image P3 is an image acquired by the imaging device 10 at time t3. Chronologically, time elapses in the order of time t1, time t2, and time t3.
 The evaluation value calculation unit 103 divides the first image P1 into a plurality of regions (regions S1 to S9). The evaluation value calculation unit 103 also divides the first image P2 and the first image P3 into a plurality of regions (regions S1 to S9) so as to correspond to the first image P1.
 The evaluation value calculation unit 103 then calculates the brightness of the regions S1 to S9 of the first images P1 to P3. Here, the evaluation value calculation unit 103 uses luminance, one index of brightness, to calculate the brightness of the regions S1 to S9 of the first images P1 to P3. For example, the evaluation value calculation unit 103 calculates the brightness of the region S1 of the first image P1 by averaging the luminance within that region (average luminance). Similarly, the evaluation value calculation unit 103 calculates the brightness (average luminance) of the regions S2 to S9 of the first image P1, the regions S1 to S9 of the first image P2, and the regions S1 to S9 of the first image P3.
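 The region division and per-region averaging just described can be sketched in a few lines of code. This is an illustrative sketch only, not code from the patent: the 3x3 grid matches the regions S1 to S9, but the function name, the grayscale 2-D-list representation, and the sample image are invented for the example.

```python
# Hypothetical sketch: split a grayscale image (2-D list of pixel values)
# into a 3x3 grid of regions S1..S9 and compute each region's average
# luminance, as described for the evaluation value calculation unit 103.

def region_mean_luminance(image, rows=3, cols=3):
    """Return per-region average luminances, row-major (S1..S9)."""
    h, w = len(image), len(image[0])
    means = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            pixels = [image[y][x] for y in range(y0, y1) for x in range(x0, x1)]
            means.append(sum(pixels) / len(pixels))
    return means

# 6x6 test image: uniform luminance 10 everywhere except the centre region S5.
img = [[10] * 6 for _ in range(6)]
for y in range(2, 4):
    for x in range(2, 4):
        img[y][x] = 40
print(region_mean_luminance(img))
# → [10.0, 10.0, 10.0, 10.0, 40.0, 10.0, 10.0, 10.0, 10.0]
```

In practice an array library would be used for the averaging, but the grid arithmetic is the same.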
 After that, the evaluation value calculation unit 103 calculates the first evaluation values relating to the changes in brightness in the regions S1 to S9 of the first images P1 to P3. In the illustrated case, the evaluation value calculation unit 103 calculates, as the first evaluation values, the rates of change of the luminance in the regions S1 to S9 of the first images P1 to P3.
 In the case shown in FIG. 3, the evaluation value calculation unit 103 calculates, as the first evaluation value, a weighted average brightness change rate R1 obtained from the rate of change between the average luminance of the region S1 of the first image P1 and that of the region S1 of the first image P2, and the rate of change between the average luminance of the region S1 of the first image P2 and that of the region S1 of the first image P3.
 Similarly, the evaluation value calculation unit 103 calculates, as first evaluation values, the weighted average brightness change rates of the average luminances of the regions S2 to S9 of the first image P1, the regions S1 to S9 of the first image P2, and the regions S1 to S9 of the first image P3. The weights used in calculating the weighted average brightness change rates are determined as appropriate.
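 As a minimal illustration of a weighted average brightness change rate, one plausible reading (the text does not give the exact formula) is a weighted mean of the frame-to-frame relative changes of a region's average luminance. The relative-change definition, the function names, and the equal-weight default below are assumptions made for this sketch.

```python
# Illustrative sketch, not the patented formula: a "change rate" between two
# mean luminances taken as their relative difference, and a weighted average
# of the successive rates with caller-chosen weights, mirroring the weighted
# average brightness change rates R1..R9 described for each region.

def change_rate(prev_mean, curr_mean):
    """Relative change between two successive mean luminances."""
    return (curr_mean - prev_mean) / prev_mean

def weighted_average_change_rate(means, weights=None):
    """means: a region's mean luminance at t1, t2, t3, ...
    Returns the weighted average of the successive change rates."""
    rates = [change_rate(a, b) for a, b in zip(means, means[1:])]
    if weights is None:
        weights = [1.0] * len(rates)  # equal weights by default
    return sum(w * r for w, r in zip(weights, rates)) / sum(weights)

# A region that brightens from 100 -> 120 -> 150 (drying, reflectance rising):
print(weighted_average_change_rate([100, 120, 150]))
```

With equal weights this averages the rates 0.2 and 0.25 to 0.225; passing `weights=[2, 1]` would emphasise the earlier interval instead.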
 The first images P1 to P3 are, for example, images of a wall of a house captured by the imaging device 10. Before the images are captured (before time t1), the wall under investigation has been sprinkled evenly with water for the water leakage investigation. On the survey day (the day on which the first images P1 to P3 are captured by the imaging device 10), the weather is fine; the water on the wall surface evaporates and decreases as time passes, and the reflectance of the wall surface rises. Therefore, at least one of the weighted average brightness change rates R1 to R9 changes in the increasing direction.
 Returning to FIG. 2, the defect area identification unit 105 identifies, based on the first evaluation values calculated by the evaluation value calculation unit 103, a defect area containing a defect among the plurality of regions. Specifically, when one of the weighted average brightness change rates R1 to R9 remains lower than the image average brightness change rate by a first threshold or more for a first period, the defect area identification unit 105 identifies the region corresponding to that weighted average brightness change rate as a defect area. Here, the image average brightness change rate is the rate obtained by averaging all the weighted average brightness change rates of the regions S1 to S9 over the first images P1 to P3. The first threshold and the first period are set as appropriate according to the investigation target and the investigation environment.
 FIG. 4 is a diagram explaining the identification of a defect area performed by the defect area identification unit 105. In the case shown in FIG. 4, water leakage has occurred in the region S5 due to a crack C. A crack is one example of a defect.
 FIG. 4 shows the first image P1 captured at time t1, the first image P2 captured at time t2, and the first image P3 captured at time t3, all acquired by the image acquisition unit 101. The evaluation value calculation unit 103 calculates, as the first evaluation values, the weighted average brightness change rates R1 to R9 of the regions S1 to S9 over the first images P1 to P3.
 The defect area identification unit 105 acquires the weighted average brightness change rates R1 to R9 (see symbol H). The defect area identification unit 105 also acquires the image average brightness change rate R(ALL), obtained by averaging the weighted average brightness change rates R1 to R9.
 The defect area identification unit 105 then detects, among the weighted average brightness change rates R1 to R9, any region whose rate remains lower than the image average brightness change rate R(ALL) by the first threshold or more for the first period. Here, because the crack C exists in the region S5 and water leakage is occurring there, the weighted average brightness change rate R5 remains lower than the image average brightness change rate R(ALL) by the first threshold or more for the first period. The defect area identification unit 105 therefore identifies the region S5 as a defect area containing a defect.
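 The identification rule just described — flag a region whose change rate stays at least a first threshold below the image average for a first period — could be sketched as follows. Interpreting the "first period" as a run of consecutive observations, along with all the numeric values, is an assumption made for this illustration.

```python
# Hypothetical sketch of the defect-area test: a region is flagged when its
# change rate stays lower than (image average - first threshold) over a run
# of at least `period` consecutive observations.

def defective_regions(rate_history, threshold, period):
    """rate_history: one list per observation, each holding the change
    rates R1..R9 of all regions at that time step.  Returns indices of
    regions that stayed below (image average - threshold) for at least
    `period` consecutive observations."""
    n_regions = len(rate_history[0])
    run = [0] * n_regions          # current run length per region
    flagged = set()
    for rates in rate_history:
        image_avg = sum(rates) / len(rates)   # R(ALL) for this observation
        for i, r in enumerate(rates):
            if r <= image_avg - threshold:
                run[i] += 1
                if run[i] >= period:
                    flagged.add(i)
            else:
                run[i] = 0
    return sorted(flagged)

# Region index 4 (S5) lags the other regions at every observation:
history = [
    [0.30, 0.31, 0.29, 0.30, 0.05, 0.32, 0.30, 0.31, 0.29],
    [0.28, 0.30, 0.31, 0.29, 0.04, 0.30, 0.28, 0.30, 0.31],
]
print(defective_regions(history, threshold=0.1, period=2))  # → [4]
```

Only the lagging region S5 (index 4) is flagged; a region that dips below the average for a single observation is not.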
 The relationship between the reflectance of a defect area and that of a normal area is explained below.
 FIG. 5 is a diagram showing the relationship between the reflectance of the defect area and that of the normal areas. In FIG. 5, the vertical axis indicates reflectance and the horizontal axis indicates time, and the times t1 to t3 at which the first images P1 to P3 shown in FIG. 4 were captured are marked. The line L1 shows the reflectance of the region S5 described with FIG. 4, and the line L2 shows the reflectance of the regions S1 to S4 and S6 to S9 described with FIG. 4. On the line L1, water seeps out of the crack C onto the surface of the investigation target and remains there, so the reflectance rises only gradually even as time passes. On the line L2, by contrast, the water on the surface of the investigation target evaporates with the passage of time and the reflectance rises sharply; thereafter, once the surface has dried, the reflectance stays constant at a high value. Thus, the reflectance of the region S5, which contains the defect, rises gradually over time, whereas the reflectance of the normal regions S1 to S4 and S6 to S9 rises sharply over time.
 Therefore, the weighted average brightness change rate R5 of the region S5, which contains the crack, is smaller than the weighted average brightness change rates R1 to R4 and R6 to R9 of the other regions. Furthermore, the weighted average brightness change rate R5 is lower than the image average brightness change rate R(ALL) by the first threshold or more.
 FIG. 6 is a flowchart showing an image processing method using the defect identification processing unit 30 mounted in the imaging device 10. Each step of this image processing method is executed by the CPU 40, or a CPU (not shown) mounted in the defect identification processing unit 30, executing a program. The following description follows the specific example explained with FIG. 4.
 First, the image acquisition unit 101 acquires the first images P1 to P3 of the first wavelength, which lies in an absorption band of water (step S10). The first images P1 to P3 were acquired at times t1 to t3, respectively. Next, the evaluation value calculation unit 103 divides the first images P1 to P3 into the regions S1 to S9 and calculates the weighted average brightness change rates R1 to R9 of each region (step of calculating the first evaluation values: step S11). The defect area identification unit 105 then determines whether there is a region whose weighted average change rate remains lower than the image average brightness change rate by the first threshold (denoted simply as the threshold in the figure) or more for the first period (step of identifying the defect area: step S12). In this example, the weighted average brightness change rate R5 of the region S5 is lower than the image average brightness change rate by the first threshold or more, and this state continues for the first period. The defect area identification unit 105 therefore identifies the region S5 as a defect area (step S13).
 As described above, the defect identification processing unit 30 mounted in the imaging device 10 calculates, for at least two first images acquired at different times, the first evaluation values relating to changes in brightness in a plurality of regions within the images, and identifies, based on the first evaluation values, the defect area containing a defect among the plurality of regions, so the defect area can be identified accurately.
 <Second embodiment>
 Next, a second embodiment of the present invention will be described. In this embodiment, a second image is captured separately from the first image, and the exposure amount is adjusted for each shot. As a result, although the first images are captured at different times, all the first images can be captured at a uniform brightness. In addition, in this embodiment, when the change rates of all regions of the first image continue to stay below a set second threshold, the investigation target is determined to be dry and the process of identifying a defect is terminated. In this way, when there is no defect area, the process ends automatically and the investigation can be carried out efficiently.
 <<Capturing the second image>>
 First, the capturing of the second image by the imaging device 10 will be described. The second image is an image of the structure under investigation captured at a second wavelength. The second wavelength is preferably a wavelength outside the absorption band of the defect under investigation. Specifically, the second image is preferably captured at a wavelength at which the absorbance of the water leakage (water) to be detected is lower than at the first wavelength. By capturing the second image at such a second wavelength, it is possible to prevent the exposure adjustment from being affected by light absorption by the water leakage to be detected.
 FIG. 7 is a diagram conceptually showing the first images and second images captured by the imaging device 10. Although FIG. 7 schematically shows the case in which a first image and a second image are acquired at each of time t1 and time t2, more first images and second images may be acquired.
 At time t1, the second image Q1 and the first image P1 are captured by the imaging device 10. Likewise, at time t2, the second image Q2 and the first image P2 are captured by the imaging device 10.
 At time t1, the first image P1 is captured immediately after the second image Q1 is captured, or substantially immediately from the viewpoint of exposure adjustment, and the second image Q1 and the first image P1 are captured under the same shooting conditions. Likewise, at time t2, the first image P2 is captured immediately, or substantially immediately from the viewpoint of exposure adjustment, after the second image Q2 is captured, and the second image Q2 and the first image P2 are captured under the same shooting conditions. The exposure adjustment between the second image Q1 and the second image Q2 is preferably performed with the diaphragm 14 and the gain of the imaging element 16 while keeping the exposure time constant. This suppresses the influence of variations in the sensitivity of the imaging element 16 caused by the exposure time.
 The second image Q1 and the second image Q2 are captured at a uniform brightness through exposure adjustment. Furthermore, the second image Q1 and the first image P1 are captured under the same shooting conditions, as are the second image Q2 and the first image P2. The first image P1 and the first image P2 are therefore captured at the same brightness, so the first evaluation values can be obtained with high accuracy and the defect area can be identified more accurately.
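 One simple way to equalise brightness across the second images, consistent with the constant-exposure-time preference above, is to scale the sensor gain by the ratio of a target mean level to the measured mean level. This is an illustrative sketch only; the gain-ratio rule, the gain cap, and all numbers are assumptions, not details from the text.

```python
# Illustrative sketch: compute the sensor gain for the next exposure so the
# measured mean luminance moves toward a target level, with exposure time
# held fixed.  The proportional rule and max_gain cap are invented here.

def next_gain(current_gain, measured_mean, target_mean, max_gain=16.0):
    """Return the gain to use for the next shot."""
    if measured_mean <= 0:
        return max_gain          # scene too dark to estimate; use maximum
    return min(max_gain, current_gain * target_mean / measured_mean)

# Second image Q2 came out at mean 60 with gain 2.0; Q1's mean was 120:
print(next_gain(2.0, measured_mean=60, target_mean=120))  # → 4.0
```

Doubling the gain here doubles the expected mean, bringing Q2 to the same brightness as Q1 before the matching first image is taken.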
 <<Determination of the dry state>>
 Next, the determination of the dry state will be described. The defect area identification unit 105 determines the dry state from the first evaluation value of each region, and ends the process of identifying a defect area when it determines that the structure under inspection is dry. Specifically, when all the first evaluation values continue to stay below a second threshold, the defect area identification unit 105 terminates the defect area detection processing performed by the defect identification processing unit 30.
 FIG. 8 is a diagram showing the temporal change of the reflectance of one region of the first image and of the rate of change of that reflectance. In this example, the investigation target is a structure installed outdoors, and the water on the structure's surface gradually evaporates over time.
 The line L11 indicates the reflectance and the line L12 indicates the rate of change. As the line L11 shows, the water on the surface of the structure evaporates over time and the reflectance rises. As the line L11 also shows, after a certain amount of time has passed, the amount of water on the surface of the structure has decreased through evaporation and the rise in reflectance becomes gradual.
 As indicated by the line L12, after a certain amount of time has passed, the change rate remains below the second threshold. In this state, the defect area identification unit 105 determines that the region is dry. When it determines that all of the plurality of regions of the first image are dry, the defect identification processing unit 30 terminates its defect identification processing. Thus, when all regions are normal, the defect area identification unit 105 can automatically determine that there is no defect, and an efficient investigation can be carried out.
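 The drying test described above can be sketched as a simple termination check. Interpreting "continues for the second period" as the last N consecutive observations, along with the thresholds and sample values, is an assumption made for this illustration.

```python
# Hedged sketch of the drying test: the survey ends once every region's
# change rate has stayed below a second threshold for a second period
# (here, a user-chosen number of consecutive observations).

def is_dry(rate_history, second_threshold, second_period):
    """rate_history: one list of per-region change rates per observation.
    True when, over the last `second_period` observations, every region's
    rate was below second_threshold."""
    if len(rate_history) < second_period:
        return False
    recent = rate_history[-second_period:]
    return all(r < second_threshold for rates in recent for r in rates)

history = [
    [0.20, 0.18, 0.21],   # still drying: rates are large
    [0.03, 0.02, 0.04],   # nearly dry
    [0.01, 0.02, 0.01],   # dry
]
print(is_dry(history, second_threshold=0.05, second_period=2))  # → True
```

With `second_period=3` the first observation's large rates would keep the result `False`, so the investigation would continue.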
 FIG. 9 is a flowchart showing an imaging method using the imaging device 10 of this embodiment. Each step of this image processing method is executed by the CPU 40, or a CPU (not shown) mounted in the defect identification processing unit 30, executing a program. The following description follows the specific example explained with FIG. 7.
 First, the imaging device 10 captures the second image Q1 at time t1 (step S20). The imaging device 10 then captures the first image P1 under the same shooting conditions as those used for the second image Q1 (step S21). Next, the imaging device 10 captures the second image Q2 at time t2 (step S22), adjusting the exposure so that the second image Q2 has the same brightness as the second image Q1. The imaging device 10 then captures the first image P2 under the same shooting conditions as those used for the second image Q2 (step S23).
 The defect identification processing unit 30 then acquires the first image P1 and the first image P2 (step S24). The defect identification processing unit 30 also acquires the second image Q1 and the second image Q2 (step S25).
 The evaluation value calculation unit 103 then divides each of the acquired first image P1, first image P2, second image Q1, and second image Q2 into a plurality of regions and calculates the luminance change rate of each region (step S26). Specifically, the evaluation value calculation unit 103 calculates the luminance change rates (first evaluation values) of the plurality of regions between the first image P1 and the first image P2, and the luminance change rates (second evaluation values) of the plurality of regions between the second image Q1 and the second image Q2. The defect area identification unit 105 then identifies a defect area based on the change rates of the first images and the change rates of the second images (step S27). Specifically, the defect area identification unit 105 calculates the difference or ratio between the change rate of the first images and the change rate of the second images, and determines whether there is a region in which this calculated change rate (corrected change rate) remains lower than the image average brightness change rate by the first threshold or more for the first period. If there is such a region, it is identified as a defect area (step S28).
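 The corrected change rate can be illustrated with the difference form, one of the two options (difference or ratio) the text names. The idea that the second wavelength's rate acts as a reference that cancels brightness drift common to both wavelengths is the motivation suggested by the text; the numbers below are invented for the sketch.

```python
# Sketch of the corrected change rate: the second image's per-region change
# rate (little affected by water absorption) serves as a reference, and
# subtracting it from the first image's rate cancels drift common to both.

def corrected_change_rates(first_rates, second_rates):
    """Per-region corrected change rate: first-wavelength rate minus
    second-wavelength rate."""
    return [f - s for f, s in zip(first_rates, second_rates)]

# Both wavelengths drifted brighter by 0.10; only the first wavelength also
# saw region index 1 stay dark (water absorption at a leak):
first  = [0.40, 0.12, 0.38]
second = [0.10, 0.10, 0.10]
print(corrected_change_rates(first, second))
```

After correction the leaking region's rate (about 0.02) stands clearly below the others (about 0.30 and 0.28), so the threshold test of step S28 operates on drift-free values.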
 After that, it is determined whether the corrected change rate has remained below the second threshold in all regions for a second period (a fixed period in the figure) (step S29). If the corrected change rate has remained below the second threshold in all regions for the second period, the defect area identification unit 105 judges that all regions are normal and terminates the process of identifying a defect area. If, on the other hand, the corrected change rate is equal to or greater than the second threshold, a further second image is captured and the investigation continues. The second threshold and the second period are values set arbitrarily by the user.
 なお、上述では、変化率を第1評価値と第2評価値から算出される補正変化率が用いられている例について説明した。本実施形態の例はこれに限定されず、第1の実施形態で説明したように第1評価値を用いて、不具合領域の特定処理が行われる。 In the above description, an example was described in which the corrected change rate calculated from the first evaluation value and the second evaluation value is used as the change rate. The present embodiment is not limited to this; the defect-area identification process may also be performed using the first evaluation value, as described in the first embodiment.
 以上で説明したように、本実施形態では、第1画像を撮影する前に第2画像が撮影して明るさ調整を行うことにより統一された明るさで第1画像を撮影することができる。これにより、本実施形態はより精度の高い不具合領域の特定を行うことができる。また、本実施形態では、第1画像の全ての領域の変化率が第2閾値未満である状態が続く場合には、調査対象は乾燥していると判定して、不具合箇所を特定する処理を終了させる。これにより、不具合領域が無い場合に、自動的に処理を終了して効率的な調査を行うことができる。 As described above, in the present embodiment, the second image is captured and the brightness is adjusted before the first image is captured, so the first images can be captured with uniform brightness. This allows the present embodiment to identify the defect area with higher accuracy. Further, in the present embodiment, when the rate of change remains less than the second threshold in all regions of the first image, it is determined that the investigation target is dry and the process of identifying the defect location is terminated. As a result, when there is no defect area, the process ends automatically and the investigation can be carried out efficiently.
 <変形例1>
 次に、本発明の変形例1に関して説明する。調査対象が水を吸収した際の広がり方が部材によって変わることがある。そこで本例では、評価値算出部103が第1画像を分割して複数の領域を設定する場合に、調査対象の部材によって分割数や分割サイズを変更する。これにより、部材による浸水時の水の広がり方の差を考慮して第1評価値の算出が行われて不具合領域の特定する処理を行うことができ、不具合領域の特定精度が向上する。
<Modification 1>
Next, Modification 1 of the present invention will be described. The way water spreads when the investigation target absorbs it can vary with the material. Therefore, in this example, when the evaluation value calculation unit 103 divides the first image to set a plurality of regions, the number of divisions and the division size are changed according to the material of the investigation target. As a result, the first evaluation value is calculated and the defect area identified while taking into account how differently water spreads through different materials, which improves the accuracy of identifying the defect area.
 図10は、評価値算出部103が行う第1画像の複数の領域の分割に関して説明する図である。 FIG. 10 is a diagram explaining how the evaluation value calculation unit 103 divides the first image into a plurality of regions.
 第1画像P11は、コンクリートの構造物を調査対象とした場合の第1画像である。コンクリートの構造物は浸水しにくい部材である。したがって、水漏れが発生している場合でも、水が浸水する領域D1は狭い。このような場合には、評価値算出部103は、第1画像P11において領域Sを小さく設定して分割する。 The first image P11 is a first image in which a concrete structure is the investigation target. Water does not easily permeate a concrete structure, so even when a water leak occurs, the area D1 into which water permeates is small. In such a case, the evaluation value calculation unit 103 divides the first image P11 using a small region S.
 第1画像P12は、木材の構造物を調査対象とした場合の第1画像である。木材の構造物は浸水し易い部材である。したがって、水漏れが発生している場合にも、水が浸水してしまう領域D2は広い。このような場合には、評価値算出部103は、第1画像P12において領域Sを大きく設定して分割する。 The first image P12 is a first image in which a wooden structure is the investigation target. Water permeates a wooden structure easily, so when a water leak occurs, the area D2 into which water permeates is large. In such a case, the evaluation value calculation unit 103 divides the first image P12 using a large region S.
 なお、領域の分割数や分割サイズは、調査対象の部材に応じて予めユーザにより入力されて設定される。 It should be noted that the number of divisions and the division size of the area are input and set in advance by the user according to the member to be investigated.
 以上で説明したように、本例では、調査対象を構成する部材に応じて、評価値算出部103で行われる第1画像の分割の領域の数または分割する領域のサイズを変更することができる。これにより、調査対象の部材による浸水時の水の広がり方の差を考慮した不具合領域の特定を行うことができる。 As described above, in this example, the number of regions into which the evaluation value calculation unit 103 divides the first image, or the size of those regions, can be changed according to the material of which the investigation target is made. This makes it possible to identify the defect area while taking into account how differently water spreads through different materials when flooded.
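 The material-dependent division described above can be sketched as a simple lookup. The material names and grid values below are illustrative assumptions (the text only states that concrete calls for small regions and wood for large ones, and that the user sets the values in advance):

```python
# Division settings per material: water permeates concrete only locally,
# so a fine grid (small regions) is used; water spreads widely in wood,
# so a coarse grid (large regions) is used. Values are user-set assumptions.
MATERIAL_GRIDS = {
    "concrete": (16, 16),  # many small regions for narrow wet patches
    "wood": (4, 4),        # few large regions for widely spreading water
}

def grid_for_material(material, default=(8, 8)):
    """Return the (rows, cols) division of the first image for a material."""
    return MATERIAL_GRIDS.get(material, default)
```

 The returned grid would then be passed to the region-splitting step performed by the evaluation value calculation unit 103.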
 <変形例2>
 次に、本発明の変形例2に関して説明する。本例においては、評価値算出部103は、不具合箇所のサイズ、被写体距離、及び焦点距離に応じて、領域Sの大きさを変更する。
<Modification 2>
Next, Modification 2 of the present invention will be described. In this example, the evaluation value calculation unit 103 changes the size of the area S according to the size of the defective portion, the subject distance, and the focal length.
 図11は、評価値算出部103が行う第1画像の複数の領域の分割に関して説明する図である。 FIG. 11 is a diagram explaining how the evaluation value calculation unit 103 divides the first image into a plurality of regions.
 第1画像P13は、被写体距離が遠い場合または短焦点の場合に撮影された第1画像である。また、第1画像P13は、検出したいクラックが小さい場合に撮影された第1画像である。このような場合には、不具合箇所D3が小さく撮影されてしまうので、評価値算出部103は領域Sを小さく設定する。具体的には、検出したい不具合箇所D3が外接するように領域Sを設定する。 The first image P13 is a first image captured when the subject distance is long or the focal length is short. The first image P13 is also a first image captured when the crack to be detected is small. In such a case, the defect location D3 appears small in the image, so the evaluation value calculation unit 103 sets the region S small. Specifically, the region S is set so that it circumscribes the defect location D3 to be detected.
 第1画像P14は、被写体距離が近い場合または長焦点の場合に撮影された第1画像である。また、第1画像P14は、検出したいクラックが大きい場合に撮影された第1画像である。このような場合には、不具合箇所D4が大きく撮影されてしまうので、評価値算出部103は領域Sを大きく設定する。具体的には、検出したい不具合箇所D4が含まれるように、領域Sを設定する。 The first image P14 is a first image captured when the subject distance is short or the focal length is long. The first image P14 is also a first image captured when the crack to be detected is large. In such a case, the defect location D4 appears large in the image, so the evaluation value calculation unit 103 sets the region S large. Specifically, the region S is set so as to include the defect location D4 to be detected.
 なお、領域の分割数や分割サイズは、検出する不具合箇所の大きさに応じて予めユーザにより入力されて設定される。 It should be noted that the number of divisions and the division size of the region are input and set in advance by the user according to the size of the defect location to be detected.
 以上で説明したように、本例では評価値算出部103は、不具合箇所のサイズ、被写体距離、及び焦点距離に応じて、領域Sの大きさを変更する。これにより、不具合領域特定部105において精度良く不具合領域を特定することができる。 As described above, in this example, the evaluation value calculation unit 103 changes the size of the area S according to the size of the defect location, the subject distance, and the focal length. As a result, the defect area identifying unit 105 can accurately identify the defect area.
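 The dependence of the region size on defect size, subject distance, and focal length follows directly from projective geometry. A minimal sketch under a pinhole-camera model is shown below; the pixel pitch and the rounding rule are assumptions not stated in the text:

```python
def region_size_px(defect_size_m, subject_distance_m, focal_length_m,
                   pixel_pitch_m=3.45e-6):
    """Estimate how many pixels the smallest defect of interest spans on
    the sensor (pinhole model: image size = object size * f / distance);
    the region S can then be set to roughly circumscribe it. The default
    pixel pitch is an assumed sensor value, not taken from the text."""
    projected_m = defect_size_m * focal_length_m / subject_distance_m
    return max(1, round(projected_m / pixel_pitch_m))
```

 A 1 cm crack seen from 10 m with a 50 mm lens projects to only a dozen or so pixels, so the region S must be small; the same crack seen from 2 m, or a 10 cm crack, projects far larger and calls for a larger region.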
 <変形例3>
 次に、本発明の変形例3に関して説明する。本例においては、不具合領域特定部105で不具合領域を特定する場合に使用される第1閾値を動的に変化させる。
<Modification 3>
Next, Modification 3 of the present invention will be described. In this example, the first threshold used when the defect area identifying unit 105 identifies the defect area is dynamically changed.
 明るさの変化率に対する閾値である第1閾値が一定であると、各領域の変化率が非線形に変動する場合に、不具合領域の特定処理が適切に実施されない可能性がある。また、第1閾値が一定の場合には、不具合領域の特定処理に時間を要してしまう場合がある。そこで、本例では第1閾値を一定値にせず、動的に変化させることで適切に短い時間で不具合箇所の特定を行う。 If the first threshold, i.e., the threshold applied to the rate of change in brightness, is held constant, the defect-area identification process may not be performed appropriately when the rate of change of each region varies non-linearly. A constant first threshold may also make the identification process take a long time. Therefore, in this example the first threshold is not fixed but is changed dynamically, so that the defect location can be identified appropriately and in a short time.
 図12は、変動する第1閾値の例として、変化率に応じて変動する第1閾値を示す図である。左側の縦軸では反射率が示され、右側の縦軸では変化率が示され、横軸では時刻が示されている。 FIG. 12 is a diagram showing, as an example of the first threshold that fluctuates, the first threshold that fluctuates according to the rate of change. The left vertical axis indicates reflectance, the right vertical axis indicates rate of change, and the horizontal axis indicates time.
 線L110及び線L111は、領域Sの反射率を示す図である。線L110は不具合領域として特定される領域の反射率の時間的な変化を示し、線L111は正常な領域の反射率の時間的な変化を示す。線L110及び線L111に示すように、不具合領域として特定される領域の反射率は、正常な領域の反射率に比べて増大するまでに時間を要する。 Lines L110 and L111 show the reflectance of the region S. Line L110 shows the temporal change in reflectance of a region identified as a defect area, and line L111 shows that of a normal region. As lines L110 and L111 show, the reflectance of a region identified as a defect area takes longer to increase than that of a normal region.
 線L120及び線L121は、領域Sの反射率の変化率を示す図である。線L120は不具合領域として特定される領域の変化率の時間的な変化を示し、線L121は正常な領域の変化率の時間的な変化を示す。線L120及び線L121に示すように、不具合領域として特定される領域の反射率の変化率は、時間の経過と共に徐々に上がっていき、その後急激に上がる。 Lines L120 and L121 show the rate of change of the reflectance of the region S. Line L120 shows the temporal change in the rate of change of a region identified as a defect area, and line L121 shows that of a normal region. As lines L120 and L121 show, the rate of change of the reflectance of a region identified as a defect area rises gradually over time and then rises sharply.
 線L101は、第1閾値を示す図である。変動する第1閾値は、第1画像の全ての領域の変化率の平均(画像平均明るさ変化率)や、第1画像の全ての領域の変化率の中央値などとしてもよい。また、第1閾値として、第1画像の全ての領域の変化率の平均(画像平均明るさ変化率)や、第1画像の全ての領域の変化率の中央値などに基づいた値が採用されてもよい。また、第1閾値として、第1画像の全ての領域の変化率の平均(画像平均明るさ変化率)や、第1画像の全ての領域の変化率の中央値に0よりも大きく1よりも小さい係数をかけた値が採用されてもよい。 Line L101 shows the first threshold. The varying first threshold may be, for example, the average of the change rates of all regions of the first image (the image average brightness change rate) or the median of the change rates of all regions of the first image. A value based on such an average or median may also be adopted as the first threshold. Alternatively, the first threshold may be a value obtained by multiplying the average (the image average brightness change rate) or the median of the change rates of all regions of the first image by a coefficient greater than 0 and less than 1.
 以上で説明したように、不具合領域特定部105で不具合領域を特定する場合に使用される第1閾値を変動させることにより、効率的な不具合領域の特定を行うことができる。 As described above, it is possible to efficiently identify a defective area by varying the first threshold value used when the defective area identifying unit 105 identifies the defective area.
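 The dynamic first threshold described above (mean or median of the current change rates, scaled by a coefficient strictly between 0 and 1) can be sketched as follows. The default coefficient value is an illustrative assumption:

```python
import numpy as np

def dynamic_first_threshold(rates, coeff=0.5, use_median=False):
    """First threshold recomputed from the current frame's change rates:
    the image average brightness change rate (or, alternatively, the
    median of the per-region rates), multiplied by a coefficient greater
    than 0 and less than 1. The default coefficient is an assumption."""
    if not 0.0 < coeff < 1.0:
        raise ValueError("coefficient must lie strictly between 0 and 1")
    base = np.median(rates) if use_median else np.mean(rates)
    return coeff * base
```

 Recomputing the threshold each time the change rates are updated lets it track the non-linear rise in drying rate shown by lines L120 and L121.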
 <変形例4>
 次に、本発明の変形例4に関して説明する。本例では評価値算出部103は、1つの領域の変化率ではなく、隣接する複数の領域を使って変化率を算出してもよい。
<Modification 4>
Next, Modification 4 of the present invention will be described. In this example, the evaluation value calculation unit 103 may calculate the rate of change using a plurality of adjacent areas instead of the rate of change of one area.
 図13は、本例の評価値算出部103での変化率の算出に関して説明する図である。 FIG. 13 is a diagram explaining the calculation of the rate of change in the evaluation value calculation unit 103 of this example.
 第1画像P15において、評価値算出部103は、不具合箇所Dの領域に合わせて、領域S11、領域S12、及び領域S13での明るさの変化率を算出する。このように評価値算出部103は、一つの領域だけでなく、隣接する領域の明るさの変化率を算出してもよい。なお、評価値算出部103は、上述したように隣接する領域S11~領域S13のように縦長の領域の明るさの変化率を算出してもよいし、横長の領域の明るさの変化率を算出してもよいし、上下左右に隣接する5領域の明るさの変化率を算出してもよいし、上下左右に隣接する5領域に斜めで隣接する領域を加えた9領域の明るさの変化率を算出してもよい。 In the first image P15, the evaluation value calculation unit 103 calculates the rate of change in brightness over the regions S11, S12, and S13 in accordance with the area of the defect location D. In this way, the evaluation value calculation unit 103 may calculate the rate of change in brightness not over a single region but over adjacent regions. The evaluation value calculation unit 103 may calculate the rate of change over a vertically elongated set of regions such as the adjacent regions S11 to S13 described above, over a horizontally elongated set of regions, over the five regions consisting of a region and its neighbors above, below, left, and right, or over the nine regions obtained by adding the diagonally adjacent regions to those five.
 以上で説明したように、評価値算出部103が隣接する領域の明るさの変化率を算出することにより、不具合箇所の形状に傾向がある場合に、それに合わせた領域設定にすることで、よりロバストな不具合領域の特定を行うことができる。 As described above, by having the evaluation value calculation unit 103 calculate the rate of change in brightness over adjacent regions, the regions can be set to match any tendency in the shape of defect locations, enabling more robust identification of defect areas.
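 The neighborhood shapes listed above (vertical strip, horizontal strip, five-region cross, nine-region block) can be sketched as offset sets averaged over a grid of per-region change rates. The function name and the simple averaging are assumptions; the text does not specify how adjacent regions are combined:

```python
import numpy as np

def neighborhood_rate(rates, i, j, mode="vertical3"):
    """Average the change rate over a small neighborhood of region (i, j),
    following the shapes named in the text: a vertical strip of 3, a
    horizontal strip of 3, the 5-region cross (center plus up/down/left/
    right), or the full 3x3 block of 9 regions. Out-of-grid neighbors
    are simply dropped."""
    offsets = {
        "vertical3":   [(-1, 0), (0, 0), (1, 0)],
        "horizontal3": [(0, -1), (0, 0), (0, 1)],
        "cross5":      [(0, 0), (-1, 0), (1, 0), (0, -1), (0, 1)],
        "block9":      [(di, dj) for di in (-1, 0, 1) for dj in (-1, 0, 1)],
    }[mode]
    h, w = rates.shape
    vals = [rates[i + di, j + dj] for di, dj in offsets
            if 0 <= i + di < h and 0 <= j + dj < w]
    return sum(vals) / len(vals)
```

 Choosing `vertical3` for elongated vertical cracks, for example, corresponds to the S11 to S13 arrangement in FIG. 13.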
 <変形例5>
 次に、本発明の変形例5に関して説明する。本例では、撮像装置10にLOG(対数)モードが設けられている場合、評価値算出部103は補正処理を実施する。
<Modification 5>
Next, Modification 5 of the present invention will be described. In this example, when the imaging apparatus 10 is provided with a LOG (logarithmic) mode, the evaluation value calculation unit 103 performs correction processing.
 図14は、補正処理が実施される場合のフローチャートである。 FIG. 14 is a flowchart when correction processing is performed.
 評価値算出部103は、撮像装置10において、LOGモードが起動されているかを判定する(ステップS30)。評価値算出部103は、LOGモードが起動されていると判定した場合には、以下で説明する補正処理を実行する(ステップS31)。 The evaluation value calculation unit 103 determines whether the LOG mode is activated in the imaging device 10 (step S30). When the evaluation value calculation unit 103 determines that the LOG mode is activated, it executes the correction processing described below (step S31).
 図15は、本例の評価値算出部103で行われる補正処理に関して説明する図である。 FIG. 15 is a diagram for explaining correction processing performed by the evaluation value calculation unit 103 of this example.
 撮像装置10には、LOG(対数)モードとLINEAR(線形)モードとを備えている場合が有る。LOGモードの場合、図15(A)に示すように入射光量に対して、出力値が非線形となるように出力される。このような出力値に基づいて本発明の不具合領域の特定の処理を行うと、評価値算出部103で第1評価値の算出及び不具合領域特定部105での処理が複雑化する可能性がある。そこで、撮像装置10においてLOGモードが設定されている場合には、LOGモードを打ち消すような処理を評価値算出部103は行う。具体的には、図15(B)に示されているような、カメラからの出力値に対し補正後の値を出力するような補正を行う。これにより、図15(C)で示すようにカメラへの入射光量に対して線形となる出力値が出力される。これにより、LOGモードが起動された場合であっても、図15(A)で示すような非線形な画像処理の、不具合領域の特定処理への影響を抑制することができ、不具合領域の特定処理の実用性が向上される。 The imaging device 10 may have both a LOG (logarithmic) mode and a LINEAR mode. In the LOG mode, as shown in FIG. 15(A), the output value is non-linear with respect to the amount of incident light. If the defect-area identification process of the present invention is performed on such output values, the calculation of the first evaluation value by the evaluation value calculation unit 103 and the processing by the defect area identifying unit 105 may become complicated. Therefore, when the LOG mode is set in the imaging device 10, the evaluation value calculation unit 103 performs processing that cancels the LOG mode. Specifically, as shown in FIG. 15(B), a correction is applied that outputs a corrected value for each output value from the camera. As a result, an output value that is linear with respect to the amount of light incident on the camera is obtained, as shown in FIG. 15(C). Thus, even when the LOG mode is activated, the influence of the non-linear image processing shown in FIG. 15(A) on the defect-area identification process can be suppressed, improving the practicality of the process.
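 The cancellation in FIG. 15(B) is simply the inverse of the camera's LOG curve. The sketch below uses an assumed logarithmic curve for illustration only; a real camera's published LOG profile (which this text does not specify) would be inverted instead:

```python
import math

def log_encode(x, gain=5.0):
    """Assumed LOG-mode camera curve (illustrative, not the actual profile
    of the imaging device 10): maps linear light x in [0, 1] to a
    log-domain output value in [0, 1]."""
    return math.log(1.0 + gain * x) / math.log(1.0 + gain)

def linearize(v, gain=5.0):
    """Inverse correction applied when LOG mode is active, so that the
    evaluation values are computed on data linear in incident light,
    as in FIG. 15(C)."""
    return (math.exp(v * math.log(1.0 + gain)) - 1.0) / gain
```

 Applying `linearize` to every LOG-mode pixel value restores the linear relationship between incident light and the values the change-rate calculation operates on.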
 <変形例6>
 次に本発明の変形例6に関して説明する。本例では撮像装置10に備えられている投光器21を利用して、不具合領域の特定処理を行う。
<Modification 6>
Next, Modification 6 of the present invention will be described. In this example, using the light projector 21 provided in the imaging device 10, processing for identifying the defective area is performed.
 明るさの変化率が第3期間(図では一定期間と記載する)に第3閾値(図では閾値と記載する)よりも低い状態が続いている場合は高湿度や低温等の要因により乾燥が進んでいかない状況と考えられる。したがって、この場合には、投光器21を調査対象に照射することにより乾燥を促進させる。このように、投光器21を利用することにより短時間での不具合領域の特定が可能となる。 If the rate of change in brightness remains lower than the third threshold (denoted simply as the threshold in the figure) for the third period (denoted as a fixed period in the figure), it is considered that drying is not progressing due to factors such as high humidity or low temperature. In that case, drying is accelerated by illuminating the investigation target with the light projector 21. Using the light projector 21 in this way makes it possible to identify the defect area in a shorter time.
 図16は、投光器21の利用開始のフローチャートである。 FIG. 16 is a flow chart for starting to use the projector 21. FIG.
 評価値算出部103は、第3期間、明るさの変化率が第3閾値未満であるか否かの判定を行う(ステップS41)。変化率が第3閾値未満である状態が続いている場合には、評価値算出部103は、CPU40を介して投光器21の電源をONさせ、投光器21が使用される(ステップS42)。 The evaluation value calculation unit 103 determines whether or not the brightness change rate is less than the third threshold during the third period (step S41). If the rate of change remains below the third threshold, the evaluation value calculator 103 turns on the light projector 21 via the CPU 40, and the light projector 21 is used (step S42).
 図17は、投光器21が使用される場合のタイミングに関して説明する図である。左側の縦軸に反射率、右側の縦軸に変化率、横軸に時間を示した図である。 FIG. 17 is a diagram explaining the timing when the light projector 21 is used. The left vertical axis is the reflectance, the right vertical axis is the rate of change, and the horizontal axis is time.
 図17に示すように、第3期間に第3閾値より低い変化率が継続する場合には、投光器21の使用を開始する。投光器21の使用を開始することにより、反射率及び変化率が急激に向上する(線L130及び線L131を参照)。これにより、短時間での不具合領域の特定を実現することができる。なお、図示した第3閾値および第3期間は、調査対象及び調査環境によって適宜設定される。 As shown in FIG. 17, when a rate of change lower than the third threshold persists through the third period, use of the light projector 21 is started. Once the light projector 21 is in use, the reflectance and the rate of change rise sharply (see lines L130 and L131), which makes it possible to identify the defect area in a short time. The illustrated third threshold and third period are set appropriately according to the investigation target and the investigation environment.
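 The trigger condition of steps S41 and S42 — the change rate staying below the third threshold for the whole third period — might be sketched as follows, with the period expressed as a count of consecutive samples (an assumption; the text leaves the time base unspecified):

```python
def should_use_projector(rate_history, third_threshold, third_period):
    """True when the brightness change rate has stayed below the third
    threshold for the last `third_period` consecutive samples, i.e.
    drying has stalled and the light projector 21 should be turned on."""
    if len(rate_history) < third_period:
        return False  # not enough observations to cover the third period
    return all(r < third_threshold for r in rate_history[-third_period:])
```

 The evaluation value calculation unit 103 would evaluate this condition each time a new change rate is computed and, when it holds, power the projector on via the CPU 40.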
 次に、投光器21を利用する別の態様に関して説明する。 Next, another aspect of using the light projector 21 will be described.
 例えば、調査対象に照らされている光量が第4閾値よりも低い場合には、投光器21で調査対象を照らすことにより受光される光量を増やす。これにより、撮像素子16の感度不足及びノイズの影響を抑制して、不具合領域の特定精度を改善することができる。したがって、評価値算出部103は、取得した第1画像において光量が第4閾値よりも低い場合には、CPU40を介して投光器21の使用を開始する。なお、第4閾値は、ユーザにより適宜設定される値である。 For example, if the amount of light illuminating the investigation target is lower than the fourth threshold, the amount of light received by illuminating the investigation target with the light projector 21 is increased. As a result, the lack of sensitivity of the imaging element 16 and the influence of noise can be suppressed, and the accuracy of identifying the defective area can be improved. Therefore, the evaluation value calculation unit 103 starts using the light projector 21 via the CPU 40 when the light amount in the acquired first image is lower than the fourth threshold. Note that the fourth threshold is a value appropriately set by the user.
 次に、投光器21を利用する別の態様に関して説明する。 Next, another aspect of using the light projector 21 will be described.
 異なる時刻で第2画像を撮影する際に、日照条件などが変化する事によって、以下で説明するように画像内の場所によって明るさが変化してしまう場合がある。 When shooting the second image at different times, the brightness may change depending on the location in the image as described below due to changes in the lighting conditions.
 図18は、画像内において明るさが変化してしまう場合を説明する図である。 FIG. 18 is a diagram explaining a case where the brightness changes in the image.
 第2画像P31は時刻t1に撮影された画像である。また、第2画像P32は時刻t2に撮影された画像である。第2画像P32は、露出調整後の画像であるが、撮影時刻が異なるために日照条件が異なり、画像内に一定以上差分の有る領域がある。このような場合に、投光器21を利用する。 The second image P31 is an image captured at time t1, and the second image P32 is an image captured at time t2. Although the second image P32 has undergone exposure adjustment, the sunshine conditions differ because the capture times differ, so some regions differ in brightness by a certain amount or more between the images. The light projector 21 is used in such cases.
 図19は、投光器21の利用開始のフローチャートである。 FIG. 19 is a flow chart for starting to use the projector 21. FIG.
 評価値算出部103は、第2画像内で第5閾値(図では閾値と記載する)以上差分がある領域が有るか否かを判定する(ステップS50)。第2画像内で第5閾値以上差分がある領域が有る場合には、投光器21を使用する(ステップS51)。これにより、撮影時刻が異なることにより、日照条件が変化した場合であっても、不具合領域の特定の精度を維持することができる。なお、上述した第5閾値は、ユーザにより適宜設定される値である。 The evaluation value calculation unit 103 determines whether or not there is an area having a difference equal to or greater than a fifth threshold (referred to as a threshold in the figure) in the second image (step S50). If there is an area with a difference equal to or greater than the fifth threshold in the second image, the projector 21 is used (step S51). This makes it possible to maintain the accuracy of specifying the defect area even when the sunshine conditions change due to different shooting times. In addition, the fifth threshold value described above is a value appropriately set by the user.
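 The two projector triggers described above — insufficient light (fourth threshold) and location-dependent brightness differences between second images taken at different times (fifth threshold) — can be combined in one check. The function name and the use of a per-region difference map are illustrative assumptions:

```python
import numpy as np

def projector_needed(mean_light, diff_map, fourth_threshold, fifth_threshold):
    """True when the scene is too dark (mean received light below the
    fourth threshold) or when two second images taken at different times
    differ somewhere by the fifth threshold or more (changed sunshine
    conditions). Both thresholds are user-set values."""
    too_dark = mean_light < fourth_threshold
    uneven = bool((np.abs(diff_map) >= fifth_threshold).any())
    return too_dark or uneven
```

 Here `diff_map` is the per-region brightness difference between the two second images (e.g., P31 and P32 in FIG. 18) after exposure adjustment.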
 上記実施形態において、各種の処理を実行する処理部(例えば、画像取得部101、評価値算出部103、及び不具合領域特定部105)(processing unit)のハードウェア的な構造は、次に示すような各種のプロセッサ(processor)である。各種のプロセッサには、ソフトウェア(プログラム)を実行して各種の処理部として機能する汎用的なプロセッサであるCPU(Central Processing Unit)、FPGA(Field Programmable Gate Array)などの製造後に回路構成を変更可能なプロセッサであるプログラマブルロジックデバイス(Programmable Logic Device:PLD)、ASIC(Application Specific Integrated Circuit)などの特定の処理を実行させるために専用に設計された回路構成を有するプロセッサである専用電気回路などが含まれる。 In the above embodiment, the hardware structure of the processing units that execute the various processes (for example, the image acquisition unit 101, the evaluation value calculation unit 103, and the defect area identifying unit 105) is one of the following various processors: a CPU (Central Processing Unit), which is a general-purpose processor that executes software (a program) to function as the various processing units; a programmable logic device (PLD) such as an FPGA (Field Programmable Gate Array), which is a processor whose circuit configuration can be changed after manufacture; or a dedicated electric circuit, which is a processor, such as an ASIC (Application Specific Integrated Circuit), having a circuit configuration designed exclusively for executing specific processing.
 1つの処理部は、これら各種のプロセッサのうちの1つで構成されていてもよいし、同種または異種の2つ以上のプロセッサ(例えば、複数のFPGA、あるいはCPUとFPGAの組み合わせ)で構成されてもよい。また、複数の処理部を1つのプロセッサで構成してもよい。複数の処理部を1つのプロセッサで構成する例としては、第1に、クライアントやサーバなどのコンピュータに代表されるように、1つ以上のCPUとソフトウェアの組合せで1つのプロセッサを構成し、このプロセッサが複数の処理部として機能する形態がある。第2に、システムオンチップ(System On Chip:SoC)などに代表されるように、複数の処理部を含むシステム全体の機能を1つのIC(Integrated Circuit)チップで実現するプロセッサを使用する形態がある。このように、各種の処理部は、ハードウェア的な構造として、上記各種のプロセッサを1つ以上用いて構成される。 One processing unit may be configured by one of these various processors, or by two or more processors of the same type or different types (for example, a plurality of FPGAs, or a combination of a CPU and an FPGA). A plurality of processing units may also be configured by a single processor. As a first example of configuring a plurality of processing units with one processor, as typified by computers such as clients and servers, one processor may be configured by a combination of one or more CPUs and software, and this processor may function as the plurality of processing units. As a second example, as typified by a system on chip (SoC), a processor may be used that realizes the functions of the entire system, including the plurality of processing units, with a single IC (Integrated Circuit) chip. In this way, the various processing units are configured, as a hardware structure, using one or more of the above various processors.
 さらに、これらの各種のプロセッサのハードウェア的な構造は、より具体的には、半導体素子などの回路素子を組み合わせた電気回路(circuitry)である。 Furthermore, the hardware structure of these various processors is, more specifically, an electrical circuit that combines circuit elements such as semiconductor elements.
 上述の各構成及び機能は、任意のハードウェア、ソフトウェア、或いは両者の組み合わせによって適宜実現可能である。例えば、上述の処理ステップ(処理手順)をコンピュータに実行させるプログラム、そのようなプログラムを記録したコンピュータ読み取り可能な記録媒体(非一時的記録媒体)、或いはそのようなプログラムをインストール可能なコンピュータに対しても本発明を適用することが可能である。 Each of the configurations and functions described above can be realized as appropriate by any hardware, software, or a combination of both. For example, the present invention can also be applied to a program that causes a computer to execute the processing steps (processing procedures) described above, to a computer-readable recording medium (non-transitory recording medium) on which such a program is recorded, or to a computer on which such a program can be installed.
 以上で本発明の例に関して説明してきたが、本発明は上述した実施の形態に限定されず、本発明の趣旨を逸脱しない範囲で種々の変形が可能であることは言うまでもない。 Although the examples of the present invention have been described above, it goes without saying that the present invention is not limited to the above-described embodiments, and that various modifications are possible without departing from the gist of the present invention.
10  :撮像装置
12  :撮像レンズ
14  :絞り
16  :撮像素子
20  :外部機器接続端子
21  :投光器
22  :画像入力コントローラ
24  :画像処理部
28  :ビデオエンコーダ
30  :不具合特定処理部
32  :センサ駆動部
33  :シャッタ駆動部
34  :絞り駆動部
36  :レンズ駆動部
38  :操作部
40  :CPU
47  :ROM
48  :メモリ
101 :画像取得部
103 :評価値算出部
105 :不具合領域特定部
10 : Imaging device 12 : Imaging lens 14 : Aperture 16 : Imaging element 20 : External device connection terminal 21 : Projector 22 : Image input controller 24 : Image processing unit 28 : Video encoder 30 : Defect identification processing unit 32 : Sensor driving unit 33 : Shutter drive unit 34 : Aperture drive unit 36 : Lens drive unit 38 : Operation unit 40 : CPU
47: ROM
48: memory 101: image acquisition unit 103: evaluation value calculation unit 105: defect area identification unit

Claims (17)

  1.  構造物の不具合箇所を特定する、プロセッサを備える画像処理装置であって、
     前記プロセッサは、
     水の吸収帯域を持つ第1波長の前記構造物の第1画像であって、異なる時刻で取得された少なくとも2枚以上の前記第1画像において、画像内の複数の領域における明るさの変化に関する第1評価値を算出し、
     前記第1評価値に基づいて、前記複数の領域のうち前記不具合箇所を含む不具合領域を特定する、
     画像処理装置。
    An image processing device comprising a processor, for identifying a defect location of a structure, wherein
    the processor
    calculates, for at least two or more first images of the structure acquired at different times at a first wavelength having a water absorption band, a first evaluation value relating to a change in brightness in a plurality of regions within the images, and
    identifies, based on the first evaluation value, a defect region including the defect location among the plurality of regions.
  2.  前記第1評価値は、前記複数の領域における明るさの変化率である請求項1に記載の画像処理装置。 The image processing apparatus according to claim 1, wherein the first evaluation value is a brightness change rate in the plurality of areas.
  3.  前記明るさの変化率のうち少なくとも一つは、時間の経過に沿って増大する方向に変化する請求項2に記載の画像処理装置。 The image processing apparatus according to claim 2, wherein at least one of the rate of change in brightness changes in an increasing direction over time.
  4.  前記明るさの変化率は、前記複数の領域における前記明るさの変化率を加重平均して算出する加重平均明るさ変化率である請求項2又は3に記載の画像処理装置。 The image processing apparatus according to claim 2 or 3, wherein the rate of change in brightness is a weighted average rate of change in brightness calculated by weighted averaging the rates of change in brightness in the plurality of regions.
  5.  前記プロセッサは、
     前記第1画像の前記明るさの変化率を平均した画像平均明るさ変化率を算出し、
     前記画像平均明るさ変化率よりも第1閾値以上低い前記加重平均明るさ変化率が第1期間続いている前記領域を前記不具合領域として特定する請求項4に記載の画像処理装置。
    The processor
    calculating an image average brightness change rate by averaging the brightness change rate of the first image;
    5. The image processing apparatus according to claim 4, wherein the area in which the weighted average brightness change rate lower than the image average brightness change rate by a first threshold or more continues for a first period is specified as the defective area.
  6.  前記プロセッサは、前記第1閾値を時間経過に応じて変更させる請求項5に記載の画像処理装置。 The image processing apparatus according to claim 5, wherein the processor changes the first threshold over time.
  7.  前記プロセッサは、全ての前記第1評価値が第2閾値未満の状態が継続している場合には、前記不具合箇所を特定することを終了する請求項5又は6に記載の画像処理装置。 The image processing apparatus according to claim 5 or 6, wherein the processor terminates identifying the defective portion when all the first evaluation values continue to be less than the second threshold value.
  8.  前記プロセッサは、
     前記第1波長とは異なる第2波長の前記構造物の第2画像であって、前記第1画像に対応した少なくとも2枚以上の前記第2画像において、
     前記第1画像に対応した複数の領域における明るさの変化に関する第2評価値を算出し、前記第1評価値及び前記第2評価値に基づいて、前記不具合領域を特定する請求項1から7のいずれか1項に記載の画像処理装置。
    The processor
    calculates, for at least two or more second images of the structure at a second wavelength different from the first wavelength, the second images corresponding to the first images, a second evaluation value relating to a change in brightness in a plurality of regions corresponding to those of the first images, and
    identifies the defect region based on the first evaluation value and the second evaluation value. The image processing device according to any one of claims 1 to 7.
  9.  前記第2画像は、対応する前記第1画像と同じ撮影条件で取得されており、
     前記第2画像は、前記第1画像との明るさが少なくなるように撮影されている請求項8に記載の画像処理装置。
    The second image is acquired under the same shooting conditions as the corresponding first image,
    9. The image processing apparatus according to claim 8, wherein the second image is shot so as to be less bright than the first image.
  10.  前記プロセッサは、
     前記第1画像を前記複数の領域に分割し、
     前記複数の領域の分割数や分割サイズを変更する請求項1から9のいずれか1項に記載の画像処理装置。
    The processor
    dividing the first image into the plurality of regions;
    10. The image processing apparatus according to any one of claims 1 to 9, wherein the division number and division size of the plurality of regions are changed.
  11.  前記プロセッサは、
     複数の前記領域の前記第1評価値に基づいて、前記不具合領域を特定する請求項1から10のいずれか1項に記載の画像処理装置。
    The processor
    The image processing apparatus according to any one of claims 1 to 10, wherein the defect area is specified based on the first evaluation values of the plurality of areas.
  12.  請求項1から11のいずれか1項に記載の画像処理装置を備える撮像装置。 An imaging device comprising the image processing device according to any one of claims 1 to 11.
  13.  請求項1から11のいずれか1項に記載の画像処理装置を備えるカメラシステム。 A camera system comprising the image processing device according to any one of claims 1 to 11.
  14.  前記構造物に光を照射する投光器を備える請求項13に記載のカメラシステム。 The camera system according to claim 13, comprising a projector for irradiating the structure with light.
  15.  構造物の不具合箇所を特定する、プロセッサを備える画像処理装置の画像処理方法であって、
     前記プロセッサにより、
     水の吸収帯域を持つ第1波長の前記構造物の第1画像であって、異なる時刻で取得された少なくとも2枚以上の前記第1画像において、画像内の複数の領域における明るさの変化に関する第1評価値を算出する工程と、
     前記第1評価値に基づいて、前記複数の領域のうち前記不具合箇所を含む不具合領域を特定する工程と、
     が行われる画像処理方法。
    An image processing method of an image processing device comprising a processor, for identifying a defect location of a structure, wherein the processor performs:
    a step of calculating, for at least two or more first images of the structure acquired at different times at a first wavelength having a water absorption band, a first evaluation value relating to a change in brightness in a plurality of regions within the images; and
    a step of identifying, based on the first evaluation value, a defect region including the defect location among the plurality of regions.
  16.  構造物の不具合箇所を特定する、プロセッサを備える画像処理装置に画像処理方法を行わせるプログラムであって、
     前記プロセッサに、
     水の吸収帯域を持つ第1波長の前記構造物の第1画像であって、異なる時刻で取得された少なくとも2枚以上の前記第1画像において、画像内の複数の領域における明るさの変化に関する第1評価値を算出する工程と、
     前記第1評価値に基づいて、前記複数の領域のうち前記不具合箇所を含む不具合領域を特定する工程と、
     を行わせるプログラム。
    A program for causing an image processing device comprising a processor to perform an image processing method of identifying a defect location of a structure, the program causing the processor to perform:
    a step of calculating, for at least two or more first images of the structure acquired at different times at a first wavelength having a water absorption band, a first evaluation value relating to a change in brightness in a plurality of regions within the images; and
    a step of identifying, based on the first evaluation value, a defect region including the defect location among the plurality of regions.
  17.  非一時的かつコンピュータ読取可能な記録媒体であって、請求項16に記載のプログラムが記録された記録媒体。 A recording medium that is non-temporary and computer-readable, in which the program according to claim 16 is recorded.
PCT/JP2022/031320 2021-09-29 2022-08-19 Image processing device, imaging device, camera system, image processing method, and program WO2023053769A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-159741 2021-09-29
JP2021159741 2021-09-29

Publications (1)

Publication Number Publication Date
WO2023053769A1 true WO2023053769A1 (en) 2023-04-06

Family

ID=85782300

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/031320 WO2023053769A1 (en) 2021-09-29 2022-08-19 Image processing device, imaging device, camera system, image processing method, and program

Country Status (1)

Country Link
WO (1) WO2023053769A1 (en)

Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JPH05164703A (en) * 1991-12-17 1993-06-29 Honda Motor Co Ltd Inspecting method for surface of workpiece
JPH11259656A (en) * 1998-03-10 1999-09-24 Teito Rapid Transit Authority Tunnel wall surface decision device
JP2013068507A (en) * 2011-09-22 2013-04-18 Toppan Printing Co Ltd Defect inspection device
JP2015072204A (en) * 2013-10-03 2015-04-16 パナソニックIpマネジメント株式会社 Monitoring camera and monitoring system
CN108119726A (en) * 2016-11-26 2018-06-05 沈阳新松机器人自动化股份有限公司 For detecting the system of underground large-scale pipeline leak
JP2019184462A (en) * 2018-04-12 2019-10-24 日立グローバルライフソリューションズ株式会社 Water leakage inspection system and water leakage inspection method


Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
YAGI, YASUSHI, KAWATO, SHINJIRO: "Leak Detection for Plant Patrol", JOURNAL OF THE JAPAN SOCIETY OF PRECISION ENGINEERING, vol. 56, no. 8, August 1990 (1990-08-01), pages 1399 - 1402 *

Similar Documents

Publication Publication Date Title
TWI394442B (en) Image pickup apparatus, image pickup method, and program therefor
JP2014042272A (en) White balance calibration for digital camera device
CN111510592A (en) Illumination processing method and device and image pickup device
TWI684365B (en) Camera and method of producing color images
JP2013021658A5 (en)
US20170374260A1 (en) Image processing apparatus, image processing method, and computer-readable recording medium
TW200414759A (en) Image sensor having dual automatic exposure control
TWI387332B (en) A method of backlighting
US10416026B2 (en) Image processing apparatus for correcting pixel value based on difference between spectral sensitivity characteristic of pixel of interest and reference spectral sensitivity, image processing method, and computer-readable recording medium
US10771717B2 (en) Use of IR pre-flash for RGB camera&#39;s automatic algorithms
JP2011119875A (en) Image processing device and image processing method
US10887527B2 (en) Image capture apparatus having illumination section, monitoring system including image capture apparatus, method of controlling image capture apparatus, and storage medium
US7990426B2 (en) Phase adjusting device and digital camera
JP6334976B2 (en) Digital camera with focus detection pixels used for photometry
WO2023053769A1 (en) Image processing device, imaging device, camera system, image processing method, and program
JP2002287039A (en) Photographing device for microscope
JP2015034850A (en) Photographing device and photographing method
JP5875307B2 (en) Imaging apparatus and control method thereof
JP2009141571A (en) Imaging apparatus
US20200036877A1 (en) Use of ir pre-flash for rgb camera&#39;s automatic algorithms
JP2005292784A (en) Photometric device equipped with color measuring function and camera equipped with photometric device
Bellia et al. Calibration procedures of a CCD camera for photometric measurements
TWI767484B (en) Dual sensor imaging system and depth map calculation method thereof
JP2008185821A (en) Photometric device and imaging apparatus
JP2008175739A (en) Wavelength shift amount detecting method

Legal Events

Date Code Title Description
121 EP: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 22875641

Country of ref document: EP

Kind code of ref document: A1