WO2022186306A1 - Fire detecting device, and fire detecting method - Google Patents

Fire detecting device, and fire detecting method

Info

Publication number
WO2022186306A1
Authority
WO
WIPO (PCT)
Prior art keywords
fire
fire detection
data
unit
detection device
Application number
PCT/JP2022/008985
Other languages
French (fr)
Japanese (ja)
Inventor
悠子 川添
正雄 長谷川
Original Assignee
株式会社IHI (IHI Corporation)
Application filed by 株式会社IHI (IHI Corporation)
Priority to JP2023503931A (JPWO2022186306A1)
Publication of WO2022186306A1

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01V GEOPHYSICS; GRAVITATIONAL MEASUREMENTS; DETECTING MASSES OR OBJECTS; TAGS
    • G01V 8/00 Prospecting or detecting by optical means
    • G01V 8/10 Detecting, e.g. by using light barriers
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 21/00 Alarms responsive to a single specified undesired or abnormal condition and not otherwise provided for
    • G PHYSICS
    • G08 SIGNALLING
    • G08B SIGNALLING OR CALLING SYSTEMS; ORDER TELEGRAPHS; ALARM SYSTEMS
    • G08B 25/00 Alarm systems in which the location of the alarm condition is signalled to a central station, e.g. fire or police telegraphic systems
    • G08B 25/01 Alarm systems in which the location of the alarm condition is signalled to a central station, characterised by the transmission medium
    • G08B 25/10 Alarm systems in which the location of the alarm condition is signalled to a central station, characterised by the transmission medium, using wireless transmission systems
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A 40/00 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production
    • Y02A 40/10 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture
    • Y02A 40/28 Adaptation technologies in agriculture, forestry, livestock or agroalimentary production in agriculture specially adapted for farming

Definitions

  • The present disclosure relates to a fire detection device and a fire detection method.
  • This application claims priority based on Japanese Patent Application No. 2021-032432, filed in Japan on March 2, 2021, the content of which is incorporated herein.
  • A known approach detects forest fire smoke using the observation data of artificial satellites. For example, Non-Patent Document 1 discloses a method of generating an RGB pseudo-color composite image from observation data and detecting forest fire smoke from the data of this pseudo-color image.
  • However, atmospheric influences such as the amount of moisture in the atmosphere can make it difficult to distinguish data that indicate the occurrence of a fire from data that do not in the pseudo-color composite image, reducing the accuracy of smoke-based fire detection.
  • The present disclosure has been made in view of such circumstances, and its object is to provide a fire detection device and a fire detection method that can improve the accuracy of fire detection when detecting fires from pseudo-color composite image data.
  • (1) A first aspect of the present disclosure is a fire detection device that detects a fire on the earth using observation data from a radiometer mounted on an artificial satellite, the device comprising: an image generation unit that generates a pseudo-color composite image based on the observation data; an extraction unit that extracts an image of the range corresponding to a fire detection area from the pseudo-color composite image; a processing unit that stretches (expands) the pixel data of each pixel in the extracted image; and a determination unit that determines the presence or absence of a fire based on the stretched pixel data.
  • (2) In the fire detection device of (1), the processing unit may normalize the stretched pixel data, and the determination unit may determine the presence or absence of a fire based on the normalized pixel data.
  • (3) In the fire detection device of (1), the pixel data include a smoke aerosol reflectance index, an index relating to reflectance for smoke, and a moisture index relating to the amount of moisture in the atmosphere and at the surface, and the processing unit may stretch either or both of the smoke aerosol reflectance index and the moisture index among the pixel data of each pixel in the image extracted by the extraction unit.
  • (4) In the fire detection device of (3), the processing unit may normalize whichever of the smoke aerosol reflectance index and the moisture index has been stretched.
  • (5) The fire detection device of any of (1) to (4) may further include a fire candidate position specifying unit that specifies fire candidate positions, that is, positions where a fire is suspected, based on the information of the observation band indicating brightness temperature among the observation data, and a fire detection area creation unit that creates a predetermined area including a fire candidate position as the fire detection area.
  • (6) In the fire detection device of any of (1) to (5), the determination unit may have a learning model that takes pixel data, or a value based on the pixel data, as input data and outputs whether the input data indicate a fire; the presence or absence of a fire may then be determined by inputting the pixel data stretched by the processing unit, or a value based on them, into the learning model.
  • (7) A second aspect of the present disclosure is a fire detection method for a fire detection device that detects a fire on the earth using observation data from a radiometer mounted on an artificial satellite, the method comprising: generating a pseudo-color composite image based on the observation data; extracting an image of the range corresponding to a fire detection area from the pseudo-color composite image; stretching the pixel data of each pixel in the extracted image; and determining the presence or absence of a fire based on the stretched pixel data.
  • According to the present disclosure, the accuracy of fire detection from pseudo-color composite image data can thus be improved.
  • FIG. 1 is a diagram showing an example of the schematic configuration of the fire detection system according to the first embodiment.
  • FIG. 2 is a schematic configuration diagram of the fire detection device according to the first embodiment.
  • FIG. 3 is a flow chart of the fire detection device according to the first embodiment.
  • FIG. 4 is a diagram showing an example of the schematic configuration of the fire detection system according to the second embodiment.
  • FIG. 5 is a schematic configuration diagram of the fire detection device according to the second embodiment.
  • FIG. 6 is a diagram showing the determination threshold (boundary line H) according to the second embodiment.
  • FIG. 7 is a flow chart of the fire detection device according to the second embodiment.
  • The fire detection devices according to the embodiments are described below with reference to the drawings.
  • FIG. 1 is a diagram showing an example of the schematic configuration of a fire detection system A including the fire detection device according to the first embodiment. The fire detection system A includes an artificial satellite 1, a ground receiving station 2, and a fire detection device 3.
  • The artificial satellite 1 is a geostationary satellite placed in geostationary orbit at an altitude of about 36,000 km above the equator, orbiting with the same period as the Earth's rotation; for example, it is the geostationary meteorological satellite "Himawari". The artificial satellite 1 carries a visible/infrared radiometer 100 and can measure radiance (observation data) in predetermined wavelength bands (observation bands) at regular intervals.
  • The artificial satellite 1 can therefore acquire observation data with higher time resolution than a low-orbit satellite (a satellite that orbits the earth at an altitude of several thousand kilometers).
  • Upon acquiring observation data, the artificial satellite 1 transmits it to a ground receiving station 2 on the earth. In the following description, the artificial satellite 1 is taken to be Himawari-8 as an example; however, the present invention is not limited to this, and other artificial satellites may be used.
  • The ground receiving station 2 is a communication facility installed at a predetermined location on the ground that receives the observation data from the artificial satellite 1. It includes at least a radio antenna and a radio communication device for wireless communication with the artificial satellite 1, receives the observation data intermittently transmitted by the satellite, and outputs them to the fire detection device 3.
  • The fire detection device 3 uses the observation data of the radiometer 100 mounted on the artificial satellite 1 to detect fires in a fire detection area on the earth. The fire detection device 3 may be, for example, a data center or another information processing device. Although the fire detection device 3 acquires the observation data from the ground receiving station 2 here, this is not a limitation; it may, for example, acquire the observation data from a server of the Japan Meteorological Agency.
  • As an example, the visible/infrared radiometer 100 of the artificial satellite 1 has 16 bands in total: 3 visible bands and 13 near-infrared/infrared bands. When the artificial satellite 1 is Himawari-8, the fire detection device 3 uses the observation data of band B1 (band number 1), band B3 (band number 3), band B6 (band number 6), band B7 (band number 7), band B14 (band number 14), and band B15 (band number 15).
  • FIG. 2 is a schematic configuration diagram of the fire detection device 3 according to the first embodiment.
  • The fire detection device 3 includes a fire candidate position specifying unit 10 and a fire detection unit 20. These components are realized by a hardware processor such as a CPU (Central Processing Unit) executing a program (software). Some or all of them may instead be realized by hardware (including circuitry) such as an LSI (Large Scale Integrated circuit), an ASIC (Application Specific Integrated Circuit), an FPGA (Field-Programmable Gate Array), or a GPU (Graphics Processing Unit), or by the cooperation of software and hardware.
  • The program may be stored in advance in a storage device with a non-transitory storage medium, such as an HDD (Hard Disk Drive) or flash memory, or may be stored on a removable storage medium (non-transitory storage medium) such as a DVD or CD-ROM and installed in the storage device by loading the medium into a drive device.
  • The storage device is configured by, for example, an HDD, flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), ROM (Read Only Memory), or RAM (Random Access Memory).
  • The fire candidate position specifying unit 10 uses the observation data of band B7 among the observation bands to specify positions where a fire is suspected (hereinafter, "fire candidate positions"). For example, it uses the band B7 observation data as brightness temperature information and specifies fire candidate positions based on the brightness temperature; a minimal sketch of this idea follows. The fire candidate position specifying unit 10 then transmits position information indicating the specified fire candidate positions to the fire detection unit 20.
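The patent does not give a concrete rule for turning band B7 brightness temperatures into candidate positions; the sketch below shows one minimal reading in Python, where the 320 K cutoff and the array layout are illustrative assumptions, not values from the patent.

```python
import numpy as np

def find_fire_candidates(b7_brightness_temp: np.ndarray, threshold_k: float = 320.0):
    """Return (row, col) pixel indices whose band B7 brightness temperature
    exceeds a threshold. The 320 K cutoff is illustrative only."""
    rows, cols = np.where(b7_brightness_temp > threshold_k)
    return list(zip(rows.tolist(), cols.tolist()))
```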
  • The fire detection unit 20 includes an image generation unit 30, a mask unit 31, a fire detection area creation unit 32, an extraction unit 33, a processing unit 34, and a determination unit 35.
  • The image generation unit 30 generates an RGB pseudo-color composite image based on the observation data. For example, the image generation unit 30 may use the technique disclosed in Non-Patent Document 1 to generate the pseudo-color composite image.
  • One way of generating the pseudo-color composite image is as follows. The image generation unit 30 generates the pseudo-color composite image based on the observation data of bands B1, B3, B6, B14, and B15. Specifically, the image generation unit 30 first exploits the difference in smoke aerosol reflectance between band B1 and band B3, taking the difference between the two bands to obtain the Aerosol Enhancement (AE), a smoke aerosol reflectance in which the reflectance of ground cover is suppressed. For example, to obtain the AE more effectively, the image generation unit 30 may calculate it as the difference between twice the band B3 reflectance and the band B1 reflectance.
  • Next, the image generation unit 30 obtains a Smoke Aerosol Reflectance Index (hereinafter "SARI") based on the AE. SARI is obtained by enhancing the AE with an exponential function so that differences in smoke density can be distinguished in the image, and it correlates with smoke density. SARI is thus an index that emphasizes the AE, a value in which the reflectance of major ground covers such as soil, vegetation, and cities has been suppressed by taking the difference between band B3 and band B1. A sketch of both quantities follows.
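A minimal sketch of the AE and SARI computations. The patent states the "twice B3 minus B1" difference explicitly, but only says that SARI enhances AE with an exponential function, so the particular exp() form and the gain below are assumptions.

```python
import numpy as np

def aerosol_enhancement(b1_refl: np.ndarray, b3_refl: np.ndarray) -> np.ndarray:
    # AE = 2 * B3 - B1: the band difference suppresses ground-cover
    # reflectance, leaving the smoke aerosol signal.
    return 2.0 * b3_refl - b1_refl

def sari(ae: np.ndarray, gain: float = 5.0) -> np.ndarray:
    # The text only says SARI enhances AE exponentially; this exp() form
    # and the gain value are illustrative assumptions.
    return np.exp(gain * ae)
```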
  • The image generation unit 30 also uses the difference in brightness temperature between band B14 and band B15 when generating the pseudo-color composite image: it generates a Water Index (hereinafter "WI") for distinguishing fire smoke from clouds, snow, and ice. The WI is information about the amount of moisture in the atmosphere and at the surface, and is calculated using, for example, the method described in Non-Patent Document 1.
  • The image generation unit 30 then generates the pseudo-color composite image based on SARI and WI. For example, it assigns SARI to "R", the band B6 reflectance to "G", and WI to "B", producing a pseudo-color composite image in which smoke appears red. Band B6 (mid-infrared reflectance) shows no reflection from smoke but strong reflection from thick clouds, so assigning band B6 to "G" renders smoke red and thick clouds white. In the resulting image, smoke is red, soil and forests are green, thick clouds are pink to white, and thin clouds, fog, snow and ice, and water surfaces are bluish, which makes forest fire smoke identifiable in the pseudo-color composite image. A sketch of the channel assignment follows.
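A sketch of the R = SARI, G = band B6, B = WI channel assignment. The patent does not specify how each channel is scaled into display range, so the min-max scaling here is an assumption.

```python
import numpy as np

def to_uint8(channel: np.ndarray) -> np.ndarray:
    # Scale an arbitrary channel into 0..255 (the scaling scheme is an assumption).
    lo, hi = float(channel.min()), float(channel.max())
    return (255.0 * (channel - lo) / (hi - lo + 1e-12)).astype(np.uint8)

def pseudo_color_composite(sari_img, b6_refl, wi_img) -> np.ndarray:
    # R = SARI, G = band B6 reflectance, B = WI, so smoke renders red
    # and thick cloud renders white, as described above.
    return np.dstack([to_uint8(sari_img), to_uint8(b6_refl), to_uint8(wi_img)])
```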
  • The fire detection area creation unit 32 creates a fire detection area ROI (Region Of Interest) based on the fire candidate positions received from the fire candidate position specifying unit 10. For example, the fire detection area ROI is a rectangular region containing a fire candidate position, such as a square (for example, 25 km × 25 km) centered on that position, as sketched below.
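A sketch of carving the 25 km square out of the pixel grid. The patent fixes only the 25 km side; the 2 km-per-pixel spacing (roughly the Himawari-8 infrared resolution) is an assumption.

```python
def fire_detection_roi(center_row: int, center_col: int,
                       side_km: float = 25.0, km_per_pixel: float = 2.0):
    """Row/column slices for a square ROI centered on a fire candidate pixel."""
    half = int(round(side_km / km_per_pixel / 2.0))  # half-width in pixels
    return (slice(max(center_row - half, 0), center_row + half + 1),
            slice(max(center_col - half, 0), center_col + half + 1))
```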
  • The mask unit 31 includes a first mask unit 310 and a second mask unit 320.
  • The first mask unit 310 determines, based on WI and band B6, whether clouds are present in each pixel of the pseudo-color composite image, and the mask unit 31 masks out the pixels judged to contain clouds so that they are excluded from detection. This is because, when clouds are present, the ground surface is not reflected in the pseudo-color composite image.
  • The second mask unit 320 excludes pixels corresponding to sea areas from detection. The sea surface can show a high brightness temperature depending on the angle of incident sunlight, and the possibility of a fire breaking out at sea is low, so these pixels are masked out. A sketch of both masks follows.
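A combined sketch of the two masks. The patent says clouds are judged from WI and band B6 but gives no thresholds, so the cutoffs below are illustrative; the external land/sea raster `is_sea` is likewise an assumption.

```python
import numpy as np

def exclusion_mask(wi_img, b6_refl, is_sea,
                   wi_cloud_thresh=0.5, b6_cloud_thresh=0.3):
    """True where a pixel should be excluded from detection: cloudy pixels
    (judged from WI and band B6 with illustrative thresholds) and sea pixels
    (is_sea is an assumed external land/sea raster)."""
    cloudy = (wi_img > wi_cloud_thresh) & (b6_refl > b6_cloud_thresh)
    return cloudy | is_sea
```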
  • The extraction unit 33 extracts, from the pseudo-color composite image, the image of the range of the fire detection area ROI created by the fire detection area creation unit 32 (hereinafter, the "extracted image").
  • The processing unit 34 stretches the pixel data of each pixel in the extracted image. The pixel data to be stretched are the SARI data, the WI data, or both. Stretching is a process that expands the distribution of the per-pixel data in the extracted image so that it spans a wider range. In the following, the pixel data before stretching may be referred to as the "first data".
  • For example, the processing unit 34 determines the range (a, b) to be stretched. In this embodiment, as one example, a is the minimum value in the data and b is the maximum value, and the processing unit 34 stretches the data so that a maps to level 0 and b to level 255. This stretching may be, for example, histogram stretching, as sketched below.
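A minimal sketch of the linear stretch described above: the data minimum maps to level 0 and the data maximum to level 255, widening the value distribution within the ROI.

```python
import numpy as np

def stretch(pixels: np.ndarray) -> np.ndarray:
    # Histogram stretch: minimum a -> 0, maximum b -> 255.
    a, b = float(pixels.min()), float(pixels.max())
    return 255.0 * (pixels - a) / (b - a + 1e-12)
```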
  • Next, the processing unit 34 normalizes the pixel data obtained by stretching the first data (hereinafter, the "second data"). The second data is one of: stretched SARI with stretched WI, stretched SARI with unstretched WI, or unstretched SARI with stretched WI. For example, the processing unit 34 normalizes the second data so that their average value becomes 1; the pixel data after normalization is referred to as the "third data". This is not a limitation: the processing unit 34 may omit the normalization, and when both SARI and WI are stretched it may normalize only one of them. A sketch of the normalization follows.
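A sketch of the normalization. Reading "the average value of the second data for each pixel becomes 1" as rescaling by the ROI-wide mean is an interpretation of the text, not something the patent spells out.

```python
import numpy as np

def normalize_unit_mean(stretched: np.ndarray) -> np.ndarray:
    # Rescale so the mean over the ROI becomes 1 (interpretation; see lead-in).
    return stretched / (float(stretched.mean()) + 1e-12)
```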
  • The determination unit 35 determines whether there is a fire within the fire detection area ROI based on the pixel data stretched by the processing unit 34. For example, it may base the determination on the stretched and normalized pixel data (the third data) or on the merely stretched pixel data (the second data).
  • For example, the determination unit 35 sets a determination value for each pixel and compares it with a threshold (hereinafter, the "determination threshold") to judge, pixel by pixel, whether there is a fire. The determination threshold may be a single value or a value defining a predetermined range. The determination value may be the stretched SARI, the stretched WI, or both, taken from the second or third data; in that case, the determination unit 35 compares SARI with a first determination threshold (for example, a first range) and WI with a second determination threshold (for example, a second range) for each pixel. Alternatively, the determination value may be a single value calculated from the SARI and WI of the second or third data, in which case the determination unit 35 compares that one value with the determination threshold pixel by pixel. A pixel may be judged to be on fire when its determination value falls within the predetermined range indicated by the determination threshold or, depending on how the threshold is defined, when it falls outside that range; the comparison method can vary with the determination value and the way the threshold is set, and can be adjusted by the user as needed.
  • The determination unit 35 may also create a composite image from the stretched and normalized second or third data and determine the presence or absence of a fire by applying the determination threshold to that composite image. A sketch of the per-pixel comparison follows.
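A sketch of the two-range, per-pixel comparison. The patent deliberately leaves the thresholds to the user, so both ranges below are illustrative assumptions.

```python
import numpy as np

def judge_fire(sari_n: np.ndarray, wi_n: np.ndarray,
               sari_range=(1.2, 3.0), wi_range=(0.0, 0.8)) -> np.ndarray:
    """Per-pixel decision: fire where normalized SARI lies in the first range
    and normalized WI in the second. The ranges are illustrative only."""
    in_sari = (sari_n >= sari_range[0]) & (sari_n <= sari_range[1])
    in_wi = (wi_n >= wi_range[0]) & (wi_n <= wi_range[1])
    return in_sari & in_wi
```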
  • FIG. 3 is a flow chart of the fire detection device 3 according to the first embodiment.
  • The fire detection device 3 acquires observation data from the artificial satellite 1 (step S101), calculates SARI and WI based on the observation data (step S102), and creates a fire detection area ROI based on the observation data (step S103).
  • The fire detection device 3 generates a pseudo-color composite image based on the observation data, SARI, and WI (step S104), masks the pixel data corresponding to clouds and sea in the pseudo-color composite image (step S105), and extracts the image of the fire detection area ROI from the masked pseudo-color composite image as the extracted image (step S106).
  • The fire detection device 3 then stretches the SARI data, the WI data, or both for each pixel of the extracted image (step S107), normalizes the stretched data (step S108), sets a determination value from the stretched and normalized values (step S109), and determines whether there is a fire using the determination value and a preset determination threshold (step S110). A compact end-to-end sketch of these steps follows.
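As one compact illustration of steps S101 through S110, the sketch below chains the helper functions from the earlier sketches. `compute_wi` is a hypothetical placeholder for the WI calculation of Non-Patent Document 1, which the patent does not spell out, and the band-dictionary layout is an assumption.

```python
def detect_fire(bands: dict, is_sea, candidate) -> bool:
    """Steps S101-S110 chained together, reusing the sketches above."""
    sari_img = sari(aerosol_enhancement(bands["B1"], bands["B3"]))  # S102
    wi_img = compute_wi(bands["B14"], bands["B15"])                 # S102 (hypothetical helper)
    roi = fire_detection_roi(*candidate)                            # S103
    excluded = exclusion_mask(wi_img, bands["B6"], is_sea)          # S105
    s = normalize_unit_mean(stretch(sari_img[roi]))                 # S106-S108
    w = normalize_unit_mean(stretch(wi_img[roi]))                   # S106-S108
    fire_pixels = judge_fire(s, w) & ~excluded[roi]                 # S109-S110
    return bool(fire_pixels.any())
```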
  • As described above, the fire detection device 3 according to the first embodiment includes the image generation unit 30, the extraction unit 33, the processing unit 34, and the determination unit 35. The image generation unit 30 generates a pseudo-color composite image based on the observation data from the artificial satellite 1; the extraction unit 33 extracts the image of the range corresponding to the fire detection area from the pseudo-color composite image; the processing unit 34 stretches the SARI data, the WI data, or both for each pixel of the extracted image; and the determination unit 35 determines the presence or absence of a fire based on the stretched data.
  • With this configuration, data indicating a fire can be distinguished from data not indicating a fire in the pseudo-color composite image even under atmospheric influence, improving the accuracy of fire detection from pseudo-color composite image data.
  • FIG. 4 is a diagram showing an example of the schematic configuration of a fire detection system B including a fire detection device 3B according to the second embodiment. The fire detection device 3B differs from the first embodiment in that it uses a learning model to determine the presence or absence of a fire. Parts whose functions are similar to those described in the first embodiment are given the same names and reference signs, and detailed descriptions of their configurations and functions may be omitted.
  • The fire detection system B includes the artificial satellite 1, the ground receiving station 2, and the fire detection device 3B.
  • FIG. 5 is a diagram showing an example of a schematic configuration of a fire detection device 3B according to the second embodiment.
  • The fire detection device 3B includes the fire candidate position specifying unit 10, a learning model creation unit 40, and a fire detection unit 20B.
  • These components are implemented by, for example, a hardware processor such as a CPU executing a program (software). The program may be stored in advance in a storage device with a non-transitory storage medium, such as an HDD or flash memory, or may be stored on a removable storage medium (non-transitory storage medium) such as a DVD or CD-ROM and installed in the storage device by loading the medium into a drive device. The storage device is configured by, for example, an HDD, flash memory, EEPROM, ROM, or RAM.
  • The learning model creation unit 40 uses machine learning to set an appropriate determination threshold. It includes a preprocessing unit 41 and a learning unit 42; the preprocessing unit 41 creates the learning data used for machine learning by the learning unit 42.
  • The learning data consist of determination values calculated from the observation data of locations where fires actually broke out (hereinafter, "first past determination values") and determination values calculated from the observation data of locations where no fire broke out (hereinafter, "second past determination values").
  • For example, the preprocessing unit 41 acquires observation data from past fires from the Japan Meteorological Agency server or a similar source; observation data from times when fires occurred in the past is called "past observation data".
  • The preprocessing unit 41 generates a pseudo-color composite image from the past observation data, using the same method as the image generation unit 30 (the description is therefore omitted). In the following, the pseudo-color composite image generated by the preprocessing unit 41 is called the "learning pseudo-color composite image" to distinguish it from the one generated by the image generation unit 30.
  • The preprocessing unit 41 extracts an image of a predetermined area (hereinafter, the "learning extracted image") from the learning pseudo-color composite image. The size, shape, and other properties of the predetermined area are preferably the same as those of the fire detection area ROI, and the predetermined area is a fire area, that is, an area including a place where a fire actually broke out.
  • The preprocessing unit 41 applies the same processing as the processing unit 34 to the pixel data of each pixel in the learning extracted image: it stretches SARI, WI, or both for each pixel and, when the processing unit 34 performs normalization, normalizes the stretched data. It then obtains a determination value for each pixel from the processed pixel data.
  • Since the locations of past fires are known, the preprocessing unit 41 can identify, among the per-pixel determination values, those of pixels corresponding to locations where a fire occurred (the first past determination values) and those of pixels corresponding to locations where no fire occurred (the second past determination values). It then sends the first past determination values, labeled as "fire", and the second past determination values, labeled as "no fire", to the learning unit 42 as learning data.
  • The learning unit 42 uses the learning data generated by the preprocessing unit 41 to train, by supervised learning, a learning model whose input is a determination value and whose output is the presence or absence of a fire (for example, a support vector machine (SVM)). The model is trained on the learning data to draw a boundary line H, as shown in FIG. 6, separating the first past determination values from the second past determination values; in the example of FIG. 6, "x" marks indicate fire and black squares indicate no fire. After training, when an input determination value (one not contained in the learning data) is given as input data, the trained model outputs whether the input data indicate a fire, based on the boundary line H. This boundary line H corresponds to the determination threshold. The learning unit 42 then transmits the constructed trained model to the fire detection unit 20B. A hedged training sketch follows.
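The patent names a support vector machine as one option for the supervised learner; below is a minimal sketch with scikit-learn, where the feature layout ((SARI, WI) pairs), the placeholder data, and the linear kernel are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC

# Placeholder past determination values: rows are (SARI, WI) pairs from the
# stretched/normalized data; labels are 1 = fire, 0 = no fire. Real learning
# data would come from the preprocessing unit 41.
X = np.array([[2.1, 0.40], [1.8, 0.55], [0.9, 1.20], [0.7, 1.10]])
y = np.array([1, 1, 0, 0])

model = SVC(kernel="linear")  # the separating hyperplane plays the role of boundary line H
model.fit(X, y)

# Inference: a new determination value in, fire / no fire out.
print(model.predict(np.array([[2.0, 0.45]])))  # -> [1], i.e. "fire"
```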
  • The fire detection unit 20B includes the image generation unit 30, the mask unit 31, the fire detection area creation unit 32, the extraction unit 33, the processing unit 34, and a determination unit 35B. The determination unit 35B holds the trained model built by the learning unit 42 and determines the presence or absence of a fire for each pixel by feeding the determination value set by the processing unit 34 into the trained model as input data. The determination unit 35B may also identify the location of a fire from the pixels whose input data were judged to indicate a fire.
  • FIG. 7 is a flow chart of the fire detection device 3B according to the second embodiment.
  • The fire detection device 3B acquires observation data from the artificial satellite 1 (step S201), calculates SARI and WI based on the observation data (step S202), and creates a fire detection area ROI based on the observation data (step S203).
  • The fire detection device 3B generates a pseudo-color composite image based on the observation data, SARI, and WI (step S204), masks the pixel data corresponding to clouds and sea in the pseudo-color composite image (step S205), and extracts the image of the fire detection area ROI from the masked pseudo-color composite image as the extracted image (step S206).
  • The fire detection device 3B then stretches the SARI data, the WI data, or both for each pixel of the extracted image (step S207), normalizes the stretched data (step S208), sets a determination value from the stretched and normalized values (step S209), and determines whether there is a fire for each pixel by inputting the determination value into the trained model (step S210).
  • In this embodiment the learning model creation unit 40 is included in the fire detection device 3B, but this is not a limitation; it may be a device separate from the fire detection device 3B. That is, the fire detection device 3B need only hold the trained model, and a different device may create it.
  • As described above, by stretching the pixel data of each pixel in the pseudo-color composite image generated from the observation data, data indicating the occurrence of a fire become clearly distinguishable from data that do not, which suppresses the loss of fire detection accuracy caused by smoke conditions. As a result, the accuracy of fire detection can be improved.
  • Moreover, because the trained model uses past determination values generated from stretched pixel data as learning data, fires can be detected without depending on the atmospheric environment or the land of the fire detection area, so there is no need to collect and learn training data for each individual fire detection area.
  • In other words, the boundary line H is a threshold that can be used anywhere in the world.
  • The fire detection devices 3 and 3B of at least one embodiment described above are, for example, computers, each comprising one or more processors and a memory storing one or more programs to be executed by those processors. The one or more programs cause the fire detection device 3 or 3B to generate a pseudo-color composite image based on the observation data, extract the image (extracted image) of the range corresponding to the fire detection area from the pseudo-color composite image, stretch the pixel data of each pixel in the extracted image, and determine the presence or absence of a fire based on the stretched pixel data.
  • The fire detection device 3 or 3B may be configured by a single computer or by a plurality of computers.
  • Although the fire detection device 3 or 3B of the above embodiments includes the fire candidate position specifying unit 10, the mask unit 31, and the fire detection area creation unit 32, these components are not essential. For example, the fire detection device 3 or 3B may acquire the fire candidate positions and the fire detection area ROI from an external device, and the mask unit 31 may be omitted if mask processing is unnecessary.
  • The term "... unit" used in this specification means a unit that processes at least one function or operation, and it may be embodied as hardware, as software, or as a combination of hardware and software.

Abstract

A fire detecting device (3, 3B) for detecting a fire on the Earth using observation data from a radiometer (100) mounted on an artificial satellite (1) is provided with: an image generating unit (30) for generating a pseudo-color composite image on the basis of the observation data; an extracting unit (33) for extracting an image of a zone corresponding to a fire detection area from within the pseudo-color composite image; a processing unit (34) for expanding pixel data of each pixel in the image extracted by the extracting unit; and a determining unit (35, 35B) for determining the presence or absence of a fire on the basis of the pixel data expanded by the processing unit.

Description

火災検知装置及び火災検知方法Fire detection device and fire detection method
 本開示は、火災検知装置及び火災検知方法に関する。
 本願は、2021年3月2日に日本に出願された特願2021-032432号に基づき優先権を主張し、その内容をここに援用する。
The present disclosure relates to a fire detection device and fire detection method.
This application claims priority based on Japanese Patent Application No. 2021-032432 filed in Japan on March 2, 2021, the content of which is incorporated herein.
 人工衛星の観測データを用いて、森林火災の煙を検出する手法が知られている。例えば、非特許文献1には、観測データからRGBの疑似カラー合成画像を生成し、この疑似画像のデータから森林火災の煙を検出する手法が開示されている。 A known method is to detect forest fire smoke using satellite observation data. For example, Non-Patent Document 1 discloses a method of generating an RGB pseudo-color composite image from observation data and detecting forest fire smoke from the data of this pseudo-image.
 しかしながら、大気中の水分量などの大気の影響により、疑似カラー合成画像において火災の発生を示すデータと火災の発生を示さないデータとの区別が付きにくくなり、煙による火災検知の精度が低下する場合がある。 However, due to atmospheric influences such as the amount of moisture in the atmosphere, it becomes difficult to distinguish between data that indicate the occurrence of fires and data that does not indicate the occurrence of fires in the pseudo-color composite image, which reduces the accuracy of smoke-based fire detection. Sometimes.
 本開示は、このような事情に鑑みてなされたもので、その目的は、疑似カラー合成画像のデータから火災検知を行う場合において、その火災検知の精度を向上させることができる火災検知装置及び火災検知方法を提供することである。 The present disclosure has been made in view of such circumstances, and aims to provide a fire detection device and a fire that can improve the accuracy of fire detection when performing fire detection from pseudo-color composite image data. It is to provide a detection method.
(1)本開示の第一の態様は、人工衛星に搭載された放射計の観測データを用いて、地球上の火災を検知する火災検知装置であって、前記観測データに基づいて疑似カラー合成画像を生成する画像生成部と、前記疑似カラー合成画像のうち、火災検知エリアに相当する範囲の画像を抽出する抽出部と、前記抽出部によって抽出された画像内のピクセル毎のピクセルデータを伸張化する処理部と、前記処理部によって伸張化されたピクセルデータに基づいて、火災の有無を判定する判定部と、を備える、火災検知装置である。 (1) A first aspect of the present disclosure is a fire detection device that detects a fire on the earth using observation data of a radiometer mounted on an artificial satellite, wherein pseudo-color synthesis is performed based on the observation data An image generating unit for generating an image, an extracting unit for extracting an image of a range corresponding to a fire detection area from the pseudo-color composite image, and decompressing pixel data for each pixel in the image extracted by the extracting unit. and a determination unit that determines whether or not there is a fire based on the pixel data decompressed by the processing unit.
(2)上記(1)の火災検知装置であって、前記処理部は、伸張化したピクセルデータを正規化し、前記判定部は、正規化後のピクセルデータに基づいて、火災の有無を判定してもよい。 (2) In the fire detection device of (1) above, the processing unit normalizes the decompressed pixel data, and the determination unit determines whether or not there is a fire based on the normalized pixel data. may
(3)上記(1)の火災検知装置であって、前記ピクセルデータは、煙に対する反射率に関する指数である煙エアロゾル反射率指数と、大気及び地表の水分量に関する水分指数と、を含み、前記処理部は、前記抽出部によって抽出された画像内のピクセル毎のピクセルデータのうち、煙エアロゾル反射率指数及び水分指数のいずれか又は両方を伸張化してもよい。 (3) The fire detection device of (1) above, wherein the pixel data includes a smoke aerosol reflectance index, which is an index related to reflectance to smoke, and a moisture index related to the amount of moisture in the atmosphere and on the ground, The processing unit may decompress either or both of the smoke aerosol reflectance index and the moisture index among the pixel data for each pixel in the image extracted by the extraction unit.
(4)上記(3)の火災検知装置であって、前記処理部は、煙エアロゾル反射率指数及び水分指数のうち、伸張化された指数を正規化してもよい。 (4) In the fire detection device of (3) above, the processing unit may normalize the expanded index out of the smoke aerosol reflectance index and the moisture index.
(5)上記(1)から上記(4)のいずれかの火災検知装置であって、前記観測データのうち、輝度温度を示す観測バンドの情報に基づいて火災の発生の疑いがある位置である火災候補位置を特定する火災候補位置特定部と、前記火災候補位置を含む所定のエリアを火災検知エリアとして作成する火災検知エリア作成部と、を備えてもよい。 (5) The fire detection device according to any one of (1) to (4) above, wherein the position is suspected to be a fire based on information of an observation band indicating brightness temperature among the observation data. A fire candidate position specifying unit that specifies a fire candidate position and a fire detection area creating unit that creates a predetermined area including the fire candidate position as a fire detection area may be provided.
(6)上記(1)から上記(5)のいずれかの火災検知装置であって、前記判定部は、ピクセルデータ又はピクセルデータに基づく値を入力データとし、入力データが火災を示すか否かの判定結果を出力する学習モデルを有し、前記処理部による伸張化後の前記ピクセルデータ又は当該ピクセルデータに基づく値を前記学習モデルに入力することで火災の有無を判定してもよい。 (6) In the fire detection device according to any one of (1) to (5) above, the determination unit receives pixel data or a value based on the pixel data as input data, and determines whether the input data indicates a fire. The presence or absence of a fire may be determined by inputting the pixel data decompressed by the processing unit or a value based on the pixel data into the learning model.
(7)本開示の第二の態様は、人工衛星に搭載された放射計の観測データを用いて、地球上の火災を検知する火災検知装置の火災検知方法であって、前記観測データに基づいて疑似カラー合成画像を生成するステップと、前記疑似カラー合成画像のうち、火災検知エリアに相当する範囲の画像を抽出するステップと、抽出された画像内のピクセル毎のピクセルデータを伸張化するステップと、伸張化されたピクセルデータに基づいて、火災の有無を判定するステップと、を含む火災検知方法である。 (7) A second aspect of the present disclosure is a fire detection method for a fire detection device that detects a fire on the earth using observation data of a radiometer mounted on a satellite, comprising: extracting an image of a range corresponding to a fire detection area from the pseudo-color composite image; and decompressing pixel data of each pixel in the extracted image. and determining whether or not there is a fire based on the decompressed pixel data.
 以上説明したように、本開示によれば、疑似カラー合成画像のデータから火災検知を行う場合において、その火災検知の精度を向上させることができる。 As described above, according to the present disclosure, it is possible to improve the accuracy of fire detection when performing fire detection from pseudo-color composite image data.
第1の実施形態に係る火災検知システムの概略構成の一例を示す図である。BRIEF DESCRIPTION OF THE DRAWINGS It is a figure which shows an example of schematic structure of the fire detection system which concerns on 1st Embodiment. 第1の実施形態に係る火災検知装置の概略構成図である。1 is a schematic configuration diagram of a fire detection device according to a first embodiment; FIG. 第1の実施形態に係る火災検知装置のフローチャートである。4 is a flow chart of the fire detection device according to the first embodiment; 第2の実施形態に係る火災検知システムの概略構成の一例を示す図である。It is a figure which shows an example of schematic structure of the fire detection system which concerns on 2nd Embodiment. 第2の実施形態に係る火災検知装置の概略構成図である。It is a schematic block diagram of the fire detection apparatus which concerns on 2nd Embodiment. 第2の実施形態に係る判定閾値(境界線)を示す図である。FIG. 10 is a diagram showing determination thresholds (boundary lines) according to the second embodiment; 第2の実施形態に係る火災検知装置のフローチャートである。It is a flow chart of a fire detection device concerning a 2nd embodiment.
 以下、本実施形態に係る火災検知装置を、図面を用いて説明する。 The fire detection device according to this embodiment will be described below with reference to the drawings.
<第1の実施形態>
 図1は、第1の実施形態に係る火災検知装置を含む火災検知システムAの概略構成の一例を示す図である。火災検知システムAは、人工衛星1、地上受信局2、及び火災検知装置3を備える。
<First Embodiment>
FIG. 1 is a diagram showing an example of a schematic configuration of a fire detection system A including a fire detection device according to the first embodiment. A fire detection system A includes a satellite 1 , a ground receiving station 2 and a fire detection device 3 .
 人工衛星1は、赤道上空の高度約36000kmの静止軌道に配置された静止衛星であり、地球の自転と同じ周期で公転する。例えば、人工衛星1は、静止気象衛星「ひまわり」である。この人工衛星1は、可視赤外放射計100を搭載しており、所定の波長帯(観測バンド)における放射輝度(観測データ)を一定周期ごとに計測することができる。 Artificial satellite 1 is a geostationary satellite placed in a geostationary orbit at an altitude of about 36,000 km above the equator, and revolves at the same period as the rotation of the earth. For example, the artificial satellite 1 is the geostationary meteorological satellite "Himawari". This artificial satellite 1 is equipped with a visible and infrared radiometer 100, and can measure radiance (observation data) in a predetermined wavelength band (observation band) at regular intervals.
 したがって、人工衛星1は、低軌道衛星(地上から高度数千キロメートル程度で地球を周回する衛生)よりも時間分解能が高い観測データを取得することができる。人工衛星1は、観測データを取得すると、その取得した観測データを地球上の地上受信局2に送信する。なお、本実施形態の一例として、以下の説明では、人工衛星1がひまわり8号である場合について説明するが、これに限定されず、他の人工衛星であってもよい。 Therefore, the artificial satellite 1 can acquire observation data with a higher time resolution than a low-orbit satellite (satellite that orbits the earth at an altitude of several thousand kilometers from the ground). When acquiring observation data, the artificial satellite 1 transmits the acquired observation data to a ground receiving station 2 on the earth. As an example of the present embodiment, in the following description, a case where the artificial satellite 1 is Himawari 8 will be explained, but the present invention is not limited to this, and other artificial satellites may be used.
 地上受信局2は、地上の所定場所に設けられ、人工衛星1からの観測データを受信する通信設備である。地上受信局2は、人工衛星1と無線通信を行うための無線アンテナ及び無線通信機を少なくとも備え、人工衛星1が間欠的に送信してくる観測データを受信して火災検知装置3に出力する。 The ground receiving station 2 is a communication facility that is installed at a predetermined location on the ground and receives observation data from the artificial satellite 1. The ground receiving station 2 includes at least a radio antenna and a radio communication device for wireless communication with the satellite 1, receives observation data intermittently transmitted from the satellite 1, and outputs the observation data to the fire detection device 3. .
 火災検知装置3は、人工衛星1に搭載された放射計100の観測データを用いて、地球上における火災検知エリアの火災を検知する。火災検知装置3は、例えば、データセンターであってもよいし、その他の情報処理装置であってもよい。なお、火災検知装置3は、地上受信局2から観測データを取得したが、これに限定されず、例えば、気象庁のサーバから観測データを取得してもよい。 The fire detection device 3 uses the observation data of the radiometer 100 mounted on the satellite 1 to detect fires in the fire detection area on the earth. The fire detection device 3 may be, for example, a data center or other information processing device. Although the fire detection device 3 acquires the observation data from the ground receiving station 2, it is not limited to this, and may acquire the observation data from a server of the Meteorological Agency, for example.
 ここで、一例として、人工衛星1の可視赤外放射計100は、可視3バンド、近赤外・赤外13バンドの合計16のバンド構成である。そのうち、人工衛星1がひまわり8号である場合には、火災検知装置3は、バンド番号1のバンドB1、バンド番号3のバンドB3、バンド番号6のバンドB6、バンド番号7のバンドB7、バンド番号14のバンドB14、バンド番号15のバンドB15の観測データを用いる。 Here, as an example, the visible/infrared radiometer 100 of the artificial satellite 1 has a total of 16 bands, including 3 visible bands and 13 near-infrared/infrared bands. Among them, when the satellite 1 is Himawari 8, the fire detection device 3 uses band B1 with band number 1, band B3 with band number 3, band B6 with band number 6, band B7 with band number 7, band Observation data of band B14 of number 14 and band B15 of band number 15 are used.
 以下に、第1の実施形態に係る火災検知装置3の構成について説明する。図2は、第1の実施形態に係る火災検知装置3の概略構成図である。 The configuration of the fire detection device 3 according to the first embodiment will be described below. FIG. 2 is a schematic configuration diagram of the fire detection device 3 according to the first embodiment.
 火災検知装置3は、火災候補位置特定部10及び火災検知部20を備える。これらの構成要素は、例えば、CPU(Central Processing Unit)等のハードウェアプロセッサがプログラム(ソフトウェア)を実行することにより実現される。また、これらの構成要素のうち一部または全部は、LSI(Large Scale Integrated circuit)やASIC(Application Specific Integrated Circuit)、FPGA(Field-Programmable Gate Array)、GPU(Graphics Processing Unit)等のハードウェア(回路部;circuitryを含む)によって実現されてもよいし、ソフトウェアとハードウェアの協働によって実現されてもよい。プログラムは、予めHDD(Hard Disk Drive)やフラッシュメモリ等の記憶装置(非一過性の記憶媒体を備える記憶装置)に格納されていてもよいし、DVDやCD-ROM等の着脱可能な記憶媒体(非一過性の記憶媒体)に格納されており、記憶媒体がドライブ装置に装着されることで記憶装置にインストールされてもよい。記憶装置は、例えば、HDD、フラッシュメモリ、EEPROM(Electrically Erasable Programmable Read Only Memory)、ROM(Read Only Memory)、またはRAM(Random Access Memory)等により構成される。 The fire detection device 3 includes a fire candidate position specifying unit 10 and a fire detection unit 20. These components are realized by executing a program (software) by a hardware processor such as a CPU (Central Processing Unit). In addition, some or all of these components are hardware such as LSI (Large Scale Integrated Circuit), ASIC (Application Specific Integrated Circuit), FPGA (Field-Programmable Gate Array), GPU (Graphics Processing Unit), etc. (including circuitry), or by cooperation of software and hardware. The program may be stored in advance in a storage device such as a HDD (Hard Disk Drive) or flash memory (a storage device with a non-transitory storage medium), or may be stored in a removable storage such as a DVD or CD-ROM. It may be stored in a medium (non-transitory storage medium) and installed in the storage device by loading the storage medium into the drive device. The storage device is configured by, for example, an HDD, flash memory, EEPROM (Electrically Erasable Programmable Read Only Memory), ROM (Read Only Memory), or RAM (Random Access Memory).
 火災候補位置特定部10は、観測バンドがバンドB7の観測データを用いて、火災の発生の疑いがある位置(以下、「火災候補位置」という。)を特定する。例えば、火災候補位置特定部10は、バンドB7の観測データを輝度温度の情報として使用し、その輝度温度に基づいて火災候補位置を特定する。そして、火災候補位置特定部10は、特定した火災候補位置を示す位置情報を火災検知部20に送信する。 The fire candidate position identifying unit 10 identifies a position where a fire is suspected to occur (hereinafter referred to as a "fire candidate position") using the observation data of band B7 in the observation band. For example, the fire candidate position specifying unit 10 uses the observation data of the band B7 as brightness temperature information, and specifies the fire candidate position based on the brightness temperature. Then, the fire candidate position identification unit 10 transmits position information indicating the identified fire candidate position to the fire detection unit 20 .
 火災検知部20は、画像生成部30、マスク部31、火災検知エリア作成部32、抽出部33、処理部34、及び判定部35を備える。 The fire detection unit 20 includes an image generation unit 30, a mask unit 31, a fire detection area creation unit 32, an extraction unit 33, a processing unit 34, and a determination unit 35.
 画像生成部30は、観測データに基づいて、RGBカラーの疑似カラー合成画像を生成する。例えば、画像生成部30は、非特許文献1に開示された手法を用いて、疑似カラー合成画像を生成してもよい。 The image generation unit 30 generates a pseudo-color composite image of RGB colors based on the observation data. For example, the image generator 30 may use the technique disclosed in Non-Patent Document 1 to generate a pseudo-color composite image.
 以下において、疑似カラー合成画像を生成する方法の一例を説明する。画像生成部30は、バンドB1、バンドB3、バンドB6、バンドB14、及びバンドB15の観測データに基づいて、疑似カラー合成画像を生成する。具体的には、まず、画像生成部30は、バンドB1及びバンドB3の煙エアロゾルに対する反射率の違いを利用し、バンドB1とバンドB3との差分を取ることで煙エアロゾル反射率AE(Aerosol Enhancement:AE)を求める。この煙エアロゾル反射率AEは、地上被覆物の反射率が抑えられた反射率である。例えば、画像生成部30は、より効果的に煙エアロゾル反射率AEを得るために、バンドB3の2倍の反射率とバンドB1の反射率との差分を利用して煙エアロゾル反射率AEを算出してもよい。 An example of a method for generating a pseudo-color composite image will be described below. The image generator 30 generates a pseudo-color composite image based on observation data of band B1, band B3, band B6, band B14, and band B15. Specifically, first, the image generation unit 30 utilizes the difference in reflectance for smoke aerosol between the band B1 and the band B3, and obtains the difference between the band B1 and the band B3 to calculate the smoke aerosol reflectance AE (Aerosol Enhancement : AE). This smoke aerosol reflectance AE is the reflectance with the ground cover reflectance suppressed. For example, in order to obtain the smoke aerosol reflectance AE more effectively, the image generation unit 30 calculates the smoke aerosol reflectance AE using the difference between twice the reflectance of the band B3 and the reflectance of the band B1. You may
 次に、画像生成部30は、煙エアロゾル反射率AEに基づいて、煙エアロゾル反射率指数(以下、「SARI」という。:Smoke Aerosol Reflectance Index)を求める。SARIは、煙エアロゾル反射率AEを指数関数で画像強調させて濃度の違いを識別できるようにするための値であって、煙エアロゾル反射率AEを指数関数強調で得られる指数である。このSARIは、煙濃度との相関を示すものである。このように、SARIは、バンドB3とバンドB1との差分を取ることで土壌,植生,都市などの主要被覆物の反射率が抑えられた値であって煙エアロゾル反射率AEを強調した指数である。 Next, the image generator 30 obtains a smoke aerosol reflectance index (hereinafter referred to as "SARI": Smoke Aerosol Reflectance Index) based on the smoke aerosol reflectance AE. SARI is a value for image-enhancing the smoke aerosol reflectance AE with an exponential function so that differences in density can be identified, and is an index obtained by exponentially enhancing the smoke aerosol reflectance AE. This SARI indicates a correlation with smoke density. In this way, SARI is an index that emphasizes smoke aerosol reflectance AE, which is a value in which the reflectance of main coverings such as soil, vegetation, and cities is suppressed by taking the difference between band B3 and band B1. be.
 画像生成部30は、バンドB14とバンド15との輝度温度の差分を用いて、疑似カラー合成画像を生成する。画像生成部30は、火災煙と雲や雪氷とを判別するための水分指数(以下、「WI」という。(Water Index))を生成する。このWIは、大気と地表の水分量に関する情報である。WIの算出方法は、例えば、非特許文献1に記載されている手法を用いて算出される。 The image generation unit 30 generates a pseudo-color composite image using the difference in brightness temperature between band B14 and band B15. The image generator 30 generates a water index (hereinafter referred to as “WI”) for distinguishing between fire smoke, clouds, and snow and ice. This WI is information about the amount of water in the atmosphere and on the ground. WI is calculated using the method described in Non-Patent Document 1, for example.
 画像生成部30は、SARIとWIとに基づいて、疑似カラー合成画像を生成する。例えば、画像生成部30は、「R」にSARI、「G」にバンドB6の反射率、「B」にWIを割り当てることにより、煙を赤色に表現した疑似カラー合成画像を生成する。ここで、疑似カラー合成画像においては、バンドB6(中間赤外反射率)は煙には反射を示さず、厚い雲には反射を示す。このように、「G」にバンドB6を割り当てることで、煙を赤色で表し、厚い雲を白色に表現することができる。このように、疑似カラー合成画像では、煙が赤色、土壌や森林が緑色、厚い雲がピンク色から白色、薄雲や霧、雪氷、水面が青系色で示される。これにより、疑似カラー合成画像によって森林火災の煙の識別が可能となる。 The image generator 30 generates a pseudo-color composite image based on SARI and WI. For example, the image generation unit 30 generates a pseudo-color composite image expressing smoke in red by assigning SARI to "R", reflectance of band B6 to "G", and WI to "B". Here, in the false-color composite image, band B6 (mid-infrared reflectance) shows no reflection in smoke and reflection in thick clouds. By assigning band B6 to "G" in this way, smoke can be represented in red and thick clouds can be represented in white. Thus, in the pseudo-color composite image, smoke is red, soil and forests are green, thick clouds are pink to white, and thin clouds, fog, snow and ice, and water surfaces are blue. This allows the identification of forest fire smoke with a pseudo-color composite image.
 火災検知エリア作成部32は、火災候補位置特定部10からの火災候補位置に基づいて、火災検知エリアROI(Region Of Interest)を作成する。例えば、火災検知エリアROIは、火災候補位置を有する矩形形状の領域である。例えば、火災検知エリアROIは、火災候補位置を中心とする正方形(例えば、25[km]×25[km])の領域である。 The fire detection area creating unit 32 creates a fire detection area ROI (Region Of Interest) based on the fire candidate positions from the fire candidate position specifying unit 10 . For example, the fire detection area ROI is a rectangular area having fire candidate positions. For example, the fire detection area ROI is a square area (for example, 25 [km]×25 [km]) centered on the fire candidate position.
 マスク部31は、第1マスク部310及び第2マスク部320を備える。 The mask section 31 includes a first mask section 310 and a second mask section 320 .
 第1マスク部310は、WIとバンドB6とに基づいて、疑似カラー合成画像の各ピクセルデータにおいて雲の有無を判定する。マスク部31は、疑似カラー合成画像において、雲があると判定したピクセルデータを検知対象としないようにマスク処理して除外する。これは、雲が有ると地表のデータが疑似カラー合成画像に反映されないためである。 The first mask unit 310 determines the presence or absence of clouds in each pixel data of the pseudo-color composite image based on WI and band B6. The mask unit 31 performs mask processing to exclude pixel data determined to have clouds in the pseudo-color composite image so as not to be detected. This is because the ground surface data is not reflected in the pseudo-color composite image if there are clouds.
 第2マスク部320は、疑似カラー合成画像において、海の領域に相当するピクセルデータを検知対象としないように除外する。海上は太陽からの入光角度によって高い輝度温度を示してしまうこと、また、海上で火災は生じる可能性が低いことを考慮して、検知対象としないようマスク処理して除外する。 The second mask unit 320 excludes pixel data corresponding to the sea area from the pseudo-color composite image so as not to be detected. Considering that the sea shows a high brightness temperature depending on the incident angle from the sun, and that the possibility of fires breaking out on the sea is low, it is excluded by masking so as not to be detected.
 抽出部33は、疑似カラー合成画像のうち、火災検知エリア作成部32が作成した火災検知エリアROIの範囲の画像(以下、「抽出画像」という。)を抽出する。 The extracting unit 33 extracts an image of the range of the fire detection area ROI created by the fire detection area creating unit 32 (hereinafter referred to as "extracted image") from the pseudo-color composite image.
 処理部34は、抽出部33によって抽出された抽出画像内のピクセル毎のピクセルデータを伸張化する。ここで、伸張化するピクセルデータは、SARI及びWIのうち、いずれか又は両方のデータである。伸張化とは、抽出画像内のピクセル毎のピクセルデータの分布を、より広い範囲に分布するように拡げる処理である。なお、以下において、伸張化する前のピクセルデータを「第1データ」と称する場合がある。 The processing unit 34 decompresses pixel data for each pixel in the extraction image extracted by the extraction unit 33 . Here, the pixel data to be decompressed is either or both of SARI and WI data. Expansion is a process of expanding the distribution of pixel data for each pixel in the extracted image so that it is distributed over a wider range. In the following, the pixel data before being decompressed may be referred to as "first data".
 例えば、処理部34は、引き伸ばしたい範囲(a,b)を決定する。本実施形態では、一例として処理部34は、aを「データの中の最小値」、bを「データの中の最大値」として決定する。そして、処理部34は、aを「0レベル」、bを「255レベル」になるように伸張化する。例えば、この伸張化は、ヒストグラム伸張であってもよい。 For example, the processing unit 34 determines the range (a, b) to be stretched. In this embodiment, as an example, the processing unit 34 determines a as the "minimum value in the data" and b as the "maximum value in the data". Then, the processing unit 34 decompresses a to "0 level" and b to "255 level". For example, this stretching may be histogram stretching.
 次に、処理部34は、第1データを伸張化した後のピクセルデータ(以下、「第2データ」という。)を正規化する。ここで、第2データは、伸張化されたSARIとWIとのデータ、伸張化されたSARIと伸張化されていないWIとのデータ、及び、伸張化されていないSARIと伸張化されたWIとのデータ、のいずれかのデータである。例えば、処理部34は、ピクセル毎の第2データの平均値を「1」として第2データを正規化する。なお、第2データを正規化した後のピクセルデータを第3データと称する。ただし、これに限定されず、処理部34は、正規化処理を省略してもよい。
 処理部34が、SARI及びWIの両方のデータを伸長化する場合に、伸長化されたSARI及びWIのうち、いずれか一方のみを正規化してもよい。
Next, the processing unit 34 normalizes the pixel data after decompressing the first data (hereinafter referred to as "second data"). Here, the second data is decompressed SARI and WI data, decompressed SARI and non-decompressed WI data, and non-decompressed SARI and decompressed WI data. data of For example, the processing unit 34 normalizes the second data by setting the average value of the second data for each pixel to "1". Pixel data after normalizing the second data is referred to as third data. However, it is not limited to this, and the processing unit 34 may omit the normalization processing.
When the processing unit 34 decompresses both SARI and WI data, only one of the decompressed SARI and WI may be normalized.
 判定部35は、処理部34によって伸張化されたピクセルデータに基づいて、火災検知エリアROI内の火災の有無を判定する。例えば、判定部35は、処理部34によって伸張化され、且つ、正規化されたピクセルデータ(すなわち第3データ)に基づいて、火災検知エリアROI内の火災の有無を判定してもよいし、伸張化後のピクセルデータ(すなわち第2データ)に基づいて火災検知エリアROI内の火災の有無を判定してもよい。 The determination unit 35 determines whether or not there is a fire within the fire detection area ROI based on the pixel data decompressed by the processing unit 34 . For example, the determination unit 35 may determine whether or not there is a fire in the fire detection area ROI based on the pixel data (i.e., the third data) decompressed and normalized by the processing unit 34, Whether or not there is a fire in the fire detection area ROI may be determined based on the decompressed pixel data (that is, the second data).
 例えば、判定部35は、ピクセルごとに判定値を設定し、この判定値と閾値(以下、「判定閾値」という。)とを比較することで火災の有無をピクセルごとに判定する。判定閾値とは、一つの値であってもよいし、所定の範囲を示す値であってもよい。ここで、判定値とは、第2データ又は第3データのうち、少なくとも伸張化が行われたSARIとWIとのいずれか又は両方であってもよい。すなわち、判定部35は、第2データ又は第3データのSARIとWIとを判定値としてもよい。この場合には、判定部35は、そのSARIと第1判定閾値(例えば、第1の範囲)とを比較し、且つ、WIと第2判定閾値(例えば、第2の範囲)とを比較する処理をピクセルごとに行うことで火災の有無をピクセルごとに判定する。ただし、これに限定されず、判定値とは、第2データ又は第3データのSARIとWIとを用いて計算される値であってもよい。この場合には、例えば、判定部35は、ピクセルごとに一つの判定値が設定するため、判定値と判定閾値とを比較する処理をピクセルごとに実行することで火災の有無をピクセルごとに判定してもよい。なお、判定値が例えば、判定閾値が示す所定範囲内である場合に火災として判定してもよいし、判定値が所定の範囲外となった場合に火災と判定してもよい。すなわち、判定値と判定閾値との比較方法は、判定値の値や判定閾値の設定方法によって異なる場合があり、ユーザによって任意の調整可能である。 For example, the determination unit 35 sets a determination value for each pixel, and compares this determination value with a threshold (hereinafter referred to as "determination threshold") to determine the presence or absence of fire for each pixel. The determination threshold may be a single value or a value representing a predetermined range. Here, the determination value may be either or both of at least decompressed SARI and WI among the second data and the third data. That is, the determination unit 35 may use SARI and WI of the second data or the third data as determination values. In this case, the determination unit 35 compares the SARI with a first determination threshold (e.g., first range), and compares WI with a second determination threshold (e.g., second range). By performing processing for each pixel, the presence or absence of fire is determined for each pixel. However, the determination value is not limited to this, and may be a value calculated using SARI and WI of the second data or the third data. In this case, for example, since one determination value is set for each pixel, the determination unit 35 performs a process of comparing the determination value and the determination threshold value for each pixel, thereby determining whether or not there is a fire for each pixel. You may For example, when the determination value is within a predetermined range indicated by the determination threshold value, it may be determined as a fire, and when the determination value is outside the predetermined range, it may be determined as a fire. That is, the method of comparing the determination value and the determination threshold may vary depending on the value of the determination value and the setting method of the determination threshold, and can be arbitrarily adjusted by the user.
Alternatively, the determination unit 35 may create a composite image using the second or third data that have undergone the decompression and normalization processing, and determine the presence or absence of a fire by applying the judgment threshold to this composite image.
The operation flow of the fire detection device 3 according to the first embodiment is described below with reference to FIG. 3. FIG. 3 is a flowchart of the fire detection device 3 according to the first embodiment.
The fire detection device 3 acquires observation data from the artificial satellite 1 (step S101). The fire detection device 3 calculates the SARI and WI based on the observation data (step S102). The fire detection device 3 also creates a fire detection area ROI based on the observation data (step S103). The fire detection device 3 generates a pseudo-color composite image based on the observation data, the SARI, and the WI (step S104). The fire detection device 3 masks the pixel data corresponding to clouds and sea in the pseudo-color composite image (step S105). The fire detection device 3 extracts the image within the range of the fire detection area ROI from the masked pseudo-color composite image as an extracted image (step S106).
The fire detection device 3 decompresses either or both of the SARI and WI data for each pixel in the extracted image (step S107). The fire detection device 3 then normalizes the decompressed data (step S108). The fire detection device 3 sets a judgment value from the decompressed and normalized values (step S109), and determines the presence or absence of a fire using the judgment value and a preset judgment threshold (step S110).
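The publication does not spell out the decompression (stretching) formula used in step S107; the sketch below assumes a percentile-based linear contrast stretch, a common choice in remote sensing, purely for illustration.

```python
import numpy as np

def linear_stretch(band: np.ndarray,
                   lower_pct: float = 2.0,
                   upper_pct: float = 98.0) -> np.ndarray:
    """Linearly stretch a band to [0, 1] between two percentiles.

    An assumed implementation of the decompression step; the percentile
    cut-offs are illustrative defaults, not values from the publication."""
    lo, hi = np.percentile(band, [lower_pct, upper_pct])
    if hi == lo:
        return np.zeros_like(band, dtype=float)
    return np.clip((band - lo) / (hi - lo), 0.0, 1.0)
```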
As described above, the fire detection device 3 according to the first embodiment includes the image generation unit 30, the extraction unit 33, the processing unit 34, and the determination unit 35. The image generation unit 30 generates a pseudo-color composite image based on the observation data from the artificial satellite 1. The extraction unit 33 extracts the image of the range corresponding to the fire detection area from the pseudo-color composite image. The processing unit 34 decompresses either or both of the SARI and WI data for each pixel in the image extracted by the extraction unit 33. The determination unit 35 determines the presence or absence of a fire based on the data decompressed by the processing unit 34.
With this configuration, data indicating the occurrence of a fire and data not indicating the occurrence of a fire can be distinguished in the pseudo-color composite image even when the atmosphere affects the observation, so that the accuracy of fire detection from pseudo-color composite image data can be improved.
<Second embodiment>
FIG. 4 is a diagram showing an example of a schematic configuration of a fire detection system B including a fire detection device 3B according to the second embodiment. The fire detection device 3B according to the second embodiment differs from the first embodiment in that it determines the presence or absence of a fire using a learning model. In the following description, parts having the same functions as those described in the first embodiment are given the same names and reference numerals, and specific descriptions of their configurations and functions may be omitted.
The fire detection system B includes the artificial satellite 1, the ground receiving station 2, and the fire detection device 3B.
FIG. 5 is a diagram showing an example of a schematic configuration of the fire detection device 3B according to the second embodiment. As shown in FIG. 5, the fire detection device 3B includes a fire candidate position specifying unit 10, a learning model creation unit 40, and a fire detection unit 20B. These components are implemented by, for example, a hardware processor such as a CPU executing a program (software). Some or all of these components may instead be implemented by hardware (circuitry) such as an LSI, ASIC, FPGA, or GPU, or by cooperation of software and hardware. The program may be stored in advance in a storage device (a storage device comprising a non-transitory storage medium) such as an HDD (Hard Disk Drive) or flash memory, or may be stored on a removable storage medium (a non-transitory storage medium) such as a DVD or CD-ROM and installed into the storage device by loading the storage medium into a drive device. The storage device is configured by, for example, an HDD, flash memory, EEPROM, ROM, or RAM.
The learning model creation unit 40 sets an appropriate judgment threshold by using machine learning. As an example, the learning model creation unit 40 includes a preprocessing unit 41 and a learning unit 42.
The preprocessing unit 41 creates learning data for the machine learning performed by the learning unit 42. The learning data include judgment values calculated from observation data of locations where a fire actually occurred (hereinafter referred to as "first past judgment values") and judgment values calculated from observation data of locations where no fire actually occurred (hereinafter referred to as "second past judgment values"). For example, the preprocessing unit 41 acquires observation data from when fires occurred in the past from a Japan Meteorological Agency server or the like. Observation data from when a fire occurred in the past are referred to as "past observation data".
The preprocessing unit 41 generates a pseudo-color composite image using the past observation data. The method by which the preprocessing unit 41 generates the pseudo-color composite image is the same as the method used by the image generation unit 30, so its description is omitted. In the following description, the pseudo-color composite image generated by the preprocessing unit 41 is referred to as a "learning pseudo-color composite image" to distinguish it from the pseudo-color composite image generated by the image generation unit 30.
The preprocessing unit 41 extracts an image of a predetermined area (hereinafter referred to as "learning extracted image") from the learning pseudo-color composite image. Here, the size, shape, and the like of the predetermined area are preferably the same as those of the fire detection area ROI. The predetermined area is a fire area, that is, an area including a location where a fire actually occurred.
The preprocessing unit 41 applies the same processing as that performed by the processing unit 34 to the pixel data of each pixel in the learning extracted image. The preprocessing unit 41 decompresses either or both of the SARI and WI, which are the pixel data of each pixel in the learning extracted image. When the processing unit 34 performs normalization, the preprocessing unit 41 also normalizes the decompressed SARI and/or WI. The preprocessing unit 41 then obtains a judgment value for each pixel from the pixel data processed in the same manner as by the processing unit 34. Here, the location where the fire occurred is known. The preprocessing unit 41 therefore separates, based on the fire location, the per-pixel judgment values into the judgment values of pixels corresponding to the fire location (first past judgment values) and the judgment values of pixels corresponding to locations other than the fire location (second past judgment values). The preprocessing unit 41 then sends the first past judgment values, labeled as indicating a fire, and the second past judgment values, labeled as indicating no fire, to the learning unit 42 as learning data.
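The labeling step might be sketched as follows; the array shapes and the helper name build_training_data are assumptions made for illustration, with the known fire locations supplied as a boolean mask.

```python
import numpy as np

def build_training_data(judgment_values: np.ndarray,
                        fire_mask: np.ndarray):
    """Split per-pixel judgment values into labeled samples.

    judgment_values: (H, W, F) array of per-pixel judgment features
                     (e.g. decompressed/normalized SARI and WI).
    fire_mask:       (H, W) boolean array, True where a fire is known
                     to have occurred (the fire location is known for
                     past observation data).

    Returns (X, y) where y = 1 marks first past judgment values (fire)
    and y = 0 marks second past judgment values (no fire)."""
    X = judgment_values.reshape(-1, judgment_values.shape[-1])
    y = fire_mask.reshape(-1).astype(int)
    return X, y
```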
The learning unit 42 uses the learning data generated by the preprocessing unit 41 to build, by machine learning (for example, supervised learning such as a support vector machine (SVM)), a learning model whose input is a judgment value and whose output is the presence or absence of a fire. For example, through machine learning on the learning data, this learning model draws a boundary line H, as shown in FIG. 6, that separates the first past judgment values from the second past judgment values. When a judgment value (one that is not learning data) is input as input data, the trained model outputs, based on the boundary line H, whether or not the input data is a value indicating a fire. The learning unit 42 sends the constructed trained model to the fire detection unit 20B. This boundary line H corresponds to the judgment threshold. In the example shown in FIG. 6, a cross indicates a fire and a filled square indicates no fire.
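A minimal sketch of this supervised learning step is shown below using scikit-learn's SVC on synthetic stand-in data; the linear kernel and the two-feature layout (e.g. decompressed SARI and WI) are assumptions, as the publication names SVMs only as one example of supervised learning.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic stand-in for labeled past judgment values: two features per
# sample, label 1 = first past judgment values (fire), 0 = second past
# judgment values (no fire). Cluster centers are arbitrary.
rng = np.random.default_rng(0)
X_fire = rng.normal(loc=[0.8, 0.2], scale=0.05, size=(100, 2))
X_no_fire = rng.normal(loc=[0.3, 0.6], scale=0.05, size=(100, 2))
X = np.vstack([X_fire, X_no_fire])
y = np.array([1] * 100 + [0] * 100)

# Fit an SVM; its decision boundary plays the role of the boundary line H
# (the judgment threshold). The linear kernel is an assumption.
model = SVC(kernel="linear")
model.fit(X, y)
```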
The fire detection unit 20B includes the image generation unit 30, the mask unit 31, the fire detection area creation unit 32, the extraction unit 33, the processing unit 34, and a determination unit 35B.
The determination unit 35B holds the trained model built by the learning unit 42. The determination unit 35B determines the presence or absence of a fire for each pixel by inputting the judgment values set by the processing unit 34 into the trained model as input data. The determination unit 35B may also identify the location of a fire based on the pixels of the input data determined to indicate a fire.
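Continuing the sketch above, per-pixel inference with the trained model might look like this; the 64x64 grid and the synthetic judgment values are illustrative only.

```python
# Classify each pixel's judgment-value vector as fire (1) or not (0),
# reusing `model` and `rng` from the training sketch above.
judgment_values = rng.normal(loc=[0.5, 0.4], scale=0.2, size=(64 * 64, 2))
fire_flags = model.predict(judgment_values).reshape(64, 64)
fire_pixels = np.argwhere(fire_flags == 1)  # candidate fire locations (row, col)
```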
The operation flow of the fire detection device 3B according to the second embodiment is described below with reference to FIG. 7. FIG. 7 is a flowchart of the fire detection device 3B according to the second embodiment.
The fire detection device 3B acquires observation data from the artificial satellite 1 (step S201). The fire detection device 3B calculates the SARI and WI based on the observation data (step S202). The fire detection device 3B also creates a fire detection area ROI based on the observation data (step S203). The fire detection device 3B generates a pseudo-color composite image based on the observation data, the SARI, and the WI (step S204). The fire detection device 3B masks the pixel data corresponding to clouds and sea in the pseudo-color composite image (step S205). The fire detection device 3B extracts the image within the range of the fire detection area ROI from the masked pseudo-color composite image as an extracted image (step S206).
The fire detection device 3B decompresses either or both of the SARI and WI data for each pixel in the extracted image (step S207). The fire detection device 3B then normalizes the decompressed data (step S208). The fire detection device 3B sets a judgment value from the decompressed and normalized values (step S209), and determines the presence or absence of a fire for each pixel by inputting the judgment value into the trained model (step S210).
In the second embodiment, the learning model creation unit 40 is included in the fire detection device 3B, but the configuration is not limited to this, and the learning model creation unit 40 may be a device separate from the fire detection device 3B. That is, the fire detection device 3B need only hold at least the trained model, and a device other than the fire detection device 3B may create the trained model.
According to at least one embodiment described above, decompressing the pixel data of each pixel in the pseudo-color composite image generated based on the observation data makes the distinction between data indicating the occurrence of a fire and data not indicating the occurrence of a fire in the pseudo-color composite image clear, which suppresses the deterioration of fire detection accuracy caused by smoke. As a result, the accuracy of fire detection can be improved.
In addition, because the trained model uses past judgment values generated from decompressed pixel data as learning data, it can detect fires without depending on the atmospheric environment or the land of the fire detection area, eliminating the need to collect and learn learning data for each fire detection area. In other words, the boundary line H is a threshold that can be used anywhere in the world.
The fire detection device 3 or 3B of at least one embodiment described above is, for example, a computer comprising one or more processors and a memory storing one or more programs configured to be executed by the one or more processors. The one or more programs include instructions that cause the fire detection device 3 or 3B to generate a pseudo-color composite image based on the observation data, extract the image of the range corresponding to the fire detection area (the extracted image) from the pseudo-color composite image, decompress the pixel data of each pixel in the extracted image, and determine the presence or absence of a fire based on the decompressed pixel data.
One or more of the components of the fire detection device 3 or 3B may be configured by a single computer, that is, the fire detection device 3 or 3B may be configured by a plurality of computers, or all of the components of the fire detection device 3 or 3B may be configured by a single computer.
The fire detection device 3 or 3B of the above embodiments includes the fire candidate position specifying unit 10, the mask unit 31, and the fire detection area creation unit 32, but these components are not essential to the fire detection device 3 or 3B. Instead of having the fire candidate position specifying unit 10 and the fire detection area creation unit 32, the fire detection device 3 or 3B may acquire the fire candidate position and the fire detection area ROI from an external device. In the configuration of the present disclosure, the mask unit 31 may be omitted from the fire detection device 3 or 3B when mask processing is unnecessary.
Throughout the specification, when a part is said to "include", "have", or "comprise" a component, this does not exclude other components but means that other components may further be included, unless specifically stated to the contrary.
The term "... unit" used in the specification means a unit that processes at least one function or operation, and may be embodied as hardware, as software, or as a combination of hardware and software.
Although the embodiments of the present invention have been described in detail with reference to the drawings, the specific configuration is not limited to these embodiments, and designs and the like within a scope not departing from the gist of the present invention are also included.
The processes in the devices, systems, programs, and methods shown in the claims, the specification, and the drawings, such as operations, procedures, steps, and stages, may be executed in any order unless the order is explicitly indicated by terms such as "before" or "prior to", and unless the output of a preceding process is used in a subsequent process. Even where the operation flows in the claims, the specification, and the drawings are described using terms such as "first" and "next" for convenience, this does not mean that they must be carried out in that order.
A, B Fire detection system
1 Artificial satellite
2 Ground receiving station
3, 3B Fire detection device
10 Fire candidate position specifying unit
20 Fire detection unit
30 Image generation unit
31 Mask unit
32 Fire detection area creation unit
33 Extraction unit
34 Processing unit
35, 35B Determination unit
40 Learning model creation unit

Claims (7)

  1.  A fire detection device that detects fires on the earth using observation data from a radiometer mounted on an artificial satellite, the fire detection device comprising:
     an image generation unit that generates a pseudo-color composite image based on the observation data;
     an extraction unit that extracts, from the pseudo-color composite image, an image of a range corresponding to a fire detection area;
     a processing unit that decompresses pixel data for each pixel in the image extracted by the extraction unit; and
     a determination unit that determines the presence or absence of a fire based on the pixel data decompressed by the processing unit.
  2.  The fire detection device according to claim 1, wherein
     the processing unit normalizes the decompressed pixel data, and
     the determination unit determines the presence or absence of a fire based on the normalized pixel data.
  3.  The fire detection device according to claim 1, wherein
     the pixel data include a smoke aerosol reflectance index, which is an index relating to reflectance with respect to smoke, and a moisture index, which is an index relating to the water content of the atmosphere and the ground surface, and
     the processing unit decompresses either or both of the smoke aerosol reflectance index and the moisture index among the pixel data of each pixel in the image extracted by the extraction unit.
  4.  The fire detection device according to claim 3, wherein
     the processing unit normalizes whichever of the smoke aerosol reflectance index and the moisture index has been decompressed.
  5.  The fire detection device according to any one of claims 1 to 4, further comprising:
     a fire candidate position specifying unit that specifies a fire candidate position, which is a position where a fire is suspected to have occurred, based on information of an observation band indicating brightness temperature among the observation data; and
     a fire detection area creation unit that creates a predetermined area including the fire candidate position as the fire detection area.
  6.  The fire detection device according to any one of claims 1 to 5, wherein
     the determination unit has a learning model that takes pixel data, or a value based on pixel data, as input data and outputs a determination result as to whether the input data indicates a fire, and determines the presence or absence of a fire by inputting the pixel data decompressed by the processing unit, or a value based on that pixel data, into the learning model.
  7.  A fire detection method for a fire detection device that detects fires on the earth using observation data from a radiometer mounted on an artificial satellite, the method comprising:
     generating a pseudo-color composite image based on the observation data;
     extracting, from the pseudo-color composite image, an image of a range corresponding to a fire detection area;
     decompressing pixel data for each pixel in the extracted image; and
     determining the presence or absence of a fire based on the decompressed pixel data.
PCT/JP2022/008985 2021-03-02 2022-03-02 Fire detecting device, and fire detecting method WO2022186306A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
JP2023503931A JPWO2022186306A1 (en) 2021-03-02 2022-03-02

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2021-032432 2021-03-02
JP2021032432 2021-03-02

Publications (1)

Publication Number Publication Date
WO2022186306A1 true WO2022186306A1 (en) 2022-09-09

Family

ID=83155136

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/008985 WO2022186306A1 (en) 2021-03-02 2022-03-02 Fire detecting device, and fire detecting method

Country Status (2)

Country Link
JP (1) JPWO2022186306A1 (en)
WO (1) WO2022186306A1 (en)

Patent Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2005328333A (en) * 2004-05-14 2005-11-24 Hitachi Kokusai Electric Inc Monitor system
JP2006179030A (en) * 2006-03-15 2006-07-06 Nissan Motor Co Ltd Facial region detection apparatus
JP2019513315A (en) * 2016-02-29 2019-05-23 ウルグス ソシエダード アノニマ System for planet-scale analysis

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
NAGATANI IZUMI, KUDOH JUN-ICHI: "A False-color Composite Method for Wildfire Smoke Plume Identification Using MODIS Data", JOURNAL OF THE REMOTE SENSING SOCIETY OF JAPAN, vol. 33, no. 1, 30 January 2013 (2013-01-30), pages 38 - 47, XP055963414 *

Also Published As

Publication number Publication date
JPWO2022186306A1 (en) 2022-09-09

Similar Documents

Publication Publication Date Title
Shimada et al. New global forest/non-forest maps from ALOS PALSAR data (2007–2010)
Giglio et al. The collection 6 MODIS active fire detection algorithm and fire products
JP6977873B2 (en) Image processing device, image processing method, and image processing program
Feyisa et al. Automated Water Extraction Index: A new technique for surface water mapping using Landsat imagery
Kantakumar et al. Multi-temporal land use classification using hybrid approach
CN109211793B (en) Fire spot identification method combining spectral index and neural network
He et al. Dry and wet snow cover mapping in mountain areas using SAR and optical remote sensing data
Lyapustin et al. Improved cloud and snow screening in MAIAC aerosol retrievals using spectral and spatial analysis
KR100894482B1 (en) A fog forecasting system using weather satellite and fog forecasting method thereof
Lizundia-Loiola et al. Global burned area mapping from Sentinel-3 Synergy and VIIRS active fires
Brand et al. Semantic segmentation of burned areas in satellite images using a U-net-based convolutional neural network
Bucha et al. Analysis of MODIS imagery for detection of clear cuts in the boreal forest in north-west Russia
Nkeumoe Numbisi et al. Multi-date sentinel1 SAR image textures discriminate perennial agroforests in a tropical forest-savannah transition landscape
Li et al. Automatic smoke detection in modis satellite data based on k-means clustering and fisher linear discrimination
WO2022186306A1 (en) Fire detecting device, and fire detecting method
Roteta et al. Optimization of A Random Forest Classifier for Burned Area Detection in Chile Using Sentinel-2 Data
Chung et al. Wildfire damage assessment using multi-temporal Sentinel-2 data
EP3862967A1 (en) Learning device, image processing device, learning method, image processing method, learning program, and image processing program
CN112580549A (en) Fire point detection method and device and storage medium
CN114581793A (en) Cloud identification method and device for remote sensing image, electronic equipment and readable storage medium
Asakuma et al. Detection of biomass burning smoke in satellite images using texture analysis
CN109978862B (en) Air pollution estimation method based on satellite image
Rivas-Perea et al. Automatic dust storm detection based on supervised classification of multispectral data
DaCamara et al. A User-Oriented Simplification of the ($ V, W $) Burn-Sensitive Vegetation Index System
Laneve et al. SIGRI project: Products validation results

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22763359

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2023503931

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 22763359

Country of ref document: EP

Kind code of ref document: A1