WO2021246210A1 - Machine learning data generation device, machine learning data generation method, program, and learning data set - Google Patents


Info

Publication number
WO2021246210A1
Authority
WO
WIPO (PCT)
Prior art keywords
gas
dimensional
machine learning
image data
gas distribution
Prior art date
Application number
PCT/JP2021/019504
Other languages
French (fr)
Japanese (ja)
Inventor
Takashi Morimoto
Motohiro Asano
Shunsuke Takamura
Original Assignee
Konica Minolta, Inc.
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Konica Minolta, Inc.
Publication of WO2021246210A1 publication Critical patent/WO2021246210A1/en

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M3/00 Investigating fluid-tightness of structures
    • G01M3/02 Investigating fluid-tightness of structures by using fluid or vacuum
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • The present disclosure relates to a machine learning data generation device, a machine learning data generation method, a program, and a learning data set, and more particularly to a machine learning data generation method used in a gas leak detection device.
  • In facilities that use gas, such as production facilities for natural gas and petroleum, plants that manufacture chemical products from gas, gas transmission equipment, petrochemical plants, thermal power plants, and steel-related facilities (hereinafter sometimes referred to as "gas facilities"), the danger of gas leakage due to aging equipment and operational mistakes is well recognized, and gas detection devices are installed to minimize the damage caused by gas leaks.
  • The gas detection method using infrared moving images has the advantage that, because the gas is visualized in the image, the emission state (such as the gas flow) and the leak position can be detected easily.
  • Because the state of the leaked gas is recorded as an image, it can also be used as evidence of the occurrence of a gas leak and of its repair.
  • In the gas facility monitored by the gas leak detection device, equipment such as piping is arranged in a complicated manner, so the location where a gas leak occurs may be hidden behind the equipment and invisible from the image pickup device. Moreover, when the wind direction is changing, it becomes difficult to identify the location where the gas leak occurs.
  • This disclosure has been made in view of the above problems, and its object is to provide a machine learning data generation device, a machine learning data generation method, a program, and a learning data set that efficiently generate a large number of training data sets, each consisting of an input and a correct output, under various conditions for training a machine learning model in a gas leak detection device, thereby contributing to improved learning accuracy.
  • The method is characterized by including a step of converting three-dimensional gas distribution image data, which shows the range occupied by gas leaking from a gas leakage source into a three-dimensional space, into two-dimensional gas distribution image data observed from a predetermined viewpoint position, and a step of generating the two-dimensional gas distribution image data and predetermined feature information about the gas as teacher data for machine learning.
  • With this configuration, a large number of training data sets, each consisting of an input and a correct output, can be generated efficiently under various conditions for training the machine learning model in the gas leak detection device, contributing to improved learning accuracy.
  • FIG. 1 is a schematic configuration diagram of the gas leak detection system 1 according to the embodiment. FIG. 2 is a schematic diagram showing the relationship between the monitoring target 300 and the gas visualization image pickup device 10. FIG. 3 is a diagram showing the configuration of the gas leak detection device 20.
  • (A) is a functional block diagram of the control unit 21, and (b) is a schematic diagram showing an outline of the logical configuration of the machine learning model.
  • (A) is a functional block diagram of the machine learning data generation device 30, and (b) is a functional block diagram of the machine learning data generation device 30A according to the first modification.
  • (A) and (b) are schematic diagrams showing the data structure of the structure three-dimensional data and the three-dimensional gas distribution image data, respectively.
  • FIG. (a) is a schematic diagram showing an outline of the concentration-thickness product image data, (b) is a schematic diagram explaining an outline of the concentration-thickness product calculation method in the light absorption rate image conversion process, and (c) is a schematic diagram showing an outline of the light absorption rate image data.
  • FIG. (a) is a schematic diagram showing an outline of the three-dimensional structure data, (b) is a schematic diagram showing an outline of the extracted background location data, and (c) is a conceptual diagram showing the method of generating background image data based on the background location data.
  • (A) is a schematic diagram showing an outline of background image data
  • (b) is a schematic diagram showing an outline of light absorption rate image data
  • (c) is a schematic diagram showing an outline of light intensity image data.
  • (A) is a functional block diagram of the machine learning data generation device 30E according to the modification 3
  • (b) is a functional block diagram of the machine learning data generation device 30F according to the modification 4.
  • FIG. It is a flowchart which shows the outline of a gas distribution enhancement process.
  • (A) is a functional block diagram of the machine learning data generation device 30G according to the modified example 5
  • (b) is a functional block diagram of the machine learning data generating device 30H according to the modified example 6.
  • << Embodiment 1 >> <Configuration of gas leak detection system 1>
  • the embodiment of the present disclosure is realized as a gas leak detection system 1 that analyzes a gas leak from a gas leak inspection image of a gas facility.
  • the gas leak detection system 1 according to the embodiment will be described in detail with reference to the drawings.
  • FIG. 1 is a schematic configuration diagram of a gas leak detection system 1 according to an embodiment.
  • The gas leak detection system 1 includes a plurality of gas visualization image pickup devices 10, a gas leak detection device 20, a machine learning data generation device 30, and a storage means 40, all connected to a communication network N.
  • The communication network N is, for example, the Internet, and the gas visualization image pickup device 10, the gas leak detection device 20, the machine learning data generation devices 30, and the storage means 40 are connected so as to be able to exchange information with each other.
  • The gas visualization image pickup device 10 is a device or system that images the monitored object, processes the captured infrared image, colors the leaked gas portion in the inspection image to visualize the gas leak portion of the inspection target, and outputs the image to the gas leak detection device 20. It includes an imaging means (not shown) consisting of an infrared camera that detects and captures infrared rays, a visualization processing means (not shown) that visualizes the gas leak portion of the inspection target from the inspection image captured by the imaging means, and an interface circuit (not shown) that outputs to the communication network N.
  • Images taken with an infrared camera are generally used for detecting hydrocarbon gases.
  • Specifically, it is a so-called infrared camera that detects and images infrared light with a wavelength of 3.2 to 3.4 μm, and can detect hydrocarbon-based gases such as methane, ethane, ethylene, and propylene.
  • the gas visualization image pickup device 10 is installed so that the monitoring target 300 is included in the field of view range 310 of the infrared camera.
  • the obtained inspection image is, for example, a video signal for transmitting an image of 30 frames per second.
  • the gas visualization image pickup device 10 converts the captured image into a predetermined video signal.
  • the infrared image signal acquired from the infrared camera is restored to an image and output to the visualization processing means as a moving image composed of a plurality of frames.
  • the image is an infrared photograph of the monitored object, and has the intensity of infrared rays as a pixel value.
  • the number of pixels of the gas distribution image is 224 ⁇ 224 pixels, and the number of frames is 16.
  • the leaked gas portion in the inspection image is colored to visualize the gas leak to be inspected.
  • the gas visualization image pickup device detects the presence of gas by capturing a change in the amount of electromagnetic waves radiated from a background object having an absolute temperature of 0 (K) or higher.
  • the change in the amount of electromagnetic waves is mainly caused by the absorption of electromagnetic waves in the infrared region by the gas or the generation of blackbody radiation from the gas itself.
  • Since a gas leak can be captured as an image by photographing the monitored space, gas leakage can be detected earlier, and the location of the gas detected more accurately, than with the conventional detection-probe type that can only monitor grid-like locations.
  • The infrared change amount is converted into color information and mapped, whereby the gas leak portion in the inspection image can be visualized.
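As a rough illustration of this mapping step (the value range, threshold, and single-channel color scheme are assumptions for the sketch, not taken from this disclosure), the per-pixel infrared change amount can be converted into color information as follows:

```python
import numpy as np

def colorize_gas(delta_ir, threshold=0.1):
    """Map per-pixel infrared change amounts (assumed in [0, 1]) to a red
    overlay so the leaked gas region stands out in the inspection image."""
    h, w = delta_ir.shape
    rgb = np.zeros((h, w, 3), dtype=np.uint8)
    mask = delta_ir > threshold  # pixels where gas changed the infrared signal
    # Scale the change amount into the red channel; other channels stay 0.
    rgb[..., 0] = np.where(mask, (delta_ir * 255).clip(0, 255), 0)
    return rgb

delta = np.zeros((4, 4))
delta[1, 1] = 0.5          # one pixel with a strong infrared change
out = colorize_gas(delta)  # 4 x 4 x 3 color image, red only at (1, 1)
```

In practice the mapping would be a full color scale rather than a single channel, but the principle of converting the change amount into color information per pixel is the same.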
  • the visualized inspection image is temporarily stored in a memory or the like, and is transferred to and stored in the storage means 40 via the communication network N based on the operation input.
  • the gas visualization image pickup device 10 is not limited to this, and may be any image pickup device capable of detecting the gas to be monitored.
  • The gas visualization image pickup device may also be a general visible light camera when the monitoring target is a gas that can be detected by visible light, such as white water vapor smoke.
  • Here, the gas refers to a gas that has leaked from a closed space such as a pipe or a tank, not one intentionally diffused into the atmosphere.
  • The storage means 40 is a storage device for storing the inspection images transmitted from the gas visualization image pickup device 10, and is configured to include a volatile memory such as a DRAM (Dynamic Random Access Memory) and a non-volatile memory such as a hard disk.
  • the gas leak detection device 20 is a device that acquires an image of a monitored object from the gas visualization image pickup device 10, detects a gas region based on the image, and notifies the user of the gas detection through the display unit 24.
  • the gas leak detection device 20 is realized as, for example, a computer including a general CPU (Central Processing Unit), a RAM (Random Access Memory), and a program executed by these.
  • the gas leak detection device 20 may further include a GPU (Graphics Processing Unit) and a RAM as arithmetic units.
  • FIG. 3 is a diagram showing the configuration of the gas leak detection device 20.
  • The gas leak detection device 20 includes a control unit (CPU) 21, a communication unit 22, a storage unit 23, a display unit 24, and an operation input unit 25, and is realized as a computer in which the control unit 21 executes a gas leak detection program.
  • the communication unit 22 transmits / receives information to / from the gas leak detection device 20 and the storage means 40.
  • the display unit 24 is, for example, a liquid crystal panel or the like, and displays a display screen generated by the CPU 21.
  • the storage unit 23 stores the program 231 and the like required for the gas leak detection device 20 to operate, and also has a function as a temporary storage area for temporarily storing the calculation result of the CPU 21.
  • the storage unit 23 includes, for example, a volatile memory such as a DRAM and a non-volatile memory such as a hard disk.
  • the control unit 21 realizes each function of the gas leak detection device 20 by executing the gas leak detection program 231 in the storage unit 23.
  • FIG. 4A is a functional block diagram of the control unit 21.
  • FIG. 4B is a functional block diagram of the machine learning unit 2141 in the control unit 21.
  • The gas leak detection device 20 includes an inspection image acquisition unit 211, a gas distribution image acquisition unit 212, a leak position information acquisition unit 213, a machine learning unit 2141, a learning model holding unit 2142, and a determination result output unit 215.
  • the machine learning unit 2141 and the learning model holding unit 2142 constitute a gas leak position identification unit 214.
  • the inspection image acquisition unit 211, the gas distribution image acquisition unit 212, the leak position information acquisition unit 213, the gas leak position identification unit 214, and the determination result output unit 215 constitute the gas leak position identification device 210.
  • the inspection image acquisition unit 211 is a circuit that acquires an inspection image from the gas visualization image pickup device 10.
  • the inspection image is an image showing a gas distribution obtained by processing an infrared image captured by an infrared camera and coloring the leaked gas portion in the inspection image to visualize the gas leaked portion to be inspected.
  • the gas distribution image acquisition unit 212 is a circuit for acquiring a gas distribution image having the same format as the inspection image generated by the gas visualization image pickup device 10 and having a known gas leak position.
  • The gas distribution image is an image capturing a gas cloud leaking from a single leakage source, so there is only one gas leak position, namely the position of the leakage source, for each gas distribution image.
  • The gas distribution image acquisition unit 212 may perform processing such as cropping and gain adjustment so that the acquired image has the same format as the inspection image. Further, for example, when the acquired image is three-dimensional voxel data, it may be converted into a two-dimensional image viewed from a single viewpoint.
  • the leak position information acquisition unit 213 is a circuit for acquiring the gas leakage position corresponding to the gas distribution image acquired by the gas distribution image acquisition unit 212.
  • The gas leak position is specified as coordinates in the gas distribution image acquired by the gas distribution image acquisition unit 212. If the acquired gas leak position is given as coordinates in space, it is converted into coordinates in the gas distribution image.
  • the machine learning unit 2141 executes machine learning based on the combination of the gas distribution image received by the gas distribution image acquisition unit 212 and the gas leakage position corresponding to the gas distribution image received by the leak position information acquisition unit 213. It is a circuit that generates a machine learning model.
  • The machine learning model is formed so as to predict the gas leak position, indicating the position of the gas leak source, based on a combination of features of the gas distribution image, for example the outer peripheral shape of the gas region, the gas shading distribution, and their changes over time.
  • a convolutional neural network can be used, and known software such as PyTorch can be used.
  • FIG. 4B is a schematic diagram showing an outline of the logical configuration of the machine learning model.
  • the machine learning model includes an input layer 51, an intermediate layer 52-1, an intermediate layer 52-2, ..., An intermediate layer 52-n, and an output layer 53, and the interlayer filter is optimized by learning.
  • the input layer 51 accepts a 224 ⁇ 224 ⁇ 16 three-dimensional tensor in which the pixel value of the gas distribution image is input.
  • the intermediate layer 52-1 is, for example, a convolution layer, and receives a 224 ⁇ 224 ⁇ 16 three-dimensional tensor generated by a convolution operation from the data of the input layer 51.
  • the intermediate layer 52-2 is, for example, a pooling layer, and accepts a three-dimensional tensor obtained by resizing the data of the intermediate layer 52-1.
  • the intermediate layer 52-n is, for example, a fully connected layer, and the data of the intermediate layer 52- (n-1) is converted into a two-dimensional vector showing coordinate values.
  • The configuration of the intermediate layers is an example; the number n of intermediate layers is about 3 to 5, but is not limited to this. Further, although the layers in FIG. 4B are drawn with the same number of neurons, each layer may have an arbitrary number of neurons.
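A minimal sketch of such a model in PyTorch, which this disclosure names as usable software, is shown below. The channel counts, kernel sizes, and the choice to treat the 16 frames as input channels of 2-D convolutions are illustrative assumptions, not the patent's specification; only the 224 x 224 x 16 input and the two-dimensional coordinate output come from the text.

```python
import torch
import torch.nn as nn

class LeakPositionNet(nn.Module):
    """Sketch of the model of FIG. 4B: a 224 x 224 x 16 gas distribution
    clip in, a two-dimensional (x, y) leak coordinate out."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(16, 32, kernel_size=3, padding=1),  # convolution layer (52-1)
            nn.ReLU(),
            nn.MaxPool2d(2),                              # pooling layer (52-2)
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool2d(2),
        )
        self.head = nn.Sequential(                        # fully connected layer (52-n)
            nn.Flatten(),
            nn.Linear(64 * 56 * 56, 2),                   # -> (x, y) coordinate values
        )

    def forward(self, x):
        return self.head(self.features(x))

model = LeakPositionNet()
clip = torch.randn(1, 16, 224, 224)  # one 16-frame gas distribution clip
coords = model(clip)                 # shape (1, 2): predicted leak position
```

Training would then minimize a regression loss between `coords` and the known leak position supplied by the leak position information acquisition unit 213.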
  • the machine learning unit 2141 receives a moving image as a gas distribution image as an input, performs learning with the gas leakage position as the correct answer, generates a machine learning model, and outputs it to the learning model holding unit 2142.
  • the machine learning unit 2141 may be realized by the GPU and software when the gas leak detection device 20 includes a GPU and a RAM as arithmetic units.
  • The learning model holding unit 2142 holds the machine learning model generated by the machine learning unit 2141, and uses the machine learning model to output the gas leakage position corresponding to the gas distribution image acquired by the inspection image acquisition unit 211.
  • the gas leak position is specified and output as a coordinate value in the input gas distribution image.
  • The determination result output unit 215 is a circuit that generates an image for display on the display unit 24 by superimposing the gas leak position output by the learning model holding unit 2142 on the moving image acquired by the inspection image acquisition unit 211.
  • FIG. 5 is a diagram showing the configuration of the machine learning data generation device 30.
  • The machine learning data generation device 30 includes a control unit (CPU) 31, a communication unit 32, a storage unit 33, a display unit 34, and an operation input unit 35, and is realized as a computer in which the control unit 31 executes a machine learning data generation program.
  • the control unit 31 realizes the function of the machine learning data generation device 30 by executing the machine learning data generation program 331 in the storage unit 33.
  • FIG. 6A is a functional block diagram of the machine learning data generation device 30.
  • FIG. 6B is a functional block diagram of the machine learning data generation device 30A according to the first modification, and the machine learning data generation device 30A will be described later.
  • The condition parameters required for the processing, input at each functional block in FIGS. 6A and 6B, are as shown in the table below (Table 1).
  • the machine learning data generation device 30 includes a three-dimensional structure modeling unit 311, a three-dimensional fluid simulation execution unit 312, and a two-dimensional single-viewpoint gas distribution image conversion processing unit 313.
  • The three-dimensional structure modeling unit 311 designs a three-dimensional structure unit model based on the operation input from the operator to the operation input unit 35, performs three-dimensional structure modeling to lay out the structure in the three-dimensional space, and outputs the structure three-dimensional data DTstr to the subsequent stage.
  • the three-dimensional structure data DTstr is shape data representing, for example, the three-dimensional shape of a pipe or other plant facility.
  • Commercially available 3D CAD (Computer-Aided Design) software can be used for 3D structure modeling.
  • FIG. 7A is a schematic diagram showing the data structure of the three-dimensional structure data DTstr.
  • the X direction, the Y direction, and the Z direction in each figure are the width direction, the depth direction, and the height direction, respectively.
  • The structure three-dimensional data DTstr is three-dimensional voxel data representing a three-dimensional space, and is composed of structure identification information Std arranged at coordinates in the X, Y, and Z directions. Since the structure identification information Std represents three-dimensional shape data, it may be recorded as a binary image in which 0 and 1 mean "no structure" and "structure present".
  • the 3D fluid simulation execution unit 312 acquires the structure 3D data DTstr as an input, and further acquires the condition parameter CP1 required for the fluid simulation based on the operation input from the operator to the operation input unit 35.
  • The condition parameter CP1 is a parameter that determines the conditions required for the fluid simulation, mainly relating to the gas leakage, such as the gas type, gas flow rate, gas flow velocity in the three-dimensional space, and the shape, diameter, and position of the gas leakage source, as shown in Table 1. By generating images while varying many combinations of these condition parameters, a large amount of training data can be generated.
  • The three-dimensional gas distribution image data DTgas is data including at least a three-dimensional gas concentration distribution. The calculation is performed using commercially available three-dimensional fluid simulation software; for example, ANSYS Fluent, FloEFD, or Femap/Flow may be used.
  • FIG. 7B is a schematic diagram showing the data structure of the three-dimensional gas distribution image data DTgas.
  • The three-dimensional gas distribution image data DTgas is three-dimensional voxel data representing a three-dimensional space, composed of voxel gas concentration data Dst (%) arranged at coordinates in the X, Y, and Z directions, and may further include voxel gas flow velocity vector data Vc (Vx, Vy, Vz).
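As a concrete illustration of the two voxel data sets, DTstr and DTgas, they can be held as arrays of the following shapes (the grid size, dtypes, and sample values are assumptions for the sketch, not values from the disclosure):

```python
import numpy as np

# Illustrative voxel grid: X x Y x Z = 64 x 64 x 64 (size is an assumption).
NX, NY, NZ = 64, 64, 64

# Structure data DTstr: binary voxels, 1 = "structure present", 0 = "no structure".
dt_str = np.zeros((NX, NY, NZ), dtype=np.uint8)
dt_str[20:24, :, 0:40] = 1  # e.g. a pipe-like obstacle running in the Y direction

# Gas distribution data DTgas: per-voxel gas concentration Dst (%)...
gas_concentration = np.zeros((NX, NY, NZ), dtype=np.float32)
gas_concentration[30, 30, 10] = 5.0  # leak source voxel at 5 % concentration

# ...optionally with a per-voxel flow velocity vector Vc = (Vx, Vy, Vz).
gas_velocity = np.zeros((NX, NY, NZ, 3), dtype=np.float32)
gas_velocity[30, 30, 10] = (0.0, 0.0, 1.2)  # e.g. gas rising in the Z direction
```

A fluid simulation would fill `gas_concentration` and `gas_velocity` over the whole grid; here only the source voxel is populated to show the layout.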
  • The two-dimensional single-viewpoint gas distribution image conversion processing unit 313 acquires as input the three-dimensional gas distribution image data DTgas of the gas leaking from the gas leakage source into the three-dimensional space, and further acquires, based on the operation input from the operator to the operation input unit 35, the condition parameter CP2 necessary for the conversion process into a two-dimensional single-viewpoint gas distribution image.
  • the condition parameter CP2 is a parameter related to the imaging conditions of the gas visualization image pickup device, such as the angle of view of the image pickup device, the line-of-sight direction, the distance, and the image resolution, as shown in Table 1, for example.
  • The two-dimensional single-viewpoint gas distribution image conversion processing unit 313 converts the three-dimensional gas distribution image data DTgas into two-dimensional gas distribution image data observed from a predetermined viewpoint position, and then converts the two-dimensional gas distribution image data into concentration-thickness product image data DTdt. The concentration-thickness product image DTdt and the leak source coordinate data output from the three-dimensional structure modeling unit 311 are output to the gas leak detection device 20 as teacher data for machine learning.
  • The concentration-thickness product image DTdt corresponds to an inspection image of the leaked gas as captured by the gas visualization image pickup device 10, and shows how the gas appears from the viewpoint. Further, by taking the structure three-dimensional data DTstr into account, it is possible to generate gas concentration-thickness product image data DTdt in which gas that is blocked by a structure and cannot be observed from the viewpoint is not reflected.
  • FIG. 8 is a schematic diagram for explaining an outline of the concentration-thickness product calculation method in the two-dimensional single-viewpoint gas distribution image conversion process, and is a conceptual diagram of the method of generating the gas concentration-thickness product image data DTdt from the three-dimensional gas distribution image data DTgas showing the behavior of the gas.
  • The two-dimensional single-viewpoint gas distribution image conversion processing unit 313 spatially integrates the three-dimensional gas concentration image indicated by the three-dimensional gas distribution image data DTgas in the line-of-sight direction from a preset viewpoint position SP (X, Y, Z), generates a plurality of concentration-thickness product values Dst by changing the line-of-sight angles θ and φ, and arranges the obtained values two-dimensionally to generate the concentration-thickness product image data DTdt.
  • Specifically, an arbitrary viewpoint position SP (X, Y, Z) is set in the three-dimensional space, and the virtual image plane VF is set at a position separated from the viewpoint position SP (X, Y, Z) by a predetermined distance in the direction of the three-dimensional gas concentration image indicated by the three-dimensional gas distribution image data DTgas. The virtual image plane VF is set so that its center O lies on the straight line passing through the viewpoint position SP (X, Y, Z) and the center voxel of the three-dimensional gas distribution image data DTgas.
  • the image frame of the virtual image plane VF is set according to the angle of view of the gas visualization image pickup apparatus 10.
  • The line-of-sight direction DA, directed from the viewpoint position SP (X, Y, Z) toward the pixel of interest A (x, y) on the virtual image plane VF, is tilted with respect to the line-of-sight direction DO toward the central pixel O, that is, the line of sight of the gas visualization image pickup device, by the angle θ in the X direction and the angle φ in the Y direction.
  • The gas concentration distribution data of the voxels of the three-dimensional gas distribution image that intersect the line of sight along the line-of-sight direction DA corresponding to the pixel of interest A (x, y) is spatially integrated in the line-of-sight direction DA, and the value of the gas concentration-thickness product for the pixel of interest A (x, y) is calculated. The position of the pixel of interest A (x, y) is then moved step by step so that every pixel on the virtual image plane VF becomes the pixel of interest, whereby the concentration-thickness product image data DTdt is calculated.
  • FIG. 9 is a schematic diagram for explaining the outline of the concentration-thickness product calculation method when a structure exists, in the two-dimensional single-viewpoint gas distribution image conversion process, and shows the case where gas blocked by the structure cannot be observed from the gas visualization image pickup device 10.
  • In this case, the value of the gas concentration-thickness product for the pixel of interest A (x, y) is calculated by a method that does not spatially integrate the gas concentration distribution existing behind the structure as viewed from the viewpoint position SP (X, Y, Z). That is, the value of the gas concentration-thickness product for the pixel of interest A (x, y) is calculated by spatially integrating the gas concentration distribution from the viewpoint position SP (X, Y, Z) along the line-of-sight direction DA corresponding to the pixel of interest A (x, y).
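This occlusion-aware integration can be sketched as follows (the voxel-grid representation, unit step size, and function name are illustrative assumptions): the line of sight is marched voxel by voxel from the viewpoint, and integration stops at the first structure voxel so that gas hidden behind it is not counted.

```python
import numpy as np

def concentration_thickness(gas, structure, origin, direction, step=1.0, max_t=200.0):
    """Integrate gas concentration along one line of sight, stopping at the
    first structure voxel so gas behind it is excluded (as in FIG. 9).
    `gas` and `structure` are voxel grids; units are illustrative."""
    direction = np.asarray(direction, dtype=float)
    direction /= np.linalg.norm(direction)
    total, t = 0.0, 0.0
    while t < max_t:
        p = np.asarray(origin) + t * direction
        idx = tuple(np.floor(p).astype(int))
        if any(i < 0 or i >= n for i, n in zip(idx, gas.shape)):
            break                 # left the simulated volume
        if structure[idx]:
            break                 # blocked: do not integrate gas behind the structure
        total += gas[idx] * step  # concentration x path length
        t += step
    return total

# One gas voxel at z = 10; a structure voxel at z = 5 blocks the same ray.
gas = np.zeros((16, 16, 16), dtype=np.float32)
gas[8, 8, 10] = 5.0
empty = np.zeros((16, 16, 16), dtype=np.uint8)
wall = empty.copy()
wall[8, 8, 5] = 1

visible = concentration_thickness(gas, empty, (8, 8, 0), (0, 0, 1))  # gas seen
blocked = concentration_thickness(gas, wall, (8, 8, 0), (0, 0, 1))   # gas occluded
```

Repeating this per pixel of the virtual image plane VF, with DA varied over the angles θ and φ, yields the concentration-thickness product image data DTdt.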
  • The storage unit 33 stores the program 331 and the like necessary for the machine learning data generation device 30 to operate, and also functions as a temporary storage area for temporarily storing the calculation results of the control unit 31.
  • the storage unit 33 includes a volatile memory such as a DRAM and a non-volatile memory such as a hard disk.
  • the communication unit 32 transmits / receives information to / from the machine learning data generation device 30 and the storage means 40.
  • the display unit 34 is, for example, a liquid crystal panel or the like, and displays a display screen generated by the CPU 31.
  • FIG. 10 is a flowchart showing an outline of the two-dimensional single-viewpoint gas distribution image conversion process. The processing is executed by the two-dimensional single-viewpoint gas distribution image conversion processing unit 313 whose function is configured by the control unit 31.
  • the two-dimensional single-viewpoint gas distribution image conversion processing unit 313 acquires the structure three-dimensional data DTstr (step S1), and further acquires the three-dimensional gas distribution image data DTgas (step S2).
  • Next, input of the condition parameter CP2, for example information on the angle of view of the image pickup device, the line-of-sight direction, the distance, and the image resolution, is accepted (step S3).
  • Next, the viewpoint position SP (X, Y, Z), corresponding to the position of the image pickup unit of the gas visualization image pickup device 10, is set in the three-dimensional space (step S4).
  • A virtual image plane VF separated from the viewpoint position SP (X, Y, Z) by a predetermined distance in the direction of the three-dimensional gas concentration image is then set and, as described above, the position of the image frame of the virtual image plane VF is calculated according to the angle of view of the gas visualization image pickup device 10 (step S5).
  • In step S6, the coordinates of the pixel of interest A (x, y) are set to their initial values (step S6), and the position LV on the line of sight from the viewpoint position SP (X, Y, Z) toward the pixel of interest A (x, y) on the virtual image plane VF is set to its initial value (step S7).
  • In step S10, the presence or absence of a voxel of the three-dimensional gas distribution image data DTgas intersecting the line of sight is determined. If there is no voxel of the structure three-dimensional data DTstr intersecting the line of sight, the position LV on the line of sight is advanced by the unit length (for example, incremented by 1) (step S11) and the process returns to step S8; if there is a voxel intersecting the line of sight, the concentration value Dst of that voxel is read, and its sum with the integrated value Dsti stored in an addition register or the like is stored as the new integrated value Dsti (step S12).
  • In step S13, it is determined whether the calculation has been completed over the total length of the line of sight corresponding to the range where the line of sight and the voxels intersect (step S13). If not, the position LV on the line of sight is incremented by the unit length (step S14) and the process returns to step S8. If completed, the gas concentration distribution data of the voxels of the three-dimensional gas distribution image intersecting the line of sight along the line-of-sight direction DA corresponding to the pixel of interest A (x, y) has been spatially integrated, yielding the value of the gas concentration-thickness product for the pixel of interest A (x, y).
	• in step S15, it is determined whether the calculation of the gas concentration thickness product has been completed for all pixels on the virtual image plane VF (step S15). If not, the pixel of interest A (x, y) is moved to the next position (step S16) and the process returns to step S7. If completed, the value of the gas concentration thickness product has been calculated for all pixels on the virtual image plane VF, and a gas concentration thickness product image DTdt for the virtual image plane VF is generated as the two-dimensional gas distribution image data.
	• in step S17, it is determined whether the generation of the gas concentration thickness product image DTdt has been completed for all the viewpoint positions SP (X, Y, Z) to be calculated (step S17). If not, a gas concentration thickness product image DTdt is generated for the new viewpoint position SP (X, Y, Z) input by operation; if completed, the process is terminated.
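	• The voxel-marching loop of steps S6 to S17 can be sketched as follows. This is a minimal illustration of the idea, not the patented implementation: the function name, grid layout, fixed step count, and the simple floor-based voxel lookup are all assumptions introduced for clarity. For each pixel, gas concentration is accumulated along the line of sight until the ray leaves the volume or hits a structure voxel.

```python
import numpy as np

def concentration_thickness_image(gas, structure, sp, directions, step=1.0, n_steps=64):
    """Sketch of steps S6-S17: for each pixel's line of sight, march through
    the voxel grid, stop at structure voxels, and integrate gas concentration
    (concentration x path length) into a 2D image.

    gas        : 3D array of gas concentration per voxel (DTgas)
    structure  : 3D boolean array, True where a structure occludes the ray (DTstr)
    sp         : viewpoint position SP as a length-3 array
    directions : (H, W, 3) unit view vectors, one per pixel of the virtual plane VF
    """
    h, w, _ = directions.shape
    image = np.zeros((h, w))
    for iy in range(h):
        for ix in range(w):
            dsti = 0.0                      # integrated value Dsti (step S12)
            for lv in range(n_steps):       # position LV on the line of sight
                p = sp + (lv + 0.5) * step * directions[iy, ix]
                idx = tuple(np.floor(p).astype(int))
                if any(i < 0 or i >= s for i, s in zip(idx, gas.shape)):
                    break                   # left the voxel volume
                if structure[idx]:
                    break                   # line of sight blocked by a structure
                dsti += gas[idx] * step     # concentration x unit length
            image[iy, ix] = dsti            # concentration thickness product
    return image
```

	A ray through a uniform gas accumulates concentration times path length, and inserting a structure voxel truncates the integral at the occluder, mirroring the occlusion behavior described for step S10.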
  • FIG. 6B is a functional block diagram of the machine learning data generation device 30A according to the first modification.
  • the machine learning data generation device 30A may be configured by removing the three-dimensional structure modeling unit 311 from the machine learning data generation device 30.
  • the 3D fluid simulation execution unit 312 performs a 3D fluid simulation in a 3D space without a 3D structure and generates 3D gas distribution image data DTgas.
  • the two-dimensional single-viewpoint gas distribution image conversion processing unit 313 generates the density-thickness product image DTdt without considering the information of the structure three-dimensional data DTstr.
	• since the structure three-dimensional data DTstr is not generated, the leakage source is visible in the output gas concentration thickness product image data DTdt. Therefore, a masking process for concealing the leakage source portion of the gas concentration thickness product image is performed, and the gas concentration thickness product image data DTdt including the masking information can be used as learning data.
	• in this way, a large number of varied gas concentration thickness product image data DTdt can be created as learning data.
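	• The masking process described above can be sketched as follows. This is an illustrative sketch only: the function name, the circular mask shape, and the fill value are assumptions, and the source notes that the masking shape can be varied to produce many different training images.

```python
import numpy as np

def mask_leak_source(image, leak_xy, radius=3, mask_value=0.0):
    """Sketch of the masking process: conceal the pixels around the leakage
    source position (leak_xy, in pixel coordinates) so that the source itself
    is hidden in the concentration thickness product image."""
    h, w = image.shape
    ys, xs = np.ogrid[:h, :w]
    # circular mask around the leak source; the shape is an illustrative choice
    mask = (xs - leak_xy[0]) ** 2 + (ys - leak_xy[1]) ** 2 <= radius ** 2
    out = image.copy()
    out[mask] = mask_value
    return out, mask
```

	Varying `radius`, `mask_value`, or the mask geometry yields many distinct masked images from a single simulated gas distribution.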
	• the various gas concentration thickness product image data DTdt generated by the machine learning data generation device 30 and the leakage source coordinate data PTlk can be paired and used as the machine learning teacher data for the gas leakage detection device 20.
	• in the gas equipment to be monitored by the gas leak detection device 20, equipment such as piping is arranged in a complicated manner.
	• the image pickup device is either completely fixed at a specific location, or installed at a specific location with only its shooting direction variable. Consequently, the gas leak source may be hidden behind equipment and invisible from the image pickup apparatus.
	• machine learning generally requires on the order of 10,000 items of correct answer data, and to achieve this it is necessary to efficiently acquire a large amount of teacher learning data under various conditions regarding leaked gas clouds.
	• with the machine learning data generation device and the machine learning data generation method according to the present embodiment, used to train the machine learning model in the gas leak detection device 20, it is possible to efficiently generate tens of thousands of sets of learning data, each consisting of an input such as a gas leak image and a correct answer output such as gas leakage source position coordinate information in this example, contributing to improved learning accuracy. Since learning data can be obtained while freely changing the shooting conditions and the various parameters that affect gas leakage and its behavior, a large number of training data sets under various conditions can be efficiently generated when, for example, finding the gas leakage position from a gas leakage image using a machine learning model, which can contribute to improving learning accuracy.
  • FIG. 11A is a functional block diagram of the machine learning data generation device 30B according to the second embodiment.
  • the same components as those of the machine learning data generation device 30 are assigned the same numbers, and the description thereof will be omitted.
	• the machine learning data generation device 30B is provided with a light absorption rate image conversion unit 314B after the two-dimensional single-viewpoint gas distribution image conversion processing unit 313, which acquires the condition parameter CP3 based on the operation input, further converts the gas concentration thickness product image data DTdt into light absorption rate image data DTα, and outputs it to the subsequent stage.
	• FIG. 12 (a) is a schematic diagram showing an outline of concentration-thickness product image data, FIG. 12 (b) is a schematic diagram explaining an outline of the concentration-thickness product calculation method in the light absorption rate image conversion process, and FIG. 12 (c) is a schematic diagram showing an outline of the light absorption rate image data.
	• from the value Dt of the gas concentration thickness product at pixel (x, y) in the gas concentration thickness product image data DTdt shown in FIG. 12 (a), the value α of the light absorption rate corresponding to that gas concentration thickness product at pixel (x, y) is calculated, as shown in FIG. 12 (b) and FIG. 12 (c).
  • the light absorption rate ⁇ differs depending on the gas type specified by the condition parameter CP3, and the relationship data between the concentration thickness product value Dt and the light absorption rate ⁇ , for example, the gas concentration thickness product value Dt stored in a data table or the like.
  • the gas concentration thickness product image data DTdt is converted into the light absorption rate image data Dt ⁇ based on the light absorption characteristics of the gas, based on the value of the light absorption rate ⁇ corresponding to Convert.
  • the relationship between the value Dt of the gas concentration thickness product for each gas type and the light absorption rate ⁇ may be obtained in advance by actual measurement.
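	• The table-based conversion described above can be sketched as follows. The table values below are entirely hypothetical placeholders (a real table would come from the advance measurements the source mentions, per gas type under CP3), and the use of linear interpolation between table entries is an assumption.

```python
import numpy as np

# Hypothetical measured relationship between the concentration thickness
# product Dt and the light absorption rate alpha for one gas type.
DT_TABLE = np.array([0.0, 10.0, 50.0, 100.0, 500.0])   # e.g. concentration x length
ALPHA_TABLE = np.array([0.0, 0.05, 0.20, 0.35, 0.90])  # corresponding absorption rate

def to_absorptance_image(dt_image):
    """Convert a concentration thickness product image DTdt into a light
    absorption rate image DTalpha by per-pixel linear interpolation in the
    data table (values outside the table are clamped to its endpoints)."""
    return np.interp(dt_image, DT_TABLE, ALPHA_TABLE)
```

	`np.interp` applies elementwise, so the whole DTdt image is converted in one call; swapping the table swaps the gas type.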
  • the light absorption rate image data DT ⁇ generated by the machine learning data generation device 30B and the leakage source coordinate data PTlk can be used as a set as the machine learning teacher data used in the gas leakage detection device 20.
	• in addition to the effect that a large amount of learning data can be efficiently generated, as with the machine learning data generation device 30A according to the first embodiment, using the light absorption rate image data DTα, which is closer to the gas distribution image obtained from the gas visualization image pickup device 10, as the teacher data makes it possible to generate teacher data closer to the inspection image. As a result, the learning accuracy of the machine learning model in the gas leak detection device can be further improved.
	• similarly to the machine learning data generation device 30A according to the modification 1 shown in FIG. 6B, a machine learning data generation device 30C may be configured by removing the three-dimensional structure modeling unit 311 from the machine learning data generation device 30B.
	• in this case, by performing the masking process for concealing the leakage source portion of the gas concentration thickness product image while varying the masking shape in various ways, light absorption rate image data DTα including various masking information can be created as learning data.
  • FIG. 13 is a functional block diagram of the machine learning data generation device 30D according to the third embodiment.
  • the same components as those of the machine learning data generation device 30B are numbered the same, and the description thereof will be omitted.
	• the machine learning data generation device 30D differs from the machine learning data generation device 30B in that a background image generation unit 315D is newly provided after the two-dimensional single-viewpoint gas distribution image conversion processing unit 313, and a light intensity image generation unit 316D is newly provided after the light absorption rate image conversion unit 314B.
  • the background image generation unit 315D acquires the background location data PTbk and the condition parameter CP5 based on the operation input, and generates the background image data DTIback.
  • FIG. 14A is a schematic showing an outline of the three-dimensional structure data.
	• as in the first embodiment, the structure three-dimensional data DTstr is three-dimensional voxel data representing, as three-dimensional shape data, the structures in a three-dimensional space. In the structure identification information Std, a value indicating the classification of the structure surface is given for each pixel and recorded as a multi-valued image with values such as 0, 1, 2, 3, .... The structure identification information Std composed of this multi-valued image is subjected to the two-dimensional single-viewpoint conversion process using the virtual image plane VF observed from the viewpoint position SP (X, Y, Z) as the image frame, and the background location data PTbk is generated.
  • FIG. 14B is a schematic diagram showing an outline of the extracted background location data PTbk.
	• the background classification Std is a classification number assigned based on, for example, the optical characteristics of the structure surface; for instance, an unpainted pipe may be set to 1, a painted pipe to 2, and concrete to 3.
  • FIG. 14C is a conceptual diagram showing a method of generating background image data DTIback based on background location data PTbk. As shown in FIG. 14 (c), the background image data DTIback is generated based on the background location data PTbk generated as a multi-valued image and the condition parameter CP5 based on the operation input.
	• as shown in Table 1, the condition parameter CP5 specifies conditions for a structure such as the background two-dimensional temperature distribution, background surface spectral emissivity, background surface spectral reflectance, illumination light wavelength distribution, spectral illuminance, and illumination angle, and the background image data DTIback is generated by applying a different condition parameter CP5 for each background classification Std.
	• when the background location data is a binary image, the background image data DTIback is generated by giving different conditions for the background classification Std depending on whether a background structure is present or absent.
  • the light intensity image generation unit 316D generates light intensity image data DTI based on the light absorption rate image data DT ⁇ , the background image data DTIback, and the gas temperature condition provided as the condition parameter CP4 based on the operation input.
	• FIG. 15A is a schematic diagram showing an outline of the background image data DTIback, FIG. 15B is a schematic diagram showing an outline of the light absorption rate image data DTα, and FIG. 15C is a schematic diagram showing an outline of the light intensity image data.
	• letting the infrared intensity at coordinates (x, y) of the background image be DTIback (x, y), the blackbody radiation luminance corresponding to the gas temperature be Igas, and the light absorption rate at coordinates (x, y) of the light absorption rate image be DTα (x, y), the infrared intensity DTI (x, y) is calculated by Equation 1.
	• the light intensity image generation unit 316D calculates the infrared intensity DTI (x, y) at each coordinate (x, y) on the virtual image plane VF according to (Equation 1), based on the light absorption rate image data DTα, the background image data DTIback, and the gas temperature condition. As a result, the total radiation of the infrared rays emitted by the background structure and the infrared rays emitted by the gas present in the voxels of the three-dimensional gas distribution image intersecting the line of sight along the line-of-sight direction DA corresponding to the pixel of interest A (x, y) can be calculated. The infrared intensity DTI (x, y) is calculated for all the pixels A (x, y) on the virtual image plane VF, and the light intensity image data DTI is generated.
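	• Equation 1 itself is not reproduced in this text. A standard radiative-transfer form consistent with the description (background radiation attenuated by the gas, plus the gas's own blackbody emission) would be DTI(x, y) = (1 − DTα(x, y)) · DTIback(x, y) + DTα(x, y) · Igas; this form is an assumption, not a quotation of the source, and the sketch below implements it per pixel.

```python
import numpy as np

def light_intensity_image(dt_alpha, dti_back, igas):
    """Sketch of the light intensity calculation. The formula below is an
    assumed standard radiative-transfer form consistent with the description:
        DTI(x, y) = (1 - DTalpha(x, y)) * DTIback(x, y) + DTalpha(x, y) * Igas
    dt_alpha : 2D light absorption rate image DTalpha
    dti_back : 2D background infrared intensity image DTIback
    igas     : scalar blackbody radiation luminance for the gas temperature
    """
    return (1.0 - dt_alpha) * dti_back + dt_alpha * igas
```

	Note the limiting behavior: where the gas absorbs nothing (α = 0) the pixel shows the background intensity, and where it absorbs fully (α = 1) the pixel shows only the gas emission Igas.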
  • FIG. 16 is a flowchart showing an outline of the background image generation process. The process is executed by the background image generation unit 315D whose function is configured by the control unit 31.
  • the background image generation unit 315D acquires the structure three-dimensional data DTstr (step S1).
  • the operation of steps S3 to S8 is the same as the operation of each step in FIG.
	• in step S8, when the background location data PTbk of the voxel intersecting the line of sight indicates "structure present", the background classification Std of the background location data PTbk is acquired (step S121A), the condition parameter CP5 corresponding to that background classification Std is applied (step S122A) to determine the background image data value at the pixel of interest A, the position of the pixel of interest A (x, y) is moved to the next position (step S9), and the process returns to step S7.
  • the condition parameter CP5 is, for example, a condition such as a background two-dimensional temperature distribution, a background surface spectral emissivity, a background surface spectral reflectance, an illumination light wavelength distribution, a spectral illuminance, and an illumination angle.
	• in step S13, it is determined whether the calculation has been completed over the total length of the line of sight corresponding to the range where the line of sight and the voxels intersect (step S13). If not, the position LV on the line of sight is incremented by the unit length (step S14) and the process returns to step S8. If it is completed without encountering a structure, the standard value set for the case where no structure is present is fixed as the background image data value at the pixel of interest A. It is then determined whether the calculation has been completed for all the pixels on the virtual image plane VF (step S15); if not, the position of the pixel of interest A (x, y) is moved to the next position (step S16) and the process returns to step S7, and if completed, the process ends.
  • the standard value set when there is no structure is, for example, a background image data value corresponding to the ground or the sky in the real space.
  • a standard value can be obtained by appropriately setting the conditions indicated by the condition parameter CP5.
  • the background classification Std of the background location data PTbk is acquired for all the pixels on the virtual image surface VF, and the background image data DTIback related to the virtual image surface VF is generated.
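	• The per-classification assignment described above can be sketched as follows. This is a minimal illustration: the function name, the convention that 0 means "no structure", and the radiance values used in the test are assumptions; in the source, the per-classification values would be derived from the condition parameter CP5 (temperature distribution, emissivity, illumination, and so on).

```python
import numpy as np

def generate_background_image(ptbk, radiance_by_std, standard_value):
    """Sketch of the background image generation (cf. FIG. 16): each pixel of
    the background location data PTbk holds a background classification Std,
    and a radiance value derived from the condition parameter CP5 is assigned
    per classification. Pixels without a structure receive the standard value
    (e.g. corresponding to the ground or the sky)."""
    dti_back = np.full(ptbk.shape, standard_value, dtype=float)
    for std, radiance in radiance_by_std.items():
        dti_back[ptbk == std] = radiance   # apply CP5-derived value per class
    return dti_back
```

	Changing the `radiance_by_std` mapping corresponds to applying a different condition parameter CP5 per background classification Std, so one classification map yields many background images.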
  • FIG. 17 is a flowchart showing an outline of the light intensity image data generation process.
	• the background image data DTIback (x, y) relating to the virtual image plane VF and the light absorption rate image data DTα (x, y) are acquired (steps S101 and S102), and the condition parameter CP4 relating to the gas temperature condition is input (step S103).
	• in step S104, the blackbody radiation luminance Igas corresponding to the gas temperature is acquired (step S104), the infrared intensity DTI (x, y) of the light intensity image is calculated by (Equation 1) (step S105), the result is output as the light intensity image data DTI (step S106), and the process is terminated.
  • the light intensity image data DTI generated by the machine learning data generation device 30D and the leakage source coordinate data PTlk can be used as a set as machine learning teacher data used in the gas leakage detection device 20.
	• the following effects are obtained. That is, by generating the light intensity image data DTI, in a form still more similar to the gas distribution image obtained from the gas visualization image pickup device 10, as the teacher data, it is possible to generate teacher data closer to the inspection image, from which the gas portion can be easily extracted. As a result, the learning accuracy of the machine learning model in the gas leak detection device can be further improved.
  • FIG. 18A is a functional block diagram of the machine learning data generation device 30E according to the third modification.
	• the machine learning data generation device 30E differs from the machine learning data generation device 30D in that the background image generation unit 315E generates the background image data DTIback without using the background location data PTbk.
  • the background image data DTIback having uniform brightness may be generated regardless of the background location data PTbk.
  • FIG. 18B is a functional block diagram of the machine learning data generation device 30F according to the modified example 4.
	• in modification 4, similarly to the machine learning data generation device 30A according to the modification 1 shown in FIG. 6B, a machine learning data generation device 30F may be configured by removing the three-dimensional structure modeling unit 311 from the machine learning data generation device 30D.
  • the gas concentration thickness product image data DTdt including various masking information can be created as learning data.
  • FIG. 19 is a functional block diagram of the machine learning data generation device 30G according to the fourth embodiment.
  • the same components as those of the machine learning data generation device 30D are numbered the same, and the description thereof will be omitted.
  • the machine learning data generation device 30G is different from the machine learning data generation device 30D in that a gas distribution enhancement processing unit 317G is newly provided after the light intensity image generation unit 316D.
	• the gas distribution enhancement process is a process that enhances predetermined frequency components of a time-series image so that the behavior of the gas distribution diffusing in space is emphasized.
  • the gas distribution enhancement processing unit 317G generates gas distribution enhancement image data DTIem for the light intensity image data DTI by, for example, a known method described in International Publication No. 2017/0743430.
  • FIG. 20 is a flowchart showing an outline of the gas distribution emphasized image data generation process.
	• a simple moving average over a predetermined number of frames is calculated to extract specific frequency component data (step S201), and the difference between the time-series pixel data and the specific frequency component data is calculated as specific difference data (step S202). The moving standard deviation over a predetermined number of frames is then calculated as specific variation data (step S203), and the specific difference data or the specific variation data is output as the gas distribution-enhanced image data DTIem (step S204), whereupon the process ends.
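	• Steps S201 to S204 can be sketched for a single pixel's time series as follows. This is an illustrative sketch, not the method of the cited publication: the function name, window length, and the "same"-mode zero padding at the edges are assumptions made for the example.

```python
import numpy as np

def enhance_gas_distribution(frames, window=5):
    """Sketch of steps S201-S204 for one pixel's time series: extract the
    low-frequency component with a simple moving average, take the difference
    as the specific difference data, and compute its moving standard deviation
    as the specific variation data.

    frames: 1D array of time-series pixel values from the light intensity data
    """
    kernel = np.ones(window) / window
    low_freq = np.convolve(frames, kernel, mode="same")        # step S201
    diff = frames - low_freq                                   # step S202
    # moving standard deviation of the difference via E[x^2] - E[x]^2 (step S203)
    var = (np.convolve(diff ** 2, kernel, mode="same")
           - np.convolve(diff, kernel, mode="same") ** 2)
    variation = np.sqrt(np.clip(var, 0.0, None))
    return diff, variation            # DTIem candidates to output (step S204)
```

	A steady background cancels in the difference signal, while a fluctuating gas plume leaves a nonzero moving standard deviation, which is what makes the diffusing gas stand out.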
  • the gas distribution-enhanced image data DTIem generated by the machine learning data generation device 30G and the leakage source coordinate data PTlk can be used as a set as the machine learning teacher data used in the gas leakage detection device 20.
	• FIG. 21A is a functional block diagram of the machine learning data generation device 30H according to the modified example 5. As in the modification 3, it differs from the machine learning data generation device 30G in that the background image generation unit 315E generates the background image data DTIback without using the background location data PTbk. The background image generation unit 315E may generate, for example, background image data DTIback having uniform brightness.
  • FIG. 18B is a functional block diagram of the machine learning data generation device 30I according to the modified example 6.
  • the machine learning data generation device 30I may be configured by removing the three-dimensional structure modeling unit 311 from the machine learning data generation device 30G.
  • the gas concentration thickness product image data DTdt including various masking information can be created as learning data.
	• in the first to fourth embodiments, the two-dimensional gas distribution image data obtained by observing, from a predetermined viewpoint position, the three-dimensional gas distribution image data showing the existence range of the gas leaking from the gas leakage source into the three-dimensional space, together with the position information of the gas leak source, is generated as teacher data for machine learning. In the present embodiment, the two-dimensional gas distribution image data and the direction of the gas flow, instead of the position information of the gas leak source, are generated as teacher data for machine learning; this differs from the first to fourth embodiments. Except for the point that the direction of the gas flow is used as the teacher data, the same configuration as that of any of the first to fourth embodiments can be used, and the description thereof is omitted.
  • the three-dimensional gas distribution image data may include voxel gas flow velocity vector data Vc (Vx, Vy, Vz).
  • a combination of the two-dimensional gas distribution image data generated based on the three-dimensional gas distribution image data and the parameter indicating the direction of the gas flow in the three-dimensional gas distribution image data is combined. Generated as teacher data and supplied to the machine learning department.
  • the parameter indicating the direction of the gas flow in the three-dimensional gas distribution image data can be calculated based on the gas flow velocity vector data Vc (Vx, Vy, Vz) of each voxel in the three-dimensional gas distribution image data.
	• the gas flow velocity component in the line-of-sight direction DO when viewed from the viewpoint position SP is acquired from the gas flow velocity vector data Vc (Vx, Vy, Vz) of the voxels corresponding to the gas region in the three-dimensional gas distribution image data.
  • the flow velocity component of the gas may be specified as a relative value with respect to the flow velocity component of the gas in the direction orthogonal to the line-of-sight direction DO.
  • the direction orthogonal to the line-of-sight direction DO corresponds to the x-direction, y-direction, or the combined direction of the two-dimensional gas distribution image data, and is the direction in which the distance from the viewpoint position SP does not change.
  • the flow velocity component of the gas in the direction orthogonal to the line-of-sight direction DO may be shown as a relative value with respect to the image size and the number of pixels of the gas distribution moving image.
	• a parameter such as the average value or the maximum value of the flow velocity component of the gas is calculated for the two-dimensional gas distribution image data.
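	• The flow direction parameter described above can be sketched as follows. This is a minimal illustration under stated assumptions: the function name is hypothetical, the per-voxel velocities are given as a flat (N, 3) array, and returning both the average and the maximum of the line-of-sight component reflects the parameter examples mentioned in the text.

```python
import numpy as np

def flow_direction_parameters(velocities, view_dir):
    """Sketch of the gas flow direction parameter: for the voxels in the gas
    region, project each flow velocity vector Vc = (Vx, Vy, Vz) onto the
    line-of-sight direction DO and summarize the components.

    velocities : (N, 3) array of per-voxel flow velocity vectors Vc
    view_dir   : vector of the line-of-sight direction DO (normalized below)
    """
    view_dir = np.asarray(view_dir, dtype=float)
    view_dir = view_dir / np.linalg.norm(view_dir)
    los_components = velocities @ view_dir     # flow velocity component along DO
    return los_components.mean(), los_components.max()
```

	The same projection can be taken against directions orthogonal to DO to express the flow velocity component in the image plane, as the text describes for the x-direction and y-direction of the two-dimensional gas distribution image data.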
	• the machine learning unit performs learning using the two-dimensional gas distribution image data capturing the leaked gas and the parameter indicating the direction of the gas flow as correct answer data, and generates a machine learning model. This makes it possible to construct a gas flow direction identification device and a gas flow direction identification method that use machine learning on an inspection image obtained by imaging the gas distribution.
  • the gas visualization image pickup device 10 is configured to realize a function of coloring the leaked gas portion in the inspection image captured by the infrared camera to visualize the gas leaked portion to be inspected.
	• alternatively, the gas leak detection device 20 may realize a function of performing gas detection processing on the moving image output by the infrared camera to visualize the gas leak portion and generate a gas distribution image including the gas region.
  • a known method can be used for the gas detection process. Specifically, for example, the method described in Patent Document 1 can be used. Then, a gas distribution image as a moving image obtained by cutting out a region including a gas region from each frame of the moving image is generated.
	• processing such as gain adjustment may be performed, the pixel values of the moving image output by the infrared camera may be used as they are, or the differences in pixel values may be mapped to obtain the gas distribution image.
  • a gas plant is illustrated as a gas facility as an example of an inspection image.
	• the present disclosure is not limited to this, and may be applied to the generation of display images in equipment, devices, research facilities, laboratories, factories, and business establishments that use gas.
	• the present invention may be a computer system including a microprocessor and a memory, in which the memory stores the computer program and the microprocessor operates according to the computer program. For example, it may be a computer system that has a computer program for the processing in the gas leak detection system 1 of the present disclosure or its components and operates according to this program (or instructs each connected part to operate).
	• the present invention also includes the case where all or part of the processing in the gas leak detection system 1 or its components is configured as a computer system composed of a microprocessor and recording media such as a ROM, a RAM, and a hard disk unit. The RAM or the hard disk unit stores a computer program that achieves the same operations as each of the above devices, and each device achieves its functions by the microprocessor operating according to the computer program.
	• a system LSI is a super-multifunctional LSI manufactured by integrating a plurality of components on one chip; specifically, it is a computer system including a microprocessor, a ROM, a RAM, and the like. The components may be individually integrated into separate chips, or integrated into one chip so as to include some or all of them. A computer program that achieves the same operations as each of the above devices is stored in the RAM, and the system LSI achieves its functions by the microprocessor operating according to the computer program.
	• the present invention also includes the case where the processing in the gas leak detection system 1 or its components is stored as an LSI program, and this LSI is inserted into a computer to execute a predetermined program (gas inspection management method).
	• the method of making an integrated circuit is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array), or a reconfigurable processor in which the connections and settings of the circuit cells inside the LSI can be reconfigured, may also be used.
	• further, a part or all of the functions of the gas leak detection system 1 or its components according to each embodiment may be realized by a processor such as a CPU executing a program. The present invention may also be a non-transitory computer-readable recording medium on which a program for performing the operations of the gas leak detection system 1 or its components is recorded. The program may be executed by another independent computer system by recording the program or signal on a recording medium and transferring it. Needless to say, the above program can also be distributed via a transmission medium such as the Internet.
	• the gas leak detection system 1 or each component thereof according to the above embodiment may be configured to be realized by a programmable device such as a CPU, a GPU (Graphics Processing Unit) or a processor, and software.
  • These components can be a single circuit component or an aggregate of a plurality of circuit components. Further, a plurality of components can be combined into one circuit component, or an aggregate of a plurality of circuit components can be formed.
	• the division into functional blocks is an example; a plurality of functional blocks may be realized as one functional block, one functional block may be divided into a plurality, or some functions may be transferred to other functional blocks. Further, the functions of a plurality of functional blocks having similar functions may be processed by single hardware or software in parallel or in a time-division manner.
	• the order in which the above steps are executed is exemplified in order to specifically explain the present invention, and an order other than the above may be used. A part of the above steps may also be executed simultaneously (in parallel) with other steps.
	• the machine learning data generation method according to one aspect includes a step of converting three-dimensional gas distribution image data, showing the existence range of gas leaking from a gas leakage source into a three-dimensional space, into two-dimensional gas distribution image data observed from a predetermined viewpoint position, and a step of generating the two-dimensional gas distribution image data and predetermined feature information about the gas as teacher data for machine learning.
  • the configuration may further include a step of generating the three-dimensional gas distribution image data based on the three-dimensional fluid simulation.
	• it may be configured to construct an inference model that takes as input two-dimensional gas distribution image data captured from a predetermined viewpoint position and showing the existence range of the gas leaked into the space, and that identifies predetermined characteristic information regarding the gas.
	• the method may further include a step of modeling a three-dimensional structure in the three-dimensional space, and the three-dimensional fluid simulation may be configured to calculate the three-dimensional gas distribution image in the three-dimensional space under the condition that the three-dimensional structure exists.
	• the step of converting to the two-dimensional gas distribution image data may include a step of generating, for the three-dimensional gas distribution image in the three-dimensional space indicated by the three-dimensional gas distribution image data, a plurality of values of the concentration thickness product spatially integrated in the line-of-sight direction from the viewpoint position while changing the angle of the line-of-sight direction, and arranging the obtained concentration-thickness product values two-dimensionally to generate the concentration-thickness product image data.
	• the step of converting to the two-dimensional gas distribution image data may include a step of converting the concentration-thickness product values into light absorption rates based on the light absorption characteristics of the gas to generate light absorption rate distribution image data.
	• the step of converting to the two-dimensional gas distribution image data may include a step of calculating the light intensity distribution image data based on the light absorption rate distribution image data and temperature conditions including at least a gas temperature condition.
  • the predetermined feature information may be configured to be the position information of the leakage source or the direction of the gas flow.
  • the two-dimensional gas distribution image data may be configured as a moving image showing a time-series change of the gas distribution.
  • The machine learning data generation device according to the embodiment may include a two-dimensional single-viewpoint gas distribution image conversion processing unit that converts three-dimensional gas distribution image data of gas leaking from a gas leakage source into a three-dimensional space into two-dimensional gas distribution image data observed from a predetermined viewpoint position, and a teacher data generation unit that generates the two-dimensional gas distribution image data and predetermined feature information about the gas as teacher data for machine learning.
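The division of labor between the conversion processing unit and the teacher data generation unit can be sketched as follows. All function and key names here are hypothetical, and `project` stands in for the 3D-to-2D conversion described above.

```python
def make_training_pairs(simulations, viewpoints, project):
    """Pair each simulated 3-D gas distribution, rendered from a viewpoint,
    with its known feature information (e.g. leak position) as teacher data.

    simulations : list of dicts like {"voxels": <3-D data>, "leak_xy": (x, y)}
    viewpoints  : viewpoint positions to render from
    project     : callable (voxels, viewpoint) -> 2-D gas distribution image
    """
    pairs = []
    for sim in simulations:
        for vp in viewpoints:
            image = project(sim["voxels"], vp)
            pairs.append((image, sim["leak_xy"]))  # (input, correct output)
    return pairs
```

Because one fluid simulation can be rendered from many viewpoints, a single simulation run yields many input/correct-output pairs, which is the efficiency gain the disclosure aims at.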
  • The device may be configured to construct an inference model that takes as input two-dimensional gas distribution image data in which the existence range of the gas leaked into the space, captured from a predetermined viewpoint position, is shown as a gas region, and identifies predetermined feature information about the gas.
  • The step of converting to the two-dimensional gas distribution image data may include a step of generating, for the three-dimensional gas distribution in the three-dimensional space indicated by the three-dimensional gas distribution image data, a plurality of concentration-thickness product values by spatially integrating the gas concentration along the line-of-sight direction from the viewpoint position while changing the line-of-sight angle, and arranging the obtained concentration-thickness product values two-dimensionally to generate concentration-thickness product image data.
  • The program according to the embodiment is a program for causing a computer to execute machine learning data generation processing in which three-dimensional gas distribution image data showing the existence range of gas leaking from a gas leakage source into a three-dimensional space is converted into two-dimensional gas distribution image data observed from a predetermined viewpoint position, and the two-dimensional gas distribution image data and predetermined feature information about the gas are generated as teacher data for machine learning.
  • The learning data set according to the embodiment is a learning data set for inferring, by machine learning, feature information from two-dimensional gas distribution image data generated from three-dimensional gas distribution image data showing the existence range of gas leaking from a gas leakage source into a three-dimensional space, together with predetermined feature information about the gas contained in the three-dimensional gas distribution image data. It may include a plurality of data sets, each consisting of two-dimensional gas distribution image data obtained by converting three-dimensional gas distribution image data generated by a three-dimensional fluid simulation into a two-dimensional image observed from a predetermined viewpoint position, and the feature information corresponding to that two-dimensional gas distribution image data.
  • The predetermined feature information may be position information of the gas leakage source or the direction of the gas flow.
  • The learning data set may include two-dimensional gas distribution image data that does not contain information on the gas distribution at the leakage source.
  • The two-dimensional gas distribution image data may be a moving image showing the time-series change of the gas distribution.
  • The order in which the above methods are executed is exemplary, given for concretely explaining the present invention, and the methods may be executed in a different order. Some of the above methods may also be executed simultaneously (in parallel) with other methods.
  • The machine learning data generation device, machine learning data generation method, program, and learning data set according to the embodiments of the present disclosure can be widely applied to systems that inspect gas facilities for gas leaks.

Abstract

For the purpose of training a machine learning model in a gas leakage detection device, the present invention efficiently generates a large amount of learning data under various conditions, contributing to improved learning accuracy. The machine learning data generation method comprises a step of converting three-dimensional gas distribution image data, indicating the existence range of a gas leaking from a gas leakage source into a three-dimensional space, into two-dimensional gas distribution image data observed from a prescribed viewpoint position, and a step of generating, as training data for machine learning, the two-dimensional gas distribution image data and prescribed characteristic information about the gas.

Description

Machine learning data generation device, machine learning data generation method, program, and learning data set
 The present disclosure relates to a machine learning data generation device, a machine learning data generation method, a program, and a learning data set, and more particularly to a method of generating machine learning data used in a gas leakage detection device.
 Facilities that use gas (hereinafter sometimes referred to as "gas facilities"), such as production facilities for natural gas and petroleum, production plants that manufacture chemical products using gas, gas transmission equipment, petrochemical plants, thermal power plants, and steel-related facilities, face a recognized risk of gas leakage due to facility aging and operational errors, and gas detection devices are installed to minimize gas leakage.
 For such gas detection, in addition to devices that exploit the change in a detection probe's electrical characteristics when gas molecules contact the probe, optical gas leak detection methods have in recent years been adopted that detect gas leaks in an inspection region by capturing infrared video using the infrared absorption characteristics of the gas (for example, Patent Documents 1 and 2).
 Gas detection using infrared video has the advantage that, because the gas is visualized in the image, the release state such as the gas flow and the leak position can be easily detected. In addition, since the state of the leaked gas is recorded as video, it can also be used as evidence of the occurrence of a gas leak and of its repair.
Patent Document 1: International Publication No. 2016/143754. Patent Document 2: International Publication No. 2017/150565.
 However, because equipment such as piping is arranged in a complex manner in the gas facilities monitored by a gas leakage detection device, the location where a gas leak occurs may be hidden behind equipment and invisible from the imaging device. Furthermore, when the wind direction changes, it becomes difficult to identify the location where the gas leak occurs.
 Consider, then, using machine learning to learn from gas images of leaked gas together with feature quantities related to that gas, so as to identify, from an inspection image, features of the gas under various conditions, such as the position of the gas leakage source and the gas flow velocity. Realizing such a method requires efficiently collecting a large amount of teacher data on leaked gas clouds under various conditions.
 The present disclosure has been made in view of the above problem, and its object is to provide a machine learning data generation device, a machine learning data generation method, a program, and a learning data set that efficiently collect a large number of training data pairs, each consisting of an input and a correct output, under various conditions for training a machine learning model in a gas leakage detection device, thereby contributing to improved learning accuracy.
 A machine learning data generation method according to one aspect of the present disclosure includes a step of converting three-dimensional gas distribution image data, showing the existence range of gas leaking from a gas leakage source into a three-dimensional space, into two-dimensional gas distribution image data observed from a predetermined viewpoint position, and a step of generating the two-dimensional gas distribution image data and predetermined feature information about the gas as teacher data for machine learning.
 According to the machine learning data generation device, machine learning data generation method, program, and learning data set of one aspect of the present disclosure, a large number of training data pairs, each consisting of an input and a correct output, can be generated efficiently under various conditions for training a machine learning model in a gas leakage detection device, contributing to improved learning accuracy.
A schematic configuration diagram of the gas leak detection system 1 according to the embodiment.
A schematic diagram showing the relationship between the monitoring target 300 and the gas visualization imaging device 10.
A diagram showing the configuration of the gas leak detection device 20.
(a) is a functional block diagram of the control unit 21; (b) is a schematic diagram showing an outline of the logical configuration of the machine learning model.
A functional block diagram of the machine learning data generation device 30.
(a) is a functional block diagram of the machine learning data generation device 30; (b) is a functional block diagram of the machine learning data generation device 30A according to Modification 1.
(a) and (b) are schematic diagrams showing the data structures of the structure three-dimensional data and the three-dimensional gas distribution image data, respectively.
A schematic diagram for explaining the outline of the concentration-thickness product calculation method in the two-dimensional single-viewpoint gas distribution image conversion processing.
A schematic diagram for explaining the outline of the concentration-thickness product calculation method under the condition that a structure exists, in the two-dimensional single-viewpoint gas distribution image conversion processing.
A flowchart showing the outline of the two-dimensional single-viewpoint gas distribution image conversion processing.
(a) is a functional block diagram of the machine learning data generation device 30B according to Embodiment 2; (b) is a functional block diagram of the machine learning data generation device 30C according to Modification 2.
(a) is a schematic diagram showing an outline of concentration-thickness product image data; (b) is a schematic diagram for explaining an outline of the concentration-thickness product calculation method in the light absorption rate image conversion processing; (c) is a schematic diagram showing an outline of light absorption rate image data.
A functional block diagram of the machine learning data generation device 30D according to Embodiment 3.
(a) is a schematic diagram showing an outline of structure three-dimensional data; (b) is a schematic diagram showing an outline of extracted background location data; (c) is a conceptual diagram showing a method of generating background image data based on the background location data.
(a) is a schematic diagram showing an outline of background image data; (b) is a schematic diagram showing an outline of light absorption rate image data; (c) is a schematic diagram showing an outline of light intensity image data.
A flowchart showing the outline of the background image generation processing.
A flowchart showing the outline of the light intensity image data generation processing.
(a) is a functional block diagram of the machine learning data generation device 30E according to Modification 3; (b) is a functional block diagram of the machine learning data generation device 30F according to Modification 4.
A functional block diagram of the machine learning data generation device 30F according to Embodiment 4.
A flowchart showing the outline of the gas distribution enhancement processing.
(a) is a functional block diagram of the machine learning data generation device 30G according to Modification 5; (b) is a functional block diagram of the machine learning data generation device 30H according to Modification 6.
<< Embodiment 1 >>
<Configuration of gas leak detection system 1>
 The embodiment of the present disclosure is realized as a gas leak detection system 1 that analyzes gas leaks from gas leak inspection images of a gas facility. Hereinafter, the gas leak detection system 1 according to the embodiment will be described in detail with reference to the drawings.
 FIG. 1 is a schematic configuration diagram of the gas leak detection system 1 according to the embodiment. As shown in FIG. 1, the gas leak detection system 1 includes a plurality of gas visualization imaging devices 10, a gas leak detection device 20, a machine learning data generation device 30, and a storage means 40, all connected to a communication network N.
 The communication network N is, for example, the Internet, and connects the gas visualization imaging devices 10, the gas leak detection device 20, the machine learning data generation devices 30, and the storage means 40 so that they can exchange information with one another.
 (Gas visualization imaging device 10, etc.)
 The gas visualization imaging device 10 is a device or system that images the monitored object, processes the captured infrared image, colors the leaked gas portion in the inspection image to visualize the gas leak portion of the inspection target, and provides the image to the gas leak detection device 20. It includes, for example, an imaging means (not shown) consisting of an infrared camera that detects and captures infrared rays, a visualization processing means (not shown) that visualizes the gas leak portion of the inspection target from the inspection image captured by the imaging means, and an interface circuit (not shown) that outputs to the communication network N.
 Infrared camera images are generally used for detecting hydrocarbon gases. For example, a so-called infrared camera that detects and images infrared light with a wavelength of 3.2 to 3.4 μm can detect hydrocarbon gases such as methane, ethane, ethylene, and propylene.
 As shown in the schematic diagram of FIG. 2, the gas visualization imaging device 10 is installed so that the monitoring target 300 is included in the field of view 310 of the infrared camera. The obtained inspection image is, for example, a video signal that transmits images at 30 frames per second. The gas visualization imaging device 10 converts the captured images into a predetermined video signal. In the present embodiment, the infrared image signal acquired from the infrared camera is restored from the video signal into images and output to the visualization processing means as a moving image composed of a plurality of frames. Each image is an infrared photograph of the monitored object and has infrared intensity as its pixel values.
 If the size of the gas distribution image or the number of frames in the moving image is excessive, the computational cost of machine learning and of judgments based on machine learning becomes large. In Embodiment 1, the gas distribution image is 224 × 224 pixels and the number of frames is 16.
 The visualization processing means colors the leaked gas portion in the inspection image to visualize the gas leak in the inspection target. For example, the function of an optical gas leak detection method is realized by executing a known method such as that described in Patent Document 1. Specifically, a gas visualization imaging device detects the presence of gas by capturing changes in the amount of electromagnetic waves radiated from a background object at an absolute temperature above 0 K. These changes arise mainly because electromagnetic waves in the infrared region are absorbed by the gas, or because blackbody radiation is emitted by the gas itself. Because the gas visualization imaging device 10 photographs the monitored space and captures a gas leak as an image, it can detect a leak earlier, and locate the gas more accurately, than conventional detection-probe methods that can monitor only grid-point locations.
 As described above, when the infrared image of the inspection target contains a region where absorption or emission by a hydrocarbon gas has occurred, the gas leak portion in the inspection image can be visualized by, for example, converting the infrared change amount of that region into color information and mapping the color information.
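The mapping of infrared change amounts to color information described above might look like the following minimal sketch. The normalization, threshold, and RGBA overlay scheme are illustrative assumptions, not the specific method of Patent Document 1.

```python
import numpy as np

def colorize_gas(delta_ir, threshold=0.1):
    """Map per-pixel infrared change amounts to an overlay color (red channel),
    leaving pixels below the threshold fully transparent (values are illustrative).

    delta_ir : 2-D array of per-pixel infrared change amounts (assumed > 0 somewhere)
    Returns an (H, W, 4) RGBA overlay for compositing onto the original frame.
    """
    h, w = delta_ir.shape
    overlay = np.zeros((h, w, 4))                 # RGBA, initially transparent
    norm = np.clip(delta_ir / delta_ir.max(), 0.0, 1.0)
    mask = norm >= threshold
    overlay[mask, 0] = norm[mask]                 # red intensity ~ change amount
    overlay[mask, 3] = 0.5                        # semi-transparent overlay
    return overlay
```

Compositing this overlay onto each frame produces the colored gas region seen in the inspection images.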
 The visualized inspection image is temporarily stored in a memory or the like and, based on an operation input, is transferred to and stored in the storage means 40 via the communication network N.
 The gas visualization imaging device 10 is not limited to this, and may be any imaging device capable of detecting the gas to be monitored; for example, if the monitoring target is a gas detectable with visible light, such as water vapor forming white smoke, it may be a general visible-light camera. In this specification, "gas" refers to a gas that has leaked from a closed space such as a pipe or a tank and has not been intentionally diffused into the atmosphere.
 Returning to FIG. 1, the storage means 40 is a storage device that stores the inspection images transmitted from the gas visualization imaging device 10, and includes a volatile memory such as a DRAM (Dynamic Random Access Memory) and a non-volatile memory such as a hard disk.
 (Gas leak detection device 20)
 The gas leak detection device 20 is a device that acquires images of the monitored object from the gas visualization imaging device 10, detects the gas region based on the images, and notifies the user of gas detection through the display unit 24. The gas leak detection device 20 is realized, for example, as a computer including a general CPU (Central Processing Unit), a RAM (Random Access Memory), and programs executed on them. As described later, the gas leak detection device 20 may further include a GPU (Graphics Processing Unit) and a RAM as arithmetic units.
 Hereinafter, the configuration of the gas leak detection device 20 will be described. FIG. 3 is a diagram showing the configuration of the gas leak detection device 20. As shown in FIG. 3, the gas leak detection device 20 includes a control unit (CPU) 21, a communication unit 22, a storage unit 23, a display unit 24, and an operation input unit 25, and is realized as a computer in which the control unit 21 executes a gas leak detection program.
 The communication unit 22 transmits and receives information to and from the gas leak detection device 20 and the storage means 40.
 The display unit 24 is, for example, a liquid crystal panel or the like, and displays the display screen generated by the CPU 21.
 The storage unit 23 stores the program 231 and other data necessary for the gas leak detection device 20 to operate, and also functions as a temporary storage area that temporarily holds the calculation results of the CPU 21. The storage unit 23 includes a volatile memory such as a DRAM and a non-volatile memory such as a hard disk.
 The control unit 21 realizes each function of the gas leak detection device 20 by executing the gas leak detection program 231 stored in the storage unit 23.
 FIG. 4(a) is a functional block diagram of the control unit 21. FIG. 4(b) is a functional block diagram of the machine learning unit 2141 in the control unit 21.
 As shown in FIG. 4(a), the gas leak detection device 20 includes an inspection image acquisition unit 211, a gas distribution image acquisition unit 212, a leak position information acquisition unit 213, a machine learning unit 2141, a learning model holding unit 2142, and a determination result output unit 215. The machine learning unit 2141 and the learning model holding unit 2142 constitute a gas leak position identification unit 214. The inspection image acquisition unit 211, the gas distribution image acquisition unit 212, the leak position information acquisition unit 213, the gas leak position identification unit 214, and the determination result output unit 215 constitute a gas leak position identification device 210.
 The inspection image acquisition unit 211 is a circuit that acquires inspection images from the gas visualization imaging device 10. An inspection image is an image showing a gas distribution, obtained by processing an infrared image captured by the infrared camera and coloring the leaked gas portion so as to visualize the gas leak portion of the inspection target.
 The gas distribution image acquisition unit 212 is a circuit that acquires gas distribution images that have the same format as the inspection images generated by the gas visualization imaging device 10 and whose gas leak positions are known. Here, a gas distribution image is an image of a gas cloud leaking from a single leakage source, and there is exactly one gas leak position, namely the position of the leakage source, per gas distribution image. If an acquired image is not in the same format as the gas distribution image generated by the inspection image acquisition unit 211, the gas distribution image acquisition unit 212 may perform processing such as cropping and gain adjustment so that the formats match. Also, for example, when the acquired image is three-dimensional voxel data, it may be converted into a two-dimensional image viewed from a single viewpoint.
 The leak position information acquisition unit 213 is a circuit that acquires the gas leak position corresponding to the gas distribution image acquired by the gas distribution image acquisition unit 212. The gas leak position is specified as coordinates within the gas distribution image acquired by the gas distribution image acquisition unit 212. If the acquired gas leak position is given as spatial coordinates, it is converted into coordinates within the gas distribution image.
 The machine learning unit 2141 is a circuit that executes machine learning based on combinations of the gas distribution images received by the gas distribution image acquisition unit 212 and the corresponding gas leak positions received by the leak position information acquisition unit 213, and generates a machine learning model. The machine learning model is formed so as to predict the gas leak position, indicating the position of the gas leakage source, based on combinations of feature quantities of the gas distribution image, for example, the outer shape of the gas region, the gas shading distribution, and their changes over time. As the machine learning, for example, a convolutional neural network (CNN) can be used, with known software such as PyTorch.
 FIG. 4(b) is a schematic diagram showing an outline of the logical configuration of the machine learning model. The machine learning model includes an input layer 51, intermediate layers 52-1, 52-2, ..., 52-n, and an output layer 53, and the interlayer filters are optimized by learning. For example, when the gas distribution image is 224 × 224 pixels and the number of frames is 16, the input layer 51 receives a 224 × 224 × 16 three-dimensional tensor containing the pixel values of the gas distribution image. The intermediate layer 52-1 is, for example, a convolution layer, and receives a 224 × 224 × 16 three-dimensional tensor generated from the data of the input layer 51 by a convolution operation. The intermediate layer 52-2 is, for example, a pooling layer, and receives a three-dimensional tensor obtained by resizing the data of the intermediate layer 52-1. The intermediate layer 52-n is, for example, a fully connected layer, and converts the data of the intermediate layer 52-(n-1) into a two-dimensional vector representing coordinate values. The configuration of the intermediate layers is an example, and the number of intermediate layers n is about 3 to 5, but is not limited to this. Although FIG. 4(b) draws every layer with the same number of neurons, each layer may have an arbitrary number of neurons. The machine learning unit 2141 takes a moving image as the gas distribution image as input, performs learning with the gas leak position as the correct answer, generates a machine learning model, and outputs it to the learning model holding unit 2142. When the gas leak detection device 20 includes a GPU and a RAM as arithmetic units, the machine learning unit 2141 may be realized by the GPU and software.
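The layer structure outlined above for FIG. 4(b) can be sketched in PyTorch, which the disclosure names as usable software. The channel counts, kernel sizes, and pooling factors below are illustrative assumptions; only the input shape (16 frames of 224 × 224 pixels) and the two-dimensional coordinate output follow the text.

```python
import torch
import torch.nn as nn

class LeakPositionNet(nn.Module):
    """Hypothetical sketch of the network in FIG. 4(b): a moving-image input of
    16 frames at 224x224 pixels, reduced by convolution and pooling layers to a
    two-dimensional vector of leak-source image coordinates."""

    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 8, kernel_size=3, padding=1),   # intermediate layer 52-1 (convolution)
            nn.ReLU(),
            nn.MaxPool3d(4),                             # intermediate layer 52-2 (pooling)
            nn.Conv3d(8, 16, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.MaxPool3d(4),
        )
        self.head = nn.Linear(16 * 1 * 14 * 14, 2)       # fully connected layer -> (x, y)

    def forward(self, x):                                # x: (N, 1, 16, 224, 224)
        f = self.features(x)
        return self.head(f.flatten(1))
```

Training such a model with the gas leak position as the correct answer, e.g. via a mean-squared-error loss on the coordinates, corresponds to the learning performed by the machine learning unit 2141.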
 The learning model holding unit 2142 is a circuit that holds the machine learning model generated by the machine learning unit 2141 and, using the machine learning model, outputs the gas leakage position corresponding to the gas distribution image acquired by the inspection image acquisition unit 211. The gas leakage position is specified and output as a coordinate value in the input gas distribution image.
 The determination result output unit 215 is a circuit that generates an image to be displayed on the display unit 24 by superimposing the gas leakage position output by the learning model holding unit 2142 on the moving image acquired by the inspection image acquisition unit 211.
 (Machine learning data generation device 30)
 Hereinafter, the configuration of the machine learning data generation device 30 will be described. FIG. 5 is a diagram showing the configuration of the machine learning data generation device 30. As shown in FIG. 5, the machine learning data generation device 30 includes a control unit (CPU) 31, a communication unit 32, a storage unit 33, a display unit 34, and an operation input unit 35, and is realized as a computer in which the control unit 31 executes a machine learning data generation program.
 The control unit 31 realizes the functions of the machine learning data generation device 30 by executing the machine learning data generation program 331 stored in the storage unit 33.
 FIG. 6A is a functional block diagram of the machine learning data generation device 30. FIG. 6B is a functional block diagram of the machine learning data generation device 30A according to Modification 1; the machine learning data generation device 30A will be described later. The condition parameters required for the processing input to each functional block in FIGS. 6A and 6B are as shown in the table below.
[Table 1]
 As shown in FIG. 6A, the machine learning data generation device 30 includes a three-dimensional structure modeling unit 311, a three-dimensional fluid simulation execution unit 312, and a two-dimensional single-viewpoint gas distribution image conversion processing unit 313.
 Based on operation input from the operator to the operation input unit 35, the three-dimensional structure modeling unit 311 designs a three-dimensional structure model, performs three-dimensional structure modeling that lays out structures in a three-dimensional space, and outputs structure three-dimensional data DTstr to the subsequent stage. The structure three-dimensional data DTstr is shape data representing, for example, the three-dimensional shape of piping and other plant facilities. Commercially available three-dimensional CAD (Computer-Aided Design) software can be used for the three-dimensional structure modeling.
 FIG. 7A is a schematic diagram showing the data structure of the structure three-dimensional data DTstr. In this specification, the X direction, the Y direction, and the Z direction in each figure are the width direction, the depth direction, and the height direction, respectively. As shown in FIG. 7A, the structure three-dimensional data DTstr is three-dimensional voxel data representing a three-dimensional space, and is composed of structure identification information Std arranged at coordinates in the X, Y, and Z directions. Since the structure identification information Std is expressed as three-dimensional shape data, it may be recorded as a binary image of 0s and 1s, for example, "structure present" and "no structure".
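A minimal sketch of such a binary voxel grid, with 0 for "no structure" and 1 for "structure present" per (X, Y, Z) coordinate (the helper name and grid size are illustrative assumptions):

```python
def make_dtstr(nx, ny, nz):
    # Structure identification information Std, one value per voxel:
    # 0 = "no structure", 1 = "structure present".
    return [[[0 for _ in range(nz)] for _ in range(ny)] for _ in range(nx)]

dtstr = make_dtstr(4, 4, 4)   # a 4 x 4 x 4 voxel space
dtstr[1][2][0] = 1            # mark one voxel as occupied, e.g. by a pipe
occupied = sum(v for plane in dtstr for row in plane for v in row)
print(occupied)               # -> 1 voxel marked "structure present"
```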
 The three-dimensional fluid simulation execution unit 312 acquires the structure three-dimensional data DTstr as an input, and further acquires the condition parameters CP1 required for the fluid simulation based on operation input from the operator to the operation input unit 35. The condition parameters CP1 are parameters that define the setting conditions required for a fluid simulation mainly related to gas leakage, such as the gas type, gas flow rate, gas flow velocity in the three-dimensional space, and the shape, diameter, and position of the gas leakage source, as shown in Table 1. By generating images while varying many combinations of these condition parameters, a large amount of learning data can be generated.
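Sweeping even a few of these condition parameters with a Cartesian product already yields many distinct simulation configurations. The parameter names and values below are illustrative assumptions, not values taken from Table 1:

```python
from itertools import product

gas_types = ["methane", "propane"]          # gas type (assumed examples)
flow_rates = [0.5, 1.0, 2.0]                # gas flow rate (assumed units)
source_positions = [(0, 0, 1), (2, 1, 0)]   # leakage source position (voxels)

# One CP1 variant per combination, each driving a separate fluid simulation.
cp1_variants = [
    {"gas_type": g, "flow_rate": q, "source_position": p}
    for g, q, p in product(gas_types, flow_rates, source_positions)
]
print(len(cp1_variants))  # -> 12 distinct simulation conditions
```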
 Then, a three-dimensional fluid simulation is performed in the three-dimensional space in which the three-dimensional structure modeling has been performed, and three-dimensional gas distribution image data DTgas is generated and output to the subsequent stage. The three-dimensional gas distribution image data DTgas is data including at least a three-dimensional gas concentration distribution. The calculation is performed using commercially available three-dimensional fluid simulation software; for example, ANSYS Fluent, Flo EFD, or Femap/Flow may be used.
 FIG. 7B is a schematic diagram showing the data structure of the three-dimensional gas distribution image data DTgas. As shown in FIG. 7B, the three-dimensional gas distribution image data DTgas is three-dimensional voxel data representing a three-dimensional space, is composed of gas concentration data Dst (%) for voxels arranged at coordinates in the X, Y, and Z directions, and may further include gas flow velocity vector data Vc (Vx, Vy, Vz) for each voxel.
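A per-voxel record of this kind can be sketched as follows, with the concentration Dst mandatory and the velocity vector Vc optional (the field names and grid layout are assumptions for illustration):

```python
def make_voxel(dst, vc=None):
    # One DTgas voxel: concentration Dst (%) and an optional flow
    # velocity vector Vc = (Vx, Vy, Vz).
    return {"Dst": dst, "Vc": vc}

# A tiny 2 x 1 x 1 grid indexed as dtgas[x][y][z].
dtgas = [
    [[make_voxel(5.0, (0.1, 0.0, 0.2))]],   # gas present, moving
    [[make_voxel(0.0)]],                    # gas-free voxel, no velocity data
]
print(dtgas[0][0][0]["Dst"])  # -> 5.0
```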
 The two-dimensional single-viewpoint gas distribution image conversion processing unit 313 acquires as an input the three-dimensional gas distribution image data DTgas of gas leaking from the gas leakage source into the three-dimensional space, and further acquires the condition parameters CP2 required for conversion to a two-dimensional single-viewpoint gas distribution image based on operation input from the operator to the operation input unit 35. The condition parameters CP2 are parameters related to the imaging conditions of the gas visualization image pickup device, such as the angle of view of the image pickup device, the line-of-sight direction, the distance, and the image resolution, as shown in Table 1. The two-dimensional single-viewpoint gas distribution image conversion processing unit 313 then converts the three-dimensional gas distribution image data DTgas into two-dimensional gas distribution image data observed from a predetermined viewpoint position. In Embodiment 1, a process of converting it into concentration-thickness product image data DTdt as the two-dimensional gas distribution image data is performed. The concentration-thickness product image DTdt and the leakage source coordinate data output from the three-dimensional structure modeling unit 311 are output to the gas leak detection device 20 as teacher data for machine learning.
 The concentration-thickness product image DTdt is an image corresponding to an inspection image of leaked gas captured by the gas visualization image pickup device 10, that is, an image representing how the gas appears from the viewpoint. Furthermore, by taking the information of the structure three-dimensional data DTstr into account, it is possible to generate gas concentration-thickness product image data DTdt that does not reflect gas that is blocked by a structure and cannot be observed from the viewpoint.
 FIG. 8 is a schematic diagram for explaining an outline of the concentration-thickness product calculation method in the two-dimensional single-viewpoint gas distribution image conversion process, and is a conceptual diagram of the method of generating gas concentration-thickness product image data DTdt from the three-dimensional gas distribution image data DTgas representing the behavior of the gas.
 The two-dimensional single-viewpoint gas distribution image conversion processing unit 313 generates a plurality of concentration-thickness product values Dst by spatially integrating the three-dimensional gas concentration image indicated by the three-dimensional gas distribution image data DTgas along the line-of-sight direction from a preset viewpoint position (X, Y, Z) while changing the line-of-sight angles θ and σ, and arranges the obtained concentration-thickness product values Dst in two dimensions to generate the concentration-thickness product image data DTdt.
 Specifically, as shown in FIG. 8, an arbitrary viewpoint position SP (X, Y, Z) is set in the three-dimensional space, and a virtual image plane VF is set at a position a predetermined distance away from the viewpoint position SP (X, Y, Z) in the direction of the three-dimensional gas concentration image indicated by the three-dimensional gas distribution image data DTgas. At this time, the virtual image plane VF is set so that its center O lies on the straight line passing through the viewpoint position SP (X, Y, Z) and the center voxel of the three-dimensional gas distribution image data DTgas. The image frame of the virtual image plane VF is set according to the angle of view of the gas visualization image pickup device 10. The line-of-sight direction DA from the viewpoint position SP (X, Y, Z) toward a pixel of interest A (x, y) on the virtual image plane VF is then tilted by an angle θ in the X direction and an angle σ in the Y direction with respect to the line-of-sight direction DO toward the center pixel O, that is, the line-of-sight direction DO of the gas visualization image pickup device.
 The gas concentration-thickness product value for the pixel of interest A (x, y) is calculated by spatially integrating, along the line-of-sight direction DA corresponding to the pixel of interest A (x, y), the gas concentration distribution data of the voxels of the three-dimensional gas distribution image that intersect the line of sight. Then, while changing the angles θ and σ according to the angle of view of the gas visualization image pickup device 10, the position of the pixel of interest A (x, y) is stepped so that every pixel on the virtual image plane VF becomes the pixel of interest A (x, y), and the calculation of the gas concentration-thickness product value is repeated, whereby the concentration-thickness product image data DTdt is calculated.
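The per-pixel integration above can be sketched as a simple ray march that sums the concentration Dst of each voxel crossed by the line of sight, scaled by the step length. The sampling scheme and helper names are assumptions; a production implementation would follow the simulator's actual grid geometry:

```python
def ray_concentration_thickness(dtgas, origin, direction, step=1.0, n_steps=64):
    """Integrate gas concentration along one line of sight.

    dtgas maps integer voxel coordinates (x, y, z) to a concentration
    Dst (%); voxels absent from the dict are treated as gas-free.
    """
    total = 0.0
    x, y, z = origin
    dx, dy, dz = direction
    for _ in range(n_steps):
        voxel = (round(x), round(y), round(z))
        total += dtgas.get(voxel, 0.0) * step  # concentration x path length
        x, y, z = x + dx * step, y + dy * step, z + dz * step
    return total

# A cloud three voxels thick with a uniform 2 % concentration along +Z.
cloud = {(0, 0, z): 2.0 for z in range(3)}
print(ray_concentration_thickness(cloud, (0, 0, 0), (0, 0, 1)))  # -> 6.0
```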
 Furthermore, by generating concentration-thickness product image data DTdt from the same three-dimensional gas distribution image data DTgas while varying the viewpoint position SP (X, Y, Z), a plurality of sets of concentration-thickness product image data DTdt can easily be generated from a single fluid simulation.
 FIG. 9 is a schematic diagram for explaining an outline of the concentration-thickness product calculation method in the two-dimensional single-viewpoint gas distribution image conversion process under the condition that a structure is present, showing a case in which the gas is blocked by the structure and cannot be observed from the gas visualization image pickup device 10.
 In this case, taking the three-dimensional position of the structure into account, the gas concentration-thickness product value for the pixel of interest A (x, y) is calculated by a method that does not spatially integrate the gas concentration distribution located behind the structure as seen from the viewpoint position SP (X, Y, Z).
 Specifically, the gas concentration-thickness product value for the pixel of interest A (x, y) is calculated by spatial integration from the viewpoint position SP (X, Y, Z) along the line-of-sight direction DA corresponding to the pixel of interest A (x, y).
 Then, by repeating this calculation of the gas concentration-thickness product value, taking the three-dimensional position of the structure into account, for all pixels on the virtual image plane VF, a gas concentration-thickness product image DTdt that does not reflect gas blocked by the structure and unobservable from the viewpoint is generated.
 Returning to FIG. 5, the storage unit 33 stores the program 331 and other data necessary for the machine learning data generation device 30 to operate, and also functions as a temporary storage area that temporarily stores the calculation results of the control unit 31. The storage unit 33 includes a volatile memory such as a DRAM and a non-volatile memory such as a hard disk.
 The communication unit 32 transmits and receives information between the machine learning data generation device 30 and the storage means 40.
 The display unit 34 is, for example, a liquid crystal panel or the like, and displays the display screen generated by the control unit (CPU) 31.
 <Operation of the two-dimensional single-viewpoint gas distribution image conversion process>
 Next, the two-dimensional single-viewpoint gas distribution image conversion processing operation in the machine learning data generation device 30 will be described with reference to the drawings.
 FIG. 10 is a flowchart showing an outline of the two-dimensional single-viewpoint gas distribution image conversion process. This process is executed by the two-dimensional single-viewpoint gas distribution image conversion processing unit 313, whose function is configured by the control unit 31.
 First, the two-dimensional single-viewpoint gas distribution image conversion processing unit 313 acquires the structure three-dimensional data DTstr (step S1), and further acquires the three-dimensional gas distribution image data DTgas (step S2). Next, based on operation input, it accepts as the condition parameters CP2, for example, information on the angle of view of the image pickup device, the line-of-sight direction, the distance, and the image resolution (step S3). Further, based on operation input, a viewpoint position SP (X, Y, Z) corresponding to the position of the image pickup portion of the gas visualization image pickup device 10 is set in the three-dimensional space (step S4).
 Next, a virtual image plane VF is set at a predetermined distance from the viewpoint position SP (X, Y, Z) in the direction of the three-dimensional gas concentration image, and, as described above, the position of the image frame of the virtual image plane VF is calculated according to the angle of view of the gas visualization image pickup device 10 (step S5).
 Next, the coordinates of the pixel of interest A (x, y) are set to their initial values (step S6), and the position LV on the line of sight from the viewpoint position SP (X, Y, Z) toward the pixel of interest A (x, y) on the virtual image plane VF is set to its initial value (step S7).
 Next, it is determined whether the structure identification information Std of the voxel of the structure three-dimensional data DTstr intersecting the line of sight represents "no structure" (Std = 0) (step S8). If it represents "structure present", the position of the pixel of interest A (x, y) is stepped forward (step S9) and the process returns to step S7; if it does not represent "structure present", the process proceeds to step S10.
 In step S10, the presence or absence of a voxel of the three-dimensional gas distribution image data DTgas intersecting the line of sight is determined. If no intersecting voxel exists, the position LV on the line of sight is incremented by a unit length (for example, 1) (step S11) and the process returns to step S8. If an intersecting voxel exists, the concentration-thickness value Dst of that voxel is read, and its sum with the integrated value Dsti stored in an addition register or the like is saved in the addition register or the like as the new integrated value Dsti (step S12).
 Next, it is determined whether the calculation has been completed over the entire line-of-sight length corresponding to the range in which the line of sight intersects the voxels (step S13). If it has not been completed, the position LV on the line of sight is incremented by the unit length (step S14) and the process returns to step S8. If it has been completed, the gas concentration distribution data of the voxels of the three-dimensional gas distribution image intersecting the line of sight has been spatially integrated along the line-of-sight direction DA corresponding to the pixel of interest A (x, y), and the gas concentration-thickness product value for the pixel of interest A (x, y) is thereby calculated.
 Next, it is determined whether the calculation of the gas concentration-thickness product value has been completed for all pixels on the virtual image plane VF (step S15). If it has not been completed, the position of the pixel of interest A (x, y) is stepped forward (step S16) and the process returns to step S7. If it has been completed, the gas concentration-thickness product values have been calculated for all pixels on the virtual image plane VF, and a gas concentration-thickness product image DTdt is generated as the two-dimensional gas distribution image data for the virtual image plane VF.
 Next, it is determined whether the generation of the gas concentration-thickness product image DTdt has been completed for all viewpoint positions SP (X, Y, Z) to be calculated (step S17). If it has not been completed, the process returns to step S4 and a gas concentration-thickness product image DTdt is generated for a newly input viewpoint position SP (X, Y, Z); if it has been completed, the process ends.
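Steps S6 to S16 of the flowchart amount to a nested loop over pixels and positions along each line of sight, with the structure check cutting the integration short so that gas behind a structure is not accumulated. A minimal sketch under simplifying assumptions (axis-aligned rays, one ray per pixel, dict- and set-based voxel grids; all names are illustrative):

```python
def concentration_thickness_image(dtgas, dtstr, width, height, depth):
    """Generate a small DTdt image by marching one +Z ray per pixel (x, y).

    dtgas maps (x, y, z) to a concentration Dst; dtstr is the set of
    (x, y, z) voxels marked "structure present" (Std = 1).
    """
    image = [[0.0 for _ in range(width)] for _ in range(height)]
    for y in range(height):            # loop over pixels of interest
        for x in range(width):
            dsti = 0.0                 # integrated value Dsti
            for z in range(depth):     # march the position LV along the ray
                if (x, y, z) in dtstr:
                    break              # a structure blocks the rest of the ray
                dsti += dtgas.get((x, y, z), 0.0)  # accumulate Dst
            image[y][x] = dsti         # one DTdt pixel completed
    return image

# Gas at every depth of a 2 x 1 image; a structure occludes pixel (1, 0) from z = 1.
gas = {(x, 0, z): 1.0 for x in range(2) for z in range(4)}
img = concentration_thickness_image(gas, {(1, 0, 1)}, width=2, height=1, depth=4)
print(img)  # -> [[4.0, 1.0]]: the occluded ray accumulates only z = 0
```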
 <Modification 1>
 The machine learning data generation device 30 according to Embodiment 1 has been described above, but the present disclosure is in no way limited to the above embodiment except for its essential characteristic components. In the following, a modification of the above embodiment will be described as an example of such a form.
 FIG. 6B is a functional block diagram of the machine learning data generation device 30A according to Modification 1. As shown in FIG. 6B, the machine learning data generation device 30A may be configured by removing the three-dimensional structure modeling unit 311 from the machine learning data generation device 30. In this case, the three-dimensional fluid simulation execution unit 312 performs a three-dimensional fluid simulation in a three-dimensional space without three-dimensional structures and generates three-dimensional gas distribution image data DTgas. Further, the two-dimensional single-viewpoint gas distribution image conversion processing unit 313 generates the concentration-thickness product image DTdt without taking the information of the structure three-dimensional data DTstr into account.
 In the machine learning data generation device 30A, no structure three-dimensional data DTstr is generated, so the output gas concentration-thickness product image data DTdt shows the leakage source visible in the image. However, by performing a masking process that conceals the leakage source portion of the gas concentration-thickness product image, for example after the gas concentration-thickness product image data DTdt is generated or while it is being generated, gas concentration-thickness product image data DTdt including masking information can be used as learning data. Here too, by varying the condition parameters CP1 and CP2, and further varying the shape of the masking, a large number of diverse sets of gas concentration-thickness product image data DTdt can be created as learning data.
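Such a masking step can be sketched as overwriting a region around the leakage source coordinates in the DTdt image. The rectangular mask shape and the function name are assumptions; the patent only requires that the leakage source portion be concealed, and the mask shape may be varied:

```python
def mask_leak_source(image, cx, cy, half=1, fill=0.0):
    """Conceal a (2*half+1)-pixel square around (cx, cy) in a 2-D image."""
    masked = [row[:] for row in image]  # keep the original image intact
    for y in range(max(0, cy - half), min(len(image), cy + half + 1)):
        for x in range(max(0, cx - half), min(len(image[0]), cx + half + 1)):
            masked[y][x] = fill
    return masked

dtdt = [[1.0] * 4 for _ in range(4)]        # a uniform 4 x 4 DTdt image
masked = mask_leak_source(dtdt, cx=1, cy=1)  # hide the source near (1, 1)
print(masked[1][1], masked[3][3])  # -> 0.0 1.0: only the source region is hidden
```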
 <Summary>
 As described above, the diverse gas concentration-thickness product image data DTdt generated by the machine learning data generation device 30 can be paired with the leakage source coordinate data PTlk and used as teacher data for machine learning in the gas leak detection device 20.
 In the gas facilities monitored by the gas leak detection device 20, equipment such as piping is arranged in a complex manner. Assuming an application for constant monitoring, the image pickup device is either installed completely fixed at a specific location, or installed at a specific location with only the shooting direction variable. In this case, the gas leakage source may be hidden behind the equipment and invisible from the image pickup device.
 Conventionally, there have been techniques for identifying the leakage source position from a gas leakage image, but all of them identify the leakage source position within the range visible from the image pickup device, and therefore run the risk of presenting the wrong place as the leakage source position.
 In addition, when the wind direction is changing, it becomes difficult to identify the location where the gas leakage occurs.
 A possible countermeasure is therefore to construct a machine-learning-based method for identifying gas-related feature quantities: by training on gas images of leaked gas together with correct-answer data for the leakage source position, the method identifies, from an inspection image of the gas distribution, feature quantities related to the gas under various conditions, such as the position of the gas leakage source and the gas flow velocity.
 However, machine learning generally requires correct-answer data on the order of tens of thousands of samples, and realizing such a method requires efficiently acquiring a large amount of teacher learning data under various conditions relating to the leaked gas cloud.
 In contrast, according to the machine learning data generation device and the machine learning data generation method of the present embodiment, tens of thousands of sets of learning data consisting of inputs and correct outputs, for example the gas leakage images and the gas leakage source position coordinate information in this example, can be generated efficiently for training the machine learning model in the gas leak detection device 20, contributing to improved learning accuracy.
 That is, by calculating, from the three-dimensional gas leakage behavior simulation results obtained by fluid simulation technology, image data corresponding to the gas leakage images obtained by the image pickup device and using it as a learning data set for machine learning, learning data can be obtained in which the various parameters affecting the imaging conditions and the gas leakage behavior are varied freely. As one example, when determining the gas leakage position from a gas leakage image using a machine learning model, a large amount of learning data under various conditions can thus be generated efficiently, contributing to improved learning accuracy.
 ≪Embodiment 2≫
 Next, the machine learning data generation device 30B according to Embodiment 2 will be described. FIG. 11A is a functional block diagram of the machine learning data generation device 30B according to Embodiment 2. The same components as those of the machine learning data generation device 30 are given the same reference numerals, and their description is omitted.
 The machine learning data generation device 30B includes a light absorption rate image conversion unit 314B downstream of the two-dimensional single-viewpoint gas distribution image conversion processing unit 313, acquires condition parameters CP3 based on operation input, further converts the gas concentration-thickness product image data DTdt into light absorption rate image data DTα, and outputs it to the subsequent stage.
 FIG. 12A is a schematic diagram showing an outline of the concentration-thickness product image data, FIG. 12B is a schematic diagram for explaining an outline of the concentration-thickness product calculation method in the light absorption rate image conversion process, and FIG. 12C is a schematic diagram showing an outline of the light absorption rate image data.
 In the machine learning data generation device 30B, for the gas concentration-thickness product value Dt at each pixel (x, y) of the gas concentration-thickness product image data DTdt shown in FIG. 12A, the light absorption rate value α corresponding to that gas concentration-thickness product at the pixel (x, y) shown in FIG. 12C is calculated using the relationship between the gas concentration-thickness product and the light absorption rate α shown in FIG. 12B. The light absorption rate α differs depending on the gas type specified by the condition parameters CP3, and the gas concentration-thickness product image data DTdt is converted into the light absorption rate image data DTα on the basis of relationship data between the concentration-thickness product value Dt and the light absorption rate α reflecting the light absorption characteristics of the gas, for example, values of the light absorption rate α corresponding to gas concentration-thickness product values Dt stored in a data table or the like, or a mathematical expression representing an approximation curve. The relationship between the gas concentration-thickness product value Dt and the light absorption rate α for each gas type may be obtained in advance by actual measurement.
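A minimal sketch of the table-based conversion, using linear interpolation between stored (Dt, α) pairs; the sample table values are assumptions for illustration, not measured data for any particular gas type:

```python
def absorption_from_table(dt, table):
    """Interpolate the light absorption rate α for a concentration-thickness
    product dt from a table of (Dt, α) pairs sorted by Dt."""
    if dt <= table[0][0]:
        return table[0][1]
    for (d0, a0), (d1, a1) in zip(table, table[1:]):
        if dt <= d1:
            # Linear interpolation between the two surrounding table entries.
            return a0 + (a1 - a0) * (dt - d0) / (d1 - d0)
    return table[-1][1]  # clamp beyond the last stored point

# Hypothetical measured relationship for one gas type (selected via CP3).
alpha_table = [(0.0, 0.0), (10.0, 0.3), (20.0, 0.5)]
print(absorption_from_table(5.0, alpha_table))   # -> 0.15
print(absorption_from_table(25.0, alpha_table))  # -> 0.5 (clamped)
```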
<Summary>
As described above, the light absorption rate image data DTα generated by the machine learning data generation device 30B and the leakage source coordinate data PTlk can be used as a set of teacher data for machine learning in the gas leakage detection device 20.
In addition to the effect of the machine learning data generation device 30A according to Embodiment 1, namely that a large amount of learning data can be generated efficiently, generating as teacher data the light absorption rate image data DTα, which is closer to the gas distribution images obtained from the gas visualization imaging device 10, yields teacher data that more closely resembles the inspection images. This further contributes to improving the learning accuracy of the machine learning model in the gas leakage detection device.
<Modification 2>
Further, as shown in FIG. 11(b), as Modification 2, the machine learning data generation device 30C may be configured by removing the three-dimensional structure modeling unit 311 from the machine learning data generation device 30B, similarly to the machine learning data generation device 30A according to Modification 1 shown in FIG. 6(b). As in Modification 1, by performing the masking process that conceals the leakage source portion of the gas concentration-thickness product image while varying the masking shape in diverse ways, light absorption rate image data DTα containing diverse masking information can be created as learning data.
≪Embodiment 3≫
Next, the machine learning data generation device 30D according to Embodiment 3 will be described. FIG. 13 is a functional block diagram of the machine learning data generation device 30D according to Embodiment 3. Components identical to those of the machine learning data generation device 30B are given the same reference numerals, and their description is omitted.
<Structure>
The machine learning data generation device 30D differs from the machine learning data generation device 30B in that a background image generation unit 315D is newly provided after the two-dimensional single-viewpoint gas distribution image conversion processing unit 313, and a light intensity image generation unit 316D is newly provided after the light absorption rate image conversion unit 314B.
The background image generation unit 315D acquires the background location data PTbk and the condition parameter CP5 based on operation input, and generates the background image data DTIback.
FIG. 14(a) is a schematic diagram showing an outline of the structure three-dimensional data. As shown in FIG. 14(a), the structure three-dimensional data DTstr is three-dimensional voxel data representing a three-dimensional space. Unlike Embodiment 1, in which the structure identification information Std is stored as a binary image of three-dimensional shape data indicating "structure present" or "structure absent", here a value indicating the classification of the structure surface is assigned to each pixel, and the information is recorded as a multi-valued image with values such as 0, 1, 2, 3, and so on.
As described above, the background location data PTbk is generated by subjecting the structure identification information Std, which consists of this multi-valued image, to the two-dimensional single-viewpoint conversion process that uses as the image frame the virtual image plane VF observed from the viewpoint position SP (X, Y, Z).
FIG. 14(b) is a schematic diagram showing an outline of the extracted background location data PTbk. The background location data PTbk is two-dimensional image data consisting of the background classification Std (Std = 0, 1, 2, 3, ...) for each pixel A(x, y) in the virtual image plane VF. Here, the background classification Std is a classification number assigned based on, for example, the optical characteristics of the structure surface. For example, unpainted piping may be set to 1, painted piping to 2, and concrete to 3.
FIG. 14(c) is a conceptual diagram showing a method of generating the background image data DTIback based on the background location data PTbk. As shown in FIG. 14(c), the background image data DTIback is generated from the background location data PTbk generated as a multi-valued image and the condition parameter CP5 based on operation input.
The condition parameter CP5 comprises illumination conditions for the structure and temperature conditions of the structure itself, such as the background two-dimensional temperature distribution, background surface spectral emissivity, background surface spectral reflectance, illumination light wavelength distribution, spectral illuminance, and illumination angle, as shown in Table 1. When the background location data PTbk is a multi-valued image, a different condition parameter CP5 is assigned to each background classification Std to generate the background image data DTIback. When the background location data is a binary image, different conditions are assigned depending on whether a background is present or absent to generate the background image data DTIback.
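The per-classification assignment above can be sketched as a simple mapping. This is a hedged illustration only: the radiance values are hypothetical placeholders standing in for whatever the full CP5 conditions (temperature, emissivity, reflectance, illumination) would produce for each surface class.

```python
# Sketch: generating a background image DTIback from the background location
# data PTbk (a multi-valued image of classifications Std). Each classification
# maps to a radiance derived from its own CP5 conditions; values are placeholders.
# 0: no structure (standard value, e.g. sky/ground), 1: unpainted pipe,
# 2: painted pipe, 3: concrete.
CP5_RADIANCE = {0: 8.0, 1: 10.0, 2: 9.5, 3: 9.0}

def generate_background_image(ptbk):
    """Map each pixel's background classification Std to its radiance value."""
    return [[CP5_RADIANCE[std] for std in row] for row in ptbk]
```

For a binary PTbk the dictionary would collapse to two entries, matching the "present/absent" case described above.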
The light intensity image generation unit 316D generates the light intensity image data DTI based on the light absorption rate image data DTα, the background image data DTIback, and the gas temperature condition provided as the condition parameter CP4 based on operation input.
FIG. 15(a) is a schematic diagram showing an outline of the background image data DTIback, FIG. 15(b) is a schematic diagram showing an outline of the light absorption rate image data DTα, and FIG. 15(c) is a schematic diagram showing an outline of the light intensity image data.
Let DTIback(x, y) be the infrared intensity at coordinates (x, y) of the background image, Igas the blackbody radiance corresponding to the gas temperature, and DTα(x, y) the light absorption rate value at coordinates (x, y) of the light absorption rate image. The infrared intensity DTI(x, y) at coordinates (x, y) of the light intensity image is then calculated by Equation 1.
DTI(x, y) = (1 − DTα(x, y)) × DTIback(x, y) + DTα(x, y) × Igas    (Equation 1)
Based on the light absorption rate image data DTα, the background image data DTIback, and the gas temperature condition, the light intensity image generation unit 316D calculates the infrared intensity DTI(x, y) at coordinates (x, y) on the virtual image plane VF according to Equation 1. This makes it possible to compute, along the line-of-sight direction DA corresponding to the pixel of interest A(x, y), the sum of the infrared radiation emitted by the background structure and the infrared radiation emitted by the gas present in the voxels of the three-dimensional gas distribution image that the line of sight intersects.
Then, the infrared intensity DTI(x, y) is calculated for all pixels A(x, y) on the virtual image plane VF to generate the light intensity image data DTI.
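The pixel-wise computation of Equation 1 can be sketched as follows, assuming the two-layer radiative-transfer form in which the gas layer transmits a fraction (1 − DTα) of the background radiation and emits blackbody radiation Igas in the fraction DTα; the function names are illustrative, not the device's API.

```python
# Sketch of Equation 1 applied over the whole virtual image plane VF:
# DTI = (1 - DTα) * DTIback + DTα * Igas, evaluated per pixel.
def light_intensity_image(dti_back, dt_alpha, i_gas):
    """Compute DTI(x, y) for every pixel from background image and absorption image."""
    return [
        [(1.0 - a) * b + a * i_gas for b, a in zip(back_row, alpha_row)]
        for back_row, alpha_row in zip(dti_back, dt_alpha)
    ]
```

A pixel with DTα = 0 reproduces the background unchanged, and a pixel with DTα = 1 shows only the gas's blackbody radiance, which is the expected limiting behavior.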
<Operation of Background Image Generation Processing>
Next, the background image generation processing operation in the machine learning data generation device 30D will be described with reference to the drawings.
FIG. 16 is a flowchart showing an outline of the background image generation process. This process is executed by the background image generation unit 315D, whose function is implemented by the control unit 31.
First, the background image generation unit 315D acquires the structure three-dimensional data DTstr (step S1). The operations of steps S3 to S8 are the same as those of the corresponding steps in FIG. 10.
If, in step S8, the background location data PTbk of the voxel intersecting the line of sight is "structure present", the background classification Std of the background location data PTbk is acquired (step S121A) and the condition parameter CP5 corresponding to that background classification Std is input (step S122A). After the background image data value for the pixel of interest A is determined, the position of the pixel of interest A(x, y) is advanced to the next pixel (step S9), and the process returns to step S7.
The condition parameter CP5 includes, for example, conditions such as the background two-dimensional temperature distribution, background surface spectral emissivity, background surface spectral reflectance, illumination light wavelength distribution, spectral illuminance, and illumination angle.
On the other hand, if the data is not "structure present", it is determined whether the calculation has been completed over the full length of the line of sight, corresponding to the range over which the line of sight intersects the voxels (step S13). If not completed, the position LV on the line of sight is incremented by a unit length (step S14), and the process returns to step S8. If completed, the standard value set for the case where no structure exists is determined as the background image data value for the pixel of interest A, and it is determined whether the calculation has been completed for all pixels on the virtual image plane VF (step S15). If not completed, the position of the pixel of interest A(x, y) is advanced to the next pixel (step S16) and the process returns to step S7; if completed, the process ends. Here, the standard value set when no structure exists is, for example, a background image data value corresponding to the ground or the sky in real space. The standard value can be obtained by appropriately setting the conditions indicated by the condition parameter CP5.
As described above, the background classification Std of the background location data PTbk is acquired for all pixels on the virtual image plane VF, and the background image data DTIback for the virtual image plane VF is generated.
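The per-pixel ray traversal of FIG. 16 can be sketched as below. This is a simplified illustration under stated assumptions: the ray sampling is represented as a precomputed sequence of Std values per pixel, the first non-zero classification encountered determines the pixel, and the names are hypothetical rather than the device's implementation.

```python
# Sketch of the FIG. 16 flow: march along each pixel's line of sight
# (steps S8, S13, S14); the first "structure present" voxel fixes the
# pixel's classification (steps S121A, S122A); if no structure is hit,
# the standard value (ground/sky, steps S13/S15 branch) is used.
STANDARD_VALUE = 0  # classification used when no structure is hit

def background_location_image(rays):
    """rays[y][x] is the sequence of Std values along pixel (x, y)'s ray."""
    image = []
    for row in rays:
        out_row = []
        for ray in row:
            value = STANDARD_VALUE
            for std in ray:          # advance LV along the line of sight
                if std != 0:         # "structure present"
                    value = std
                    break
            out_row.append(value)
        image.append(out_row)
    return image
```

Mapping each resulting classification through per-class CP5 conditions then yields DTIback, as the section describes.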
<Operation of Light Intensity Image Data Generation Processing>
Next, the light intensity image data generation processing operation in the machine learning data generation device 30D will be described with reference to the drawings.
FIG. 17 is a flowchart showing an outline of the light intensity image data generation process.
First, the background image data DTIback(x, y) and the light absorption rate image data DTα(x, y) for the virtual image plane VF are acquired (steps S101 and S102), and the condition parameter CP4 relating to the gas temperature condition is input (step S103).
Next, the blackbody radiance Igas corresponding to the gas temperature is acquired (step S104), the infrared intensity DTI(x, y) of the light intensity image is calculated by Equation 1 (step S105) and output as the light intensity image data DTI (step S106), and the process ends.
<Summary>
As described above, the light intensity image data DTI generated by the machine learning data generation device 30D and the leakage source coordinate data PTlk can be used as a set of teacher data for machine learning in the gas leakage detection device 20.
In addition to the effect of the machine learning data generation devices 30 and 30B according to Embodiments 1 and 2, namely that a large amount of learning data can be generated efficiently, the following effect is obtained. That is, by generating as teacher data the light intensity image data DTI, whose appearance is still more similar to the gas distribution images obtained from the gas visualization imaging device 10, teacher data even closer to the inspection images can be generated, which makes it easier to extract the target gas portion. This further contributes to improving the learning accuracy of the machine learning model in the gas leakage detection device.
<Modifications 3 and 4>
FIG. 18(a) is a functional block diagram of the machine learning data generation device 30E according to Modification 3. As shown in FIG. 18(a), the background image generation unit 315E differs from the machine learning data generation device 30D in that it generates the background image data DTIback without using the background location data PTbk. The background image generation unit 315E may, for example, generate background image data DTIback having uniform luminance regardless of the background location data PTbk.
FIG. 18(b) is a functional block diagram of the machine learning data generation device 30F according to Modification 4. As shown in FIG. 18(b), as Modification 4, the machine learning data generation device 30F may be configured by removing the three-dimensional structure modeling unit 311 from the machine learning data generation device 30D, similarly to the machine learning data generation device 30A according to Modification 1 shown in FIG. 6(b). As in Modifications 1 and 2, gas concentration-thickness product image data DTdt containing diverse masking information can thereby be created as learning data.
≪Embodiment 4≫
Next, the machine learning data generation device 30G according to Embodiment 4 will be described. FIG. 19 is a functional block diagram of the machine learning data generation device 30G according to Embodiment 4. Components identical to those of the machine learning data generation device 30D are given the same reference numerals, and their description is omitted.
<Structure>
The machine learning data generation device 30G differs from the machine learning data generation device 30D in that a gas distribution enhancement processing unit 317G is newly provided after the light intensity image generation unit 316D. The gas distribution enhancement process is a process that enhances a predetermined frequency component of a time-series image so that the behavior of the gas distribution diffusing into space can be emphasized.
The gas distribution enhancement processing unit 317G generates the gas distribution enhanced image data DTIem from the light intensity image data DTI by, for example, the known method described in International Publication No. WO 2017/073430.
<Operation of Gas Distribution Enhanced Image Data Generation Processing>
Next, the gas distribution enhanced image data generation processing operation in the machine learning data generation device 30G will be described with reference to the drawings.
FIG. 20 is a flowchart showing an outline of the gas distribution enhanced image data generation process. First, for the time-series pixel data, a simple moving average over a predetermined number of frames is calculated to extract specific frequency component data (step S201), and the difference between the time-series pixel data and the specific frequency component data is calculated as specific difference data (step S202).
Next, for the specific difference data, a moving standard deviation over a predetermined number of frames is calculated as specific variation data (step S203), and the specific difference data or the specific variation data is output as the gas distribution enhanced image data DTIem (step S204), ending the process.
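Steps S201 to S203 can be sketched for a single pixel's time series as follows. The window alignment (each output indexed to the first frame of its window) is an assumption made for illustration, as is the choice of the population standard deviation.

```python
# Sketch of steps S201-S203 for one pixel's time series:
# moving average -> low-frequency component, difference -> specific difference
# data, moving standard deviation of the difference -> specific variation data.
from statistics import mean, pstdev

def moving_average(series, n):
    return [mean(series[i:i + n]) for i in range(len(series) - n + 1)]

def enhance(series, n):
    avg = moving_average(series, n)                                  # step S201
    diff = [series[i] - avg[i] for i in range(len(avg))]             # step S202
    var = [pstdev(diff[i:i + n]) for i in range(len(diff) - n + 1)]  # step S203
    return diff, var
```

Either `diff` or `var` would then be emitted as DTIem (step S204); applying `enhance` to every pixel of the light intensity video yields the enhanced image sequence.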
<Summary>
As described above, the gas distribution enhanced image data DTIem generated by the machine learning data generation device 30G and the leakage source coordinate data PTlk can be used as a set of teacher data for machine learning in the gas leakage detection device 20.
In addition to the effect of the machine learning data generation devices 30, 30B, and 30D according to Embodiments 1, 2, and 3, namely that a large amount of learning data can be generated efficiently, the following effect is obtained. That is, by extracting the frequency components of the gas portion in the gas distribution image obtained from the gas visualization imaging device 10 and generating as teacher data the gas distribution enhanced image data DTIem in which the gas portion is emphasized, the target gas portion can be extracted easily. This further contributes to improving the learning accuracy of the machine learning model in the gas leakage detection device.
Moreover, by varying the frequency components extracted from the gas distribution image, more diverse teacher data can be generated from the results of a fluid simulation under identical conditions, so that a large amount of learning data can be generated even more efficiently.
<Modifications 5 and 6>
FIG. 21(a) is a functional block diagram of the machine learning data generation device 30H according to Modification 5. As in Modification 3, the background image generation unit 315E differs from the machine learning data generation device 30G in that it generates the background image data DTIback without using the background location data PTbk. The background image generation unit 315E may, for example, generate background image data DTIback having uniform luminance.
FIG. 21(b) is a functional block diagram of the machine learning data generation device 30I according to Modification 6. In Modification 6, the machine learning data generation device 30I may be configured by removing the three-dimensional structure modeling unit 311 from the machine learning data generation device 30G. As in Modifications 1, 2, and 4, gas concentration-thickness product image data DTdt containing diverse masking information can thereby be created as learning data.
<Modification 7>
In Embodiments 1 to 4 described above, from the three-dimensional gas distribution image data showing the existence range of the gas leaking from the gas leakage source into the three-dimensional space, the two-dimensional gas distribution image data observed from a predetermined viewpoint position and the position information of the gas leakage source are generated as teacher data for machine learning.
In contrast, the machine learning data generation device according to Modification 7 differs from Embodiments 1 to 4 in that the two-dimensional gas distribution image data and the direction of the gas flow, in place of the position information of the gas leakage source, are generated as teacher data for machine learning. Except for using the direction of the gas flow as teacher data, the same configuration as any of Embodiments 1 to 4 can be used, and its description is omitted.
As described above in connection with FIG. 4(b), the three-dimensional gas distribution image data may include the gas flow velocity vector data Vc (Vx, Vy, Vz) of each voxel.
In the machine learning data generation device according to Modification 7, a combination of the two-dimensional gas distribution image data generated based on the three-dimensional gas distribution image data and a parameter indicating the direction of the gas flow in the three-dimensional gas distribution image data is generated as teacher data and supplied to the machine learning unit. The parameter indicating the direction of the gas flow in the three-dimensional gas distribution image data can be calculated based on the gas flow velocity vector data Vc (Vx, Vy, Vz) of each voxel in the three-dimensional gas distribution image data.
Specifically, the flow velocity component of the gas in the line-of-sight direction DO as seen from the viewpoint position SP is obtained from the gas flow velocity vector data Vc (Vx, Vy, Vz) of the voxels corresponding to the gas region in the three-dimensional gas distribution image data. The flow velocity component of the gas may be specified as a relative value with respect to the flow velocity component of the gas in the direction orthogonal to the line-of-sight direction DO. The direction orthogonal to the line-of-sight direction DO corresponds to the x direction, the y direction, or a composite of the two in the two-dimensional gas distribution image data, and is a direction in which the distance from the viewpoint position SP does not change. Note that the flow velocity component of the gas in the direction orthogonal to the line-of-sight direction DO may be expressed as a relative value with respect to the image size or the number of pixels of the gas distribution moving image.
Then, as the parameter indicating the direction of the gas flow, a parameter such as the average value or the maximum value of the flow velocity component of the gas is calculated for each set of two-dimensional gas distribution image data.
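The derivation of such a parameter can be sketched as follows. This is an illustrative assumption, not the patent's exact computation: the line-of-sight direction DO is taken as a unit vector, and the mean projection of the gas-region voxels' velocity vectors onto DO is used as the single per-image parameter.

```python
# Sketch: deriving a gas-flow-direction parameter from the voxel flow
# velocity vectors Vc = (Vx, Vy, Vz) of the gas region. The parameter here
# is the mean velocity component along the line-of-sight unit vector DO.
def flow_direction_parameter(velocities, do_unit):
    """Average line-of-sight flow component over the gas-region voxels."""
    dots = [vx * do_unit[0] + vy * do_unit[1] + vz * do_unit[2]
            for vx, vy, vz in velocities]
    return sum(dots) / len(dots)
```

Replacing the mean with `max(dots)` would give the maximum-value variant mentioned above, and dividing by an orthogonal component would give the relative-value form.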
Using machine learning, the machine learning unit performs learning in which the two-dimensional gas distribution image data capturing the leaked gas and the parameter indicating the direction of the gas flow serve as correct-answer data, and generates a machine learning model. This makes it possible to construct a gas flow direction identification device and method that identifies the direction of the gas flow from an inspection image capturing the gas distribution by using machine learning.
≪Other Modifications≫
The gas leakage detection device according to the embodiments has been described above, but the present disclosure is in no way limited to the above embodiments except for its essential characteristic components. For example, forms obtained by applying various modifications to the embodiments that a person skilled in the art could conceive, and forms realized by arbitrarily combining the components and functions of the respective embodiments without departing from the spirit of the present invention, are also included in the present disclosure. Hereinafter, modifications of the above embodiments will be described as examples of such forms.
(1) In the above embodiments, the gas visualization imaging device 10 realizes a function of coloring the leaked gas portion in the inspection image captured by the infrared camera to visualize the gas leakage portion to be inspected.
However, as a function of the gas leakage detection device 20, the gas detection process may be performed on the moving image output by the infrared camera to visualize the gas leakage portion and generate a gas distribution image including the gas region. As described above for the gas visualization imaging device 10, a known method can be used for the gas detection process; specifically, for example, the method described in Patent Document 1 can be used. Then, a gas distribution image is generated as a moving image by cutting out a region including the gas region from each frame of the moving image. When visualizing the gas leakage portion, processing such as gain adjustment may be performed after cutting out the region including the gas region from each frame, or the gas distribution image may be obtained by mapping the differences in pixel values rather than the pixel values of the moving image output by the infrared camera.
(2) In the above embodiments, a gas plant was described as an example of a gas facility appearing in the inspection images. However, the present disclosure is not limited to this, and may be applied to the generation of display images in equipment, devices, laboratories, research facilities, factories, and business establishments that use gas.
(3) In the above embodiments, an example in which the processing of the optical gas leak detection method is executed in the gas visualization imaging device 10 was described. However, the processing of the optical gas leak detection method may instead be executed in the CPU 21 of the gas leakage detection device 20 based on the original inspection images stored in the storage means 40, and the inspection images with color information mapped onto the regions where gas components are detected may be transmitted from the gas leakage detection device 20 to the storage means 40 via the communication network N.
 (4) Although the present disclosure has been described based on the above embodiment, the present disclosure is not limited to that embodiment, and the following cases are also included in the present invention.
 For example, the present invention may be a computer system including a microprocessor and a memory, in which the memory stores the above computer program and the microprocessor operates according to that program. For example, it may be a computer system that holds a computer program for the processing of the gas leak detection system 1 of the present disclosure or a component thereof, and that operates according to this program (or instructs each connected unit to operate).
 The present invention also includes the case where all or part of the processing of the gas leak detection system 1 or its components is configured as a computer system composed of a microprocessor, recording media such as ROM and RAM, a hard disk unit, and the like. The RAM or the hard disk unit stores a computer program that achieves the same operations as each of the above devices, and each device achieves its function when the microprocessor operates according to that computer program.
 Some or all of the components constituting each of the above devices may be composed of a single system LSI (Large Scale Integration). A system LSI is a super-multifunctional LSI manufactured by integrating a plurality of components on one chip; specifically, it is a computer system including a microprocessor, ROM, RAM, and so on. The components may each be integrated into individual chips, or integrated into one chip so as to include some or all of them. The RAM stores a computer program that achieves the same operations as each of the above devices, and the system LSI achieves its function when the microprocessor operates according to that program. For example, the present invention also includes the case where the processing of the gas leak detection system 1 or its components is stored as an LSI program, and this LSI is inserted into a computer to execute a predetermined program (gas inspection management method).
 The method of circuit integration is not limited to LSI, and may be realized by a dedicated circuit or a general-purpose processor. An FPGA (Field Programmable Gate Array) that can be programmed after LSI manufacture, or a reconfigurable processor in which the connections and settings of circuit cells inside the LSI can be reconfigured, may also be used.
 Furthermore, if circuit integration technology that replaces LSI emerges from advances in semiconductor technology or from another derived technology, the functional blocks may naturally be integrated using that technology.
 Some or all of the functions of the gas leak detection system 1 or its components according to each embodiment may be realized by a processor such as a CPU executing a program. The invention may also be a non-transitory computer-readable recording medium on which a program for performing the operations of the gas leak detection system 1 or its components is recorded. The program may be executed by another independent computer system by recording the program or a signal on a recording medium and transferring it. Needless to say, the above program can also be distributed via a transmission medium such as the Internet.
 The gas leak detection system 1 or each of its components according to the above embodiment may be realized by a programmable device such as a CPU, a GPU (Graphics Processing Unit), or a processor, together with software. These components may each be a single circuit component or an aggregate of a plurality of circuit components. A plurality of components may also be combined into a single circuit component, or formed as an aggregate of a plurality of circuit components.
 (5) The division into functional blocks is an example; a plurality of functional blocks may be realized as one functional block, one functional block may be divided into a plurality of blocks, and some functions may be transferred to other functional blocks. The functions of a plurality of functional blocks having similar functions may also be processed by single hardware or software in parallel or in a time-division manner.
 The order in which the above steps are executed is illustrative, given for the purpose of specifically explaining the present invention, and other orders may be used. Some of the above steps may also be executed simultaneously (in parallel) with other steps.
 At least some of the functions of each embodiment and its modifications may be combined. Furthermore, all of the numerical values used above are illustrative, given for the purpose of specifically explaining the present invention, and the present invention is not limited to the illustrated values.
 ≪Summary≫
 As described above, the machine learning data generation method according to the embodiment includes a step of converting three-dimensional gas distribution image data, which shows the existence range of gas leaking from a gas leakage source into a three-dimensional space, into two-dimensional gas distribution image data observed from a predetermined viewpoint position, and a step of generating the two-dimensional gas distribution image data and predetermined feature information about the gas as teacher data for machine learning.
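As a rough illustration of these two steps, a 3D concentration field can be projected to a 2D image and paired with its label. The axis-sum orthographic projection below is a simplified stand-in for the viewpoint-based conversion described in the embodiment; all names, shapes, and the choice of label are illustrative assumptions.

```python
import numpy as np

def make_training_pair(concentration, leak_voxel, axis=0):
    """Project a 3D concentration field to a 2D image and pair it with a label.

    concentration: (X, Y, Z) array, e.g. from a fluid simulation.
    leak_voxel: (x, y, z) index of the leakage source (the teacher label).
    axis: projection axis standing in for the viewpoint direction.
    Returns (2D image, 2D label); the label is the leak position with the
    projected coordinate removed.
    """
    image = concentration.sum(axis=axis)  # crude orthographic projection
    label = tuple(c for i, c in enumerate(leak_voxel) if i != axis)
    return image, label

# Toy 3D field with gas concentrated around a leak at voxel (1, 2, 3)
field = np.zeros((4, 5, 6))
field[1, 2, 3] = 10.0
image, label = make_training_pair(field, (1, 2, 3), axis=0)
```

Repeating this for many simulated leak conditions and viewpoint directions yields the (input image, correct answer) pairs that make up the training data set.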
 In another aspect, any of the above aspects may further include a step of generating the three-dimensional gas distribution image data based on a three-dimensional fluid simulation.
 In another aspect, in any of the above aspects, the machine learning may construct an inference model that takes as input two-dimensional gas distribution image data, captured from a predetermined viewpoint position and showing the existence range of gas leaked into a space, and identifies predetermined feature information about the gas.
 In another aspect, any of the above aspects may further include a step of modeling a three-dimensional structure in the three-dimensional space, and the three-dimensional fluid simulation may calculate a three-dimensional gas distribution image in the three-dimensional space under the condition that the three-dimensional structure exists.
 With such a configuration, a large number of training data sets consisting of inputs and correct outputs under various conditions can be generated efficiently for training the machine learning model in the gas leak detection device, contributing to improved learning accuracy.
 In another aspect, in any of the above aspects, the step of converting into the two-dimensional gas distribution image data may include a step of generating a plurality of concentration-thickness product values, each obtained by spatially integrating the three-dimensional gas distribution image in the three-dimensional space indicated by the three-dimensional gas distribution image data along the line-of-sight direction from the viewpoint position while varying the angle of the line-of-sight direction, and arranging the obtained concentration-thickness product values in two dimensions to generate concentration-thickness product image data.
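The concentration-thickness product described here is a line integral of concentration along each line of sight. A naive ray-marching sketch follows; the nearest-neighbour sampling, fixed step length, and step count are all assumptions, and a real implementation would use a proper camera model and interpolation.

```python
import numpy as np

def concentration_thickness_image(conc, origin, directions, step=0.5, n_steps=64):
    """Approximate the concentration-thickness product along each line of sight.

    conc: (X, Y, Z) gas concentration field in voxel coordinates.
    origin: viewpoint position in voxel coordinates.
    directions: (H, W, 3) unit direction vectors, one per output pixel.
    Returns an (H, W) image of the integral of concentration over path length.
    """
    H, W, _ = directions.shape
    out = np.zeros((H, W))
    for h in range(H):
        for w in range(W):
            p = np.asarray(origin, dtype=float)
            total = 0.0
            for _ in range(n_steps):
                idx = np.round(p).astype(int)  # nearest-neighbour sample
                if np.all(idx >= 0) and np.all(idx < conc.shape):
                    total += conc[tuple(idx)] * step
                p = p + directions[h, w] * step
            out[h, w] = total
    return out

# One-pixel example: a uniform slab of concentration 2.0, four voxels thick,
# viewed head-on; the expected integral is 2.0 * 4 = 8.0.
conc = np.zeros((10, 3, 3))
conc[3:7, 1, 1] = 2.0
dirs = np.zeros((1, 1, 3))
dirs[0, 0] = (1.0, 0.0, 0.0)  # look along +x
ct = concentration_thickness_image(conc, origin=(0, 1, 1), directions=dirs)
```

Varying the per-pixel direction vectors plays the role of "varying the angle of the line-of-sight direction" in the text above.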
 In another aspect, in any of the above aspects, the step of converting into the two-dimensional gas distribution image data may include a step of converting the concentration-thickness product values into light absorption rates based on the light absorption characteristics of the gas to generate light absorption rate distribution image data.
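A conventional way to relate a concentration-thickness product to a light absorption rate is the Beer-Lambert law. The sketch below uses a single illustrative absorption coefficient; the actual coefficient depends on the gas species and the camera's wavelength band, and is an assumption here.

```python
import math

def absorption_rate(ct_product, absorption_coeff):
    """Convert a concentration-thickness product into a light absorption rate
    via the Beer-Lambert law: alpha = 1 - exp(-k * c * L).

    ct_product: concentration-thickness product (e.g. ppm * m).
    absorption_coeff: gas- and wavelength-dependent coefficient (assumed).
    """
    return 1.0 - math.exp(-absorption_coeff * ct_product)

alpha_zero = absorption_rate(0.0, 1e-4)     # no gas -> no absorption
alpha_some = absorption_rate(5000.0, 1e-4)  # 1 - exp(-0.5) ~ 0.39
```

Applying this per pixel to the concentration-thickness product image yields the light absorption rate distribution image.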
 In another aspect, in any of the above aspects, the step of converting into the two-dimensional gas distribution image data may include a step of calculating light intensity distribution image data based on the light absorption rate distribution image data and temperature conditions including at least the temperature condition of the gas.
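One simplified way to realize this step is a single-band radiative mix of background and gas emission weighted by the absorption rate. The `radiance` stand-in below (proportional to T^4) is an assumption; a faithful implementation would integrate the Planck radiance over the camera's wavelength band.

```python
def light_intensity(alpha, t_background, t_gas, radiance=lambda T: T ** 4):
    """Blend background and gas radiance by the absorption rate.

    A simplified single-band radiative model: the gas attenuates the
    background radiance by alpha and re-emits at its own temperature.
    """
    return (1.0 - alpha) * radiance(t_background) + alpha * radiance(t_gas)

# With no gas the background intensity passes through unchanged;
# with alpha = 1 the pixel shows only the gas's own emission.
i_clear = light_intensity(0.0, 300.0, 290.0)
i_opaque = light_intensity(1.0, 300.0, 290.0)
i_mixed = light_intensity(0.5, 300.0, 290.0)
```

This is why the temperature difference between the gas and the background determines the visible contrast in an infrared gas image.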
 In another aspect, in any of the above aspects, the predetermined feature information may be the position information of the leakage source or the direction of the gas flow.
 In another aspect, in any of the above aspects, the two-dimensional gas distribution image data may be a moving image showing time-series changes of the gas distribution.
 The machine learning data generation device according to the embodiment may also be configured to include a two-dimensional single-viewpoint gas distribution image conversion processing unit that converts three-dimensional gas distribution image data of gas leaking from a gas leakage source into a three-dimensional space into two-dimensional gas distribution image data observed from a predetermined viewpoint position, and a teacher data generation unit that generates the two-dimensional gas distribution image data and predetermined feature information about the gas as teacher data for machine learning.
 In another aspect, in any of the above aspects, the machine learning may construct an inference model that takes as input two-dimensional gas distribution image data in which the existence range of gas leaked into a space, captured from a predetermined viewpoint position, is shown as a gas region, and identifies predetermined feature information about the gas.
 In another aspect, in any of the above aspects, the step of converting into the two-dimensional gas distribution image data may include a step of generating a plurality of concentration-thickness product values, each obtained by spatially integrating the three-dimensional gas distribution image in the three-dimensional space indicated by the three-dimensional gas distribution image data along the line-of-sight direction from the viewpoint position while varying the angle of the line-of-sight direction, and arranging the obtained concentration-thickness product values in two dimensions to generate concentration-thickness product image data.
 The program according to the embodiment is a program that causes a computer to perform machine learning data generation processing, wherein the machine learning data generation processing converts three-dimensional gas distribution image data, which shows the existence range of gas leaking from a gas leakage source into a three-dimensional space, into two-dimensional gas distribution image data observed from a predetermined viewpoint position, and generates the two-dimensional gas distribution image data and predetermined feature information about the gas as teacher data for machine learning.
 In another aspect, there may be provided a learning data set for performing machine learning on two-dimensional gas distribution image data generated from three-dimensional gas distribution image data, which shows the existence range of gas leaking from a gas leakage source into a three-dimensional space, together with predetermined feature information about the gas contained in the three-dimensional gas distribution image data, so as to infer the feature information, the learning data set including a plurality of data sets each consisting of two-dimensional gas distribution image data obtained by converting three-dimensional gas distribution image data generated by a three-dimensional fluid simulation into a two-dimensional image observed from a predetermined viewpoint position, and the feature information corresponding to that two-dimensional gas distribution image data.
 In another aspect, in any of the above aspects, the learning data set may be one in which the predetermined feature information is the position information of the gas leakage source or the direction of the gas flow.
 In another aspect, in any of the above aspects, the learning data set may include the two-dimensional gas distribution image data that does not contain information on the gas distribution at the leakage source.
 In another aspect, in any of the above aspects, the learning data set may be one in which the two-dimensional gas distribution image data is a moving image showing time-series changes of the gas distribution.
 ≪Supplement≫
 The embodiments described above each show a preferred specific example of the present invention. The numerical values, components, arrangement positions and connection forms of the components, processing methods, processing orders, and the like shown in the embodiments are examples and are not intended to limit the present invention. Among the components in the embodiments, those not described in the independent claims representing the highest-level concept of the present invention are described as optional components constituting more preferable forms.
 The order in which the above methods are executed is illustrative, given for the purpose of specifically explaining the present invention, and other orders may be used. Some of the above methods may also be executed simultaneously (in parallel) with other methods.
 For ease of understanding of the invention, the scale of the components in the figures referred to in each of the above embodiments may differ from the actual scale. The present invention is not limited to the description of each of the above embodiments, and can be modified as appropriate without departing from the gist of the present invention.
 The machine learning data generation device, machine learning data generation method, program, and learning data set according to the embodiments of the present disclosure are widely applicable to systems used for inspecting gas leaks in gas facilities.
1 Gas inspection management system
10 Gas visualization imaging device
20 Gas leak detection device
 21 Control unit (CPU)
  210 Gas leak position identification device
  211 Inspection image acquisition unit
  212 Gas distribution image acquisition unit
  213 Leakage source position acquisition unit
  214 Gas leakage source position identification unit
   2141 Machine learning unit
   2142 Learning model holding unit
  215 Determination result output unit
 22 Communication unit
 23 Storage unit
  231 Program
 24 Display unit
 25 Operation input unit
30, 30A, 30B, 30C, 30D, 30E, 30F, 30G, 30H Machine learning data generation device
 31 Control unit (CPU)
  311 3D structure modeling unit
  312 3D fluid simulation execution unit
  313 2D single-viewpoint gas distribution image conversion processing unit
  314B Light absorption rate image conversion unit
  315D Background image generation unit
  316D Light intensity image generation unit
  317G Gas distribution enhancement processing unit
 32 Communication unit
 33 Storage unit
  331 Program
 34 Display unit
 35 Operation input unit
40 Storage means

Claims (17)

  1.  A machine learning data generation method comprising:
     a step of converting three-dimensional gas distribution image data, which shows the existence range of gas leaking from a gas leakage source into a three-dimensional space, into two-dimensional gas distribution image data observed from a predetermined viewpoint position; and
     a step of generating the two-dimensional gas distribution image data and predetermined feature information about the gas as teacher data for machine learning.
  2.  The machine learning data generation method according to claim 1, further comprising a step of generating the three-dimensional gas distribution image data based on a three-dimensional fluid simulation.
  3.  The machine learning data generation method according to claim 1 or 2, wherein the machine learning constructs an inference model that takes as input two-dimensional gas distribution image data, captured from a predetermined viewpoint position and showing the existence range of gas leaked into a space, and identifies predetermined feature information about the gas.
  4.  The machine learning data generation method according to any one of claims 1 to 3, further comprising a step of modeling a three-dimensional structure in the three-dimensional space, wherein the three-dimensional fluid simulation calculates a three-dimensional gas distribution image in the three-dimensional space under the condition that the three-dimensional structure exists.
  5.  The machine learning data generation method according to any one of claims 1 to 4, wherein the step of converting into the two-dimensional gas distribution image data includes a step of generating a plurality of concentration-thickness product values, each obtained by spatially integrating the three-dimensional gas distribution image in the three-dimensional space indicated by the three-dimensional gas distribution image data along the line-of-sight direction from the viewpoint position while varying the angle of the line-of-sight direction, and arranging the obtained concentration-thickness product values in two dimensions to generate concentration-thickness product image data.
  6.  The machine learning data generation method according to claim 5, wherein the step of converting into the two-dimensional gas distribution image data includes a step of converting the concentration-thickness product values into light absorption rates based on the light absorption characteristics of the gas to generate light absorption rate distribution image data.
  7.  The machine learning data generation method according to claim 6, wherein the step of converting into the two-dimensional gas distribution image data includes a step of calculating light intensity distribution image data based on the light absorption rate distribution image data and temperature conditions including at least the temperature condition of the gas.
  8.  The machine learning data generation method according to any one of claims 1 to 7, wherein the predetermined feature information is the position information of the leakage source or the direction of the gas flow.
  9.  The machine learning data generation method according to any one of claims 1 to 8, wherein the two-dimensional gas distribution image data is a moving image showing time-series changes of the gas distribution.
  10.  A machine learning data generation device comprising:
     a two-dimensional single-viewpoint gas distribution image conversion processing unit that converts three-dimensional gas distribution image data of gas leaking from a gas leakage source into a three-dimensional space into two-dimensional gas distribution image data observed from a predetermined viewpoint position; and
     a teacher data generation unit that generates the two-dimensional gas distribution image data and predetermined feature information about the gas as teacher data for machine learning.
  11.  The machine learning data generation device according to claim 10, wherein the machine learning constructs an inference model that takes as input two-dimensional gas distribution image data in which the existence range of gas leaked into a space, captured from a predetermined viewpoint position, is shown as a gas region, and identifies predetermined feature information about the gas.
  12.  The machine learning data generation device according to claim 10 or 11, wherein the step of converting into the two-dimensional gas distribution image data includes a step of generating a plurality of concentration-thickness product values, each obtained by spatially integrating the three-dimensional gas distribution image in the three-dimensional space indicated by the three-dimensional gas distribution image data along the line-of-sight direction from the viewpoint position while varying the angle of the line-of-sight direction, and arranging the obtained concentration-thickness product values in two dimensions to generate concentration-thickness product image data.
  13.  A program that causes a computer to perform machine learning data generation processing, wherein the machine learning data generation processing:
     converts three-dimensional gas distribution image data, which shows the existence range of gas leaking from a gas leakage source into a three-dimensional space, into two-dimensional gas distribution image data observed from a predetermined viewpoint position; and
     generates the two-dimensional gas distribution image data and predetermined feature information about the gas as teacher data for machine learning.
  14.  A learning data set for performing machine learning on two-dimensional gas distribution image data generated from three-dimensional gas distribution image data, which shows the existence range of gas leaking from a gas leakage source into a three-dimensional space, together with predetermined feature information about the gas contained in the three-dimensional gas distribution image data, so as to infer the feature information, the learning data set comprising a plurality of data sets each consisting of:
     two-dimensional gas distribution image data obtained by converting three-dimensional gas distribution image data generated by a three-dimensional fluid simulation into a two-dimensional image observed from a predetermined viewpoint position; and
     the feature information corresponding to the two-dimensional gas distribution image data.
  15.  The learning data set according to claim 14, wherein the predetermined feature information is the position information of the gas leakage source or the direction of the gas flow.
  16.  The learning data set according to claim 14, including the two-dimensional gas distribution image data that does not contain information on the gas distribution at the leakage source.
  17.  The learning data set according to any one of claims 14 to 16, wherein the two-dimensional gas distribution image data is a moving image showing time-series changes of the gas distribution.
PCT/JP2021/019504 2020-06-05 2021-05-24 Machine learning data generation device, machine learning data generation method, program, and learning data set WO2021246210A1 (en)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
JP2020-098117 2020-06-05
JP2020098117 2020-06-05

Publications (1)

Publication Number Publication Date
WO2021246210A1 true WO2021246210A1 (en) 2021-12-09

Family

ID=78831039

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2021/019504 WO2021246210A1 (en) 2020-06-05 2021-05-24 Machine learning data generation device, machine learning data generation method, program, and learning data set

Country Status (1)

Country Link
WO (1) WO2021246210A1 (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20080246622A1 (en) * 2007-04-09 2008-10-09 Honeywell International Inc. Analyzing smoke or other emissions with pattern recognition
WO2018190004A1 (en) * 2017-04-14 2018-10-18 コニカミノルタ株式会社 Simulated gas leak image generation device and method, and gas sensing device
JP2020016527A (en) * 2018-07-25 2020-01-30 コニカミノルタ株式会社 Method of investigating stationary gas detection device installation site
JP2020063955A (en) * 2018-10-16 2020-04-23 千代田化工建設株式会社 Fluid leakage detection system, fluid leakage detection device, and learning device

Similar Documents

Publication Publication Date Title
US10031040B1 (en) Method and system for analyzing gas leak based on machine learning
CN102982560B (en) According to the surface segmentation of RGB and depth image
US8064684B2 (en) Methods and apparatus for visualizing volumetric data using deformable physical object
JP2012517652A (en) Fusion of 2D electro-optic images and 3D point cloud data for scene interpolation and registration performance evaluation
den Bieman et al. Deep learning video analysis as measurement technique in physical models
US20110069070A1 (en) Efficient visualization of object properties using volume rendering
JP2014049127A (en) Method for simulating hyperspectral images
Petkov et al. Interactive visibility retargeting in vr using conformal visualization
WO2021246210A1 (en) Machine learning data generation device, machine learning data generation method, program, and learning data set
CN106415198A (en) Image capturing simulation in a coordinate measuring apparatus
KR20230042706A (en) Neural network analysis of LFA test strips
CN107833631A (en) A kind of medical image computer-aided analysis method
CN112598792A (en) Multi-fractal quantification method and system for terrain complexity in three-dimensional scene
CN108431816A (en) Mathematical model about equipment carrys out control device
WO2020066209A1 (en) Method for generating data for particle analysis, program for generating data for particle analysis, and device for generating data for particle analysis
CN115828642A (en) Unity-based GPU (graphics processing Unit) accelerated X-ray digital imaging simulation method
Schoor et al. VR based visualization and exploration of plant biological data
WO2021251062A1 (en) Reflection-component-reduced image generating device, reflection component reduction inference model generating device, reflection-component-reduced image generating method, and program
He et al. IPC-Net: Incomplete point cloud classification network based on data augmentation and similarity measurement
JP7343336B2 (en) Inspection support device and inspection support method
WO2022264604A1 (en) Gas concentration feature quantity estimation device, gas concentration feature quantity estimation method, program, and gas concentration feature quantity inference model generation device
WO2021246130A1 (en) Gas leak location identification device, gas leak location identification system, gas leak location identification method, gas leak location estimation model generation device, gas leak location estimation model generation method, and program
Polyakov et al. Synthetic datasets for testing video security systems
Meuschke et al. Automatic Generation of Web-Based User Studies to Evaluate Depth Perception in Vascular Surface Visualizations.
SA520411041B1 (en) System and Method for Image Processing and Feature Recognition

Legal Events

Date Code Title Description
121 Ep: the EPO has been informed by WIPO that EP was designated in this application

Ref document number: 21817500

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: PCT application non-entry in European phase

Ref document number: 21817500

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: JP