WO2022097275A1 - Information processing system - Google Patents

Information processing system Download PDF

Info

Publication number
WO2022097275A1
Authority
WO
WIPO (PCT)
Prior art keywords
image
abnormality
mesh
degree
unit
Prior art date
Application number
PCT/JP2020/041543
Other languages
French (fr)
Japanese (ja)
Inventor
尚紀 北村
Original Assignee
株式会社インキュビット
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 株式会社インキュビット
Priority to JP2022560602A (national phase, JPWO2022097275A1)
Priority to PCT/JP2020/041543
Publication of WO2022097275A1

Links

Images

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis

Definitions

  • The present invention relates to an information processing system.
  • Patent Document 1 proposes detecting cracks in roads from images.
  • In Patent Document 1, a defect portion is extracted as a region using a neural-network segmentation AI, and a defect rate is obtained according to the size of that region.
  • However, such segmentation AI is very vulnerable to over-detection in region extraction, and even slight over-detection causes a normal portion to be recognized as a defect.
  • The present invention has been made in view of this background, and its object is to provide a technique capable of recognizing defects while taking over-detection into account.
  • The main invention of the present invention for solving the above problems is an information processing system comprising: a learning model storage unit that stores a first learning model, trained on a first image in which an abnormality is photographed, for detecting the abnormality, and a second learning model, trained on the first image and the abnormality information contained in it, for estimating the degree of the abnormality; an image input unit that accepts input of a captured second image; an abnormality detection unit that gives the second image to the first learning model and detects the abnormality information in the second image; an estimation unit that gives the detected abnormality information and the second image to the second learning model and estimates the degree of the abnormality in the second image; and an output unit that outputs the degree of the abnormality.
  • The present invention includes, for example, the following configurations.
  • [Item 1] An information processing system comprising: a learning model storage unit that stores a first learning model for detecting an abnormality from a first image in which the abnormality is photographed, and a second learning model for estimating the degree of the abnormality from abnormality information indicating the abnormality and the first image; an image input unit that accepts input of a captured second image; an abnormality detection unit that gives the second image to the first learning model and detects the abnormality in the second image; an estimation unit that gives the abnormality information indicating the detected abnormality and the second image to the second learning model and estimates the degree of the abnormality in the second image; and an output unit that outputs the degree of the abnormality.
  • [Item 2] The information processing system according to Item 1, further comprising an image division unit that divides an image into meshes.
  • The second learning model is a model created by learning the first meshes obtained by dividing the first image together with the abnormality information detected in each first mesh.
  • The image division unit divides the second image into second meshes.
  • For each second mesh, the estimation unit gives that mesh and the portion of the abnormality information detected from the second image that is contained in it to the second learning model, thereby estimating the degree of the abnormality for each mesh.
  • [Item 3] The information processing system according to Item 2, wherein the second image is a photograph of a road.
  • The image division unit detects a white line drawn on the road from the second image and sets the positions of the meshes with reference to the position of the white line.
  • [Item 4] The information processing system according to any one of Items 1 to 3, further comprising a target range detection unit that detects an inspection target range from an image; the target range detection unit detects the inspection target range from the second image.
  • The abnormality detection unit gives the detected inspection target range to the first learning model and detects the abnormality information.
  • The estimation unit gives the detected inspection target range and the abnormality information to the second learning model and estimates the degree of the abnormality.
  • The analyzer 20 of the present embodiment estimates the degree of abnormality from an image, using models obtained by machine learning on images.
  • In general, machine learning that uses only a segmentation model is vulnerable to over-detection, while using only a classification model does not achieve sufficient detection accuracy. The analyzer 20 of the present embodiment therefore performs estimation by combining segmentation and classification, which makes highly accurate estimation that is robust to over-detection possible.
  • FIG. 1 is a diagram illustrating an outline of an abnormality degree estimation process by the analyzer 20 of the present embodiment.
  • The analyzer 20 of the present embodiment comprises a detector 2 for detecting an abnormal portion 3 from a captured image 1, and a classifier 4 that estimates the degree of abnormality (abnormality level 5) from the captured image 1 and the abnormal portion 3.
  • The detector 2 includes, for example, a first learning model trained on captured images 1, using as the teacher signal the results of annotations specifying the abnormal portions 3 in those images. By giving an unknown captured image 1 to the detector 2, the abnormal portion 3 can then be detected from the captured image 1.
  • The classifier 4 includes a second learning model trained on the captured image 1 and the abnormal portion 3, using the abnormality level 5 as the teacher signal. In the present embodiment, the second learning model is trained not only on the captured image 1 but also on the abnormal portion 3. A minimal sketch of this two-model arrangement follows.
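The patent does not specify an implementation, but the wiring of the two models can be illustrated with a short sketch. The following Python (PyTorch-style; the names `detector` and `classifier` and the mask/level shapes are assumptions for illustration, not the patent's API) shows a detector producing an abnormal-portion mask that is then fed, together with the image, to a level classifier:

```python
import torch

def estimate_abnormality_level(image, detector, classifier):
    """Two-stage estimation: segment the abnormal portion, then classify its level.

    image:      (3, H, W) float tensor, a captured image 1
    detector:   segmentation model returning a (1, 1, H, W) abnormality logit map
    classifier: model taking the image concatenated with the mask and returning
                logits over abnormality levels (e.g. categories 1..10)
    """
    with torch.no_grad():
        mask = torch.sigmoid(detector(image.unsqueeze(0)))  # abnormal portion 3
        x = torch.cat([image.unsqueeze(0), mask], dim=1)    # image 1 + mask as one input
        logits = classifier(x)                              # abnormality level 5
    return mask[0], int(logits.argmax(dim=1))
```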
  • FIG. 2 is a diagram illustrating an outline of an estimation process of the degree of cracking of a road.
  • The analyzer 20 of the present embodiment gives a captured image 1 of a road to the detector 2 to detect an abnormal portion 3 (hereinafter also referred to as a cracked portion 3).
  • By giving the captured image 1 of the road to the classifier 4 together with the detected cracked portion 3, the abnormality level 5 (hereinafter also referred to as the number of cracks 5) is estimated.
  • When training the second learning model (hereinafter also referred to as the classification model) used by the classifier 4, the number of cracks 5 (which can be category data divided into a predetermined number of levels, for example 1 to 10) serves as the teacher data, and the captured image 1 and the cracked portion 3 specified by annotation serve as the input data. Instead of, or in addition to, the annotated cracked portion 3, the cracked portion 3 (a detected value) obtained by giving the captured image 1 to the detector 2 may be used as input data; this allows the over-detection behavior of the detector 2 itself to be learned. Even when the detection results of the detector 2 are not used to train the classification model, training on the captured image 1 together with the cracked portion 3 still works to suppress the effect of over-detection. A sketch of how such training pairs could be assembled is shown below.
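As a concrete illustration of this training-data choice, the sketch below (a hypothetical helper; the patent fixes no API) builds classifier samples either from the annotated crack regions or from the detector's own output, so the classifier can also see the detector's over-detections during training:

```python
import torch

def build_classifier_samples(images, annotated_masks, levels,
                             detector=None, use_detected=False):
    """Yield (input, label) training pairs for the classification model.

    images:          list of (3, H, W) tensors (captured images 1)
    annotated_masks: list of (1, H, W) tensors (annotated cracked portions 3)
    levels:          list of int crack-count categories (teacher data, e.g. 1..10)
    detector:        optional trained crack detector; when use_detected is True,
                     its output replaces the annotation as the input mask
    """
    for img, ann, level in zip(images, annotated_masks, levels):
        mask = ann
        if use_detected and detector is not None:
            with torch.no_grad():
                mask = torch.sigmoid(detector(img.unsqueeze(0)))[0]
        yield torch.cat([img, mask], dim=0), level
```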
  • The analyzer 20 of the present embodiment may be a general-purpose computer such as a workstation or a personal computer, or may be logically realized by cloud computing.
  • The analyzer 20 may be realized as a computer directly operated by a user who intends to inspect roads, or as a server device accessed from a user terminal operated by the user.
  • FIG. 3 is a diagram showing a hardware configuration example of the analyzer 20.
  • The configuration shown in the figure is an example; other configurations are possible.
  • The analyzer 20 includes a CPU 201, a memory 202, a storage device 203, a communication interface 204, an input device 205, and an output device 206.
  • The storage device 203 stores various data and programs and is, for example, a hard disk drive, a solid state drive, or a flash memory.
  • The communication interface 204 is an interface for connecting to a communication network, for example an adapter for connecting to Ethernet (registered trademark), a modem for connecting to a public telephone network, a wireless communication device for performing wireless communication, or a USB (Universal Serial Bus) or RS232C connector for serial communication.
  • The input device 205 is, for example, a keyboard, a mouse, a touch panel, buttons, or a microphone for inputting data.
  • The output device 206 is, for example, a display, a printer, or a speaker that outputs data.
  • Each functional unit of the analyzer 20 described below is realized by the CPU 201 reading a program stored in the storage device 203 into the memory 202 and executing it, and each storage unit of the analyzer 20 is realized as part of the storage area provided by the memory 202 and the storage device 203.
  • FIG. 4 is a diagram showing a software configuration example of the analyzer 20.
  • The analyzer 20 comprises a crack detector 2, a classifier 4, an area detector 6, an image input unit 21, an image division unit 22, a learning processing unit 23, an output unit 24, a learning model storage unit 25, an image storage unit 26, and an annotation storage unit 27.
  • The image input unit 21 accepts input of captured images 1 in which the inspection target is photographed.
  • The image input unit 21 may, for example, accept image data as files, or may control a camera (not shown) to acquire images taken by it. In the present embodiment, the image input unit 21 accepts captured images 1 both for learning and for inspection, but the learning-side and inspection-side inputs may also be implemented as separate functional units.
  • For example, the image input unit 21 may accept image files previously taken with a camera at learning time, and control a camera to capture images in real time at inspection time.
  • The image input unit 21 can register received images, for example as files, in the image storage unit 26.
  • The learning model storage unit 25 stores learning models trained by machine learning.
  • The learning model storage unit 25 stores three learning models: the area detection model 251, the crack detection model 252, and the classification model 253.
  • The learning processing unit 23 performs training by machine learning.
  • The learning processing unit 23 trains each learning model stored in the learning model storage unit 25.
  • The learning processing unit 23 can, for example, accept from the user the designation (annotation) of regions in a captured image 1 where an abnormality such as a crack has occurred, and perform training with it.
  • The learning processing unit 23 can register the received annotation information in the annotation storage unit 27 in association with the captured image 1.
  • The area detection model 251 is a learning model for extracting the inspection target range from an image.
  • The learning processing unit 23 accepts input (annotation) of the inspection target area in a captured image 1 for learning, and can update the area detection model 251 by learning the information indicating the input area together with the captured image 1. For example, when a median strip or a sidewalk appears in the image in addition to the road, annotation designating the road area in the captured image 1 is accepted so that only the road is identified as the detection target, and the area detection model 251 is updated by learning the designated area and the captured image 1.
  • The area detector 6 detects the inspection target range in a captured image 1 for inspection.
  • The area detector 6 can specify the inspection target area by giving the captured image 1 for inspection to the area detection model 251. Alternatively, the area detector 6 may specify the inspection target area without using a learning model, for example from information indicating the inspection target range in the image (such as a polygon vertex list), as sketched below.
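For the non-learning variant, a polygon vertex list can be rasterized into a boolean mask of the inspection target area. A minimal sketch (the function name and the use of matplotlib's Path are illustrative assumptions):

```python
import numpy as np
from matplotlib.path import Path

def region_mask_from_polygon(img_h, img_w, vertices):
    """Rasterize a polygon vertex list [(x, y), ...] into an (H, W) boolean
    mask marking the inspection target area."""
    ys, xs = np.mgrid[0:img_h, 0:img_w]
    points = np.column_stack([xs.ravel(), ys.ravel()])
    inside = Path(vertices).contains_points(points)
    return inside.reshape(img_h, img_w)
```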
  • The crack detection model 252 is a learning model, trained on captured images 1 in which abnormalities are photographed, for detecting an abnormality from a captured image 1; in the present embodiment, it is a learning model for detecting cracks in images.
  • The learning processing unit 23 accepts input (annotation) of the cracked regions in captured images 1 for learning, and can update the crack detection model 252 by learning the information indicating the input regions together with the captured images 1.
  • The learning processing unit 23 can also restrict learning to the region of the captured image 1 specified by the area detector 6.
  • The crack detector 2 (corresponding to the abnormality detection unit of the present invention) can detect abnormalities in an unknown captured image 1 by giving it to the crack detection model 252.
  • The classification model 253 is a learning model, trained on information indicating the abnormalities contained in captured images 1 together with the captured images 1 themselves, for estimating the degree of abnormality in a captured image 1.
  • In the present embodiment, the classification model 253 is a learning model for detecting the degree of cracking (a deterioration level according to the number of cracks).
  • The learning processing unit 23 can update the classification model 253 by machine learning with the degree of cracking as the teacher data and the information indicating the cracked regions in the captured image 1, together with the captured image 1, as the input data.
  • The information indicating the cracks may be accepted through user annotation, or the result of giving the captured image 1 to the crack detection model 252 may be used.
  • The classifier 4 (corresponding to the estimation unit of the present invention) can determine the degree of cracking by giving a captured image 1 for inspection to the classification model 253.
  • In the present embodiment, the captured image 1 for inspection is passed to the crack detector 2 to detect cracks, and the detection result and the captured image 1 are given to the classification model 253 to obtain the degree of cracking.
  • The image division unit 22 divides the captured image 1 into meshes.
  • The mesh size can be a predetermined value.
  • A mesh may be a square, a rectangle, or a polygon such as a hexagon or an octagon.
  • The image division unit 22 can set meshes of a predetermined size vertically and horizontally from a reference point of the captured image 1 (for example, its upper left corner).
  • In the present embodiment, the meshes for a road are set with reference to the white lines of the road. That is, for a captured image 1 in which the extending direction of the road (the direction in which vehicles travel) is the vertical direction of the image, the image division unit 22 detects white lines from the captured image 1 and sets meshes at predetermined intervals in the road-crossing direction (the left-right direction of the image), taking as the reference the white line detected at the rightmost (or leftmost) position in the image.
  • In the extending (longitudinal) direction of the road, a reference point such as an intersection can be detected and the meshes arranged from that point, or the meshes can simply be set with reference to the lower (or upper) edge of the image in the vertical direction (the extending direction of the road).
  • The image division unit 22 can also refrain from setting meshes on portions of the captured image 1 outside the inspection target area detected by the area detector 6.
  • In addition, the image division unit 22 can divide the shape obtained by plotting the crack regions detected by the crack detector 2 onto the captured image 1, using the same mesh as the one set for the captured image 1.
  • The classifier 4 can estimate the degree of cracking for each mesh produced by the image division unit 22. That is, the captured image 1 divided into meshes and the crack regions divided into the same meshes are applied, mesh by mesh, to the classification model 253, and the degree of cracking (degree of deterioration) is estimated for each mesh. This makes it possible to show the degree of cracking in mesh units on the captured image 1 of the road, as in the example of FIG. 2.
  • The output unit 24 outputs the degree of abnormality.
  • In the present embodiment, the degree of cracking (the degree of road deterioration according to the number of cracks) is output.
  • Since the classifier 4 estimates the degree of cracking for each mesh, the output unit 24 can display the per-mesh degree of cracking on the captured image 1 used as the background, for example by color, as shown at the bottom of FIG. 2.
  • FIG. 5 is a diagram illustrating the operation of the analyzer 20 of the present embodiment.
  • The analyzer 20 acquires a captured image 1 for inspection (S31).
  • The captured image 1 may be received as an input file, or a camera (not shown) may be controlled to capture the image.
  • The analyzer 20 gives the captured image 1 to the area detection model 251 to detect the inspection target area from the captured image 1 (S32), and gives the inspection target portion of the captured image 1 to the crack detection model 252 to detect cracks from it (S33).
  • FIG. 6 is a diagram illustrating crack detection. As shown in FIG. 6, the detector 2 detects cracks 3 in the captured image. In the example of FIG. 6 the detected cracks 3 are output superimposed on the captured image 1, but crack data indicating only the locations of the cracks 3 can also be acquired.
  • FIG. 7 is a diagram showing a state in which a mesh is set in the captured image 1.
  • As shown in FIG. 7, the image division unit 22 can set the mesh 7 on the captured image (S34).
  • The analyzer 20 can, for example, arrange meshes from the edge of the image, but as shown in FIG. 7, the image division unit 22 can detect the white lines 8 drawn on the road in the captured image 1 and arrange the meshes with reference to the positions of the white lines 8.
  • The analyzer 20 then divides the inspection target area of the captured image 1 according to the set mesh (S35).
  • FIG. 8 is a diagram showing how the captured image 1 is divided into meshes.
  • As shown in FIG. 8, the image division unit 22 can divide the captured image 1 according to the mesh set in step S34 to create divided images 1M.
  • The image division unit 22 may, for example, extract the portion corresponding to each mesh from the captured image 1 to create the divided images 1M.
  • Next, the analyzer 20 divides the detected crack data (which can include the positions of the cracks on the captured image 1) according to the mesh (S36).
  • FIG. 9 is a diagram showing how the data of the crack 3 is divided into meshes.
  • The image division unit 22 can create divided data 3M indicating only the crack 3 data contained within the position of each mesh.
  • The image division unit 22 may divide the crack 3 data as an image, or may extract as the data the edges of the polygons representing the cracks 3 that are contained in each mesh of the captured image 1.
  • The analyzer 20 gives, for each mesh, the divided image 1M obtained by dividing the captured image 1 and the divided data 3M obtained by dividing the crack data to the classification model 253, and estimates the degree of cracking (S37). A sketch of this per-mesh loop follows below.
  • FIG. 10 is a diagram showing how the degree of cracking is estimated. As shown in FIG. 10, the classifier 4 gives the divided image 1M and the divided data 3M to the classification model 253 and classifies each mesh into a category indicating the degree of cracking, for example "1" or "low level".
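A minimal sketch of steps S35 to S37 (the box format, tensor shapes, and the assumption that the classification model accepts the crop sizes as given are all illustrative):

```python
import torch

def estimate_per_mesh(image, crack_mask, meshes, classifier):
    """Apply the classification model 253 mesh by mesh (S35-S37).

    image:      (3, H, W) tensor, captured image 1
    crack_mask: (1, H, W) tensor, detected cracks 3 plotted on the image
    meshes:     list of (top, left, bottom, right) pixel boxes
    classifier: model taking a (1, 4, h, w) crop and returning level logits
    """
    degrees = {}
    for (t, l, b, r) in meshes:
        img_1m = image[:, t:b, l:r]        # divided image 1M
        data_3m = crack_mask[:, t:b, l:r]  # divided crack data 3M
        x = torch.cat([img_1m, data_3m], dim=0).unsqueeze(0)
        degrees[(t, l, b, r)] = int(classifier(x).argmax(dim=1))
    return degrees
```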
  • As described above, the analyzer 20 of the present embodiment can estimate the degree of cracking based on the captured image 1 and the cracked portions detected from it. By giving not only the locations detected as cracks but also the captured image 1 itself as input data, situations in which something is recognized as a crack only because of over-detection can be taken into account when estimating the degree of cracking.
  • FIG. 11 is a diagram illustrating a first process of setting the mesh 8 by the image segmentation unit 22.
  • The first process simply sets the meshes 8 with reference to the white lines of the road.
  • FIG. 12 is a diagram showing how the mesh 8 is set.
  • The image division unit 22 detects white lines from the captured image 1 (S401). A general image-analysis detection technique can be adopted for the white line detection. When no white line can be recognized in the captured image 1 (S402: NO), the image division unit 22 can treat a line a predetermined distance (which may be 0) inside the edge of the captured image 1 as the white line (S403).
  • When white lines are detected from the captured image 1 (S402: YES) and a white line is interrupted partway (S404: YES), the image division unit 22 extends the white line (S405).
  • Next, the image division unit 22 sets the boxes of the meshes 8 toward the left and the top of the image, taking the lower right of the white lines in the captured image 1 as the reference (S406).
  • FIG. 12 shows the lines (B) constituting the rectangles of the meshes 8.
  • The image division unit 22 arranges the meshes 8 leftward and upward, and when the left end of a mesh 8 goes beyond the leftmost white line in the captured image 1 (S407: YES), the size of that mesh 8 is adjusted so that its left end moves onto the white line (S408). In the example of FIG. 12, the size of the mesh 8 is adjusted so that the left end (C1) of the leftmost mesh 8 is aligned with the leftmost white line (A3).
  • Similarly, the image division unit 22 can adjust the size of the meshes 8 so that the upper end of the topmost mesh 8 moves to the upper end of the captured image 1. In the example of FIG. 12, the size of the mesh 8 is adjusted so that the upper end (C2) of the topmost mesh 8 is aligned with the upper end of the captured image 1.
  • In this way, the portion between the white lines drawn on the road can be divided into meshes 8. A geometric sketch of this layout appears below.
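Under the simplifying assumption that the detected reference white lines are vertical and already reduced to x-coordinates, the layout of S406 to S408 can be sketched as follows (all names hypothetical):

```python
def set_meshes_from_white_lines(img_h, right_line_x, left_line_x, mesh_size):
    """Lay out mesh boxes leftward and upward from the lower right of the
    rightmost white line (S406), clipping the leftmost column to the left
    white line (S407/S408, C1 in FIG. 12) and the topmost row to the image
    top (C2 in FIG. 12). Returns (top, left, bottom, right) boxes."""
    boxes = []
    bottom = img_h
    while bottom > 0:
        top = max(bottom - mesh_size, 0)                # clip the topmost row
        right = right_line_x
        while right > left_line_x:
            left = max(right - mesh_size, left_line_x)  # clip the leftmost column
            boxes.append((top, left, bottom, right))
            right = left
        bottom = top
    return boxes
```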
  • FIG. 13 is a diagram illustrating a second process of setting the mesh 8 by the image segmentation unit 22.
  • FIG. 14 is a diagram showing how the mesh 8 is set in the second process.
  • In practice, a white line may not be a straight line, owing to camera tilt at the time of shooting, variation in the thickness of the white line, blurring of the white line, and so on.
  • In the second process, therefore, a mesh 8 is shifted to the left one pixel at a time until the number of white-line pixels it contains falls to or below a predetermined value, as sketched below.
  • In FIG. 14, the right end (D1), which is the left-right reference position of the second mesh 8D from the bottom at the right, is shown being shifted in the direction of the arrow (D2) according to the pixels indicating the white line A1.
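A sketch of that per-pixel adjustment, assuming a boolean mask of white-line pixels and an assumed pixel-count threshold:

```python
import numpy as np

def shift_mesh_off_white_line(white_mask, box, max_white_pixels=10):
    """Shift a mesh box left one pixel at a time until the white-line pixels
    it contains fall to max_white_pixels or fewer (second process).

    white_mask: (H, W) boolean array marking white-line pixels
    box:        (top, left, bottom, right) mesh box
    """
    top, left, bottom, right = box
    while left > 0 and white_mask[top:bottom, left:right].sum() > max_white_pixels:
        left -= 1
        right -= 1
    return (top, left, bottom, right)
```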
  • FIG. 15 is a diagram illustrating a third process of setting the mesh 8 by the image segmentation unit 22.
  • In the third process, a step of continuing the meshes 8 onto another captured image 1 adjacent to the current one (a process of setting the meshes 8 on the adjacent captured image 1; S431) is added.
  • FIG. 16 is a diagram showing how the meshes 8 are set in the third process. As shown in FIG. 16, the mesh 8E at the upper end is set so as to extend beyond the upper end of the captured image 1 into the adjacent captured image 1'. In this way, the image division unit 22 can set the meshes 8 across a series of consecutive captured images 1.
  • In this case, the image division unit 22 arranges a plurality of captured images 1, determines whether they share the same region, and, if such a shared region exists, arranges the images so that the shared regions overlap, then sets the meshes using the arranged captured images 1. At this time, the image division unit 22 may also create a composite image combining the arranged captured images 1 and set the meshes 8 on the created composite image.
  • In the embodiment above, the analyzer 20 estimates the degree of road cracking, but it can be applied widely to detecting some defect from an image and evaluating the risk related to that defect.
  • For example, the analyzer 20 of the present embodiment can detect cracks in structures such as bridges and buildings from captured images of those structures, and evaluate the degree of the cracks.
  • The analyzer 20 of the present embodiment can also use map imagery such as aerial or satellite photographs (including infrared data and various sensor data) as the captured image, for example to detect valley areas that can cause landslides and to evaluate landslide risk.
  • In the present embodiment, the classification model 253 is a learning model that categorizes the degree of cracking, but it may instead be a regression model that estimates the number of cracks. In that case, the classifier 4 can estimate the number of cracks.
  • In the present embodiment, the number of cracks indicates the degree of cracking, but the invention is not limited to this; the depth of cracks may be learned as the degree of cracking.
  • In this case, crack depth data may be input directly, or information indicating a level according to the crack depth may be input.
  • The severity of the cracks may also be used as the degree of cracking.
  • In this case, an expert can view the image and determine the severity of the cracks, and the determined severity can be input for learning.
  • In the present embodiment, the degree of road cracking was estimated, but the approach can likewise be applied to estimating the degree of cracking of a bridge or a tunnel. Besides cracks, it can be applied to estimating the degree of corrosion, water leakage, free lime, damage, and the like in building materials.
  • When inspecting a bridge, only the legs of the bridge can be extracted as the inspection target area and divided into meshes to estimate the degree of cracking.
  • For a tunnel, cracks may be detected directly from a captured image 1 of the tunnel's inner surface without recognizing white lines.
  • The reference for the meshes arranged in the longitudinal direction of the tunnel can be set at the tunnel entrance.
  • In the present embodiment, the captured image is analyzed and the meshes are set automatically, but the positions of the meshes may instead be given externally.
  • In the present embodiment, learning and inference are performed using captured images 1 taken by a camera, but the invention is not limited to this: a composite image obtained by combining a plurality of captured images 1 may be used, and an orthorectified (orthophoto) version of the captured image 1 may also be used.
  • In the present embodiment, the crack detector 2 and the classifier 4 perform supervised learning in which annotation results are given as the teacher signal, but at least one of the crack detector 2 and the classifier 4 may instead be trained without such supervision. For example, at least one of them can learn normal captured images 1 (in which no abnormality is photographed) to create a generator (for example, an autoencoder), use the created generator to reproduce a normal image based on the captured image 1 for inspection, and detect the difference between the reproduced image and the captured image 1 for inspection as an abnormality; a sketch of this variant follows.
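A minimal sketch of this reconstruction-based variant, assuming an autoencoder trained only on normal images and an assumed per-pixel difference threshold:

```python
import torch

def detect_by_reconstruction(autoencoder, image, threshold=0.1):
    """Reproduce a 'normal' version of the inspection image with a generator
    trained on normal captured images 1, and flag strongly differing pixels
    as abnormal.

    autoencoder: generator (e.g. an autoencoder) trained on normal images
    image:       (3, H, W) tensor, captured image 1 for inspection
    threshold:   assumed per-pixel mean absolute difference threshold
    """
    with torch.no_grad():
        reconstructed = autoencoder(image.unsqueeze(0))[0]
    diff = (image - reconstructed).abs().mean(dim=0)  # (H, W) difference map
    return diff > threshold                           # boolean abnormality map
```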

Landscapes

  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Analysis (AREA)

Abstract

[Problem] To make it possible to recognize defects while taking over-detection into account. [Solution] This information processing system comprises: a learning model storage unit which stores a first learning model for detecting an abnormality from a first image in which the abnormality is captured, and a second learning model for estimating the degree of the abnormality from the first image and abnormality information indicating the abnormality; an image input unit which receives input of a captured second image; an abnormality detection unit which gives the second image to the first learning model and detects the abnormality in the second image; an estimation unit which gives the second image and the abnormality information indicating the detected abnormality to the second learning model and estimates the degree of the abnormality in the second image; and an output unit which outputs the degree of the abnormality.

Description

Information processing system
 The present invention relates to an information processing system.
 In recent years, a machine learning technique called deep learning has achieved far higher accuracy in fields such as image recognition and speech recognition than was possible before its appearance. These techniques are also being used to automate the visual inspection work of infrastructure inspection. Patent Document 1 likewise proposes detecting cracks and similar defects in roads from images.
Japanese Patent No. 6678267
 In Patent Document 1, a defect portion is extracted as a region using a neural-network segmentation AI, and a defect rate is obtained according to the size of the region. However, such a segmentation-AI method is very vulnerable to over-detection in region extraction, and even slight over-detection causes a normal portion to be recognized as a defect.
 The present invention has been made in view of this background, and its object is to provide a technique capable of recognizing defects while taking over-detection into account.
 The main invention of the present invention for solving the above problems is an information processing system comprising: a learning model storage unit that stores a first learning model, trained on a first image in which an abnormality is photographed, for detecting the abnormality, and a second learning model, trained on the first image and the abnormality information contained in it, for estimating the degree of the abnormality; an image input unit that accepts input of a captured second image; an abnormality detection unit that gives the second image to the first learning model and detects the abnormality information in the second image; an estimation unit that gives the detected abnormality information and the second image to the second learning model and estimates the degree of the abnormality in the second image; and an output unit that outputs the degree of the abnormality.
 Other problems disclosed by the present application and their solutions will be clarified in the description of the embodiments and the drawings.
 According to the present invention, defects can be recognized with over-detection taken into account.
Brief description of the drawings:
FIG. 1 illustrates an outline of the abnormality degree estimation process performed by the analyzer 20 of the present embodiment.
FIG. 2 illustrates an outline of the process of estimating the degree of road cracking.
FIG. 3 shows a hardware configuration example of the analyzer 20.
FIG. 4 shows a software configuration example of the analyzer 20.
FIG. 5 illustrates the operation of the analyzer 20 of the present embodiment.
FIG. 6 illustrates crack detection.
FIG. 7 shows how a mesh is set on the captured image 1.
FIG. 8 shows how the captured image 1 is divided into meshes.
FIG. 9 shows how the data of cracks 3 are divided into meshes.
FIG. 10 shows how the degree of cracking is estimated.
FIG. 11 illustrates the first process by which the image division unit 22 sets the meshes 8.
FIG. 12 shows how the meshes 8 are set.
FIG. 13 illustrates the second process by which the image division unit 22 sets the meshes 8.
FIG. 14 shows how the meshes 8 are set in the second process.
FIG. 15 illustrates the third process by which the image division unit 22 sets the meshes 8.
FIG. 16 shows how the meshes 8 are set in the third process.
<Outline of the invention>
The contents of the embodiments of the present invention will be listed and described. The present invention includes, for example, the following configurations.
[Item 1]
An information processing system comprising:
a learning model storage unit that stores a first learning model for detecting an abnormality from a first image in which the abnormality is photographed, and a second learning model for estimating the degree of the abnormality from abnormality information indicating the abnormality and the first image;
an image input unit that accepts input of a captured second image;
an abnormality detection unit that gives the second image to the first learning model and detects the abnormality in the second image;
an estimation unit that gives the abnormality information indicating the detected abnormality and the second image to the second learning model and estimates the degree of the abnormality in the second image; and
an output unit that outputs the degree of the abnormality.
[Item 2]
The information processing system according to Item 1, further comprising
an image division unit that divides an image into meshes,
wherein the second learning model is a model created by learning the first meshes obtained by dividing the first image together with the abnormality information detected in each first mesh,
the image division unit divides the second image into second meshes, and
the estimation unit, for each second mesh, gives that mesh and the portion of the abnormality information detected from the second image that is contained in it to the second learning model, thereby estimating the degree of the abnormality for each mesh.
[Item 3]
The information processing system according to Item 2, wherein
the second image is a photograph of a road, and
the image division unit detects a white line drawn on the road from the second image and sets the positions of the meshes with reference to the position of the white line.
[Item 4]
The information processing system according to any one of Items 1 to 3, further comprising
a target range detection unit that detects an inspection target range from an image,
wherein the target range detection unit detects the inspection target range from the second image,
the abnormality detection unit gives the detected inspection target range to the first learning model and detects the abnormality information, and
the estimation unit gives the detected inspection target range and the abnormality information to the second learning model and estimates the degree of the abnormality.
<Overview of the system>
 Hereinafter, the analyzer 20 according to an embodiment of the present invention will be described. The analyzer 20 of the present embodiment estimates the degree of abnormality from an image, using models obtained by machine learning on images. In general, machine learning that uses only a segmentation model is vulnerable to over-detection, while using only a classification model does not achieve sufficient detection accuracy; the analyzer 20 of the present embodiment therefore performs estimation by combining segmentation and classification, which makes highly accurate estimation that is robust to over-detection possible. FIG. 1 illustrates an outline of the abnormality degree estimation process performed by the analyzer 20. The analyzer 20 comprises a detector 2 for detecting an abnormal portion 3 from a captured image 1, and a classifier 4 that estimates the degree of abnormality (abnormality level 5) from the captured image 1 and the abnormal portion 3. The detector 2 includes, for example, a first learning model trained on captured images 1 using, as the teacher signal, the results of annotations specifying the abnormal portions 3 in those images; by giving an unknown captured image 1 to the detector 2, the abnormal portion 3 can be detected from it. The classifier 4 includes a second learning model trained on the captured image 1 and the abnormal portion 3 using the abnormality level 5 as the teacher signal. In the present embodiment, the second learning model is trained not only on the captured image 1 but also on the abnormal portion 3.
 The analyzer 20 of the present embodiment is assumed to estimate the degree of road cracking for use in road inspection. FIG. 2 illustrates an outline of the process of estimating the degree of road cracking. The analyzer 20 gives a captured image 1 of a road to the detector 2 to detect an abnormal portion 3 (hereinafter also referred to as a cracked portion 3). By giving the captured image 1 of the road to the classifier 4 together with the detected cracked portion 3, the abnormality level 5 (hereinafter also referred to as the number of cracks 5) is estimated. If the degree of cracking were estimated from the captured image 1 alone (or from the cracked portion 3 alone), over-detecting a streak such as the edge of a manhole would cause the degree of cracking (the degree of road deterioration) to be evaluated as worse than it actually is; by using both the captured image 1 and the cracked portion 3, estimation that takes over-detection into account becomes possible.
 When training the second learning model (hereinafter also referred to as the classification model) used by the classifier 4, the number of cracks 5 (which can be category data divided into a predetermined number of levels, for example 1 to 10) serves as the teacher data, and the captured image 1 and the cracked portion 3 specified by annotation serve as the input data. Instead of, or in addition to, the annotated cracked portion 3, the cracked portion 3 (a detected value) obtained by giving the captured image 1 to the detector 2 may be used as input data; this allows the over-detection behavior of the detector 2 to be learned. Even when the detection results of the detector 2 are not used to train the classification model, training on the captured image 1 together with the cracked portion 3 still works to suppress the effect of over-detection.
<Structure of analyzer 20>
 The analyzer 20 of the present embodiment may be a general-purpose computer such as a workstation or a personal computer, or may be logically realized by cloud computing. The analyzer 20 may be realized as a computer directly operated by a user who intends to inspect roads, or as a server device accessed from a user terminal operated by the user.
 FIG. 3 shows a hardware configuration example of the analyzer 20. The illustrated configuration is an example, and other configurations are possible. The analyzer 20 includes a CPU 201, a memory 202, a storage device 203, a communication interface 204, an input device 205, and an output device 206. The storage device 203 stores various data and programs and is, for example, a hard disk drive, a solid state drive, or a flash memory. The communication interface 204 is an interface for connecting to a communication network, for example an adapter for connecting to Ethernet (registered trademark), a modem for connecting to a public telephone network, a wireless communication device for performing wireless communication, or a USB (Universal Serial Bus) or RS232C connector for serial communication. The input device 205 is, for example, a keyboard, a mouse, a touch panel, buttons, or a microphone for inputting data. The output device 206 is, for example, a display, a printer, or a speaker that outputs data. Each functional unit of the analyzer 20 described below is realized by the CPU 201 reading a program stored in the storage device 203 into the memory 202 and executing it, and each storage unit of the analyzer 20 is realized as part of the storage area provided by the memory 202 and the storage device 203.
 FIG. 4 shows a software configuration example of the analyzer 20. The analyzer 20 comprises a crack detector 2, a classifier 4, an area detector 6, an image input unit 21, an image division unit 22, a learning processing unit 23, an output unit 24, a learning model storage unit 25, an image storage unit 26, and an annotation storage unit 27.
 The image input unit 21 accepts input of captured images 1 in which the inspection target is photographed. The image input unit 21 may, for example, accept image data as files, or may control a camera (not shown) to acquire images taken by it. In the present embodiment, the image input unit 21 accepts captured images 1 both for learning and for inspection, but the learning-side and inspection-side inputs may also be implemented as separate functional units. For example, the image input unit 21 may accept image files previously taken with a camera at learning time, and control a camera to capture images in real time at inspection time. The image input unit 21 can register received images, for example as files, in the image storage unit 26.
 The learning model storage unit 25 stores learning models trained by machine learning. The learning model storage unit 25 stores three learning models: the area detection model 251, the crack detection model 252, and the classification model 253.
 The learning processing unit 23 performs training by machine learning. The learning processing unit 23 trains each learning model stored in the learning model storage unit 25. The learning processing unit 23 can, for example, accept from the user the designation (annotation) of regions in a captured image 1 where an abnormality such as a crack has occurred, and perform training with it. The learning processing unit 23 can register the received annotation information in the annotation storage unit 27 in association with the captured image 1.
 The area detection model 251 is a learning model for extracting the inspection target range from an image. The learning processing unit 23 accepts input (annotation) of the inspection target area in a captured image 1 for learning, and can update the area detection model 251 by learning the information indicating the input area together with the captured image 1. For example, when a median strip or a sidewalk appears in the image in addition to the road, annotation designating the road area in the captured image 1 is accepted so that only the road is identified as the detection target, and the area detection model 251 is updated by learning the designated area and the captured image 1.
 The area detector 6 detects the inspection target range in a captured image 1 for inspection. The area detector 6 can specify the inspection target area by giving the captured image 1 for inspection to the area detection model 251. Alternatively, the area detector 6 may specify the inspection target area without using a learning model, for example from information indicating the inspection target range in the image (such as a polygon vertex list).
 The crack detection model 252 is a learning model, trained on captured images 1 in which abnormalities are photographed, for detecting an abnormality from a captured image 1; in the present embodiment, it is a learning model for detecting cracks in images. The learning processing unit 23 accepts input (annotation) of the cracked regions in captured images 1 for learning, and can update the crack detection model 252 by learning the information indicating the input regions together with the captured images 1. The learning processing unit 23 can also restrict learning to the region of the captured image 1 specified by the area detector 6.
 The crack detector 2 (corresponding to the abnormality detection unit of the present invention) can detect abnormalities in an unknown captured image 1 by giving it to the crack detection model 252.
 The classification model 253 is a learning model, trained on information indicating the abnormalities contained in captured images 1 together with the captured images 1 themselves, for estimating the degree of abnormality in a captured image 1. In the present embodiment, the classification model 253 is a learning model for detecting the degree of cracking (a deterioration level according to the number of cracks). The learning processing unit 23 can update the classification model 253 by machine learning with the degree of cracking as the teacher data and the information indicating the cracked regions in the captured image 1, together with the captured image 1, as the input data. The information indicating the cracks may be accepted through user annotation, or the result of giving the captured image 1 to the crack detection model 252 may be used.
 The classifier 4 (corresponding to the estimation unit of the present invention) can determine the degree of cracking by giving a captured image 1 for inspection to the classification model 253. In the present embodiment, the captured image 1 for inspection is passed to the crack detector 2 to detect cracks, and the detection result and the captured image 1 are given to the classification model 253 to obtain the degree of cracking.
 The image division unit 22 divides the captured image 1 into meshes. The mesh size can be a predetermined value. A mesh may be a square, a rectangle, or a polygon such as a hexagon or an octagon. The image division unit 22 can set meshes of a predetermined size vertically and horizontally from a reference point of the captured image 1 (for example, its upper left corner). In the present embodiment, the meshes for a road are set with reference to the white lines of the road. That is, for a captured image 1 in which the extending direction of the road (the direction in which vehicles travel) is the vertical direction of the image, the image division unit 22 detects white lines from the captured image 1 and sets meshes at predetermined intervals in the road-crossing direction (the left-right direction of the image), taking as the reference the white line detected at the rightmost (or leftmost) position in the image. In the extending (longitudinal) direction of the road, a reference point such as an intersection can be detected and the meshes arranged from that point, or the meshes can simply be set with reference to the lower (or upper) edge of the image in the vertical direction (the extending direction of the road). The image division unit 22 can also refrain from setting meshes on portions of the captured image 1 outside the inspection target area detected by the area detector 6. In addition, the image division unit 22 can divide the shape obtained by plotting the crack regions detected by the crack detector 2 onto the captured image 1, using the same mesh as the one set for the captured image 1.
 The classifier 4 can estimate the degree of cracking for each mesh divided by the image segmentation unit 22. That is, the captured image 1 divided into meshes and the crack regions divided into the same meshes are applied to the classification model 253 mesh by mesh, and the degree of cracking (degree of deterioration) is estimated for each mesh. This makes it possible to indicate the degree of cracking in mesh units on the captured image 1 of the road, as shown in the example of FIG. 2.
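 As a sketch of this per-mesh estimation flow (the representation of a mesh as an (x, y, w, h) box and the classify callback standing in for the classification model 253 are assumptions of this illustration, not details of the embodiment):

```python
import numpy as np

def estimate_per_mesh(image: np.ndarray, crack_mask: np.ndarray,
                      meshes, classify):
    """For each mesh (x, y, w, h), crop the captured image and the
    crack-region mask and hand both crops to the classification model.
    `classify` stands in for classification model 253 and returns a
    degree-of-deterioration label for one mesh."""
    degrees = {}
    for (x, y, w, h) in meshes:
        img_patch = image[y:y + h, x:x + w]
        mask_patch = crack_mask[y:y + h, x:x + w]
        degrees[(x, y)] = classify(img_patch, mask_patch)
    return degrees
```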
 The output unit 24 outputs the degree of abnormality. In the present embodiment, the degree of cracking (the degree of deterioration of the road according to the number of cracks) is output. Further, since the classifier 4 estimates the degree of cracking for each mesh in the present embodiment, the output unit 24 can display the degree of cracking of each mesh on the captured image 1, for example by color, as shown in the lowermost part of FIG. 2.
<Operation>
 FIG. 5 is a diagram illustrating the operation of the analyzer 20 of the present embodiment.
 The analyzer 20 acquires a captured image 1 for inspection (S31). As described above, the captured image 1 may be accepted as a file input, or may be acquired by controlling a camera (not shown) to take the image.
 The analyzer 20 gives the captured image 1 to the region detection model 251 to detect the inspection target region from the captured image 1 (S32), and gives the inspection target region portion of the captured image 1 to the crack detection model 252 to detect cracks from the captured image 1 (S33). FIG. 6 is a diagram illustrating the detection of cracks. As shown in FIG. 6, the crack detector 2 detects cracks 3 in the captured image. In the example of FIG. 6 the detected cracks 3 are output superimposed on the captured image 1, but crack data indicating only the locations of the cracks 3 can also be acquired.
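 Steps S32 and S33 can be pictured as a two-stage masking pipeline. A minimal sketch follows, assuming (hypothetically) that the region detection model 251 and the crack detection model 252 each return a boolean pixel mask; the embodiment does not specify their actual output format.

```python
import numpy as np

def detect_cracks_in_target_region(image: np.ndarray,
                                   region_model, crack_model) -> np.ndarray:
    """Sketch of S32/S33: detect the inspection target region, then keep
    only the cracks that fall inside it."""
    target_mask = region_model(image)   # True where the road surface is (S32)
    crack_mask = crack_model(image)     # True where cracks are detected (S33)
    return crack_mask & target_mask     # crack data restricted to the target region
```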
 The analyzer 20 sets meshes in the inspection target region of the captured image 1 (S34). FIG. 7 is a diagram showing how meshes are set on the captured image 1. As shown in FIG. 7, the image segmentation unit 22 can set meshes 7 on the captured image. The analyzer 20 can, for example, lay out the meshes from an edge of the image; however, as shown in FIG. 7, the image segmentation unit 22 can also detect the white lines 8 drawn on the road from the captured image 1 and lay out the meshes with reference to the positions of the white lines 8.
 The analyzer 20 divides the inspection target region of the captured image 1 according to the set meshes (S35). FIG. 8 is a diagram showing how the captured image 1 is divided into meshes. As shown in FIG. 8, the image segmentation unit 22 can divide the captured image 1 along the meshes set in step S34 to create divided images 1M. The image segmentation unit 22 may, for example, extract the portion corresponding to each mesh from the captured image 1 to create the divided images 1M.
 The analyzer 20 can divide the data indicating the detected cracks (which can include the positions of the cracks on the captured image 1) according to the meshes (S36). FIG. 9 is a diagram showing how the data of the cracks 3 are divided into meshes. As shown in FIG. 9, the image segmentation unit 22 can create divided data 3M, each item of which contains only the portion of the data indicating the cracks 3 that falls within the corresponding mesh. The image segmentation unit 22 may divide the data of the cracks 3 as an image, or may extract as data the sides of the polygons representing the cracks 3 contained in each mesh of the captured image 1.
 For each mesh, the analyzer 20 gives the divided image 1M obtained by dividing the captured image 1 and the divided data 3M obtained by dividing the crack data to the classification model 253, and estimates the degree of cracking (S37). FIG. 10 is a diagram showing how the degree of cracking is estimated. As shown in FIG. 10, the classifier 4 gives the divided image 1M and the divided data 3M to the classification model 253 and can classify the mesh into a category indicating the degree of cracking, for example "one crack" or "low level".
 As described above, the analyzer 20 of the present embodiment can estimate the degree of cracking based on the captured image 1 and on the crack locations detected from the captured image 1. By giving not only the locations detected as cracks but also the captured image 1 itself as input data, the estimation of the degree of cracking can take into account situations in which a normal portion is recognized as a crack due to over-detection.
 FIG. 11 is a diagram illustrating a first process by which the image segmentation unit 22 sets the meshes 8. The first process simply sets the meshes 8 with reference to the white lines on the road. FIG. 12 is a diagram showing how the meshes 8 are set.
 The image segmentation unit 22 detects white lines in the captured image 1 (S401). A detection process based on general image analysis can be adopted for the white line detection. When no white line can be recognized in the captured image 1 (S402: NO), the image segmentation unit 22 can treat a line a predetermined distance (which may be 0) inside the edge of the captured image 1 as the white line (S403).
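 The embodiment only calls for "general image analysis" in S401; one common realization is to threshold bright pixels and extract line segments with a probabilistic Hough transform, as in the sketch below (all threshold values are illustrative assumptions, not part of the embodiment).

```python
import cv2
import numpy as np

def detect_white_lines(image_bgr: np.ndarray):
    """One conventional realization of S401: isolate bright pixels and
    extract line segments with a probabilistic Hough transform."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    _, bright = cv2.threshold(gray, 200, 255, cv2.THRESH_BINARY)
    edges = cv2.Canny(bright, 50, 150)
    segments = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                               threshold=80, minLineLength=100, maxLineGap=20)
    # Each segment is (x1, y1, x2, y2); an empty list means S402: NO.
    return [] if segments is None else [s[0] for s in segments]
```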
 When the image segmentation unit 22 has detected a white line in the captured image 1 (S402: YES) and the white line is interrupted partway (S404: YES), it extends the white line (S405).
 The example of FIG. 12 shows an interrupted white line being detected (A1, S401) and being extended (A2, S405).
 Next, the image segmentation unit 22 sets the boxes of the meshes 8 toward the left and toward the top of the image, taking the lowest right end of the white lines in the captured image 1 as the reference (S406). FIG. 12 shows the lines (B) forming the rectangles of the meshes 8.
 The image segmentation unit 22 lays out the meshes 8 toward the left and upward, and when the left edge of a mesh 8 would cross the leftmost white line in the captured image 1 (S407: YES), it adjusts the size of the mesh 8 so that its left edge is moved to the white line (S408). In the example of FIG. 12, the size of the leftmost mesh 8 is adjusted so that its left edge (C1) is aligned with the leftmost white line (A3). Similarly, when the upper edge of a mesh 8 would cross the upper edge of the captured image 1 (or a position a predetermined distance inside the upper edge), the image segmentation unit 22 can adjust the size of the mesh 8 so that its upper edge is moved to the upper edge of the captured image 1. In the example of FIG. 12, the size of the uppermost mesh 8 is adjusted so that its upper edge (C2) is aligned with the upper edge of the captured image 1.
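 Steps S406 to S408 amount to tiling fixed-size boxes leftward and upward from the reference point and clipping any box that would cross the leftmost white line or the image top. A minimal sketch under those assumptions (the coordinate convention and the function name are hypothetical):

```python
def place_meshes(right_x: int, top_y: int, left_limit_x: int,
                 bottom_y: int, mesh_size: int):
    """Sketch of S406-S408: starting from the lower right of the white
    line (right_x, bottom_y), tile square boxes leftward and upward; a
    box that would cross the leftmost white line (left_limit_x) or the
    image top (top_y) is shrunk so its edge lands exactly on that limit."""
    meshes = []
    y1 = bottom_y
    while y1 > top_y:
        y0 = max(y1 - mesh_size, top_y)             # clip at the image top (C2)
        x1 = right_x
        while x1 > left_limit_x:
            x0 = max(x1 - mesh_size, left_limit_x)  # clip at the left white line (C1)
            meshes.append((x0, y0, x1 - x0, y1 - y0))
            x1 = x0
        y1 = y0
    return meshes
```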
 In this way, the portion between the white lines drawn on the road can be divided into the meshes 8.
 FIG. 13 is a diagram illustrating a second process by which the image segmentation unit 22 sets the meshes 8. In the second process, in addition to the first process shown in FIG. 11, a step (S421) is added before the mesh setting in step S406 that adjusts the setting start position of the rightmost mesh 8 according to the pixels of the white line.
 FIG. 14 is a diagram showing how the meshes 8 are set in the second process. The white line may not be straight because of, for example, the tilt of the camera (not shown) at the time of shooting, fluctuations in the thickness of the white line, or variations in its shading. In the process of step S421, when, for example, the number of white-line pixels (for example, white pixels) contained in a mesh 8 is equal to or greater than a predetermined value, the image segmentation unit 22 can shift the mesh 8 to the left one pixel at a time until the number of white-line pixels becomes equal to or less than the predetermined value. FIG. 14 shows the right edge (D1), which is the horizontal reference position of the rightmost mesh 8D in the second row from the bottom, being shifted in the direction of the arrow (D2) according to the pixels representing the white line A1.
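 The adjustment of S421 can be sketched as a one-pixel-at-a-time shift driven by a white-line pixel count; the boolean mask input and the count threshold below are assumptions of this illustration.

```python
import numpy as np

def shift_mesh_off_white_line(mesh, white_mask: np.ndarray,
                              max_white_pixels: int):
    """Sketch of S421: while the box still contains more white-line
    pixels than allowed, slide it one pixel to the left. `white_mask` is
    a boolean array marking white-line pixels; the limit value is a
    hypothetical predetermined value."""
    x, y, w, h = mesh
    while x > 0 and white_mask[y:y + h, x:x + w].sum() > max_white_pixels:
        x -= 1  # shift one pixel to the left, as in the second process
    return (x, y, w, h)
```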
 FIG. 15 is a diagram illustrating a third process by which the image segmentation unit 22 sets the meshes 8. In the third process, in addition to the second process shown in FIG. 13, a step (S431) is added after steps S407 and S408 that continues setting the meshes 8 on another captured image 1 adjacent to the captured image 1.
 FIG. 16 is a diagram showing how the meshes 8 are set in the third process. As shown in FIG. 16, the mesh 8E at the upper end is set so as to extend beyond the upper end of the captured image 1 into the adjacent captured image 1'. In this way, the image segmentation unit 22 can set the meshes 8 across consecutive captured images 1.
 The image segmentation unit 22 can also arrange a plurality of captured images 1 side by side, determine whether they share an identical region and, if they do, arrange them so that the identical regions overlap, and set the meshes 8 using the arranged captured images 1. At this time, the image segmentation unit 22 may also create a composite image by combining the arranged captured images 1 and set the meshes 8 on the created composite image.
 Although the present embodiment has been described above, the above embodiment is intended to facilitate understanding of the present invention and is not to be construed as limiting the present invention. The present invention can be modified and improved without departing from its spirit, and the present invention also includes its equivalents.
 For example, in the present embodiment the analyzer 20 estimates the degree of cracking of a road, but it is widely applicable to systems that detect some kind of defect from an image and evaluate the risk associated with that defect. For example, the analyzer 20 of the present embodiment can detect cracks in structures such as bridges and buildings by receiving captured images of those structures, and can evaluate the degree of cracking. The analyzer 20 of the present embodiment can also use map images such as aerial photographs or satellite photographs (including infrared data and various kinds of sensor data) as the captured images to, for example, detect valley areas that can cause landslides and evaluate the landslide risk.
 In the present embodiment, the classification model 253 is a learning model for categorical classification indicating the degree of cracking, but it may instead be a regression model that estimates the number of cracks. In this case, the classifier 4 can estimate the number of cracks.
 In the present embodiment, the number of cracks indicates the degree of cracking, but the invention is not limited to this, and the depth of the cracks may be learned as the degree of cracking. In this case, crack depth data may be input, or information indicating a level corresponding to the crack depth may be input.
 Instead of the number of cracks, the severity of the cracks may be used as the degree of cracking. In this case, an expert can view the images to judge the severity of the cracks, and the judged severity can be input for learning.
 In the present embodiment it was assumed that the degree of cracking of a road is estimated, but the invention can similarly be applied to estimating the degree of cracking of bridges and tunnels. Besides cracks, it can also be applied to estimating the degree of corrosion of building materials and the like, water leakage, free lime, damage, and so on.
 When inspecting a bridge, only the leg portions of the bridge can be extracted as the inspection target region, and the leg portions can be divided into meshes to estimate the degree of cracking.
 When estimating cracks in a tunnel, the cracks may be detected directly from a captured image 1 of the inner surface of the tunnel, without recognizing white lines. In the case of a tunnel, the reference for the meshes arranged along the longitudinal direction of the tunnel can be set at the tunnel entrance.
 In the present embodiment, the meshes are set by analyzing the captured image, but the mesh positions may instead be given externally.
 In the present embodiment, learning and inference are performed using the captured images 1 taken by a camera, but the invention is not limited to this; a composite image obtained by combining a plurality of captured images 1 may be used, or an orthoimage obtained by orthorectifying the captured image 1 may be used.
 In the present embodiment, the crack detector 2 and the classifier 4 perform supervised learning in which the annotation results are given as the teacher signal, but at least one of the crack detector 2 and the classifier 4 may instead learn by unsupervised learning. For example, at least one of the crack detector 2 and the classifier 4 can learn from normal training images 1 (in which no abnormality is captured) to create a generator (for example, an autoencoder), use the created generator to reproduce a normal image based on a captured image 1 for inspection, and detect the difference between the reproduced image and the captured image 1 for inspection as an abnormality.
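 As an illustration of this unsupervised variant, the sketch below shows a convolutional autoencoder as one possible generator: it reproduces the normal appearance of the inspection image, and the reconstruction error is thresholded into an abnormality mask. The architecture and the threshold value are assumptions of this sketch, not part of the embodiment.

```python
import torch
import torch.nn as nn

class ConvAutoencoder(nn.Module):
    """Trained on normal (crack-free) images only; at inference time a
    large reconstruction error is taken to indicate an abnormality."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 3, stride=2, padding=1, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(16, 3, 3, stride=2, padding=1, output_padding=1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

def anomaly_map(model: ConvAutoencoder, image: torch.Tensor,
                threshold: float = 0.1) -> torch.Tensor:
    """Reproduce the 'normal' appearance of the inspection image and take
    the per-pixel difference as the abnormality; threshold is hypothetical."""
    with torch.no_grad():
        reconstruction = model(image)
    error = (image - reconstruction).abs().mean(dim=1)  # per-pixel error
    return error > threshold  # boolean abnormality mask
```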
  2   Crack detector
  4   Classifier
  6   Region detector
  20  Analyzer
  21  Image input unit
  22  Image segmentation unit
  23  Learning processing unit
  24  Output unit
  25  Learning model storage unit
  26  Image storage unit
  27  Annotation storage unit
  251 Region detection model
  252 Crack detection model
  253 Classification model

Claims (4)

  1.  An information processing system comprising:
     a learning model storage unit that stores a first learning model for detecting an abnormality from a first image in which the abnormality is photographed, and a second learning model for estimating a degree of the abnormality from abnormality information indicating the abnormality and the first image;
     an image input unit that accepts input of a second image for which it is unknown whether the abnormality is included;
     an abnormality detection unit that gives the second image to the first learning model to detect the abnormality in the second image;
     an estimation unit that gives the abnormality information indicating the detected abnormality and the second image to the second learning model to estimate the degree of the abnormality in the second image; and
     an output unit that outputs the degree of the abnormality.
  2.  The information processing system according to claim 1, further comprising:
     an image segmentation unit that divides an image into meshes, wherein
     the second learning model is a model created by learning first meshes obtained by dividing the first image and the abnormality information detected within the first meshes,
     the image segmentation unit divides the second image into second meshes, and
     the estimation unit gives, for each of the second meshes, the second mesh and the portion of the abnormality information detected from the second image that is contained in that second mesh to the second learning model, to estimate the degree of the abnormality for each mesh.
  3.  The information processing system according to claim 2, wherein
     the second image is a photograph of a road, and
     the image segmentation unit detects a white line drawn on the road from the second image and sets positions of the meshes with reference to a position of the white line.
  4.  The information processing system according to any one of claims 1 to 3, further comprising:
     a target range detection unit that detects an inspection target range from an image, wherein
     the target range detection unit detects the inspection target range from the second image,
     the abnormality detection unit gives the detected inspection target range to the first learning model to detect the abnormality information, and
     the estimation unit gives the detected inspection target range and the abnormality information to the second learning model to estimate the degree of the abnormality.
PCT/JP2020/041543 2020-11-06 2020-11-06 Information processing system WO2022097275A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022560602A JPWO2022097275A1 (en) 2020-11-06 2020-11-06
PCT/JP2020/041543 WO2022097275A1 (en) 2020-11-06 2020-11-06 Information processing system

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/041543 WO2022097275A1 (en) 2020-11-06 2020-11-06 Information processing system

Publications (1)

Publication Number Publication Date
WO2022097275A1 true WO2022097275A1 (en) 2022-05-12

Family

ID=81457083

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/041543 WO2022097275A1 (en) 2020-11-06 2020-11-06 Information processing system

Country Status (2)

Country Link
JP (1) JPWO2022097275A1 (en)
WO (1) WO2022097275A1 (en)

Citations (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2018021375A (en) * 2016-08-03 2018-02-08 株式会社東芝 Pavement crack analyzer, pavement crack analysis method, and pavement crack analysis program
JP6678267B1 (en) * 2019-03-06 2020-04-08 エヌ・ティ・ティ・コムウェア株式会社 Road defect detecting device, road defect detecting method, and road defect detecting program
JP2020159969A (en) * 2019-03-27 2020-10-01 三菱電機株式会社 Auxiliary facility state evaluation device, auxiliary facility state evaluation method, and auxiliary facility state evaluation program


Also Published As

Publication number Publication date
JPWO2022097275A1 (en) 2022-05-12


Legal Events

121 Ep: the epo has been informed by wipo that ep was designated in this application (Ref document number: 20960827; Country of ref document: EP; Kind code of ref document: A1)
ENP Entry into the national phase (Ref document number: 2022560602; Country of ref document: JP; Kind code of ref document: A)
NENP Non-entry into the national phase (Ref country code: DE)
122 Ep: pct application non-entry in european phase (Ref document number: 20960827; Country of ref document: EP; Kind code of ref document: A1)