WO2022074746A1 - Deterioration detection apparatus, deterioration detection method, and program - Google Patents

Deterioration detection apparatus, deterioration detection method, and program Download PDF

Info

Publication number
WO2022074746A1
WO2022074746A1 (PCT/JP2020/037908; JP2020037908W)
Authority
WO
WIPO (PCT)
Prior art keywords
image
unit
region
teacher
deterioration
Prior art date
Application number
PCT/JP2020/037908
Other languages
French (fr)
Japanese (ja)
Inventor
大輔 内堀
一旭 渡邉
勇臣 濱野
雅史 中川
淳 荒武
Original Assignee
日本電信電話株式会社
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 日本電信電話株式会社 filed Critical 日本電信電話株式会社
Priority to JP2022555012A priority Critical patent/JP7410442B2/en
Priority to PCT/JP2020/037908 priority patent/WO2022074746A1/en
Publication of WO2022074746A1 publication Critical patent/WO2022074746A1/en

Links

Images

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00 Image analysis

Definitions

  • This disclosure relates to a deterioration detection device, a deterioration detection method, and a program.
  • There are many methods for detecting a specific image region using image processing. Among these, the segmentation approach based on deep learning has recently been regarded as effective because of its high detection accuracy and ease of construction (Non-Patent Document 1). Using this segmentation approach, it is conceivable to construct an image processing system that detects deterioration of structures, such as exposed reinforcing bars ("dew streaks") on concrete walls and corrosion of hardware installed on the skeleton, from images taken of communication manholes.
  • An object of the present disclosure is to provide a deterioration detection device, a deterioration detection method, and a program for accurately detecting deterioration that may affect a structure.
  • The deterioration detection device according to one embodiment includes a control unit that acquires a determination device machine-learned using, as teacher data, a teacher image, which is an image obtained by photographing, and a teacher label that identifies the pixels indicating a deteriorated portion in the teacher image; acquires a captured image obtained by photographing a structure; converts the captured image into a predetermined color space; predicts, using the determination device, the region occupied by the deteriorated portion of the structure in the converted captured image; and determines, among the predicted regions, a region including at least a predetermined number of pixels as the deteriorated region of the captured image.
  • In the deterioration detection method according to one embodiment, the control unit of the deterioration detection device acquires a determination device machine-learned using, as teacher data, a teacher image, which is an image obtained by photographing, and a teacher label that identifies the pixels indicating a deteriorated portion in the teacher image; acquires a captured image obtained by photographing a structure; converts the captured image into a predetermined color space; predicts, using the determination device, the region occupied by the deteriorated portion of the structure in the converted captured image; and determines, among the predicted regions, a region including at least a predetermined number of pixels as the deteriorated region of the captured image.
  • The program according to one embodiment causes a computer to function as the above deterioration detection device.
  • According to one embodiment of the present disclosure, it is possible to provide a deterioration detection device, a deterioration detection method, and a program that accurately detect deterioration that may affect a structure.
  • FIG. 1 is a diagram showing the hardware configuration of the deterioration detection device according to one embodiment of the present disclosure. FIG. 2 is a diagram showing the functional configuration of the deterioration detection device according to one embodiment of the present disclosure. FIG. 3A is a diagram showing an example of a teacher image. FIG. 3B is a diagram showing an example of a teacher label. FIGS. 4A and 4B are diagrams each showing an example of a change in the detection rate depending on the type of color space. FIG. 5 is a diagram showing a configuration example of the learning unit provided in the deterioration detection device of FIG. 2.
  • A deterioration detection device 10 for detecting deterioration from an image will be described.
  • The deterioration detection device 10 relates to a device and a system for detecting deterioration of a communication manhole from a captured image. The deterioration to be detected includes dew streaks generated in the skeleton of the communication manhole and corrosion of hardware installed inside the communication manhole, as they appear in the captured image.
  • When detecting deterioration using the segmentation method, the deterioration detection device 10 improves detection accuracy by removing dirt and the like from the detection result, and also removes minute deterioration that does not affect the structure. As a result, the deterioration detection device 10 accurately detects the pixel regions corresponding to deterioration in the captured image.
  • FIG. 1 is a diagram showing a hardware configuration of a deterioration detection device 10 according to an embodiment of the present disclosure.
  • the deterioration detection device 10 is one or a plurality of server devices capable of communicating with each other.
  • the deterioration detection device 10 is not limited to these, and may be any electronic device such as a general-purpose computer, a dedicated computer, a workstation, a PC (Personal Computer), and an electronic notepad.
  • the deterioration detection device 10 includes a control unit 11, a storage unit 12, a communication unit 13, an input unit 14, an output unit 15, and a bus 16.
  • the control unit 11 includes one or more processors.
  • the "processor” is a general-purpose processor or a dedicated processor specialized for a specific process, but is not limited thereto.
  • the processor may be, for example, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), or the like.
  • the control unit 11 is communicably connected to each component constituting the deterioration detection device 10 via the bus 16 and controls the operation of the entire deterioration detection device 10.
  • the storage unit 12 includes an arbitrary storage module including an HDD, SSD, EEPROM, ROM, and RAM.
  • the storage unit 12 may function as, for example, a main storage device, an auxiliary storage device, or a cache memory.
  • the storage unit 12 stores arbitrary information used for the operation of the deterioration detection device 10.
  • For example, the storage unit 12 may store a system program, an application program, and various information received by the communication unit 13.
  • the storage unit 12 is not limited to the one built in the deterioration detection device 10, and may be an external database or an external storage module connected by a digital input / output port such as USB.
  • HDD is an abbreviation for Hard Disk Drive.
  • SSD is an abbreviation for Solid State Drive.
  • EEPROM is an abbreviation for Electrically Erasable Programmable Read-Only Memory.
  • ROM is an abbreviation for Read-Only Memory.
  • RAM is an abbreviation for Random Access Memory.
  • USB is an abbreviation for Universal Serial Bus.
  • the communication unit 13 includes an arbitrary communication module capable of communicating with another device by any communication technology.
  • the communication unit 13 may further include a communication control module for controlling communication with other devices, and a storage module for storing communication data such as identification information required for communication with other devices.
  • the input unit 14 includes one or more input interfaces that accept user input operations and acquire input information based on the user's operations.
  • the input unit 14 is, but is not limited to, a physical key, a capacitance key, a pointing device, a touch screen provided integrally with the display of the output unit 15, a microphone that accepts voice input, and the like.
  • the output unit 15 includes one or more output interfaces that output information to the user and notify the user.
  • the output unit 15 is, but is not limited to, a display that outputs information as an image, a speaker that outputs information as voice, and the like.
  • At least one of the above-mentioned input unit 14 and output unit 15 may be integrally configured with the deterioration detection device 10 or may be provided as a separate body.
  • the function of the deterioration detection device 10 is realized by executing the program according to the present embodiment on the processor included in the control unit 11. That is, the function of the deterioration detection device 10 is realized by software.
  • the program causes the computer to execute the processing of the steps included in the operation of the deterioration detection device 10, so that the computer realizes the function corresponding to the processing of the steps. That is, the program is a program for making the computer function as the deterioration detection device 10 according to the present embodiment.
  • the program instruction may be a program code, a code segment, or the like for executing a necessary task.
  • the program may be recorded on a computer-readable recording medium. Using such a recording medium, it is possible to install the program on the computer.
  • The recording medium on which the program is recorded may be a non-transitory recording medium. The non-transitory recording medium may be, for example, a CD (Compact Disc)-ROM, a DVD (Digital Versatile Disc)-ROM, a BD (Blu-ray (registered trademark) Disc)-ROM, or the like.
  • the program may be distributed by storing the program in the storage of the server and transferring the program from the server to another computer via the network.
  • the program may be provided as a program product.
  • the computer temporarily stores the program recorded on the portable recording medium or the program transferred from the server in the main storage device. Then, the computer reads the program stored in the main storage device by the processor, and executes the processing according to the read program by the processor.
  • the computer may read the program directly from the portable recording medium and perform processing according to the program.
  • the computer may sequentially execute processing according to the received program each time the program is transferred from the server to the computer. Such processing may be executed by a so-called ASP type service that realizes a function only by an execution instruction and result acquisition without transferring a program from a server to a computer.
  • "ASP" is an abbreviation for Application Service Provider.
  • The program also includes information that is used for processing by a computer and is equivalent to a program. For example, data that is not a direct command to the computer but has the property of defining the processing of the computer corresponds to information "equivalent to a program".
  • a part or all the functions of the deterioration detection device 10 may be realized by a dedicated circuit included in the control unit 11. That is, some or all the functions of the deterioration detection device 10 may be realized by hardware. Further, the deterioration detection device 10 may be realized by a single information processing device or may be realized by the cooperation of a plurality of information processing devices.
  • FIG. 2 is a diagram showing a functional configuration of the deterioration detection device 10 according to the first embodiment of the present disclosure.
  • the deterioration detection device 10 includes a determination device construction unit 20, a determination unit 30, and a filtering unit 40.
  • the determination device construction unit 20 includes a teacher image input unit 21, a color space conversion unit 22, a teacher label input unit 23, and a learning unit 24.
  • the determination unit 30 includes an image input unit 31, a color space conversion unit 32, and a determination device storage unit 33.
  • the teacher image input unit 21 in the determination device construction unit 20 inputs a captured image that is a learning image (teacher image) for constructing the determination device.
  • the captured image refers to digital data of a still image captured by a digital camera or the like.
  • In the teacher image input unit 21, the captured image is input as a bitmap image and stored as multidimensional array data. For example, if the still image is a grayscale image, it is stored as one layer of matrix data, and if it is a color image, it is stored as a plurality of layers of matrix data.
  • the resolution of the bitmap image is arbitrary.
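The following is a minimal sketch of how a captured image might be held as matrix data in the sense described above; the use of Python with Pillow and NumPy, and the function name, are assumptions for illustration, not part of the patent.

```python
# Illustrative sketch only: load a bitmap image as matrix data.
# Pillow/NumPy and the function name are assumptions, not from the patent.
import numpy as np
from PIL import Image

def load_bitmap_as_matrix(path: str) -> np.ndarray:
    """Grayscale still images become one layer of matrix data (H, W);
    color images become a plurality of layers (H, W, 3)."""
    img = Image.open(path)
    if img.mode == "L":
        return np.asarray(img, dtype=np.float32)
    return np.asarray(img.convert("RGB"), dtype=np.float32)
```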
  • the teacher label input unit 23 in the determination device construction unit 20 has a function of inputting matrix data of the teacher label.
  • FIG. 3A is a diagram showing an example of a teacher image.
  • FIG. 3B is a diagram showing an example of a teacher label.
  • The teacher label is matrix data in which the pixels of hardware corrosion corresponding to deterioration are associated with "1" (white) and the other pixels are associated with "0" (black). The teacher label input unit 23 inputs the matrix data of this label.
  • FIGS. 3A and 3B show an example in which the deterioration is corrosion of hardware, but the type of deterioration is not limited to corrosion of hardware.
  • A state in which the surface concrete is pushed out and peeled off by the corrosive expansion of the reinforcing bar inside the concrete, so that the corroded reinforcing bar is exposed (called a dew streak), is also defined as deterioration.
  • the color space conversion unit 22 in the determination device construction unit 20 has a function of converting the color space of the bitmap image input to the teacher image input unit 21.
  • As for the color space, a color image taken by a general digital camera is stored as matrix data in a numerical arrangement of the RGB (Red, Green, Blue) color space.
  • The color space conversion unit 22 can convert such matrix data in the RGB color space arrangement into matrix data of another color space such as HSV (Hue, Saturation, Value) or L*a*b*.
  • The L*a*b* color space is a color space devised to match human perception.
  • As hardware corrosion progresses, it involves a large amount of minute rust, rust-like stains, rust runoff, and so on, so it is difficult to clearly identify the corroded pixel region. Creating the teacher labels therefore relies heavily on human vision.
  • Similarly, the region of a corroded reinforcing bar includes rust runoff and unevenness caused by rust on the surrounding concrete, so the pixel region cannot be clearly specified and the teacher label again depends on human vision. Therefore, the L*a*b* color space, which better expresses human vision, is more effective as the color space for hardware corrosion and dew streaks.
  • FIGS. 4A and 4B are diagrams showing examples of the change in the detection rate depending on the type of color space.
  • FIGS. 4A and 4B show the results of verifying the deterioration detection rate by creating determination devices in three types of color spaces: the RGB color space, the HSV color space, and the L*a*b* color space.
  • FIG. 4A shows the result of verifying the detection rate with 320 images of dew streaks.
  • FIG. 4B shows the result of verifying the detection rate with 400 images of hardware corrosion.
  • FIGS. 4A and 4B show the results of creating a determination device by the segmentation method and evaluating the detection rate with the created determination device for each of dew streaks and hardware corrosion. As shown in FIGS. 4A and 4B, the detection rate was highest when the matrix data of the captured image was converted into the L*a*b* color space.
  • the detection rate is the matching rate between the deteriorated region determined by the determination device and the pixel region of the deteriorated region given as a true value by a human.
  • Here, the L*a*b* color space was used, but when learning and constructing the determination device, the color space with the highest determination accuracy can be selected arbitrarily.
  • the color space conversion unit 22 in the determination device construction unit 20 performs a process of normalizing the stored matrix data.
  • Normalization here means that, when the value of each element of the matrix data does not fall within the range from 0 to 1, the values are converted so that every element of each layer of the matrix falls within that range.
  • For example, the color space conversion unit 22 normalizes by dividing the value of each element by 255.
  • Alternatively, the color space conversion unit 22 determines the scaling from the numerical range of the elements input to each layer so that the value of every element of each layer falls within the range from 0 to 1.
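As an illustration of the conversion and normalization performed by the color space conversion unit 22, the sketch below converts RGB matrix data to L*a*b* and scales each layer into the range from 0 to 1. The use of scikit-image and the fixed per-layer ranges are assumptions; the patent only requires that every element of each layer end up between 0 and 1.

```python
# Illustrative sketch: RGB -> L*a*b* conversion followed by per-layer normalization.
# scikit-image and the fixed scaling ranges below are assumptions.
import numpy as np
from skimage import color

def to_normalized_lab(rgb_uint8: np.ndarray) -> np.ndarray:
    lab = color.rgb2lab(rgb_uint8 / 255.0)     # rgb2lab expects RGB values in [0, 1]
    l = lab[..., 0] / 100.0                    # L* is roughly in [0, 100]
    a = (lab[..., 1] + 128.0) / 255.0          # a*, b* are roughly in [-128, 127]
    b = (lab[..., 2] + 128.0) / 255.0
    return np.stack([l, a, b], axis=-1)        # every layer now lies in [0, 1]
```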
  • the learning unit 24 in the determination device construction unit 20 creates a determination device for detecting deterioration from the teacher image and the teacher label.
  • the learning unit 24 constructs a determination device by inputting matrix data output by each of the color space conversion unit 22 and the teacher label input unit 23.
  • the learning unit 24 outputs the constructed determination device to the determination unit 30, and also outputs the threshold value h for distinguishing between deterioration and non-deterioration, which will be described later, to the filtering unit 40.
  • FIG. 5 is a diagram showing a configuration example of the learning unit 24 included in the deterioration detection device 10 of FIG.
  • the learning unit 24 includes a loss function determination unit 241 and a learning progress unit 242.
  • the loss function determination unit 241 specifies a loss function used for evaluating the prediction accuracy of the neural network in the learning progress unit 242.
  • The loss function to be specified can be chosen arbitrarily, for example the cross-entropy error or the squared error, but AUC (Area Under the Curve) maximization and the F1 score are effective for detecting deterioration such as hardware corrosion and dew streaks.
  • FIG. 6 is a diagram showing an example of a confusion matrix showing the relationship between the true value and the judgment result by the judgment device.
  • the confusion matrix is a matrix that summarizes the results of class classification output in the binary classification problem.
  • The binary classification problem is the problem of determining whether a certain proposition (for example, that a pixel corresponds to deterioration) is true (Positive) or false (Negative).
  • TP: True Positive
  • TN: True Negative
  • FP: False Positive
  • FN: False Negative
  • TP refers to the case where the prediction by the determination device is Positive (for example, the pixel corresponds to deterioration) and the prediction correctly reflects the actual state (True), that is, the actual state is also Positive.
  • TN refers to the case where the prediction by the determination device is Negative (for example, the pixel does not correspond to deterioration) and the prediction correctly reflects the actual state (True), that is, the actual state is also Negative.
  • FP refers to the case where the prediction by the determination device is Positive (for example, the pixel corresponds to deterioration) but the prediction does not reflect the actual state (False), that is, the actual state is Negative (the pixel does not correspond to deterioration).
  • FN refers to the case where the prediction by the determination device is Negative (for example, the pixel does not correspond to deterioration) but the prediction does not reflect the actual state (False), that is, the actual state is Positive (the pixel corresponds to deterioration).
  • As a teacher label reflecting the actual state, the control unit 11 labels the pixels of the captured image with "1" for deteriorated pixels, such as hardware corrosion and dew streaks, and "0" for the other, non-deteriorated pixels.
  • Here, a deteriorated pixel is Positive and a non-deteriorated pixel is Negative.
  • The relationship between the true value (Actual) and the determination result (Predicted) by the determination device can then be shown by a confusion matrix as in FIG. 6.
  • A numerical value from 0 to 1 is assigned to each pixel of the matrix data output by the determination device.
  • By binarizing this output, the determination result can be classified into Positive, in which a pixel that becomes "1" is determined to be a deteriorated pixel, and Negative, in which a pixel that becomes "0" is determined to be a non-deteriorated pixel.
  • FIG. 7 is a diagram showing the relationship between the ROC curve and the AUC.
  • A curve showing the performance of the determination result by the determination device using the confusion matrix is called a ROC curve (Receiver Operating Characteristic curve).
  • the ROC curve is a curve showing the relationship between the FP rate and the TP rate when the threshold value for determining deterioration or non-deterioration is changed.
  • the FP rate (False Positive Rate) is the ratio of the data that is erroneously determined to be Positive among the data that should be determined to be Negative, and the smaller this value, the higher the performance of the determination device.
  • the FP rate is expressed as the number of FP samples / (the number of FP samples + the number of TN samples).
  • the TP rate (True Positive Rate) is the ratio of the data that can be correctly determined to be Positive among the data that should be actually determined to be Positive, and the larger this value is, the higher the performance of the determination device is.
  • the TP rate is expressed as the number of TP samples / (the number of TP samples + the number of FN samples).
  • The size of the AUC is one of the indexes indicating the performance of the determination device. Setting the loss function so that the area of the AUC is maximized is called AUC maximization. Deterioration phenomena such as hardware corrosion and dew streaks occupy a relatively small pixel region in each captured image.
  • Therefore, so that Positive pixels, which are far fewer than Negative pixels, are still detected, it is effective to set the loss function to AUC maximization, which suppresses the FP rate while improving the TP rate.
  • The F1 score is expressed by equation (1).
  • F1 = (2 × Recall × Precision) / (Recall + Precision)   (1)
  • Recall (recall rate) and Precision (precision rate) are expressed by equations (2) and (3).
  • Recall = TP / (TP + FN)   (2)
  • Precision = TP / (TP + FP)   (3)
  • The F1 score is also effective for the data imbalance problem, and setting the F1 score as the loss function is effective for detecting regions that occupy only a small part of the whole image, such as hardware corrosion or dew streak deterioration of a structure, as in this case.
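A minimal sketch of equations (1) to (3) applied to a pair of binary masks is shown below; NumPy and the function name are assumptions, and this computes the score itself rather than a differentiable loss.

```python
# Illustrative sketch: pixel-wise Recall, Precision and F1 per equations (1)-(3).
import numpy as np

def f1_score(pred: np.ndarray, truth: np.ndarray) -> float:
    """pred, truth: arrays where 1 marks a deteriorated pixel and 0 anything else."""
    tp = np.sum((pred == 1) & (truth == 1))
    fp = np.sum((pred == 1) & (truth == 0))
    fn = np.sum((pred == 0) & (truth == 1))
    recall = tp / (tp + fn) if (tp + fn) else 0.0      # eq. (2)
    precision = tp / (tp + fp) if (tp + fp) else 0.0   # eq. (3)
    if recall + precision == 0.0:
        return 0.0
    return 2 * recall * precision / (recall + precision)  # eq. (1)
```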
  • The learning progress unit 242 in the learning unit 24 performs learning by deep learning of a neural network model, using the matrix data output by the color space conversion unit 22 and the matrix data input to the teacher label input unit 23.
  • The learning progress unit 242 proceeds with learning so that the index indicated by the loss function determined by the loss function determination unit 241 improves. For example, when the determined loss function is AUC maximization, the area of the AUC region increases as learning progresses; a large AUC area corresponds to good performance as a detector. Likewise, when the F1 score is used as the loss function, the value of the F1 score improves as learning progresses. In the learning progress unit 242, once learning has continued for a certain period or longer, the loss function hardly changes even if learning is continued further.
  • The learning progress unit 242 therefore proceeds with learning until, for example, the rate of change of the loss function becomes less than a predetermined threshold value. In this way, the learning progress unit 242 trains until the value of the loss function determined by the loss function determination unit 241 no longer changes substantially even if learning is advanced further, and thereby constructs the trained model.
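The stopping behaviour of the learning progress unit 242 can be sketched as a training loop that ends once the relative change of the epoch loss falls below a tolerance. PyTorch, the optimizer choice, and all names here are assumptions for illustration; the patent does not specify a framework.

```python
# Illustrative sketch: train until the loss function effectively stops changing.
# PyTorch, the optimizer and the tolerance values are assumptions.
import torch

def train_until_plateau(model, loader, loss_fn, lr=1e-3, rel_tol=1e-4, max_epochs=500):
    optimizer = torch.optim.Adam(model.parameters(), lr=lr)
    previous_loss = None
    for _ in range(max_epochs):
        epoch_loss = 0.0
        # loader yields (color-space matrix data, teacher label matrix data) pairs
        for images, labels in loader:
            optimizer.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimizer.step()
            epoch_loss += loss.item()
        if previous_loss is not None:
            change = abs(previous_loss - epoch_loss) / max(previous_loss, 1e-12)
            if change < rel_tol:
                break  # rate of change of the loss is below the threshold
        previous_loss = epoch_loss
    return model
```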
  • The deep learning model here corresponds to a deep learning segmentation method such as U-Net or SegNet.
  • With this segmentation method, it becomes possible to detect deteriorated regions such as hardware corrosion and dew streaks from an image.
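As a rough illustration of the kind of encoder-decoder segmentation model referred to here, the sketch below defines a heavily simplified network that maps color-space matrix data to a per-pixel confidence map. It is an assumption for illustration and is far smaller than an actual U-Net or SegNet.

```python
# Illustrative sketch: a tiny encoder-decoder segmentation network (not the patent's model).
import torch
import torch.nn as nn

class TinySegNet(nn.Module):
    """Input: (N, 3, H, W) color-space matrix data; output: (N, 1, H, W) confidences in [0, 1]."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Upsample(scale_factor=2, mode="bilinear", align_corners=False),
            nn.Conv2d(32, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 1), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))
```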
  • When hardware corrosion, dew streak deterioration, or the like of a certain size or more occurs, it is important for repair and reinforcement to detect not only the presence or absence of corrosion and dew streaks but also the area of the deterioration.
  • the learning progress unit 242 performs learning using the matrix data output by the color space conversion unit 22 and the matrix data output by the teacher label input unit 23, and constructs a determination device.
  • The learning progress unit 242 generates a ROC curve from the matrix data output by the teacher label input unit 23 and the matrix data (predicted deterioration values) obtained by processing the matrix data output by the color space conversion unit 22 with the constructed determination device. Even when training is performed using the F1 score as the loss function, a ROC curve is generated based on the training result. FIG. 8 is a diagram illustrating the process of determining the threshold value h based on the ROC curve. As shown in FIG. 8, the learning progress unit 242 determines the threshold value h at which the Euclidean distance between the coordinates (0, 1) and the ROC curve is minimized.
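A minimal sketch of choosing the threshold value h from the ROC curve is shown below; scikit-learn's roc_curve and the function name are assumptions for illustration.

```python
# Illustrative sketch: pick the threshold h whose ROC point is closest to (0, 1).
import numpy as np
from sklearn.metrics import roc_curve

def choose_threshold_h(y_true: np.ndarray, y_score: np.ndarray) -> float:
    """y_true: 0/1 teacher labels per pixel; y_score: predicted confidences in [0, 1]."""
    fpr, tpr, thresholds = roc_curve(y_true.ravel(), y_score.ravel())
    distance = np.sqrt(fpr ** 2 + (1.0 - tpr) ** 2)   # Euclidean distance to (0, 1)
    return float(thresholds[np.argmin(distance)])
```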
  • the image input unit 31 of the determination unit 30 inputs a captured image.
  • the image input unit 31 stores the captured image as matrix data of the bitmap image in the same manner as the teacher image input unit 21.
  • the image to be input is an image different from the image input to the teacher image input unit 21.
  • The color space conversion unit 32 of the determination unit 30 has a function of converting the color space of the bitmap image input by the image input unit 31 of the determination unit 30 by the same processing as the color space conversion unit 22 in the determination device construction unit 20. What is important here is to convert to the same color space as that specified by the color space conversion unit 22 of the determination device construction unit 20. Determination accuracy is improved by using the same color space arrangement as the one used when the determination device was created in the learning unit 24 of the determination device construction unit 20.
  • the determination device storage unit 33 in the determination unit 30 stores the determination device generated by the determination device construction unit 20.
  • the determination device storage unit 33 performs arithmetic processing on the matrix data input from the color space conversion unit 32 using the determination device.
  • the data output as a result of the arithmetic processing is one-layer matrix data showing the result of estimating whether or not each pixel of the captured image input to the image input unit 31 is a deteriorated region.
  • the result of estimating whether or not it is the deteriorated region output by the determination device storage unit 33 shows a value of 0 or more and 1 or less for each pixel.
  • FIG. 9 is a diagram showing a configuration example of the filtering unit 40 included in the deterioration detection device 10 of FIG.
  • the filtering unit 40 includes a binarization processing unit 41, a connectivity recognition unit 42, and a removal unit 43.
  • the binarization processing unit 41 has a function of performing binarization processing of the matrix data output from the determination device storage unit 33 in the determination unit 30.
  • As the threshold value for the binarization process, the threshold value h output by the learning progress unit 242 in the learning unit 24 is used. That is, the binarization processing unit 41 sets to 1 the value of each pixel whose estimation value in the matrix data is equal to or greater than the threshold value h, and sets to 0 the value of each pixel whose estimation value is less than the threshold value h.
  • The connectivity recognition unit 42 in the filtering unit 40 counts the number of connected elements having the value "1" in the matrix data after the binarization process.
  • The "number of connected elements" is defined as the number of elements included in a region formed by connecting elements having the same value. A region formed by connecting elements having the same value is called a "connected component".
  • "Connection" is defined by whether the elements surrounding the element of interest 71 have the same value as the element of interest, as shown in FIG. 10A. When only the four elements above, below, to the left of, and to the right of the element of interest are considered, this is called "4-neighborhood connection"; when all eight surrounding elements, including the diagonal ones, are considered, this is called "8-neighborhood connection".
  • Each element to which "1" is input after the binarization process is taken as an element of interest, and the number of connected elements is counted.
  • Whether to use the 4-neighborhood or the 8-neighborhood can be set arbitrarily, for example according to a user setting.
  • As a result, the connectivity recognition unit 42 obtains numerical data on the number of connected elements for each connected component of value "1".
  • FIG. 10B shows an example in the case of 8-neighborhood connection. For the element of interest 71, only the element at its upper right is connected. For that upper-right element, the element to its right is connected (the element of interest 71 at the lower left has already been counted). For that right element, there is no connected element other than the one to its left (which has already been counted). Therefore, the number of connected elements in the example of FIG. 10B is 3.
  • The removal unit 43 sets a predetermined threshold value k, which is a predetermined integer of 1 or more, and replaces with "0" the value of every element belonging to a connected component whose number of connected elements is less than the threshold value k. That is, the removal unit 43 determines that a pixel region in which the number of connected elements counted by the connectivity recognition unit 42 is less than the threshold value k is noise, and replaces the pixels in that region with "0" so that the region is not included in the output result of the deterioration detection device 10 as a region showing deterioration.
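The binarization and small-region removal performed by the filtering unit 40 can be sketched as follows; SciPy's connected-component labelling and the function name are assumptions, not the patent's implementation.

```python
# Illustrative sketch: binarize with threshold h, then drop connected components
# with fewer than k elements. SciPy is an assumption for illustration.
import numpy as np
from scipy import ndimage

def filter_small_regions(score: np.ndarray, h: float, k: int, eight_neighbors: bool = True) -> np.ndarray:
    binary = (score >= h).astype(np.uint8)                      # binarization processing unit 41
    structure = np.ones((3, 3)) if eight_neighbors else None    # None = 4-neighborhood labelling
    labeled, n = ndimage.label(binary, structure=structure)     # connectivity recognition unit 42
    sizes = ndimage.sum(binary, labeled, index=range(1, n + 1))
    for idx, size in enumerate(sizes, start=1):
        if size < k:                                            # removal unit 43: treat as noise
            binary[labeled == idx] = 0
    return binary
```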
  • FIG. 11A is a diagram showing an example of data after binarization processing.
  • FIG. 11B is a diagram showing an example of data after filtering processing.
  • By the filtering process, some of the pixels displayed with the value "1" (white) in FIG. 11A are replaced with the value "0" (black).
  • For detecting corrosion and dew streaks, 8-neighborhood connection is effective. This is because corrosion and dew streaks spread in unspecified directions in the image, so treating all the surroundings of the pixel of interest as potential connections better reflects the actual situation.
  • With 4-neighborhood connection, the probability that elements are regarded as connected is lower than with 8-neighborhood connection. Therefore, corrosion or dew streak deterioration that should actually be evaluated as a single connected region may be split, and deteriorated regions of corrosion or dew streaks that should not be replaced with the value "0" (black) may end up being replaced with "0" (black).
  • On the other hand, 4-neighborhood connection is highly effective for problems such as geometric figures in which the direction of connection can be predicted. By outputting only the deteriorated regions that occupy a large pixel region of a certain size or more and have a large influence on the structure, such as corrosion and dew streaks, the removal unit 43 can reduce the labor of human confirmation of the output result.
  • The result output unit 50 outputs the result processed by the filtering unit 40, for example in comparison with the digital camera image.
  • The result output unit 50 can display the result processed by the filtering unit 40 on a monitor or a display.
  • FIG. 12A is a diagram showing an example of data after filtering processing.
  • FIG. 12B is a diagram showing an example of an image in which a deteriorated portion is shown.
  • the portion determined to be deteriorated is distinguished by the highlighting 73.
  • the highlighting 73 distinguishes the deteriorated portion by hatching, but the highlighting 73 is not limited to this.
  • the highlighting 73 may be colored, bordered, or a combination thereof. As a result, the user can easily recognize the portion determined to be deteriorated in the captured image by using the highlighting 73 as a clue.
  • As described above, the control unit 11 of the deterioration detection device 10 acquires a determination device machine-learned using, as teacher data, a teacher image, which is an image obtained by photographing, and a teacher label that identifies the pixels indicating a deteriorated portion in the teacher image.
  • The control unit 11 acquires a captured image obtained by photographing the structure, converts the captured image into a predetermined color space, and then uses the determination device to predict the region occupied by the deteriorated portion of the structure in the converted captured image. Further, the control unit 11 determines, among the predicted regions, a region including at least a predetermined number of pixels as the deteriorated region of the captured image. Therefore, according to the deterioration detection device 10, deterioration that may affect the structure can be detected with high accuracy.
  • FIG. 13 is a diagram showing a functional configuration of the deterioration detection device 10 according to the second embodiment of the present disclosure.
  • In the first embodiment, the filtering unit 40 performs a process of replacing with "0" the value of elements having the value "1" that belong to connected components whose number of connected elements is less than the predetermined threshold value k.
  • In the present embodiment, the threshold value of the number of connected elements used as the reference when the filtering unit 40 replaces the value of an element having the value "1" with "0" is determined by machine learning.
  • the deterioration detection device 10 according to the present embodiment has a structure in which the filtering construction unit 60 is added to the deterioration detection device 10 according to the first embodiment.
  • A detailed description of the functional configuration that is the same as that of the deterioration detection device 10 according to the first embodiment is omitted, and the configuration peculiar to the deterioration detection device 10 according to the present embodiment is mainly described.
  • The filtering construction unit 60 includes a test image input unit 61, a color space conversion unit 62, a test label input unit 63, a determination device storage unit 64, a binarization processing unit 65, a connectivity recognition unit 66, and a connection number determination unit 67.
  • the test image input unit 61 in the filtering construction unit 60 stores the captured image as matrix data of the bitmap image in the same manner as the teacher image input unit 21.
  • the image to be input is different from the image input to the teacher image input unit 21.
  • The image input to the test image input unit 61 may be different from the image input to the image input unit 31 of the determination unit 30, or may include similar images.
  • The test label input unit 63 in the filtering construction unit 60 inputs matrix data in which "1" corresponds to the deteriorated pixels, such as corrosion of the metal or dew streaks in the image, and "0" corresponds to the other pixels.
  • The number of data items input to the test image input unit 61 and the test label input unit 63 is arbitrary; however, these data must be paired.
  • The color space conversion unit 62 in the filtering construction unit 60 has a function of converting the color space of the input bitmap image in the same manner as the color space conversion unit 22 in the determination device construction unit 20.
  • What is important is to convert to the same color space as the one specified by the color space conversion unit 22 of the determination device construction unit 20 when the determination device to be stored in the determination device storage unit 64 in the filtering construction unit 60 was created.
  • By doing so, the determination accuracy is improved.
  • the determination device storage unit 64 in the filtering construction unit 60 has a function of storing the determination device created in the learning unit 24 of the determination device construction unit 20. Further, the determination device storage unit 64 has a function of performing arithmetic processing on the matrix data in the color space conversion unit 62 in the filtering construction unit 60 using the determination device.
  • the data output as a result of the arithmetic processing is one-layer matrix data, and a numerical value of 0 or more and 1 or less is input to each matrix element.
  • the numerical value from 0 to 1 is called "confidence".
  • the determination device determines that each element of the matrix data is deteriorated as it is closer to 1, and is not deteriorated as it is closer to 0. Further, the determination device storage unit 64 creates an ROC curve for the captured image input by the test image input unit 61 by using the determination result and the data input by the test label input unit 63.
  • The binarization processing unit 65 in the filtering construction unit 60 sets a threshold value t and binarizes each element of the matrix data determined by the determination device storage unit 64 in the filtering construction unit 60 to "0" or "1".
  • A plurality of threshold values t for binarization are set between 0 and 1.
  • The threshold value t is an arbitrary number of 0 or more and 1 or less.
  • An example in the case of the threshold value t = 0.7 is shown in FIG. 14.
  • FIG. 14 is a diagram showing an example of the matrix data determined by the determination device storage unit 64 and the matrix data after the binarization process.
  • The connectivity recognition unit 66 in the filtering construction unit 60 counts the number of connected elements having the value "1" in the matrix data after the binarization processing by the binarization processing unit 65 in the filtering construction unit 60.
  • A threshold value k is then set, and elements belonging to connected components whose number of connected elements is less than the threshold value k are replaced with "0".
  • the connectivity recognition unit 66 increases the value of the threshold value k from 0 and outputs the data processed by the plurality of threshold values k.
  • k is an integer value.
  • the maximum value of k is the number of elements of the matrix data.
  • The connectivity recognition unit 66 obtains the result of varying k for each set of matrix data binarized with each of the plurality of threshold values t by the determination device storage unit 64 in the filtering construction unit 60. Assuming that the number of set threshold values t is i and the number of set threshold values k is j, the total number of data sets is i × j × (the number of captured images input to the test image input unit 61).
  • The connection number determination unit 67 in the filtering construction unit 60 receives the i × j × (number of captured images input to the test image input unit 61) sets of matrix data calculated by the connectivity recognition unit 66.
  • A ROC curve is then created using the matrix data input to the test label input unit 63 as the true value (Actual) and the processed matrix data as the determination result (Predicted).
  • FIG. 15 is a diagram showing an example of a ROC curve created by a photographed image of corrosion of hardware.
  • The connectivity recognition unit 66 has a function of creating ROC curves when k is varied in various ways.
  • By selecting an appropriate k, an improvement in the TPR and a reduction in the FPR can be realized.
  • FIG. 16 is a diagram illustrating a process of obtaining a threshold value t and a threshold value k based on the ROC curve. As shown in FIG. 16, the connection number determination unit 67 determines the combination of the threshold value t and the threshold value k that minimizes the Euclidean distance between the coordinates (0,1) and the ROC curve.
  • The threshold value k determined by the connection number determination unit 67 can be input to the removal unit 43 of the filtering unit 40. Further, the threshold value t determined by the connection number determination unit 67 is input as the threshold value of the determination device stored in the determination device storage unit 33 of the determination unit 30, and the threshold value of the determination device is updated. Based on the threshold value k, the removal unit 43 replaces with "0" the elements, among the matrix data processed by the connectivity recognition unit 42 in the filtering unit 40, whose number of connected elements is less than the threshold value k. By the function of this filtering unit 40, deteriorated regions can be detected by the segmentation method using the determination device. By using the threshold value k determined by the connection number determination unit 67, the detection accuracy for deteriorated portions in the captured image input to the image input unit 31 of the determination unit 30 can be improved.
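The search for the combination of t and k can be sketched as an exhaustive grid search over the test data; this reuses the illustrative filter_small_regions sketch above, and all names here are assumptions rather than the patent's code.

```python
# Illustrative sketch: choose (t, k) whose pooled (FPR, TPR) point is closest to (0, 1).
import numpy as np

def choose_t_and_k(scores, labels, t_values, k_values):
    """scores/labels: lists of per-image confidence maps and 0/1 test label maps."""
    best_pair, best_dist = (None, None), float("inf")
    for t in t_values:
        for k in k_values:
            tp = fp = fn = tn = 0
            for score, label in zip(scores, labels):
                pred = filter_small_regions(score, h=t, k=k)   # from the sketch above
                tp += np.sum((pred == 1) & (label == 1))
                fp += np.sum((pred == 1) & (label == 0))
                fn += np.sum((pred == 0) & (label == 1))
                tn += np.sum((pred == 0) & (label == 0))
            tpr = tp / (tp + fn) if (tp + fn) else 0.0
            fpr = fp / (fp + tn) if (fp + tn) else 0.0
            dist = np.sqrt(fpr ** 2 + (1.0 - tpr) ** 2)        # distance to (0, 1)
            if dist < best_dist:
                best_pair, best_dist = (t, k), dist
    return best_pair
```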
  • FIG. 17 is a flowchart showing an example of the operation of the deterioration detection device 10 according to the embodiment of the present disclosure.
  • the operation of the deterioration detection device 10 described with reference to FIG. 17 corresponds to the deterioration detection method according to the present embodiment.
  • the operation of each step in FIG. 17 is executed based on the control of the control unit 11.
  • the program for causing the computer to execute the deterioration detection method according to the present embodiment includes each step shown in FIG.
  • In step S1, the control unit 11 performs machine learning using, as teacher data, the teacher image, which is an image obtained by photographing, and the teacher label that identifies the pixels indicating the deteriorated portion in the teacher image, and generates a determination device.
  • the detailed operation content of this step is the same as the operation of the determination device construction unit 20.
  • In step S2, the control unit 11 determines the number of connected elements that a region should have in order to be determined as a deteriorated portion. That is, the control unit 11 obtains a ROC curve using a test image different from the teacher image and a test label that identifies the pixels indicating the deteriorated portion in the test image. Based on the ROC curve, the control unit 11 determines the threshold value of the number of pixels that a region, among the regions predicted by the determination device to be occupied by the deteriorated portion, must at least contain in order to be determined as a deteriorated region. The detailed operation of this step is the same as the operation of the filtering construction unit 60. By this step, the deteriorated portion can be determined more appropriately than when the number of connected elements a region must have is set to a predetermined fixed value. Note that this step is optional and may be omitted.
  • In step S3, the control unit 11 acquires a captured image obtained by photographing the structure.
  • the detailed operation content of this step is the same as the operation of the image input unit 31.
  • In step S4, the control unit 11 converts the captured image acquired in step S3 into a predetermined color space. Specifically, the control unit 11 converts the captured image into a predetermined color space that reflects human vision better than the RGB color space, for example the HSV color space or the L*a*b* color space. This step enables accurate deterioration detection that better reflects human vision.
  • the detailed operation content of this step is the same as the operation of the color space conversion unit 32.
  • In step S5, the control unit 11 predicts, using the determination device acquired in step S1, the region occupied by the deteriorated portion of the structure in the captured image converted in step S4.
  • the detailed operation content of this step is the same as the operation of the determination device storage unit 33.
  • In step S6, the control unit 11 determines, among the regions predicted in step S5 to be occupied by the deteriorated portion, a region including at least a predetermined number of pixels as the deteriorated region of the captured image.
  • Specifically, the threshold value determined in step S2 is used as the predetermined number of pixels, and a region in the predicted regions that includes at least that number of pixels is determined as the deteriorated region of the captured image.
  • In step S7, the control unit 11 displays the captured image on a display means (result output unit 50) such as a monitor or a display while distinguishing the area included in the deteriorated region from the area not included in the deteriorated region.
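Steps S3 to S7 can be tied together in a short inference sketch; every helper name here refers to the illustrative sketches above and is an assumption, and the highlighting colour is arbitrary.

```python
# Illustrative sketch: end-to-end inference corresponding to steps S3-S7.
import numpy as np

def detect_deterioration(image_rgb: np.ndarray, model, h: float, k: int):
    """image_rgb: (H, W, 3) uint8 captured image; model: callable returning a confidence map."""
    x = to_normalized_lab(image_rgb)           # S4: convert to the predetermined color space
    score = model(x)                           # S5: per-pixel deterioration confidence
    mask = filter_small_regions(score, h, k)   # S6: keep only regions with >= k pixels
    overlay = image_rgb.copy()                 # S7: highlight the deteriorated region
    overlay[mask == 1] = (255, 0, 0)
    return mask, overlay
```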
  • the present disclosure is not limited to the above-described embodiment.
  • the plurality of blocks shown in the block diagram may be integrated, or one block may be divided.
  • The plurality of steps described in the flowchart may be executed in parallel or in a different order depending on the processing capability of the device executing each step, or as necessary, instead of being executed in chronological order as described.
  • Other changes are possible without departing from the spirit of this disclosure.

Abstract

A deterioration detection apparatus (10) comprises a control unit (11) for acquiring a determination device that has machine learned, as teaching data, a teaching image, which is an image obtained through imaging, and a teaching label that identifies pixels indicating a deteriorated portion in the teaching image, acquiring a captured image obtained by imaging a structure (31), converting the captured image to a predetermined color space (32), using the determination device to predict a region occupied by the deteriorated portion of the structure in the captured image that has been converted (33), and determining, as a deteriorated region of the captured image, a region, in the predicted region, that includes pixels of a quantity equal to or greater than a predetermined number of pixels (40).

Description

Deterioration detection device, deterioration detection method, and program
This disclosure relates to a deterioration detection device, a deterioration detection method, and a program.
There are many methods for detecting a specific image region using image processing. Among these, the segmentation approach based on deep learning has recently been regarded as effective because of its high detection accuracy and ease of construction (Non-Patent Document 1). Using this segmentation approach, it is conceivable to construct an image processing system that detects deterioration of structures, such as exposed reinforcing bars ("dew streaks") on concrete walls and corrosion of hardware installed on the skeleton, from images taken of communication manholes.
However, since communication manholes are installed underground, a large amount of dirt such as mud appears in the captured images, and the color of this dirt in the image is similar to that of deteriorated portions such as dew streaks and hardware corrosion. Therefore, if the conventional technique is applied as it is, such dirt is erroneously detected as deterioration of the structure. Further, the conventional technique detects every region that appears to be deteriorated in the captured image without omission, down to minute deterioration that has no effect on the durability of the structure. Therefore, if the conventional technique is simply applied as it is, a step in which a human reconfirms after detection whether each detected deteriorated region can actually affect the durability of the structure becomes necessary.
An object of the present disclosure is to provide a deterioration detection device, a deterioration detection method, and a program for accurately detecting deterioration that may affect a structure.
The deterioration detection device according to one embodiment includes a control unit that acquires a determination device machine-learned using, as teacher data, a teacher image, which is an image obtained by photographing, and a teacher label that identifies the pixels indicating a deteriorated portion in the teacher image; acquires a captured image obtained by photographing a structure; converts the captured image into a predetermined color space; predicts, using the determination device, the region occupied by the deteriorated portion of the structure in the converted captured image; and determines, among the predicted regions, a region including at least a predetermined number of pixels as the deteriorated region of the captured image.
In the deterioration detection method according to one embodiment, the control unit of the deterioration detection device acquires a determination device machine-learned using, as teacher data, a teacher image, which is an image obtained by photographing, and a teacher label that identifies the pixels indicating a deteriorated portion in the teacher image; acquires a captured image obtained by photographing a structure; converts the captured image into a predetermined color space; predicts, using the determination device, the region occupied by the deteriorated portion of the structure in the converted captured image; and determines, among the predicted regions, a region including at least a predetermined number of pixels as the deteriorated region of the captured image.
The program according to one embodiment causes a computer to function as the above deterioration detection device.
According to one embodiment of the present disclosure, it is possible to provide a deterioration detection device, a deterioration detection method, and a program for accurately detecting deterioration that may affect a structure.
FIG. 1 is a diagram showing the hardware configuration of a deterioration detection device according to one embodiment of the present disclosure.
FIG. 2 is a diagram showing the functional configuration of a deterioration detection device according to one embodiment of the present disclosure.
FIG. 3A is a diagram showing an example of a teacher image.
FIG. 3B is a diagram showing an example of a teacher label.
FIGS. 4A and 4B are diagrams each showing an example of a change in the detection rate depending on the type of color space.
FIG. 5 is a diagram showing a configuration example of the learning unit provided in the deterioration detection device of FIG. 2.
FIG. 6 is a diagram showing an example of a confusion matrix showing the relationship between the true values and the determination results by the determination device.
FIG. 7 is a diagram showing the relationship between the ROC curve and the AUC.
FIG. 8 is a diagram illustrating the process of obtaining the threshold value h based on the ROC curve.
FIG. 9 is a diagram showing a configuration example of the filtering unit provided in the deterioration detection device of FIG. 2.
FIG. 10A is a diagram illustrating 4-neighborhood connection.
FIG. 10B is a diagram illustrating 8-neighborhood connection.
FIG. 11A is a diagram showing an example of data after the binarization process.
FIG. 11B is a diagram showing an example of data after the filtering process.
FIG. 12A is a diagram showing an example of data after the filtering process.
FIG. 12B is a diagram showing an example of an image in which a deteriorated portion is indicated.
FIG. 13 is a diagram showing the functional configuration of a deterioration detection device according to one embodiment of the present disclosure.
FIG. 14 is a diagram showing an example of the matrix data determined by the determination device storage unit and the matrix data after the binarization process.
FIG. 15 is a diagram showing an example of a ROC curve created from captured images of hardware corrosion.
FIG. 16 is a diagram illustrating the process of obtaining the threshold value t and the threshold value k based on the ROC curve.
FIG. 17 is a flowchart showing an example of the operation of the deterioration detection device according to one embodiment of the present disclosure.
 以下、本開示の一実施形態について、図面を参照して説明する。各図面中、同一又は相当する部分には、同一符号を付している。本実施形態の説明において、同一又は相当する部分については、説明を適宜省略又は簡略化する。 Hereinafter, one embodiment of the present disclosure will be described with reference to the drawings. In each drawing, the same or corresponding parts are designated by the same reference numerals. In the description of the present embodiment, the description will be omitted or simplified as appropriate for the same or corresponding parts.
 <実施の形態1>
 画像から劣化を検出する劣化検出装置10について説明する。劣化検出装置10は、撮影画像から通信用マンホールの劣化を検出する装置およびシステムに関する。検出の対象となる劣化は、撮影画像における通信用マンホールの躯体部に発生した露筋及び通信用マンホールの内部に設置した金物の腐食等である。劣化検出装置10は、セグメンテーション手法を用いて劣化を検出する際に、検出結果から汚れ等を削除することで検出精度を向上しかつ、構造物にとって影響のない微小な劣化を削除する。これにより、劣化検出装置10は、撮影画像から劣化に該当する画素領域を精度よく検出する。図1は、本開示の一実施形態に係る劣化検出装置10のハードウェア構成を示す図である。
<Embodiment 1>
A deterioration detection device 10 for detecting deterioration from an image will be described. The deterioration detection device 10 relates to a device and a system for detecting deterioration of a communication manhole from a captured image. Deterioration to be detected is dew streaks generated in the skeleton of the communication manhole in the captured image and corrosion of hardware installed inside the communication manhole. When the deterioration detection device 10 detects deterioration by using the segmentation method, the deterioration detection device 10 improves the detection accuracy by removing dirt and the like from the detection result, and deletes minute deterioration that does not affect the structure. As a result, the deterioration detection device 10 accurately detects the pixel region corresponding to the deterioration from the captured image. FIG. 1 is a diagram showing a hardware configuration of a deterioration detection device 10 according to an embodiment of the present disclosure.
 劣化検出装置10は、1つ又は互いに通信可能な複数のサーバ装置である。劣化検出装置10は、これらに限定されず、汎用コンピュータ、専用コンピュータ、ワークステーション、PC(Personal Computer)、電子ノートパッド等の任意の電子機器であってもよい。図1に示すように、劣化検出装置10は、制御部11、記憶部12、通信部13、入力部14、出力部15、及びバス16を備える。 The deterioration detection device 10 is one or a plurality of server devices capable of communicating with each other. The deterioration detection device 10 is not limited to these, and may be any electronic device such as a general-purpose computer, a dedicated computer, a workstation, a PC (Personal Computer), and an electronic notepad. As shown in FIG. 1, the deterioration detection device 10 includes a control unit 11, a storage unit 12, a communication unit 13, an input unit 14, an output unit 15, and a bus 16.
 制御部11は、1つ以上のプロセッサを含む。一実施形態において「プロセッサ」は、汎用のプロセッサ、又は特定の処理に特化した専用のプロセッサであるが、これらに限定されない。プロセッサは、例えば、CPU(Central Processing Unit)、GPU(Graphics Processing Unit)、DSP(Digital Signal Processor)、ASIC(Application Specific Integrated Circuit)などであってもよい。制御部11は、劣化検出装置10を構成する各構成部とバス16を介して通信可能に接続され、劣化検出装置10全体の動作を制御する。 The control unit 11 includes one or more processors. In one embodiment, the "processor" is a general-purpose processor or a dedicated processor specialized for a specific process, but is not limited thereto. The processor may be, for example, a CPU (Central Processing Unit), a GPU (Graphics Processing Unit), a DSP (Digital Signal Processor), an ASIC (Application Specific Integrated Circuit), or the like. The control unit 11 is communicably connected to each component constituting the deterioration detection device 10 via the bus 16 and controls the operation of the entire deterioration detection device 10.
 記憶部12は、HDD、SSD、EEPROM、ROM、及びRAMを含む任意の記憶モジュールを含む。記憶部12は、例えば、主記憶装置、補助記憶装置、又はキャッシュメモリとして機能してもよい。記憶部12は、劣化検出装置10の動作に用いられる任意の情報を記憶する。例えば、記憶部12は、システムプログラム、アプリケーションプログラム、及び通信部13によって受信された各種情報等を記憶してもよい。記憶部12は、劣化検出装置10に内蔵されているものに限定されず、USB等のデジタル入出力ポート等によって接続されている外付けのデータベース又は外付け型の記憶モジュールであってもよい。HDDはHard Disk Driveの略称である。SSDはSolid State Driveの略称である。EEPROMはElectrically Erasable Programmable Read-Only Memoryの略称である。ROMはRead-Only Memoryの略称である。RAMはRandom Access Memoryの略称である。USBはUniversal Serial Busの略称である。 The storage unit 12 includes an arbitrary storage module including an HDD, SSD, EEPROM, ROM, and RAM. The storage unit 12 may function as, for example, a main storage device, an auxiliary storage device, or a cache memory. The storage unit 12 stores arbitrary information used for the operation of the deterioration detection device 10. For example, the storage unit 12 may store various information received by the system program, the application program, and the communication unit 13. The storage unit 12 is not limited to the one built in the deterioration detection device 10, and may be an external database or an external storage module connected by a digital input / output port such as USB. HDD is an abbreviation for Hard Disk Drive. SSD is an abbreviation for Solid State Drive. EEPROM is an abbreviation for Electrically Erasable Programmable Read-Only Memory. ROM is an abbreviation for Read-Only Memory. RAM is an abbreviation for Random Access Memory. USB is an abbreviation for Universal Serial Bus.
 通信部13は、任意の通信技術によって他の装置と通信接続可能な、任意の通信モジュールを含む。通信部13は、さらに、他の装置との通信を制御するための通信制御モジュール、及び他の装置との通信に必要となる識別情報等の通信用データを記憶する記憶モジュールを含んでもよい。 The communication unit 13 includes an arbitrary communication module capable of communicating with another device by any communication technology. The communication unit 13 may further include a communication control module for controlling communication with other devices, and a storage module for storing communication data such as identification information required for communication with other devices.
 入力部14は、ユーザの入力操作を受け付けて、ユーザの操作に基づく入力情報を取得する1つ以上の入力インタフェースを含む。例えば、入力部14は、物理キー、静電容量キー、ポインティングディバイス、出力部15のディスプレイと一体的に設けられたタッチスクリーン、又は音声入力を受け付けるマイク等であるが、これらに限定されない。 The input unit 14 includes one or more input interfaces that accept user input operations and acquire input information based on the user's operations. For example, the input unit 14 is, but is not limited to, a physical key, a capacitance key, a pointing device, a touch screen provided integrally with the display of the output unit 15, a microphone that accepts voice input, and the like.
 出力部15は、ユーザに対して情報を出力し、ユーザに通知する1つ以上の出力インタフェースを含む。例えば、出力部15は、情報を画像で出力するディスプレイ、又は情報を音声で出力するスピーカ等であるが、これらに限定されない。なお、上述の入力部14及び出力部15の少なくとも一方は、劣化検出装置10と一体に構成されてもよいし、別体として設けられてもよい。 The output unit 15 includes one or more output interfaces that output information to the user and notify the user. For example, the output unit 15 is, but is not limited to, a display that outputs information as an image, a speaker that outputs information as voice, and the like. At least one of the above-mentioned input unit 14 and output unit 15 may be integrally configured with the deterioration detection device 10 or may be provided as a separate body.
 劣化検出装置10の機能は、本実施形態に係るプログラムを、制御部11に含まれるプロセッサで実行することにより実現される。すなわち、劣化検出装置10の機能は、ソフトウェアにより実現される。プログラムは、劣化検出装置10の動作に含まれるステップの処理をコンピュータに実行させることで、当該ステップの処理に対応する機能をコンピュータに実現させる。すなわち、プログラムは、コンピュータを本実施形態に係る劣化検出装置10として機能させるためのプログラムである。プログラム命令は、必要なタスクを実行するためのプログラムコード、コードセグメントなどであってもよい。 The function of the deterioration detection device 10 is realized by executing the program according to the present embodiment on the processor included in the control unit 11. That is, the function of the deterioration detection device 10 is realized by software. The program causes the computer to execute the processing of the steps included in the operation of the deterioration detection device 10, so that the computer realizes the function corresponding to the processing of the steps. That is, the program is a program for making the computer function as the deterioration detection device 10 according to the present embodiment. The program instruction may be a program code, a code segment, or the like for executing a necessary task.
 プログラムは、コンピュータが読み取り可能な記録媒体に記録されていてもよい。このような記録媒体を用いれば、プログラムをコンピュータにインストールすることが可能である。ここで、プログラムが記録された記録媒体は、非一過性の(非一時的な)記録媒体であってもよい。非一過性の記録媒体は、CD(Compact Disk)-ROM(Read-Only Memory)、DVD(Digital Versatile Disc)-ROM、BD(Blu-ray(登録商標) Disc)-ROMなどであってもよい。また、プログラムをサーバのストレージに格納しておき、ネットワークを介して、サーバから他のコンピュータにプログラムを転送することにより、プログラムは流通されてもよい。プログラムはプログラムプロダクトとして提供されてもよい。 The program may be recorded on a computer-readable recording medium. Using such a recording medium, it is possible to install the program on the computer. Here, the recording medium on which the program is recorded may be a non-transient (non-temporary) recording medium. The non-transient recording medium may be a CD (Compact Disk)-ROM (Read-Only Memory), a DVD (Digital Versatile Disc)-ROM, a BD (Blu-ray (registered trademark) Disc)-ROM, or the like. Further, the program may be distributed by storing the program in the storage of a server and transferring the program from the server to another computer via a network. The program may be provided as a program product.
 コンピュータは、例えば、可搬型記録媒体に記録されたプログラム又はサーバから転送されたプログラムを、一旦、主記憶装置に格納する。そして、コンピュータは、主記憶装置に格納されたプログラムをプロセッサで読み取り、読み取ったプログラムに従った処理をプロセッサで実行する。コンピュータは、可搬型記録媒体から直接プログラムを読み取り、プログラムに従った処理を実行してもよい。コンピュータは、コンピュータにサーバからプログラムが転送される度に、逐次、受け取ったプログラムに従った処理を実行してもよい。このような処理は、サーバからコンピュータへのプログラムの転送を行わず、実行指示及び結果取得のみによって機能を実現する、いわゆるASP型のサービスによって実行されてもよい。「ASP」は、Application Service Providerの略称である。プログラムには、電子計算機による処理の用に供する情報であってプログラムに準ずるものが含まれる。例えば、コンピュータに対する直接の指令ではないがコンピュータの処理を規定する性質を有するデータは、「プログラムに準ずるもの」に該当する。 The computer temporarily stores the program recorded on the portable recording medium or the program transferred from the server in the main storage device. Then, the computer reads the program stored in the main storage device by the processor, and executes the processing according to the read program by the processor. The computer may read the program directly from the portable recording medium and perform processing according to the program. The computer may sequentially execute processing according to the received program each time the program is transferred from the server to the computer. Such processing may be executed by a so-called ASP type service that realizes a function only by an execution instruction and result acquisition without transferring a program from a server to a computer. "ASP" is an abbreviation for Application Service Provider. The program includes information used for processing by a computer and equivalent to the program. For example, data that is not a direct command to the computer but has the property of defining the processing of the computer corresponds to "a program-like data".
 劣化検出装置10の一部又は全ての機能が、制御部11に含まれる専用回路により実現されてもよい。すなわち、劣化検出装置10の一部又は全ての機能が、ハードウェアにより実現されてもよい。また、劣化検出装置10は単一の情報処理装置により実現されてもよいし、複数の情報処理装置の協働により実現されてもよい。 A part or all the functions of the deterioration detection device 10 may be realized by a dedicated circuit included in the control unit 11. That is, some or all the functions of the deterioration detection device 10 may be realized by hardware. Further, the deterioration detection device 10 may be realized by a single information processing device or may be realized by the cooperation of a plurality of information processing devices.
 図2は、本開示の実施の形態1に係る劣化検出装置10の機能構成を示す図である。劣化検出装置10は、判定器構築部20、判定部30、及びフィルタリング部40を備える。判定器構築部20は、教師画像入力部21、色空間変換部22、教師ラベル入力部23、及び学習部24を備える。判定部30は、画像入力部31、色空間変換部32、及び判定器格納部33を備える。 FIG. 2 is a diagram showing a functional configuration of the deterioration detection device 10 according to the first embodiment of the present disclosure. The deterioration detection device 10 includes a determination device construction unit 20, a determination unit 30, and a filtering unit 40. The determination device construction unit 20 includes a teacher image input unit 21, a color space conversion unit 22, a teacher label input unit 23, and a learning unit 24. The determination unit 30 includes an image input unit 31, a color space conversion unit 32, and a determination device storage unit 33.
 判定器構築部20における教師画像入力部21は、判定器を構築するための学習用画像(教師画像)となる撮影画像の入力を行う。撮影画像とは、デジタルカメラ等により撮影された静止画像のデジタルデータのことを指す。教師画像入力部21では、撮影画像はビットマップ画像として入力され、多次元配列のデータが格納される。例えば、静止画像がグレー画像であれば1層の行列データとして格納され、カラー画像であれば複数層の行列データとして格納される。ビットマップ画像の解像度は任意である。 The teacher image input unit 21 in the determination device construction unit 20 inputs a captured image that is a learning image (teacher image) for constructing the determination device. The captured image refers to digital data of a still image captured by a digital camera or the like. In the teacher image input unit 21, the captured image is input as a bitmap image, and the data of the multidimensional array is stored. For example, if the still image is a gray image, it is stored as one-layer matrix data, and if it is a color image, it is stored as a plurality of layers of matrix data. The resolution of the bitmap image is arbitrary.
 判定器構築部20における教師ラベル入力部23は、教師ラベルの行列データを入力する機能を持つ。図3Aは、教師画像の一例を示す図である。図3Bは、教師ラベルの一例を示す図である。図3A及び図3Bの例では、教師画像となる撮影画像において、劣化に該当する金物の腐食の画素に「1」(白)、それ以外の画素に「0」(黒)を対応させた教師ラベルの行列データを作成している。教師ラベル入力部23では、このラベルの行列データを入力する。図3A及び図3Bは、劣化を金物の腐食とした事例を示しているが、劣化の種類は金物の腐食に限られない。他の種類の劣化として、例えば、コンクリート内部の鉄筋の腐食膨張により表面コンクリートが押し出された上に剥離してしまい、腐食鉄筋が露わになった状態(露筋という)をも劣化として定義して入力することができる。教師画像入力部21と教師ラベル入力部23に入力するデータ数は任意であるが、両データは必ずペアになっている必要がある。教師画像として用いる撮影画像には、既知の劣化が含まれている。教師ラベルは、ペアとなる撮影画像中の劣化に該当する画素を「1」に、それ以外を「0」に対応させたものである。 The teacher label input unit 23 in the determination device construction unit 20 has a function of inputting matrix data of the teacher label. FIG. 3A is a diagram showing an example of a teacher image. FIG. 3B is a diagram showing an example of a teacher label. In the examples of FIGS. 3A and 3B, matrix data of a teacher label is created in which, in the captured image serving as the teacher image, the pixels of hardware corrosion corresponding to deterioration are associated with "1" (white) and the other pixels are associated with "0" (black). The teacher label input unit 23 inputs the matrix data of this label. FIGS. 3A and 3B show an example in which the deterioration is corrosion of hardware, but the type of deterioration is not limited to corrosion of hardware. As another type of deterioration, for example, a state in which the surface concrete is pushed out and peeled off by the corrosive expansion of a reinforcing bar inside the concrete so that the corroded reinforcing bar is exposed (referred to as a dew streak) can also be defined and input as deterioration. The number of data items to be input to the teacher image input unit 21 and the teacher label input unit 23 is arbitrary, but the two sets of data must be paired. The captured image used as the teacher image contains known deterioration. The teacher label associates "1" with the pixels corresponding to the deterioration in the paired captured image and "0" with the other pixels.
 判定器構築部20における色空間変換部22は、教師画像入力部21に入力されたビットマップ画像の色空間を変換する機能を持つ。色空間として、一般的なデジタルカメラ画像で撮影されたカラー画像は、RGB(Red, Green, Blue)色空間の数値配列にて画像入力部31に行列データとして格納されている。色空間変換部22は、このようなRGB色空間の数値配列の行列データを、例えば、HSV(Hue, Saturation, Value)又はL*a*b*等の他の色空間の行列データに変換することができる。色空間を変換することによって、判定器の判定精度を向上することができる。RGB色空間は、人間の知覚より出力機器の都合が優先されているが、L*a*b*色空間は人間の感覚に合わせて考案された色空間である。劣化において金物の腐食は、進行途中の微小な錆、錆っぽい汚れ、又は、錆汁等を撮影画像内に多数含むため、腐食している画素領域を明確に特定しにくい。そのため、教師ラベルを作成する際には、人間の視覚に大きく依存することが重要である。露筋においても、腐食鉄筋の領域は、周辺のコンクリートへの錆汁又は錆による凹凸を含むため、明確に画素領域を特定できずに、人間の視覚に依存した教師ラベルの作成になる。よって、金物の腐食と露筋の色空間には人間の視覚がより表現されるL*a*b*色空間の方が有効になる。 The color space conversion unit 22 in the determination device construction unit 20 has a function of converting the color space of the bitmap image input to the teacher image input unit 21. As for the color space, a color image taken by a general digital camera is stored as matrix data in the image input unit 31 as a numerical array in the RGB (Red, Green, Blue) color space. The color space conversion unit 22 can convert such matrix data of the numerical array in the RGB color space into matrix data of another color space such as HSV (Hue, Saturation, Value) or L*a*b*. By converting the color space, the determination accuracy of the determination device can be improved. In the RGB color space, the convenience of the output device is prioritized over human perception, whereas the L*a*b* color space is a color space devised in accordance with the human senses. As for deterioration, corrosion of hardware includes a large amount of minute rust in progress, rust-like stains, rust juice, and the like in the captured image, so that it is difficult to clearly identify the corroded pixel region. Therefore, it is important to rely heavily on human vision when creating the teacher label. Also for dew streaks, the region of the corroded reinforcing bar includes rust juice on the surrounding concrete or unevenness due to rust, so that the pixel region cannot be clearly specified and the creation of the teacher label depends on human vision. Therefore, the L*a*b* color space, in which human vision is better expressed, is more effective for the color space of corrosion of hardware and dew streaks.
 図4A及び図4Bは、色空間の種類に応じた検出率の変化の一例を示す図である。図4A及び図4Bは、RGB色空間、HSV色空間、及び、L*a*b*色空間の3種類の色空間で判定器を作成して、劣化の検出率を検証した結果を示している。図4Aは露筋の画像320枚で検出率を検証した結果を示し、図4Bは金物の腐食画像400枚で検出率を検証した結果を示す。図4A及び図4Bは、露筋と金物の腐食とのそれぞれにおいて、セグメンテーション手法による判定器を作成し、作成した判定器を用いて検出率の評価を行った結果を示す。図4A及び図4Bに示すように、金物の腐食領域と露筋領域の検出において、L*a*b*色空間に撮影画像の行列データを変換した場合が最も検出率が高かった。ここで、検出率とは判定器により判定された劣化領域と人間が真値として与えた劣化領域の画素領域との一致率である。本ケースの劣化では、L*a*b*色空間を用いたが、判定器の学習および構築を行う際に、判定精度が最も向上する色空間を任意に設定することができる。 FIGS. 4A and 4B are diagrams showing an example of the change in the detection rate according to the type of color space. FIGS. 4A and 4B show the results of verifying the deterioration detection rate by creating determination devices in three types of color spaces: the RGB color space, the HSV color space, and the L*a*b* color space. FIG. 4A shows the result of verifying the detection rate with 320 images of dew streaks, and FIG. 4B shows the result of verifying the detection rate with 400 images of corrosion of hardware. FIGS. 4A and 4B show the results of creating a determination device by the segmentation method for each of dew streaks and corrosion of hardware and evaluating the detection rate using the created determination device. As shown in FIGS. 4A and 4B, in the detection of corroded regions of hardware and dew streak regions, the detection rate was highest when the matrix data of the captured image was converted into the L*a*b* color space. Here, the detection rate is the matching rate between the deteriorated region determined by the determination device and the pixel region of the deteriorated region given as a true value by a human. For the deterioration in this case, the L*a*b* color space was used, but when learning and constructing the determination device, the color space with which the determination accuracy is most improved can be set arbitrarily.
 色空間変換を行うにあたり、判定器構築部20における色空間変換部22は、格納されている行列データを正規化する処理を行う。ここでの正規化とは、行列データの各要素の値が0以上1以下に収まっていない場合、行列の各層において0以上1以下に各要素が収まるように数値を変換することである。例えば、RGB色空間の場合、3層の行列データとして格納されているが、各層における要素の数値は0以上255以下である。そこで、色空間変換部22は、各要素の数値を255で除算する処理により正規化する。色空間変換部22は、各層の要素の数値が0以上1以下となるように、各層に入力されている要素の数値範囲で処理を定める。 In performing the color space conversion, the color space conversion unit 22 in the determination device construction unit 20 performs a process of normalizing the stored matrix data. The normalization here means that when the value of each element of the matrix data does not fit within 0 or more and 1 or less, the numerical value is converted so that each element fits within 0 or more and 1 or less in each layer of the matrix. For example, in the case of the RGB color space, it is stored as matrix data of three layers, but the numerical value of the element in each layer is 0 or more and 255 or less. Therefore, the color space conversion unit 22 normalizes by the process of dividing the numerical value of each element by 255. The color space conversion unit 22 determines the processing within the numerical range of the elements input to each layer so that the numerical value of the element of each layer is 0 or more and 1 or less.
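 As a non-limiting illustration of the conversion and normalization described above, the following is a minimal Python sketch assuming NumPy, Pillow, and scikit-image are available; the file name and variable names are examples only and are not part of the embodiment.

    import numpy as np
    from PIL import Image
    from skimage import color

    # Load a captured image as multi-layer matrix data (H x W x 3 for a color image).
    rgb = np.asarray(Image.open("manhole.jpg").convert("RGB"))

    # Normalize each element to the range 0 to 1 (RGB values are 0 to 255).
    rgb01 = rgb.astype(np.float32) / 255.0

    # Convert to another color space, e.g. HSV or L*a*b*.
    hsv = color.rgb2hsv(rgb01)   # each channel already lies in [0, 1]
    lab = color.rgb2lab(rgb01)   # L in [0, 100], a and b roughly in [-128, 127]

    # Rescale the L*a*b* channels so that every element again lies in [0, 1].
    lab01 = np.empty_like(lab)
    lab01[..., 0] = lab[..., 0] / 100.0
    lab01[..., 1] = (lab[..., 1] + 128.0) / 255.0
    lab01[..., 2] = (lab[..., 2] + 128.0) / 255.0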
 判定器構築部20における学習部24は、教師画像と教師ラベルから劣化を検出するための判定器を作成するものである。学習部24は、色空間変換部22と教師ラベル入力部23のそれぞれが出力する行列データを入力して判定器を構築する。学習部24は、構築した判定器を判定部30に出力するとともに、後述する劣化と非劣化を区別するための閾値hをフィルタリング部40に出力する。 The learning unit 24 in the determination device construction unit 20 creates a determination device for detecting deterioration from the teacher image and the teacher label. The learning unit 24 constructs a determination device by inputting matrix data output by each of the color space conversion unit 22 and the teacher label input unit 23. The learning unit 24 outputs the constructed determination device to the determination unit 30, and also outputs the threshold value h for distinguishing between deterioration and non-deterioration, which will be described later, to the filtering unit 40.
 図5は、図2の劣化検出装置10が備える学習部24の構成例を示す図である。学習部24は、損失関数決定部241及び学習進行部242を備える。 FIG. 5 is a diagram showing a configuration example of the learning unit 24 included in the deterioration detection device 10 of FIG. The learning unit 24 includes a loss function determination unit 241 and a learning progress unit 242.
 損失関数決定部241は、学習進行部242におけるニューラルネットワークの予測精度の評価に用いる損失関数を指定する。指定できる損失関数は交差エントロピー誤差、二乗誤差等を任意に設定できるが、金物の腐食及び露筋を含む劣化の検出にはAUC(Area Under the Curve)最大化及びF1スコアが有効となる。 The loss function determination unit 241 specifies a loss function used for evaluating the prediction accuracy of the neural network in the learning progress unit 242. The loss function that can be specified can arbitrarily set the cross entropy error, square error, etc., but AUC (Area Under the Curve) maximization and F1 score are effective for detecting deterioration including corrosion of hardware and dew streaks.
 図6は、真値と判定器による判定結果との関係を示す混同行列の一例を示す図である。混同行列(Confusion Matrix)とは、二値分類問題で出力されたクラス分類の結果をまとめた行列をいう。二値分類問題とは、ある命題(例えば、その画素は劣化に当たるか)について真(Positive)又は偽(False)を判定する問題である。二値分類に関する判定器が出した結果(予測)と実際の結果には、TP(True Positive)、TN(True Negative)、FP(False Positive)、及びFN(False Negative)の4つのパターンがある。TPは、判定器による予測がPositive(例えば、その画素は劣化に当たる)である場合において、予測が実際を正しく示している(True)、すなわち、実際もPositiveである場合をいう。TNは、判定器による予測がNegative(例えば、その画素は劣化に当たらない)である場合において、予測が実際を正しく示している(True)、すなわち、実際もNegativeである場合をいう。FPは、判定器による予測がPositive(例えば、その画素は劣化に当たる)である場合において、予測が実際を正しく示していない(False)、すなわち、実際はNegative(例えば、その画素は劣化に当たらない)である場合をいう。FNは、判定器による予測がNegative(例えば、その画素は劣化に当たらない)である場合において、予測が実際を正しく示していない(False)、すなわち、実際はPositive(例えば、その画素は劣化に当たる)である場合をいう。 FIG. 6 is a diagram showing an example of a confusion matrix showing the relationship between true values and determination results by the determination device. A confusion matrix is a matrix that summarizes the results of class classification output in a binary classification problem. A binary classification problem is a problem of determining whether a certain proposition (for example, whether a pixel corresponds to deterioration) is true (Positive) or false (False). Between the result (prediction) output by a determination device for binary classification and the actual result, there are four patterns: TP (True Positive), TN (True Negative), FP (False Positive), and FN (False Negative). TP refers to the case where the prediction by the determination device is Positive (for example, the pixel corresponds to deterioration) and the prediction correctly indicates the actual state (True), that is, the actual state is also Positive. TN refers to the case where the prediction by the determination device is Negative (for example, the pixel does not correspond to deterioration) and the prediction correctly indicates the actual state (True), that is, the actual state is also Negative. FP refers to the case where the prediction by the determination device is Positive (for example, the pixel corresponds to deterioration) but the prediction does not correctly indicate the actual state (False), that is, the actual state is Negative (for example, the pixel does not correspond to deterioration). FN refers to the case where the prediction by the determination device is Negative (for example, the pixel does not correspond to deterioration) but the prediction does not correctly indicate the actual state (False), that is, the actual state is Positive (for example, the pixel corresponds to deterioration).
 制御部11は、撮影画像に対して、実際を反映した教師ラベルとして、金物の腐食及び露筋等の劣化の画素を「1」、それ以外の非劣化の画素を「0」とラベリング処理を施した画像に対して、劣化の画素をPositive、非劣化の画素をNegativeとする。この場合、図6に示すような混合行列で真値(実際:Actual)と判定器による判定結果(予測:Predicted)の関係性を示すことができる。判定器による判定結果として、判定器により出力された行列データの各画素には0以上1以下の数値が入力される。0以上1以下の値を閾値とする閾値処理によって「0」もしくは「1」に二値化することよって、判定器による判定結果は、「1」となった画素を劣化と判定するPositive、及び、「0」となった画素を非劣化の画素と判定するNegativeに分類することができる。 For a captured image to which labeling processing has been applied as a teacher label reflecting the actual state, with pixels of deterioration such as corrosion of hardware and dew streaks set to "1" and the other non-deteriorated pixels set to "0", the control unit 11 treats the deteriorated pixels as Positive and the non-deteriorated pixels as Negative. In this case, the relationship between the true value (Actual) and the determination result by the determination device (Predicted) can be shown by a confusion matrix as shown in FIG. 6. As the determination result by the determination device, a numerical value of 0 or more and 1 or less is entered in each pixel of the matrix data output by the determination device. By binarizing this value to "0" or "1" through threshold processing using a threshold value between 0 and 1, the determination result by the determination device can be classified into Positive, in which a pixel that becomes "1" is determined to be deteriorated, and Negative, in which a pixel that becomes "0" is determined to be a non-deteriorated pixel.
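 The four counts can be obtained directly from a binarized prediction and the teacher label. Below is a minimal NumPy sketch given only as an illustration; the default threshold value 0.5 and the array names are assumptions, not values fixed by the embodiment.

    import numpy as np

    def confusion_counts(pred_scores, labels, threshold=0.5):
        """Count TP, FP, TN, FN between per-pixel scores in [0, 1] and a 0/1 teacher label."""
        pred = (pred_scores >= threshold).astype(np.uint8)   # binarize the prediction
        tp = int(np.sum((pred == 1) & (labels == 1)))
        fp = int(np.sum((pred == 1) & (labels == 0)))
        tn = int(np.sum((pred == 0) & (labels == 0)))
        fn = int(np.sum((pred == 0) & (labels == 1)))
        return tp, fp, tn, fn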
 図7は、ROC曲線とAUCとの関係を示す図である。混合行列を用いて判定器による判定結果の性能を示した曲線は、ROC曲線(Receiver Operating Characteristic curve、受信者動作特性曲線)と呼ばれる。ROC曲線は、劣化又は非劣化を判定するための閾値を変化させた場合における、FP率とTP率との関係を示す曲線である。FP率(False Positive Rate)は、実際にはNegativeと判定すべきデータのうち、誤ってPositiveと判定したデータの割合であり、この値が小さいほど判定器の性能が高い。FP率は、FPのサンプル数/(FPのサンプル数+TNのサンプル数)と表される。TP率(True Positive Rate)は、実際にはPositiveと判定すべきデータのうち、正しくPositiveと判定できたデータの割合であり、この値が大きいほど判定器の性能が高い。TP率は、TPのサンプル数/(TPのサンプル数+FNのサンプル数)と表される。 FIG. 7 is a diagram showing the relationship between the ROC curve and the AUC. The curve showing the performance of the judgment result by the judgment device using the mixed matrix is called a ROC curve (Receiver Operating Characteristic curve, receiver operating characteristic curve). The ROC curve is a curve showing the relationship between the FP rate and the TP rate when the threshold value for determining deterioration or non-deterioration is changed. The FP rate (False Positive Rate) is the ratio of the data that is erroneously determined to be Positive among the data that should be determined to be Negative, and the smaller this value, the higher the performance of the determination device. The FP rate is expressed as the number of FP samples / (the number of FP samples + the number of TN samples). The TP rate (True Positive Rate) is the ratio of the data that can be correctly determined to be Positive among the data that should be actually determined to be Positive, and the larger this value is, the higher the performance of the determination device is. The TP rate is expressed as the number of TP samples / (the number of TP samples + the number of FN samples).
 横軸にFP率、縦軸にTP率としてROC曲線を描いた場合において、ROC曲線をFP率軸に対して0から1まで積分した値は、AUCと呼ばれる。前述のように、FP率が小さいほど、また、TP率が大きいほど、判定器の性能が高いため、AUCの大きさは判定器の性能を示す指標の一つとなる。このAUCの面積が最大化するように損失関数を設定することをAUC最大化という。金物の腐食及び露筋等のような劣化現象は、各撮影画像中の画素領域としては比較的少ない領域になる。劣化している領域をPositive、非劣化の領域をNegativeとするとPositiveとNegativeのデータ量がアンバランスになる現象をデータの不均衡問題という。このデータの不均衡問題に対しては、Negativeの画素と比べて割合の少ないPositiveとなる画素が検出されるようにTP率を向上させつつ、FP率を抑制するようなAUC最大化を損失関数に設定することが有効となる。 When the ROC curve is drawn with the FP rate on the horizontal axis and the TP rate on the vertical axis, the value obtained by integrating the ROC curve from 0 to 1 along the FP rate axis is called the AUC. As described above, the smaller the FP rate and the larger the TP rate, the higher the performance of the determination device; therefore, the size of the AUC is one index indicating the performance of the determination device. Setting the loss function so that the area of the AUC is maximized is called AUC maximization. Deterioration phenomena such as corrosion of hardware and dew streaks occupy a relatively small pixel region in each captured image. When the deteriorated region is regarded as Positive and the non-deteriorated region as Negative, the phenomenon in which the amounts of Positive and Negative data become unbalanced is called the data imbalance problem. For this data imbalance problem, it is effective to set, as the loss function, AUC maximization that improves the TP rate so that Positive pixels, whose proportion is small compared to Negative pixels, are detected, while suppressing the FP rate.
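 As an illustrative sketch only, the ROC curve and AUC for a set of per-pixel scores can be computed, for example, with scikit-learn; the roc_curve and auc functions below are those of scikit-learn, while the function and variable names of the wrapper are assumptions.

    import numpy as np
    from sklearn.metrics import roc_curve, auc

    def roc_and_auc(pred_scores, labels):
        """pred_scores: per-pixel outputs in [0, 1]; labels: 0/1 teacher label of the same shape."""
        fpr, tpr, thresholds = roc_curve(labels.ravel(), pred_scores.ravel())
        return fpr, tpr, thresholds, auc(fpr, tpr)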
 F1スコアとは、式(1)に示す数式である。
  F1=(2Recall×Precision)/(Recall+Precision)             (1)
ただし、Recall(再現率)及びPrecision(適合率)は式(2)(3)により示される。
  Recall=TP/(TP+FN)                        (2)
  Precision=TP/(TP+FP)                      (3)
 F1スコアも同様にデータの不均衡問題に対して有効であり、このF1スコアを損失関数に設定することが今回のように構造物の金物の腐食又は露筋等の劣化といった、画像全体中において少ない領域の検出では有効となる。
The F1 score is the formula shown in equation (1).
  F1 = (2 × Recall × Precision) / (Recall + Precision)   (1)
where Recall and Precision are expressed by equations (2) and (3).
  Recall = TP / (TP + FN)   (2)
  Precision = TP / (TP + FP)   (3)
The F1 score is likewise effective for the data imbalance problem, and setting this F1 score as the loss function is effective for detecting regions that occupy only a small part of the whole image, such as corrosion of hardware or dew streaks of structures, as in the present case.
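 A differentiable approximation of the F1 score is commonly used when it serves as a loss function for a segmentation network. The following is a minimal sketch of such a soft F1 (Dice-like) loss, written here under the assumption that PyTorch is used; it is an illustration, not the specific loss of the embodiment.

    import torch

    def soft_f1_loss(probs: torch.Tensor, targets: torch.Tensor, eps: float = 1e-7) -> torch.Tensor:
        """probs: predicted per-pixel probabilities in [0, 1]; targets: 0/1 teacher labels."""
        tp = (probs * targets).sum()
        fp = (probs * (1.0 - targets)).sum()
        fn = ((1.0 - probs) * targets).sum()
        recall = tp / (tp + fn + eps)        # equation (2), in soft form
        precision = tp / (tp + fp + eps)     # equation (3), in soft form
        f1 = (2.0 * recall * precision) / (recall + precision + eps)   # equation (1)
        return 1.0 - f1                      # minimize 1 - F1 to maximize the F1 score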
 学習部24における学習進行部242は、色空間変換部22に入力されている行列データと、教師ラベル入力部23に入力されている行列データを用いてニューラルネットワークモデルのディープラーニングを用いた学習を実施する。学習進行部242は、損失関数決定部241において決定された損失関数により示される指標が良くなるように学習を進めていく。例えば、決定された損失関数がAUC最大化である場合、学習の進行に応じてAUCの領域の面積が大きくなっていく。AUCの面積が大きいことは、検出器として性能が良いことに相当する。損失関数としてF1スコアを用いる場合も、学習の進行に応じて、F1スコアの値が向上する。学習進行部242において、一定以上学習が終わると、それ以上学習しても、損失関数はほとんど変化しなくなる。例えば、損失関数としてAUCを使用している場合、それ以上学習しても、AUCの面積はほとんど変化しなくなる。これは、AUCの面積が既に十分大きいことに相当する。損失関数としてF1スコアを用いた場合も、一定以上学習が終わるとF1スコアはほとんど変化しなくなる。そこで、学習進行部242は、損失関数がほとんど変化しなくなった場合、例えば、損失関数の変化率が予め定められた閾値未満になるまで学習を進める。このように、学習進行部242は、それ以上学習を進めても、損失関数決定部241により決定された損失関数の値が実質的に変わらなくなるまで十分学習して、学習済みモデルの判定器を構築する。ここでのディープラーニングのモデルは、ディープラーニングのU-net及びSegNet等のセグメンテーション手法が該当する。このセグメンテーション手法により、金物の腐食及び露筋等の劣化の領域を画像中から検出することが可能となる。一定以上の大きさの金物の腐食及び露筋等の劣化が発生している場合、補修又は補強を実施するため、腐食及び露筋等の劣化の有無の検出だけでなく、劣化の領域を検出することが重要である。さらに学習進行部242は、色空間変換部22が出力する行列データと教師ラベル入力部23が出力する行列データを用いて学習を行い、判定器の構築を行う。 The learning progress unit 242 in the learning unit 24 carries out learning using deep learning of a neural network model, using the matrix data input to the color space conversion unit 22 and the matrix data input to the teacher label input unit 23. The learning progress unit 242 advances the learning so that the index indicated by the loss function determined by the loss function determination unit 241 improves. For example, when the determined loss function is AUC maximization, the area of the AUC region increases as the learning progresses. A large AUC area corresponds to good performance as a detector. When the F1 score is used as the loss function as well, the value of the F1 score improves as the learning progresses. In the learning progress unit 242, once a certain amount of learning has been completed, the loss function hardly changes even if learning is continued further. For example, when AUC is used as the loss function, the area of the AUC hardly changes with further learning. This corresponds to the AUC area already being sufficiently large. Also when the F1 score is used as the loss function, the F1 score hardly changes once a certain amount of learning has been completed. Therefore, the learning progress unit 242 continues the learning until the loss function hardly changes, for example, until the rate of change of the loss function becomes less than a predetermined threshold value. In this way, the learning progress unit 242 learns sufficiently, until the value of the loss function determined by the loss function determination unit 241 no longer substantially changes with further learning, and constructs a determination device as a trained model. The deep learning model here corresponds to a segmentation method such as U-net or SegNet of deep learning. This segmentation method makes it possible to detect regions of deterioration such as corrosion of hardware and dew streaks from an image. When corrosion of hardware, dew streaks, or other deterioration of a certain size or more has occurred, repair or reinforcement is carried out, so that it is important to detect not only the presence or absence of deterioration such as corrosion and dew streaks but also the region of the deterioration. Furthermore, the learning progress unit 242 performs learning using the matrix data output by the color space conversion unit 22 and the matrix data output by the teacher label input unit 23, and constructs the determination device.
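 Purely as an illustrative sketch of the stopping criterion described above, a training loop might monitor the relative change of the loss per epoch and stop when it falls below a predetermined threshold. The sketch below assumes PyTorch, a segmentation model (for example a U-Net) named model, a data loader named loader, and the soft_f1_loss sketched above; none of these names, nor the numeric defaults, are fixed by the embodiment.

    import torch

    def train_until_loss_stabilizes(model, loader, loss_fn, lr=1e-3, change_threshold=1e-4, max_epochs=200):
        optimizer = torch.optim.Adam(model.parameters(), lr=lr)
        previous = None
        for epoch in range(max_epochs):
            total = 0.0
            for images, labels in loader:             # converted images and teacher labels
                optimizer.zero_grad()
                probs = torch.sigmoid(model(images))  # per-pixel scores in [0, 1]
                loss = loss_fn(probs, labels)
                loss.backward()
                optimizer.step()
                total += loss.item()
            epoch_loss = total / len(loader)
            # Stop when the loss function hardly changes any more.
            if previous is not None and abs(previous - epoch_loss) / max(previous, 1e-12) < change_threshold:
                break
            previous = epoch_loss
        return model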
 さらに、この学習進行部242は、色空間変換部22が出力する行列データを構築した судって...
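 Given the FP rates, TP rates, and candidate thresholds of an ROC curve (for example as returned by the roc_and_auc sketch above), the threshold h closest to the point (0, 1) can be chosen as in the following illustrative NumPy sketch; the function name is an assumption.

    import numpy as np

    def choose_threshold_h(fpr, tpr, thresholds):
        """Pick the threshold whose ROC point is nearest to the ideal corner (FP rate 0, TP rate 1)."""
        distances = np.sqrt(fpr ** 2 + (1.0 - tpr) ** 2)   # Euclidean distance to (0, 1)
        return thresholds[np.argmin(distances)]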
 判定部30の画像入力部31は、撮影画像の入力を行う。画像入力部31は、教師画像入力部21と同様に撮影画像をビットマップ画像の行列データとして格納する。ここで、入力する画像は、教師画像入力部21に入力した画像とは異なる画像である。 The image input unit 31 of the determination unit 30 inputs a captured image. The image input unit 31 stores the captured image as matrix data of the bitmap image in the same manner as the teacher image input unit 21. Here, the image to be input is an image different from the image input to the teacher image input unit 21.
 判定部30の色空間変換部32は、判定器構築部20における色空間変換部22と同様の処理で判定部30の画像入力部31で入力されたビットマップ画像の色空間を変換する機能を持つ。ここで、重要なのは、判定器構築部20の色空間変換部22で指定した同様の色空間に変換を行うことである。判定器構築部20の学習部24で判定器を作成する際に用いた色空間と同様の色空間配列にすることよって、判定精度が向上する。 The color space conversion unit 32 of the determination unit 30 has a function of converting the color space of the bitmap image input by the image input unit 31 of the determination unit 30 by the same processing as the color space conversion unit 22 in the determination unit construction unit 20. Have. Here, what is important is to perform conversion to the same color space specified by the color space conversion unit 22 of the determination device construction unit 20. The determination accuracy is improved by using the same color space arrangement as the color space used when creating the determination device in the learning unit 24 of the determination device construction unit 20.
 判定部30における判定器格納部33は、判定器構築部20にて生成された判定器を格納する。判定器格納部33は、色空間変換部32から入力する行列データに対して、判定器を用いて演算処理を行う。演算処理の結果により出力されるデータは、画像入力部31に入力された撮影画像の各画素が劣化の領域か否かを推定した結果を示す、1層の行列データである。判定器格納部33が出力する劣化の領域か否かを推定した結果は、各画素について0以上1以下の値を示すものとなる。 The determination device storage unit 33 in the determination unit 30 stores the determination device generated by the determination device construction unit 20. The determination device storage unit 33 performs arithmetic processing on the matrix data input from the color space conversion unit 32 using the determination device. The data output as a result of the arithmetic processing is one-layer matrix data showing the result of estimating whether or not each pixel of the captured image input to the image input unit 31 is a deteriorated region. The result of estimating whether or not it is the deteriorated region output by the determination device storage unit 33 shows a value of 0 or more and 1 or less for each pixel.
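 As a schematic illustration of this arithmetic processing (again assuming PyTorch and a trained segmentation model such as the one sketched earlier), the stored determination device can be applied to the converted matrix data of a captured image to obtain a one-layer matrix of values between 0 and 1; the function and variable names are assumptions.

    import numpy as np
    import torch

    def predict_deterioration_map(model, converted_image):
        """converted_image: H x W x C matrix data in the same color space used for training."""
        model.eval()
        x = torch.from_numpy(converted_image.astype(np.float32)).permute(2, 0, 1).unsqueeze(0)
        with torch.no_grad():
            scores = torch.sigmoid(model(x))       # per-pixel estimate, 0 (non-deteriorated) to 1 (deteriorated)
        return scores.squeeze().cpu().numpy()      # one-layer H x W matrix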
 図9は、図2の劣化検出装置10が備えるフィルタリング部40の構成例を示す図である。フィルタリング部40は、二値化処理部41、連結性認識部42、及び除去部43を備える。 FIG. 9 is a diagram showing a configuration example of the filtering unit 40 included in the deterioration detection device 10 of FIG. The filtering unit 40 includes a binarization processing unit 41, a connectivity recognition unit 42, and a removal unit 43.
 二値化処理部41は、判定部30における判定器格納部33から出力される行列データの二値化処理を行う機能を持つ。二値化処理を行う際の閾値は学習部24における学習進行部242で出力される閾値hを用いる。すなわち、二値化処理部41は、行列データにより示される推定結果を示す値が閾値h以上の画素の値を1とし、閾値h未満の画素の値を0とする。 The binarization processing unit 41 has a function of performing binarization processing of the matrix data output from the determination device storage unit 33 in the determination unit 30. As the threshold value when performing the binarization process, the threshold value h output by the learning progress unit 242 in the learning unit 24 is used. That is, the binarization processing unit 41 sets the value of the pixel whose value indicating the estimation result indicated by the matrix data is equal to or greater than the threshold value h to 1, and sets the value of the pixel whose value is less than the threshold value h to 0.
 フィルタリング部40における連結性認識部42は、二値化処理後の行列データに対し、「1」が入力されている要素の連結要素数をカウントする。「連結要素数」は、同じ値を有する要素が連結して形成される領域に含まれる要素数と定義される。同じ値を有する要素が連結して形成される領域を「連結要素」という。「連結」とは、図10Aに示すように注目要素71の周辺を囲む8要素が注目要素と同じ数値を有しているかどうかによって定義される。注目要素に対して上下左右の要素に着眼し、上下左右の画素のいずれかが注目画素と同様の場合は「4近傍連結」、注目要素に対して周辺を囲む8要素に着眼する場合は「8近傍連結」として定義する。ここでは、二値化処理後に「1」が入力されている要素を注目要素とする。注目要素を起点として、どの程度の要素が連結しているかのカウントを行う。4近傍か8近傍かの設定は、ユーザの設定等に応じて任意に行うことができる。カウントが完了すると、連結性認識部42は、「1」が入力されている要素の連結要素数に関する数値データを持つこととなる。 The connectivity recognition unit 42 in the filtering unit 40 counts, for the matrix data after the binarization processing, the number of connected elements of the elements in which "1" is entered. The "number of connected elements" is defined as the number of elements included in a region formed by connecting elements having the same value. A region formed by connecting elements having the same value is called a "connected element". "Connection" is defined by whether or not the eight elements surrounding the element of interest 71 have the same numerical value as the element of interest, as shown in FIG. 10A. When only the elements above, below, to the left, and to the right of the element of interest are considered, the case where any of these pixels has the same value as the pixel of interest is defined as "4-neighborhood connection"; when the eight elements surrounding the element of interest are considered, it is defined as "8-neighborhood connection". Here, an element in which "1" is entered after the binarization processing is taken as the element of interest. Starting from the element of interest, the number of connected elements is counted. Whether 4-neighborhood or 8-neighborhood is used can be set arbitrarily according to the user's setting or the like. When the counting is completed, the connectivity recognition unit 42 has numerical data on the number of connected elements of the elements in which "1" is entered.
 図10Bに8近傍連結の場合の例を示す。注目要素71に対しては、右上の要素のみが連結している。注目要素の右上の要素に対しては、右の要素が連結している(左下の要素71はすでにカウント済み)。その右の要素に対しては、左の要素(これもすでにカウント済み)以外に連結している要素はない。従って、図10Bの例における連結要素数は3となる。 FIG. 10B shows an example in the case of 8 neighborhood connection. Only the upper right element is connected to the attention element 71. The element on the right is connected to the element on the upper right of the element of interest (the element 71 on the lower left has already been counted). For the element on the right, there is no concatenated element other than the element on the left (which has already been counted). Therefore, the number of connected elements in the example of FIG. 10B is 3.
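 The counting of connected elements can be illustrated with SciPy's connected-component labeling; the following sketch uses an assumed small binary matrix (the array contents are an example, not the actual data of FIG. 10B) and shows how 4-neighborhood and 8-neighborhood connection differ.

    import numpy as np
    from scipy import ndimage

    binary = np.array([[0, 0, 1, 1],
                       [0, 1, 0, 0],
                       [0, 0, 0, 0]], dtype=np.uint8)

    four_nbr = ndimage.generate_binary_structure(2, 1)    # 4-neighborhood (up, down, left, right)
    eight_nbr = ndimage.generate_binary_structure(2, 2)   # 8-neighborhood (includes diagonals)

    labels4, n4 = ndimage.label(binary, structure=four_nbr)
    labels8, n8 = ndimage.label(binary, structure=eight_nbr)

    sizes8 = np.bincount(labels8.ravel())[1:]   # number of connected elements in each component
    # With 8-neighborhood connection the three "1" elements form a single component of size 3;
    # with 4-neighborhood connection they split into two separate components.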
 除去部43は、予め定められた1以上の整数である予め定められた閾値kを設定して、連結要素数が閾値k未満の要素に対して、その要素の値を「0」に置き換える処理を行う。つまり、除去部43は、連結性認識部42がカウントした連結要素数が閾値k未満の画素領域はノイズであると判断し、その領域が劣化を示す領域として劣化検出装置10の出力結果に含まれないよう、当該領域の画素を「0」に置き換える処理を行う。 The removal unit 43 sets a predetermined threshold value k, which is a predetermined integer of 1 or more, and performs processing of replacing with "0" the value of an element whose number of connected elements is less than the threshold value k. That is, the removal unit 43 determines that a pixel region in which the number of connected elements counted by the connectivity recognition unit 42 is less than the threshold value k is noise, and performs processing of replacing the pixels of that region with "0" so that the region is not included in the output result of the deterioration detection device 10 as a region indicating deterioration.
 図11A及び図11Bは、k=200として処理を行ったデータを示す。図11Aは、二値化処理後のデータの一例を示す図である。図11Bは、フィルタリング処理後のデータの一例を示す図である。図11Bでは、図11Aにおいて値「1」(白)として表示されている画素の一部が、値「0」(黒)に置き換わっている。ここで、金物の腐食及び露筋等の劣化においては、8近傍連結が有効となる。なぜならば、腐食及び露筋は画像において不特定方向に広がるため、注目画素の周辺をすべて連結として考慮した方が実態をよく反映するためである。4近傍連結は、8近傍連結と比べて連結する確率が下がる。そのため、実際にはつながっていると評価すべき腐食及び露筋領域等の劣化が分断されてしまうケースが発生してしまい、値「0」(黒)に置き換えるべきではない腐食又は露筋等の劣化領域が「0」(黒)に置き変わってしまうケースが発生する。4近傍連結は、幾何学的な図形のように連結方向が予想できる問題に対して有効性が高い。除去部43において、一定以上の大きな画素領域を有する金物の腐食及び露筋といった構造物に対して影響の大きい劣化領域のみを出力することによって、出力結果を人間が確認する作業の手間を軽減することができる。 FIGS. 11A and 11B show data processed with k = 200. FIG. 11A is a diagram showing an example of data after the binarization processing. FIG. 11B is a diagram showing an example of data after the filtering processing. In FIG. 11B, some of the pixels displayed with the value "1" (white) in FIG. 11A have been replaced with the value "0" (black). Here, for deterioration such as corrosion of hardware and dew streaks, 8-neighborhood connection is effective. This is because corrosion and dew streaks spread in unspecified directions in an image, so that treating the entire periphery of the pixel of interest as connected better reflects the actual situation. With 4-neighborhood connection, the probability of elements being connected is lower than with 8-neighborhood connection. As a result, cases occur in which deterioration such as corrosion and dew streak regions that should be evaluated as actually connected is divided, and deteriorated regions of corrosion or dew streaks that should not be replaced with the value "0" (black) end up being replaced with "0" (black). 4-neighborhood connection is highly effective for problems in which the connection direction can be predicted, such as geometric figures. By outputting in the removal unit 43 only deteriorated regions that have a pixel region larger than a certain size and that have a large influence on the structure, such as corrosion of hardware and dew streaks, the labor of a human checking the output result can be reduced.
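 A minimal sketch of this removal step, building on the labeling sketch above (SciPy assumed, with k = 200 taken from the example), might look as follows; the function name is illustrative.

    import numpy as np
    from scipy import ndimage

    def remove_small_components(binary, k=200):
        """Replace with 0 every connected element (8-neighborhood) containing fewer than k pixels."""
        structure = ndimage.generate_binary_structure(2, 2)   # 8-neighborhood connection
        labels, _ = ndimage.label(binary, structure=structure)
        sizes = np.bincount(labels.ravel())                   # pixel count per labeled component
        keep = sizes >= k
        keep[0] = False                                       # label 0 is the background
        return np.where(keep[labels], binary, 0).astype(binary.dtype)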
 結果出力部50は、フィルタリング部40によって処理された結果をデジタルカメラ画像等と対比して出力する。例えば、結果出力部50は、モニタ又はディスプレイによってフィルタリング部40によって処理された結果を表示することができる。図12Aは、フィルタリング処理後のデータの一例を示す図である。図12Bは、劣化部分が示された画像の一例を示す図である。図12Bにおいては、劣化と判定された部分が強調表示73により区別して示されている。図12Bの例では、強調表示73は、ハッチングにより劣化部分を区別しているが、強調表示73はこれに限られない。例えば、強調表示73は、着色、縁取り、又はこれらの組み合わせ等としてもよい。これにより、ユーザは、強調表示73を手掛かりにして、撮像画像において劣化と判定された部分を容易に認識することが可能となる。 The result output unit 50 outputs the result processed by the filtering unit 40 in comparison with a digital camera image or the like. For example, the result output unit 50 can display the result processed by the filtering unit 40 by the monitor or the display. FIG. 12A is a diagram showing an example of data after filtering processing. FIG. 12B is a diagram showing an example of an image in which a deteriorated portion is shown. In FIG. 12B, the portion determined to be deteriorated is distinguished by the highlighting 73. In the example of FIG. 12B, the highlighting 73 distinguishes the deteriorated portion by hatching, but the highlighting 73 is not limited to this. For example, the highlighting 73 may be colored, bordered, or a combination thereof. As a result, the user can easily recognize the portion determined to be deteriorated in the captured image by using the highlighting 73 as a clue.
 上記のように、劣化検出装置10の制御部11は、撮影により得られた画像である教師画像と、教師画像において劣化部分を示す画素を特定する教師ラベルとを教師データとして機械学習された判定器を取得する。制御部11は、構造物を撮影して得られた撮影画像を取得し、撮影画像を予め定められた色空間に変換した上で、判定器を用いて、変換された撮影画像において構造物の劣化部分が占める領域を予測する。さらに、制御部11は、予測された領域のうち、予め定められた画素数以上の画素を含む領域を撮影画像の劣化領域として判定する。したがって、劣化検出装置10によれば、構造物に影響を及ぼしうる劣化を精度よく検出することができる。 As described above, the control unit 11 of the deterioration detection device 10 acquires a determination device machine-learned using, as teacher data, a teacher image, which is an image obtained by photographing, and a teacher label that identifies pixels indicating a deteriorated portion in the teacher image. The control unit 11 acquires a captured image obtained by photographing a structure, converts the captured image into a predetermined color space, and then predicts, using the determination device, a region occupied by a deteriorated portion of the structure in the converted captured image. Further, the control unit 11 determines, among the predicted regions, a region including a predetermined number of pixels or more as a deteriorated region of the captured image. Therefore, according to the deterioration detection device 10, deterioration that may affect the structure can be detected with high accuracy.
 <実施の形態2>
 図13は、本開示の実施の形態2に係る劣化検出装置10の機能構成を示す図である。実施の形態1では、フィルタリング部40は、連結要素数が予め定められた閾値k未満の値が「1」の要素の値を「0」に置き換える処理を行った。本実施形態では、フィルタリング部40において値が「1」の要素の値を「0」に置き換える際の基準となる連結要素数の閾値を機械学習により決定する構成について説明する。
<Embodiment 2>
FIG. 13 is a diagram showing a functional configuration of the deterioration detection device 10 according to the second embodiment of the present disclosure. In the first embodiment, the filtering unit 40 performs a process of replacing with "0" the value of an element having the value "1" whose number of connected elements is less than the predetermined threshold value k. In the present embodiment, a configuration will be described in which the threshold value of the number of connected elements, which serves as the criterion when replacing the value of an element having the value "1" with "0" in the filtering unit 40, is determined by machine learning.
 本実施形態に係る劣化検出装置10は、実施の形態1に係る劣化検出装置10において、フィルタリング構築部60が追加された構造を有する。実施の形態1に係る劣化検出装置10が有する機能構成と同一のものについては詳細な説明を省略し、本実施形態に係る劣化検出装置10に特有の構成を中心に説明する。フィルタリング構築部60は、テスト画像入力部61、色空間変換部62、テストラベル入力部63、判定器格納部64、二値化処理部65、連結性認識部66、及び連結数決定部67を備える。 The deterioration detection device 10 according to the present embodiment has a structure in which the filtering construction unit 60 is added to the deterioration detection device 10 according to the first embodiment. The same functional configuration as that of the deterioration detection device 10 according to the first embodiment will be omitted in detail, and the configuration peculiar to the deterioration detection device 10 according to the present embodiment will be mainly described. The filtering construction unit 60 includes a test image input unit 61, a color space conversion unit 62, a test label input unit 63, a determination device storage unit 64, a binarization processing unit 65, a connectivity recognition unit 66, and a connection number determination unit 67. Be prepared.
 フィルタリング構築部60におけるテスト画像入力部61は、教師画像入力部21と同様に撮影画像をビットマップ画像の行列データとして格納する。ここで、入力する画像は、教師画像入力部21に入力した画像とは異なる。ただし、テスト画像入力部61に入力される画像は、判定部30における画像入力部31と異なっていても、又は、同様の画像を含んでいてもよい。 The test image input unit 61 in the filtering construction unit 60 stores the captured image as matrix data of the bitmap image in the same manner as the teacher image input unit 21. Here, the image to be input is different from the image input to the teacher image input unit 21. However, the image input to the test image input unit 61 may be different from the image input unit 31 in the determination unit 30, or may include a similar image.
 フィルタリング構築部60におけるテストラベル入力部63は、教師ラベル入力部23と同様に撮影画像において、画像の金物の腐食又は露筋等に該当する劣化の画素に「1」、それ以外の画素に「0」を対応させた行列データを入力する。ここで、テスト画像入力部61とテストラベル入力部63に入力するデータの数は任意である。ただし、これらのデータはペアになっている必要がある。 Similarly to the teacher label input unit 23, the test label input unit 63 in the filtering construction unit 60 inputs matrix data in which, in the captured image, "1" is associated with deteriorated pixels corresponding to corrosion of hardware, dew streaks, or the like in the image and "0" is associated with the other pixels. Here, the number of data items to be input to the test image input unit 61 and the test label input unit 63 is arbitrary. However, these data must be paired.
 フィルタリング構築部60における色空間変換部62は、判定器構築部20における色空間変換部22と同様に、教師画像入力部21で入力されたビットマップ画像の色空間を変換する機能を持つ。ここで、重要なのは、フィルタリング構築部60における判定器格納部64に格納する判定器を作成する際に、判定器構築部20の色空間変換部22で指定した色空間と同様の色空間に変換を行うことである。判定器と同様の色空間配列を用いることによって、判定精度が向上する。 The color space conversion unit 62 in the filtering construction unit 60 has a function of converting the color space of the bitmap image input by the teacher image input unit 21, similarly to the color space conversion unit 22 in the determination device construction unit 20. What is important here is to perform conversion to the same color space as that specified by the color space conversion unit 22 of the determination device construction unit 20 when the determination device to be stored in the determination device storage unit 64 in the filtering construction unit 60 was created. By using the same color space arrangement as the determination device, the determination accuracy is improved.
 フィルタリング構築部60における判定器格納部64は、判定器構築部20の学習部24において作成した判定器を格納する機能を持つ。さらに、判定器格納部64は、フィルタリング構築部60における色空間変換部62にある行列データに対して判定器を用いて演算処理を行う機能を有する。演算処理の結果により出力されるデータは、1層の行列データであり、各行列の要素には0以上1以下の数値が入力されている。ここで、0から1の数値は「確信度」と呼ばれる。判定器構築部20の学習部24において判定器を構築する際に、画像の金物の腐食又は露筋に該当する劣化の画素に「1」、それ以外の画素に「0」として学習させた場合は、行列データの各要素が1に近いほど劣化、0に近いほど非劣化と判定器は判定することとなる。さらに、判定器格納部64は、判定結果とテストラベル入力部63で入力したデータを用いて、テスト画像入力部61で入力した撮影画像に対するROC曲線を作成する。 The determination device storage unit 64 in the filtering construction unit 60 has a function of storing the determination device created in the learning unit 24 of the determination device construction unit 20. Further, the determination device storage unit 64 has a function of performing arithmetic processing on the matrix data in the color space conversion unit 62 in the filtering construction unit 60 using the determination device. The data output as a result of the arithmetic processing is one-layer matrix data, and a numerical value of 0 or more and 1 or less is input to each matrix element. Here, the numerical value from 0 to 1 is called "confidence". When the learning unit 24 of the determining device construction unit 20 learns the deteriorated pixels corresponding to the corrosion or dew streaks of the metal in the image as "1" and the other pixels as "0". The determination device determines that each element of the matrix data is deteriorated as it is closer to 1, and is not deteriorated as it is closer to 0. Further, the determination device storage unit 64 creates an ROC curve for the captured image input by the test image input unit 61 by using the determination result and the data input by the test label input unit 63.
 フィルタリング構築部60における二値化処理部65は、閾値tを設定してフィルタリング構築部60における判定器格納部64によって判定された行列データの各要素を「0」もしくは「1」に二値化する。二値化する際の閾値tは、0から1の間で複数個設定する。閾値tは、0以上1以下の任意の正数である。閾値t=0.7の場合の例を図14に示す。図14は、判定器格納部64によって判定された行列データと二値化処理後の行列データの一例を示す図である。 The binarization processing unit 65 in the filtering construction unit 60 sets a threshold value t and binarizes each element of the matrix data determined by the determination device storage unit 64 in the filtering construction unit 60 to "0" or "1". A plurality of threshold values t for binarization are set between 0 and 1. The threshold value t is an arbitrary positive number of 0 or more and 1 or less. FIG. 14 shows an example in the case of the threshold value t = 0.7. FIG. 14 is a diagram showing an example of the matrix data determined by the determination device storage unit 64 and the matrix data after the binarization processing.
 フィルタリング構築部60における連結性認識部66は、フィルタリング構築部60における二値化処理部65によって二値化処理後の行列データに対し、「1」が入力されている要素の連結要素数をカウントする。閾値kを設定して、連結要素数が閾値k未満のものに対して要素を「0」に置き換える処理を行う。連結性認識部66は、閾値kの値を0から増やしていき複数の閾値kによって処理したデータを出力する。ただし、kは整数値である。kの最大値は、行列データの要素数となる。さらに、連結性認識部66は、フィルタリング構築部60における判定器格納部64により複数のtによって二値化された行列データのそれぞれにおいて、kを変更した際の結果を求める。データ数の合計は、設定した閾値tの数がi個、設定した閾値kの数がj個だったとすると、「i×j×(テスト画像入力部61に入力された撮影画像枚数)」となる。 The connectivity recognition unit 66 in the filtering construction unit 60 counts, for the matrix data after binarization by the binarization processing unit 65 in the filtering construction unit 60, the number of connected elements of the elements in which "1" is entered. A threshold value k is set, and processing of replacing elements with "0" is performed for those whose number of connected elements is less than the threshold value k. The connectivity recognition unit 66 increases the value of the threshold value k from 0 and outputs data processed with a plurality of threshold values k. Here, k is an integer value. The maximum value of k is the number of elements of the matrix data. Further, the connectivity recognition unit 66 obtains, for each of the matrix data binarized with the plurality of values of t by the determination device storage unit 64 in the filtering construction unit 60, the result when k is changed. Assuming that the number of set threshold values t is i and the number of set threshold values k is j, the total number of data items is "i × j × (the number of captured images input to the test image input unit 61)".
 フィルタリング構築部60における連結数決定部67は、連結性認識部66において、算術された合計のデータ数「i×j×(テスト画像入力部に入力された撮影画像枚数)」個の行列データを判定結果(Predicted)とし、テストラベル入力部63に入力された行列データを真値(Actual)として、ROC曲線を作成する。図15は、金物の腐食の撮影画像によって作成されたROC曲線の一例を示す図である。図15のROC曲線は、閾値tを変更させながら、k=0,10,20,50に設定した際のROC曲線である。k=0のROC曲線よりもkを変更した方が、TPRが増え、FPRが減っている箇所がある。この連結性認識部66では、kを様々に変更した際のROC曲線を作成する機能を持つ。 The connection number determination unit 67 in the filtering construction unit 60 creates ROC curves by taking, as the determination results (Predicted), the total of "i × j × (the number of captured images input to the test image input unit)" sets of matrix data computed in the connectivity recognition unit 66, and taking, as the true values (Actual), the matrix data input to the test label input unit 63. FIG. 15 is a diagram showing an example of ROC curves created from captured images of corrosion of hardware. The ROC curves of FIG. 15 are those obtained when k is set to 0, 10, 20, and 50 while changing the threshold value t. There are places where, compared with the ROC curve for k = 0, changing k increases the TPR and decreases the FPR. The connectivity recognition unit 66 has a function of creating ROC curves when k is changed in various ways.
 その後、図16に示すように、連結数決定部67は、k=a(aは正の整数)の際にユークリッド距離が最小となるような閾値の値tと値kの組み合わせを求める。連結数決定部67の機能により、TPRの向上とFPRの低下が実現できる。図16は、ROC曲線に基づき閾値t及び閾値kを求める処理を説明する図である。図16のように、連結数決定部67は、座標(0,1)とROC曲線とのユークリッド距離が最小となる閾値tと閾値kの組み合わせを決定する。金物の腐食及び露筋については、錆汁及び汚れ等により劣化領域以外が劣化として判定されるケースが発生しうるが、検出された領域の大きさから非劣化領域と判断することによって、判定器の性能をさらに向上させることができる。 After that, as shown in FIG. 16, the connection number determination unit 67 obtains a combination of the threshold value t and the value k such that the Euclidean distance is minimized when k = a (a is a positive integer). The function of the connection number determination unit 67 can realize an improvement in TPR and a reduction in FPR. FIG. 16 is a diagram illustrating the process of obtaining the threshold value t and the threshold value k based on the ROC curve. As shown in FIG. 16, the connection number determination unit 67 determines the combination of the threshold value t and the threshold value k at which the Euclidean distance between the coordinates (0, 1) and the ROC curve is minimized. For corrosion of hardware and dew streaks, cases may occur in which regions other than the deteriorated region are determined to be deteriorated due to rust juice, dirt, and the like, but by judging such regions to be non-deteriorated regions from the size of the detected region, the performance of the determination device can be further improved.
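 As a purely illustrative sketch of this search (not the specific procedure of the embodiment), the combination of t and k closest to the point (0, 1) could be found by a grid search over both thresholds, reusing the component-size filter sketched above; the grids and names below are assumptions.

    import numpy as np

    def choose_t_and_k(score_maps, label_maps, t_grid, k_grid):
        """score_maps, label_maps: lists of per-pixel score matrices and 0/1 test-label matrices."""
        best = (None, None, np.inf)
        for t in t_grid:
            for k in k_grid:
                tp = fp = tn = fn = 0
                for scores, labels in zip(score_maps, label_maps):
                    pred = remove_small_components((scores >= t).astype(np.uint8), k=k)
                    tp += np.sum((pred == 1) & (labels == 1))
                    fp += np.sum((pred == 1) & (labels == 0))
                    tn += np.sum((pred == 0) & (labels == 0))
                    fn += np.sum((pred == 0) & (labels == 1))
                tpr = tp / (tp + fn) if (tp + fn) else 0.0
                fpr = fp / (fp + tn) if (fp + tn) else 0.0
                distance = np.sqrt(fpr ** 2 + (1.0 - tpr) ** 2)   # distance to the ideal corner (0, 1)
                if distance < best[2]:
                    best = (t, k, distance)
        return best[0], best[1]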
 連結数決定部67で決定された閾値kは、フィルタリング部40における除去部43に入力することができる。また、連結数決定部67で決定された閾値tは、判定部30の判定器格納部33に格納された判定器の閾値として入力し、判定器の閾値を更新する。除去部43では、閾値kに基づき、フィルタリング部40における連結性認識部42により処理された行列データに対して、連結要素数が閾値k未満のものに対して要素を「0」に置き換える処理を行う。このフィルタリング部40の機能により、判定器を用いて劣化領域のセグメンテーション手法による検出が行える。連結数決定部67で決定された閾値kを用いることによって、判定部30の画像入力部31に入力された撮影画像における劣化部の検出精度を向上させることができる。 The threshold value k determined by the connection number determination unit 67 can be input to the removal unit 43 in the filtering unit 40. Further, the threshold value t determined by the connection number determination unit 67 is input as the threshold value of the determination device stored in the determination device storage unit 33 of the determination unit 30, and the threshold value of the determination device is updated. Based on the threshold value k, the removal unit 43 performs, on the matrix data processed by the connectivity recognition unit 42 in the filtering unit 40, processing of replacing with "0" the elements whose number of connected elements is less than the threshold value k. By this function of the filtering unit 40, detection of deteriorated regions by the segmentation method can be performed using the determination device. By using the threshold value k determined by the connection number determination unit 67, the detection accuracy of deteriorated portions in the captured image input to the image input unit 31 of the determination unit 30 can be improved.
 <Operation example>
 The operation of the deterioration detection device 10 according to the above embodiment will be described with reference to FIG. 17. FIG. 17 is a flowchart showing an example of the operation of the deterioration detection device 10 according to an embodiment of the present disclosure. The operation of the deterioration detection device 10 described with reference to FIG. 17 corresponds to the deterioration detection method according to the present embodiment. The operation of each step in FIG. 17 is executed under the control of the control unit 11. A program for causing a computer to execute the deterioration detection method according to the present embodiment includes each step shown in FIG. 17.
 In step S1, the control unit 11 performs machine learning using, as teacher data, a teacher image, which is an image obtained by photographing, and a teacher label that identifies the pixels indicating a deteriorated portion in the teacher image, and generates a determination device. The detailed operation of this step is the same as the operation of the determination device construction unit 20.
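As a rough illustration of step S1 only, the following sketch trains a generic convolutional segmentation network on teacher images and pixel-wise teacher labels. The PyTorch framework, the stand-in architecture, the cross-entropy loss, and all hyperparameters are assumptions made for the example; the disclosure does not fix any of them.

```python
# Sketch (assumptions: PyTorch available; `train_loader` yields color-space-converted
# teacher images shaped (N, 3, H, W) and teacher labels shaped (N, H, W),
# with 1 marking deteriorated pixels).
import torch
import torch.nn as nn

model = nn.Sequential(                        # stand-in for a segmentation network
    nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 2, 1),                      # 2 classes: background / deteriorated
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()             # the loss function here is an assumption

for epoch in range(20):
    for images, labels in train_loader:
        optimizer.zero_grad()
        logits = model(images)                # (N, 2, H, W)
        loss = criterion(logits, labels.long())
        loss.backward()
        optimizer.step()

torch.save(model.state_dict(), "determination_device.pt")  # hypothetical file name
```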
 In step S2, the control unit 11 determines the number of connected elements that a region to be determined as a deteriorated portion should have. That is, the control unit 11 acquires an ROC curve using a test image different from the teacher image and a test label that identifies the pixels indicating a deteriorated portion in the test image. Based on the ROC curve, the control unit 11 determines, using the determination device, the threshold of the number of pixels that a region should at least have in order to be determined as a deteriorated region among the regions predicted to be occupied by deteriorated portions. The detailed operation of this step is the same as the operation of the filtering construction unit 60 (see the ROC sketches above). With this step, the deteriorated portion can be determined more appropriately than when the number of connected elements is set to a predetermined fixed value. Note that this step is optional and may be omitted.
 In step S3, the control unit 11 acquires a photographed image obtained by photographing the structure. The detailed operation of this step is the same as the operation of the image input unit 31.
 In step S4, the control unit 11 converts the captured image acquired in step S3 into a predetermined color space. Specifically, the control unit 11 converts the captured image into, as the predetermined color space, a color space that reflects human vision better than the RGB color space, such as the HSV color space or the L*a*b* color space. This step enables accurate deterioration detection that better reflects human vision. The detailed operation of this step is the same as the operation of the color space conversion unit 32.
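For step S4, the conversion to the HSV or L*a*b* color space could look like the following OpenCV sketch; the use of OpenCV and the file name are assumptions for illustration only.

```python
# Sketch (assumptions: OpenCV available; cv2.imread returns the image in BGR order;
# "manhole.jpg" is a hypothetical captured image of the structure).
import cv2

bgr = cv2.imread("manhole.jpg")
hsv = cv2.cvtColor(bgr, cv2.COLOR_BGR2HSV)   # HSV color space
lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2Lab)   # L*a*b* color space
```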
 In step S5, the control unit 11 uses the determination device acquired in step S1 to predict the region occupied by the deteriorated portion of the structure in the captured image converted in step S4. The detailed operation of this step is the same as the operation of the determination device storage unit 33.
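Continuing the earlier sketches, step S5 could apply the learned determination device to the converted captured image to obtain a per-pixel deterioration probability map and a binary prediction at the threshold t. The tensor shapes, the normalization, and the softmax/thresholding details are assumptions.

```python
# Sketch: predict the region occupied by deterioration (assumes `model` and `hsv`
# from the earlier sketches and the threshold `best_t` from the ROC-based selection).
import torch

x = torch.from_numpy(hsv).float().permute(2, 0, 1).unsqueeze(0) / 255.0  # (1, 3, H, W)
with torch.no_grad():
    prob = torch.softmax(model(x), dim=1)[0, 1].numpy()  # P(deteriorated) per pixel
predicted_mask = prob >= best_t                           # binary prediction at threshold t
```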
 In step S6, the control unit 11 determines, among the regions predicted in step S5 to be occupied by deteriorated portions, a region including at least a predetermined number of pixels as a deteriorated region of the captured image. When the processing of step S2 has been performed, a region including at least the threshold number of pixels determined in step S2 is determined as a deteriorated region of the captured image, with that threshold used as the predetermined number of pixels. This step removes minute deterioration that does not affect the structure, so that the deteriorated portion can be determined appropriately. The detailed operation of this step is the same as the operation of the filtering unit 40.
 In step S7, the control unit 11 causes display means (result output unit 50) such as a monitor or a display to display the captured image with the regions included in the deteriorated region distinguished from the regions not included in the deteriorated region. With this step, the user can easily recognize the portions determined to be deteriorated in the captured image. The control unit 11 then ends the processing.
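For steps S6 and S7, one simple way to present the result is to apply the size filter and overlay the determined deteriorated region on the original image; the overlay color, blending weight, and output file name are arbitrary choices for this sketch, which reuses names defined in the earlier sketches.

```python
# Sketch: filter the prediction by region size and highlight the deteriorated region
# (assumes `bgr`, `predicted_mask`, `best_k`, and remove_small_regions from above).
import cv2

deteriorated_mask = remove_small_regions(predicted_mask, best_k)  # step S6 filtering
overlay = bgr.copy()
overlay[deteriorated_mask.astype(bool)] = (0, 0, 255)             # mark deteriorated pixels in red
result = cv2.addWeighted(bgr, 0.6, overlay, 0.4, 0.0)             # blend for visibility
cv2.imwrite("deterioration_result.jpg", result)                   # or display with cv2.imshow
```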
 The present disclosure is not limited to the above-described embodiment. For example, a plurality of blocks shown in the block diagram may be integrated, or one block may be divided. Instead of being executed in chronological order as described, the plurality of steps described in the flowchart may be executed in parallel or in a different order depending on the processing capability of the device that executes each step, or as necessary. Other changes are possible without departing from the spirit of the present disclosure.
 10  Deterioration detection device
 11  Control unit
 12  Storage unit
 13  Communication unit
 14  Input unit
 15  Output unit
 20  Determination device construction unit
 21  Teacher image input unit
 22  Color space conversion unit
 23  Teacher label input unit
 24  Learning unit
 241 Loss function determination unit
 242 Learning progress unit
 30  Determination unit
 31  Image input unit
 32  Color space conversion unit
 33  Determination device storage unit
 40  Filtering unit
 41  Binarization processing unit
 42  Connectivity recognition unit
 43  Removal unit
 50  Result output unit
 60  Filtering construction unit
 61  Test image input unit
 62  Color space conversion unit
 63  Test label input unit
 64  Determination device storage unit
 65  Binarization processing unit
 66  Connectivity recognition unit
 67  Connection number determination unit
 

Claims (7)

  1.  A deterioration detection device comprising a control unit that
     acquires a determination device machine-learned using, as teacher data, a teacher image, which is an image obtained by photographing, and a teacher label that identifies pixels indicating a deteriorated portion in the teacher image,
     acquires a captured image obtained by photographing a structure,
     converts the captured image into a predetermined color space,
     predicts, using the determination device, a region occupied by a deteriorated portion of the structure in the converted captured image, and
     determines, among the predicted regions, a region including at least a predetermined number of pixels as a deteriorated region of the captured image.
  2.  The deterioration detection device according to claim 1, wherein the control unit performs machine learning using the teacher image and the teacher label for the teacher image as teacher data to generate the determination device.
  3.  The deterioration detection device according to claim 1 or 2, wherein the control unit
     acquires an ROC curve using a test image different from the teacher image and a test label that identifies pixels indicating a deteriorated portion in the test image,
     determines, based on the ROC curve and using the determination device, a threshold of the number of pixels that a region to be determined as a deteriorated region should at least have among the regions predicted to be occupied by the deteriorated portion, and
     determines, among the predicted regions, a region including at least the threshold number of pixels, the threshold serving as the predetermined number of pixels, as the deteriorated region of the captured image.
  4.  The deterioration detection device according to any one of claims 1 to 3, wherein the control unit converts the captured image into an HSV color space or an L*a*b* color space as the predetermined color space.
  5.  The deterioration detection device according to any one of claims 1 to 4, wherein the control unit causes display means to display the captured image with a region included in the deteriorated region distinguished from a region not included in the deteriorated region.
  6.  A deterioration detection method in which a control unit of a deterioration detection device
     acquires a determination device machine-learned using, as teacher data, a teacher image, which is an image obtained by photographing, and a teacher label that identifies pixels indicating a deteriorated portion in the teacher image,
     acquires a captured image obtained by photographing a structure,
     converts the captured image into a predetermined color space,
     predicts, using the determination device, a region occupied by a deteriorated portion of the structure in the converted captured image, and
     determines, among the predicted regions, a region including at least a predetermined number of pixels as a deteriorated region of the captured image.
  7.  A program causing a computer to function as the deterioration detection device according to any one of claims 1 to 5.

PCT/JP2020/037908 2020-10-06 2020-10-06 Deterioration detection apparatus, deterioration detection method, and program WO2022074746A1 (en)

Priority Applications (2)

Application Number Priority Date Filing Date Title
JP2022555012A JP7410442B2 (en) 2020-10-06 2020-10-06 Deterioration detection device, deterioration detection method, and program
PCT/JP2020/037908 WO2022074746A1 (en) 2020-10-06 2020-10-06 Deterioration detection apparatus, deterioration detection method, and program

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2020/037908 WO2022074746A1 (en) 2020-10-06 2020-10-06 Deterioration detection apparatus, deterioration detection method, and program

Publications (1)

Publication Number Publication Date
WO2022074746A1 2022-04-14

Family

ID=81125722

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2020/037908 WO2022074746A1 (en) 2020-10-06 2020-10-06 Deterioration detection apparatus, deterioration detection method, and program

Country Status (2)

Country Link
JP (1) JP7410442B2 (en)
WO (1) WO2022074746A1 (en)

Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2010032511A (en) * 2008-06-30 2010-02-12 Denro Corp Method and device for diagnosing coating degradation
JP2012013675A (en) * 2010-05-31 2012-01-19 Tohoku Electric Power Co Inc Device and method for analyzing corrosion inside steel pipe
JP2013167596A (en) * 2012-02-17 2013-08-29 Honda Motor Co Ltd Defect inspection device, defect inspection method, and program
JP2019194562A (en) * 2018-04-26 2019-11-07 キヤノン株式会社 Information processing device, information processing method and program
JP2020085546A (en) * 2018-11-19 2020-06-04 国立研究開発法人産業技術総合研究所 System for supporting inspection and repair of structure
JP2020125980A (en) * 2019-02-05 2020-08-20 西日本旅客鉄道株式会社 Deterioration level determination device, deterioration level determination method, and program
JP2020136368A (en) * 2019-02-15 2020-08-31 東京エレクトロン株式会社 Image generation apparatus, inspection apparatus, and image generation method

Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP7272857B2 (en) * 2019-05-10 2023-05-12 富士フイルム株式会社 Inspection method, inspection device, program and printing device


Also Published As

Publication number Publication date
JP7410442B2 (en) 2024-01-10
JPWO2022074746A1 (en) 2022-04-14


Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20956694

Country of ref document: EP

Kind code of ref document: A1

ENP Entry into the national phase

Ref document number: 2022555012

Country of ref document: JP

Kind code of ref document: A

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20956694

Country of ref document: EP

Kind code of ref document: A1