WO2024029026A1 - Information processing system, program, information processing method, and server - Google Patents

Information processing system, program, information processing method, and server

Info

Publication number
WO2024029026A1
Authority
WO
WIPO (PCT)
Prior art keywords: crack, area, unit, image, information processing
Application number
PCT/JP2022/029915
Other languages
French (fr)
Japanese (ja)
Inventor
駿 菅井
Original Assignee
株式会社センシンロボティクス
Application filed by 株式会社センシンロボティクス
Priority to JP2022572668A (JP7228310B1)
Priority to PCT/JP2022/029915 (WO2024029026A1)
Priority to JP2023016315A (JP2024022449A)
Publication of WO2024029026A1

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B 11/00: Measuring arrangements characterised by the use of optical techniques
    • G01B 11/30: Measuring arrangements characterised by the use of optical techniques for measuring roughness or irregularity of surfaces
    • G01N: INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N 21/00: Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N 21/84: Systems specially adapted for particular applications
    • G01N 21/88: Investigating the presence of flaws or contamination
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/60: Analysis of geometric attributes
    • H04: ELECTRIC COMMUNICATION TECHNIQUE
    • H04N: PICTORIAL COMMUNICATION, e.g. TELEVISION
    • H04N 7/00: Television systems
    • H04N 7/18: Closed-circuit television [CCTV] systems, i.e. systems in which the video signal is not broadcast

Definitions

  • The present invention relates to an information processing system, a program, an information processing method, and a server for detecting cracks.
  • A crack detection device that detects cracks in the walls of buildings and civil engineering structures is disclosed in Patent Document 1 (Japanese Patent No. 6894339). In the device of Patent Document 1, as an example of preprocessing for crack detection, an image region representing a tile (a tile region) is extracted from the target image, and image regions other than the tile region are excluded from the crack detection range (see paragraph 0022 of Patent Document 1).
  • However, if the tile regions are extracted before the crack detection process in this way, a cracked tile may be erroneously detected as two tiles split by the crack, or detected with part of the split tile missing. In such cases the crack may be recognized as a boundary (joint) between the split tiles, and crack detection may no longer be performed correctly.
  • The present invention was made in view of this background, and one of its objects is to provide an information processing system, a program, an information processing method, and a server capable of preventing an object such as a cracked tile from being detected in a divided state.
  • According to one aspect of the present invention, an information processing system is provided that comprises: a crack estimation unit that estimates a crack area in an object from an original image showing one or more objects in a structure; a crack coloring unit that colors the crack area in the original image according to at least the color around the estimated crack area; and an object area estimation unit that estimates, from the colored image, the object area in which the object exists.
  • According to the present invention, it is possible, in particular, to provide an information processing system, a program, an information processing method, and a server that can prevent an object such as a cracked tile from being detected in a divided state.
  • FIG. 1 is a diagram showing the overall configuration of an embodiment of the present invention.
  • FIG. 2 is a diagram showing the system configuration of the information processing system according to the embodiment of the present invention.
  • FIG. 3 is a block diagram showing the hardware configuration of the server in FIG. 2.
  • FIG. 4 is a block diagram showing the hardware configuration of the terminal in FIG. 2.
  • FIG. 5 is a block diagram showing the hardware configuration of the unmanned aerial vehicle in FIG. 2.
  • FIG. 6 is a block diagram showing the functions of the server and terminal in FIG. 2.
  • FIG. 7 is a diagram explaining the crack area shape analysis performed by the crack shape analysis unit.
  • FIG. 8 is a flowchart showing the crack area detection method performed by the information processing system according to the present embodiment.
  • FIG. 9 is an example of an image subject to crack area detection.
  • FIG. 10 is a diagram conceptually showing an image under inspection divided into a plurality of regions in a grid pattern.
  • FIG. 11 is an image showing the estimated crack area overlaid on the divided region in which the crack area was estimated to exist.
  • FIG. 12(a) is a diagram showing a crack area estimated in an image together with the enlarged area containing it, and FIG. 12(b) is a diagram showing the crack area after coloring.
  • FIG. 13 is a diagram showing the object areas estimated by the object area estimation unit.
  • FIG. 14 is a diagram showing the original, uncolored crack area superimposed on the estimated object areas.
  • FIG. 15 is an enlarged view of the object areas shown in FIG. 14.
  • FIG. 16 is a diagram showing the single original detection target image reconstructed from the divided regions in which the object areas were estimated.
  • The embodiments of the present invention are listed and described below. An information processing system, a program, an information processing method, and a server according to embodiments of the present invention have the following configurations.
  • [Item 1] An information processing system comprising: a crack estimation unit that estimates a crack area in one or more objects shown in an original image of a structure; a crack coloring unit that colors the crack area in the original image according to at least the color around the estimated crack area; and an object area estimation unit that estimates, from the colored image, an object area in which the object exists.
  • [Item 2] The information processing system according to Item 1, further comprising a superimposition unit that identifies the crack area within the object area by superimposing at least the estimated positions of the object area and the crack area.
  • [Item 3] The information processing system according to Item 1, wherein the crack estimation unit re-executes the crack area estimation on the original image associated with the estimated object area and thereby identifies the crack area within the object area.
  • [Item 4] The information processing system according to Item 2 or 3, further comprising a crack shape analysis unit that analyzes the crack shape of the crack area identified within the object area.
  • [Item 5] The information processing system according to any one of Items 1 to 3, wherein the crack estimation unit divides the original image into two or more divided images and estimates a crack area for each divided image.
  • [Item 6] The information processing system according to any one of Items 1 to 3, wherein the crack coloring unit generates an enlarged area by enlarging the estimated crack area and colors the crack area with a color statistically determined from the color information within the enlarged area.
  • [Item 7] The information processing system according to any one of Items 1 to 3, wherein the crack coloring unit acquires color information of the object and colors the crack area with a color that matches or substantially matches that color information.
  • [Item 8] A program that causes a computer having a processing unit to execute information processing, the program causing the processing unit to: estimate a crack area in one or more objects shown in an original image of a structure; color the crack area in the original image according to at least the color around the estimated crack area; and estimate, from the colored image, an object area in which the object exists.
  • [Item 9] An information processing method executed on a computer, comprising: estimating, by a crack estimation unit, a crack area in one or more objects shown in an original image of a structure; coloring, by a crack coloring unit, the crack area in the original image according to at least the color around the estimated crack area; and estimating, by an object area estimation unit, from the colored image, an object area in which the object exists.
  • [Item 10] A server comprising: a crack estimation unit that estimates a crack area in one or more objects shown in an original image of a structure; a crack coloring unit that colors the crack area in the original image according to at least the color around the estimated crack area; and an object area estimation unit that estimates, from the colored image, an object area in which the object exists.
  • The embodiments are described in detail below. In the accompanying drawings, the same or similar elements are given the same or similar reference signs and names, and redundant descriptions of them may be omitted. Features shown in one embodiment are applicable to other embodiments as long as they do not contradict each other.
  • As shown in FIG. 1, the information processing system in this embodiment detects cracks present in the wall surface of a structure, such as a building or a civil engineering structure, based on images of that wall surface.
  • The wall surface may be imaged by the user operating a camera directly, or by remotely operating a camera mounted on an unmanned aerial vehicle 4, as shown in FIG. 1, that flies autonomously or under remote control.
  • As described later, the information processing system in this embodiment performs a process that estimates the object areas in which uncracked objects exist. It therefore does more than detect whether a crack is present in the image under inspection: it identifies the regions of the image in which objects exist (for example, wall tiles or panels that form partitioned areas), and further detects which part of which object contains a crack.
  • As shown in FIG. 2, the information processing system in this embodiment includes a server 1, a terminal 2, and an unmanned aerial vehicle 4, which may be communicably connected to one another via a network NW.
  • The illustrated configuration is an example and is not limiting; for instance, the unmanned aerial vehicle 4 need not be connected to the network NW.
  • In that case, the unmanned aerial vehicle 4 may be operated with a transmitter (a so-called radio controller) operated by the user, and the image data acquired by its camera may be stored in an auxiliary storage device (for example, a memory card such as an SD card, or a USB memory) and later read out from that device by the user and stored in the server 1 or the terminal 2. Alternatively, the unmanned aerial vehicle 4 may be connected to the network NW for only one of the two purposes, operation or image data storage.
  • FIG. 3 shows the hardware configuration of the server 1 in this embodiment. The illustrated configuration is an example, and other configurations may be used.
  • The server 1 includes at least a processor 10, a memory 11, a storage 12, a transmitting/receiving unit 13, and an input/output unit 14, which are electrically connected to one another via a bus 15.
  • The server 1 may be a general-purpose computer, such as a workstation or a personal computer, or may be logically implemented by cloud computing.
  • The processor 10 is an arithmetic device that controls the overall operation of the server 1, controls the transmission and reception of data between the elements, and performs the information processing required for application execution and authentication. For example, the processor 10 is a CPU (Central Processing Unit) and/or a GPU (Graphics Processing Unit), and performs each kind of information processing by executing programs stored in the storage 12 and loaded into the memory 11.
  • The memory 11 includes a main memory composed of a volatile storage device such as a DRAM (Dynamic Random Access Memory) and an auxiliary memory composed of a non-volatile storage device such as flash memory or an HDD (Hard Disk Drive). The memory 11 is used as a work area for the processor 10 and also stores the BIOS (Basic Input/Output System) executed when the server 1 starts up, various setting information, and the like.
  • The storage 12 stores various programs such as application programs. A database storing the data used by each process may be built in the storage 12, and a storage unit 130, described later, may be provided in a part of its storage area.
  • The transmitting/receiving unit 13 is a communication interface through which the server 1 communicates with external devices (not shown), the unmanned aerial vehicle 4, and the like via the communication network. The transmitting/receiving unit 13 may further include a short-range communication interface such as Bluetooth (registered trademark) or BLE (Bluetooth Low Energy), a USB (Universal Serial Bus) terminal, and the like.
  • The input/output unit 14 includes information input devices such as a keyboard and a mouse, and output devices such as a display.
  • The bus 15 is connected in common to each of the above elements and transmits, for example, address signals, data signals, and various control signals.
  • The terminal 2 shown in FIG. 4 likewise includes a processor 20, a memory 21, a storage 22, a transmitting/receiving unit 23, and an input/output unit 24, which are electrically connected to one another through a bus 25. Since each element can be configured in the same way as in the server 1 described above, detailed descriptions of them are omitted.
  • FIG. 5 is a block diagram showing the hardware configuration of the unmanned aerial vehicle 4.
  • The flight controller 41 may include one or more processors, such as a programmable processor (e.g., a central processing unit (CPU)).
  • The flight controller 41 has a memory 411 and can access it. The memory 411 stores logic, code, and/or program instructions that the flight controller can execute to perform one or more steps.
  • The flight controller 41 may include sensors 412 such as inertial sensors (acceleration sensors, gyro sensors), a GPS sensor, and proximity sensors (e.g., lidar).
  • The memory 411 may include, for example, a removable medium or an external storage device such as an SD card or random access memory (RAM). Data acquired from the cameras/sensors 42 may be communicated directly to the memory 411 and stored there; for example, still images and video taken by the camera may be recorded in built-in memory or external memory, although the recording destination is not limited to these and may, for example, be the terminal 2.
  • The camera 42 is mounted on the unmanned aerial vehicle 4 via a gimbal 43.
  • The flight controller 41 includes a control module (not shown) configured to control the state of the unmanned aerial vehicle 4. For example, the control module adjusts the spatial position, velocity, and/or acceleration of the unmanned aerial vehicle 4 in six degrees of freedom (translations x, y, and z, and rotations θx, θy, and θz) by controlling the propulsion mechanism (motors 45, etc.) of the unmanned aerial vehicle 4 via the ESC 44 (Electronic Speed Controller). A propeller 46, rotated by a motor 45 powered by a battery 48, generates the lift of the unmanned aerial vehicle 4. The control module can also control one or more of the states of the mounted equipment and sensors.
  • The flight controller 41 can communicate with a transmitting/receiving unit 47 configured to transmit and/or receive data to and from one or more external devices (e.g., a transceiver 49, a terminal, a display, or another remote controller). The transceiver 49 may use any suitable communication means, such as wired or wireless communication. For example, the transmitting/receiving unit 47 can use one or more of a local area network (LAN), a wide area network (WAN), infrared, radio, WiFi, point-to-point (P2P) networks, telecommunications networks, cloud communication, and the like.
  • The transmitting/receiving unit 47 can transmit and/or receive one or more of: data acquired by the sensors 42, processing results generated by the flight controller 41, predetermined control data, and user commands from a terminal or a remote controller.
  • The sensors 42 may include inertial sensors (acceleration sensors, gyro sensors), a GPS sensor, proximity sensors (e.g., lidar), or vision/image sensors (e.g., cameras).
  • FIG. 6 is a block diagram illustrating functions implemented in the server 1 and the terminal 2.
  • The server 1 includes a communication unit 110, an image acquisition unit 115, a processing unit 120, and a storage unit 130. The processing unit 120 includes a crack estimation unit 121, a crack coloring unit 122, an object area estimation unit 123, a superimposition unit 124, and a crack shape analysis unit 125. The storage unit 130 includes an information/image storage unit 131, a crack estimation learning model 132, and an object area estimation learning model 133.
  • The various functional units are illustrated as functional units of the processor 10 of the server 1, but some or all of them may instead be implemented in the processor 20 of the terminal 2 or the flight controller 41 of the unmanned aerial vehicle 4, depending on the capabilities of each processor and the like.
  • The communication unit 110 communicates with the terminal 2 and the unmanned aerial vehicle 4 via the network NW. The communication unit 110 also functions as a reception unit that receives various requests, data, and the like from the terminal 2, the unmanned aerial vehicle 4, and so on.
  • The image acquisition unit 115 acquires the images captured by the digital camera mounted on the unmanned aerial vehicle 4 or by a digital camera used by the user, for example via wireless communication through a communication interface or wired communication through a USB terminal. The image acquisition unit 115 may instead acquire the images via a storage medium such as a USB memory or an SD memory card.
  • The processing unit 120 includes the functional units 121 to 125, which perform a series of processes for detecting cracks in the images acquired by the image acquisition unit 115 and for detecting which part of which object contains a crack.
  • The crack estimation unit 121 executes a process of estimating the crack area present in an original image in which one or more objects in a structure (tiles or panels on the wall of a building or civil engineering structure, etc.) appear. The crack estimation unit 121 of this embodiment estimates the crack area using the crack estimation learning model 132 in the storage unit 130; the details of the crack estimation learning model 132 are described later.
  • The crack estimation unit 121 may estimate the crack area over the entire original image, or it may divide the original image into a plurality of regions and estimate the crack area for each divided region. Dividing the image breaks the computation into smaller parts, which reduces the computational load on the crack estimation unit 121 compared with estimating the crack area over the entire original image at once.
  • When the crack estimation unit 121 divides the original image into a plurality of regions, it executes each process described below on each divided region, and then reconstructs the divided regions to generate an image corresponding to the single original image.
  • The crack coloring unit 122 colors the crack area estimated by the crack estimation unit 121 according to the colors of the surrounding objects and the like, and generates an image in which the crack area is colored.
  • In this embodiment, the crack coloring unit 122 defines an enlarged area in the image that contains the estimated crack area and is wider than it, statistically determines a color based on the color information of the objects included in the enlarged area (objects such as wall tiles and panels, the joints between them, etc.), and colors the crack area with the determined color. The color can be determined statistically, for example, by averaging the color values found within the enlarged area, weighted by the area each color occupies in it. In the resulting image the crack area blends in with the surrounding objects, so that almost no crack appears to exist on the wall surface.
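To make the area-weighted averaging concrete, here is a minimal Python sketch. Using the crack's padded bounding box as the "enlarged area", and the `margin` parameter itself, are assumptions of the example, not definitions taken from the patent:

```python
import numpy as np

def fill_crack_area(image, crack_mask, margin=20):
    """Color a crack region with the area-weighted mean color of its
    surroundings, as a stand-in for the crack coloring unit 122.

    image:      H x W x 3 uint8 array (the original image).
    crack_mask: H x W bool array, True on estimated crack pixels.
    margin:     how far the enlarged area extends beyond the crack's
                bounding box, in pixels (an assumed parameter).
    """
    ys, xs = np.nonzero(crack_mask)
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, image.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, image.shape[1])

    patch = image[y0:y1, x0:x1].reshape(-1, 3)
    local_mask = crack_mask[y0:y1, x0:x1].reshape(-1)

    # Averaging every non-crack pixel weights each color by the area it
    # occupies inside the enlarged region (tiles and joints alike).
    fill_color = patch[~local_mask].mean(axis=0)

    out = image.copy()
    out[crack_mask] = fill_color.astype(np.uint8)
    return out
```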
  • Alternatively, the crack coloring unit 122 may acquire the color information of an object such as a wall tile or panel and color the crack area with a color that matches or substantially matches that information. The object's color may be obtained, for example, by taking the color value with the highest frequency (mode) in the entire image, or the mode or average of the color values in a predetermined region adjacent to the crack area (for example, pixels within a certain distance of the crack area). The user may also specify or preset the color information, for example via the input/output unit 24 of the terminal 2 (see FIG. 4), from which the crack coloring unit 122 can then acquire it.
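A matching sketch for the mode-based alternative; the function name and the option to exclude crack pixels from the vote are assumptions of the example:

```python
import numpy as np

def dominant_color(image, crack_mask=None):
    """Return the most frequent (mode) color in the image, one possible
    reading of the alternative coloring strategy. If crack_mask is given,
    crack pixels are ignored so the crack's own color cannot win."""
    pixels = image.reshape(-1, 3)
    if crack_mask is not None:
        pixels = pixels[~crack_mask.reshape(-1)]
    colors, counts = np.unique(pixels, axis=0, return_counts=True)
    return colors[counts.argmax()]
```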
  • The object area estimation unit 123 executes a process of estimating, from the colored image generated by the crack coloring unit 122, the object areas, i.e., the areas where objects (such as wall tiles or panels) exist. The object area estimation unit 123 of this embodiment estimates the object areas using the object area estimation learning model 133 in the storage unit 130; the details of this model are described later.
  • This object area estimation detects the region of the image in which the objects (wall tiles, panels, etc.) exist and, further, the position, shape, and other attributes of each object in that region. In other words, each object is recognized individually. For a tiled wall, for example, the process detects the region in which tiles exist and also individually detects the position and shape of each tile separated by joints.
  • The superimposition unit 124 executes a process of superimposing the crack area estimated by the crack estimation unit 121 onto the object areas estimated by the object area estimation unit 123 in the colored image, thereby identifying the crack area within the object areas.
  • Because the object area estimation by the object area estimation unit 123 detects both the region in which objects (wall tiles, panels, etc.) exist and the position and shape of each object, superimposing the estimated crack area onto the region where it existed in the image (the region colored by the crack coloring unit 122) makes it possible not merely to detect where a crack exists on the wall surface but to identify in which object, such as a tile, the crack exists. If a crack area lies within a single object, the superimposition unit 124 identifies that object as containing a crack; if a crack area extends over multiple objects, all of those objects are identified as containing cracks.
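The identification step amounts to intersecting the crack mask with each object's instance mask. A minimal sketch, with the mask representation assumed for the example:

```python
import numpy as np

def tiles_with_cracks(instance_masks, crack_mask):
    """Identify which estimated objects (e.g. tiles) contain a crack by
    intersecting each instance mask with the crack mask.

    instance_masks: list of H x W bool arrays, one per estimated object.
    crack_mask:     H x W bool array of the crack area estimated on the
                    original (pre-coloring) image.
    Returns the indices of every object the crack area touches; a crack
    spanning several tiles therefore flags each of them.
    """
    return [i for i, m in enumerate(instance_masks)
            if np.logical_and(m, crack_mask).any()]
```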
  • The crack shape analysis unit 125 executes a process of analyzing the shape (length, width, etc.) of the crack area based on the crack area estimated by the crack estimation unit 121. The crack shape analysis unit 125 of this embodiment is configured to perform the shape analysis using, for example, a method based on the Hessian matrix.
  • When the brightness value of each pixel is regarded as a height, the image can be interpreted as a three-dimensional curved surface. The Hessian matrix at a pixel (x, y) is the square matrix whose elements are the second-order partial derivatives of the image brightness I in the x and y directions, expressed by the following Equation (1):

    H(x, y) = \begin{pmatrix} \partial^2 I/\partial x^2 & \partial^2 I/\partial x \partial y \\ \partial^2 I/\partial x \partial y & \partial^2 I/\partial y^2 \end{pmatrix}    ... (1)

  • Pixels whose second derivatives satisfy the relationship characteristic of a linear structure (for a dark crack on a lighter wall, a large curvature across the line and a nearly flat profile along it) are regarded as linear structures and are emphasized.
  • FIG. 7(a) shows a conceptual diagram of the crack area A estimated by the crack estimation unit 121 and the actual (ground-truth) crack area B.
  • FIG. 7(b) shows a skeleton C with a line width of one pixel, formed by the pixels regarded as a linear structure as described above, based on the estimated crack area A. The skeleton C indicates the extension direction and length of the crack.
  • The linear structure is then evaluated while changing the scale of the Hessian matrix along the skeleton C, and the crack width is obtained by finding, for each pixel determined to be a linear structure, the scale at which the linearity evaluation value is maximized (see FIG. 7(c)).
  • Through this shape analysis, the crack shape analysis unit 125 obtains a linear structure D close to the actual crack area B, as shown in FIG. 7(c). In short, the extension direction and length of the crack are obtained from the skeleton, and the width of the crack is obtained from the Hessian matrix.
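The following Python sketch shows one way such a Hessian-based analysis can be realized with scikit-image's Hessian utilities. The scale set, the use of the leading eigenvalue as the linearity measure, and equating the best-responding scale with the local width are assumptions of the sketch, not details taken from the patent:

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals
from skimage.morphology import skeletonize

def crack_skeleton_and_width(gray, crack_mask, scales=(1, 2, 3, 4)):
    """Skeletonize the crack to get its direction/length, then probe
    several Hessian scales and keep, per skeleton pixel, the scale with
    the strongest line response as a stand-in for the local width."""
    skeleton = skeletonize(crack_mask)           # 1-pixel-wide centerline

    responses = []
    for s in scales:
        elems = hessian_matrix(gray, sigma=s, order='rc')
        eigvals = hessian_matrix_eigvals(elems)  # sorted, largest first
        # A dark line on a bright background gives a strongly positive
        # leading eigenvalue; use it as the linearity measure.
        responses.append(eigvals[0])
    responses = np.stack(responses)              # (n_scales, H, W)

    best_scale = np.asarray(scales)[responses.argmax(axis=0)]
    widths = np.where(skeleton, best_scale, 0)
    return skeleton, widths
```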
  • The information/image storage unit 131 of the storage unit 130 stores, at least temporarily, the images acquired by the image acquisition unit 115, the images in which the crack area has been colored by the crack coloring unit 122, and the information and data generated by the processing of the functional units 121 to 125 of the processing unit 120.
  • The crack estimation learning model 132 is a learning model generated by machine learning using crack images of various cracks as training data. It can be created using, for example, an arbitrary external computer device (not shown) as a learning device and stored in the storage unit 130. The crack estimation learning model 132 may also be generated separately for each kind of object, such as tiles or panels, using crack images of that object as training data; in that case, multiple crack estimation learning models are generated and stored in the storage unit 130.
  • The crack estimation learning model 132 is generated by performing machine learning with a neural network composed of multiple layers, each containing neurons; for example, a convolutional neural network (CNN) can be used. In this embodiment, Mask R-CNN (Region-based Convolutional Neural Network) is used. In Mask R-CNN, candidate object regions are extracted with a CNN, and by simultaneously estimating each region's position and class probability, each object is enclosed in a bounding box and the class to which it belongs is determined.
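To make this concrete, the following is a minimal sketch of running a Mask R-CNN with torchvision. The COCO-pretrained model merely stands in for the crack estimation learning model 132, which the patent describes as trained on crack images; the 0.5 score and mask thresholds are assumptions of the example:

```python
import torch
import torchvision

# Off-the-shelf Mask R-CNN; in practice this would be fine-tuned on
# crack images before being used as the crack estimation learning model.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = torch.rand(3, 512, 512)   # placeholder for a wall image tensor
with torch.no_grad():
    (pred,) = model([image])

# Each detection has a bounding box, a class label with its score, and a
# per-pixel mask, matching the Mask R-CNN description above.
boxes, labels, scores, masks = (
    pred["boxes"], pred["labels"], pred["scores"], pred["masks"])
crack_masks = masks[scores > 0.5, 0] > 0.5   # assumed thresholds
```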
  • The object area estimation learning model 133 is a learning model generated by machine learning using images of various objects, such as tiles and panels, as training data. It too can be created using, for example, an arbitrary external computer device (not shown) as a learning device and stored in the storage unit 130. The object area estimation learning model 133 may also be generated separately for each kind of object, such as tiles or panels; in that case, multiple object area estimation learning models are generated and stored in the storage unit 130.
  • The object area estimation learning model 133 of this embodiment is also generated by machine learning using Mask R-CNN. Therefore, by using this model it is possible to estimate not only the region of an image in which objects exist but also the position and shape of each object in that region.
  • FIG. 8 is a flowchart showing a process for implementing a crack area detection method by the information processing system according to the present embodiment.
  • First, the image acquisition unit 115 of the server 1 acquires the original images captured by the camera mounted on the unmanned aerial vehicle 4 or by the camera used by the user (S101). The acquired original image shows the objects whose crack areas are to be detected, for example on the wall of a building or civil engineering structure. FIG. 9 shows an example of an original image to be inspected.
  • Next, the crack estimation unit 121 of the server 1 executes a process of estimating the crack area in the original image in which one or more objects (tiles, panels, etc. on the wall of a building or civil engineering structure) appear (S102). This crack area estimation may include a process in which the crack estimation unit 121 divides the original image into a plurality of regions.
  • FIG. 10 conceptually shows the image under inspection divided into a plurality of regions in a grid pattern; in the illustrated example, the image is divided into nine regions in total, three vertically by three horizontally. The crack estimation unit 121 associates the image of each divided region with information indicating which part of the whole original image it corresponds to, and stores them in the information/image storage unit 131. Note that dividing the detection target image in this way is optional, and the subsequent processing may be performed on the whole image without dividing it.
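A minimal sketch of such a grid division, recording each piece's offset so its position in the original can be recovered later. The 3x3 default mirrors the illustrated example; the dictionary layout is an assumption of the sketch:

```python
def split_into_grid(image, rows=3, cols=3):
    """Divide an image into a rows x cols grid, keeping for each piece
    the offset that records which part of the original it came from,
    as the information/image storage unit 131 is described as doing."""
    h, w = image.shape[:2]
    tiles = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            tiles.append({"offset": (y0, x0), "image": image[y0:y1, x0:x1]})
    return tiles
```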
  • The crack estimation unit 121 then executes the process of estimating the crack area, using the crack estimation learning model 132, for each divided region of the original image. FIG. 11 shows the estimated crack area drawn on the divided region in which a crack area was estimated to exist as a result of this process. The crack estimation unit 121 stores the information about the estimated crack area (the divided region in which it exists, its position within the divided region, its size and shape, etc.) in the information/image storage unit 131.
  • Next, the crack coloring unit 122 of the server 1 executes the process of coloring the crack area estimated by the crack estimation unit 121 according to the colors of the surrounding objects and the like (S103). In this embodiment, the crack coloring unit 122 defines an enlarged area in the image that contains the estimated crack area and is wider than it (see FIG. 12(a)), statistically determines a color from the color information of the objects included in the enlarged area (in the illustrated example, wall tiles and the joints between them), and colors the crack area with the determined color to generate a colored image (see FIG. 12(b)). As FIG. 12(b) shows, the colored crack area blends in with the surrounding objects, so that almost no crack appears to exist on the wall surface. The crack coloring unit 122 stores the generated colored image in the information/image storage unit 131.
  • Next, the object area estimation unit 123 of the server 1 executes the process of estimating, using the object area estimation learning model 133, the object areas in which the objects (in the illustrated example, wall tiles) exist in the colored image generated by the crack coloring unit 122 (S104).
  • FIG. 13 shows the object areas estimated by the object area estimation unit 123, i.e., the regions in which the objects (in the illustrated example, wall tiles) exist. The object area estimation unit 123 likewise executes the same estimation, using the object area estimation learning model 133, on the divided regions in which no crack area was estimated. It then stores the information on the estimated object areas for each divided region in the information/image storage unit 131.
  • Next, the superimposition unit 124 of the server 1 superimposes the crack area as it was before the coloring process by the crack coloring unit 122 onto the object areas estimated by the object area estimation unit 123 in the colored image (divided region), thereby identifying the crack area within the object areas (S105). The superimposition is performed so that the original, uncolored crack area overlaps the position where the colored crack area lies within the estimated object areas. Because the object area estimation (S104) detects the region in which the objects (tiles in the illustrated example) exist and the position and shape of each object, this superimposition does not merely detect where the crack area lies on the wall surface but identifies which object (tile) contains it. The superimposition unit 124 stores the information relating the object areas to the identified crack area (which object (tile) contains the crack area, etc.) in the information/image storage unit 131.
  • FIG. 14 shows the original, uncolored crack area superimposed on the estimated object areas. In the illustrated example, the two tiles in which crack areas exist have been identified by the superimposition process and are shown with a brightness different from that of the other tiles.
  • Next, the crack shape analysis unit 125 of the server 1 executes the process of analyzing the shape (length, width, etc.) of the crack area based on the crack area estimated by the crack estimation unit 121 (S106). As described above, the crack shape analysis unit 125 performs this shape analysis using a method based on the Hessian matrix; through the analysis it acquires the extension direction, length, and width of the crack as information on the crack shape, and stores this information in the information/image storage unit 131.
  • FIG. 15 is an enlarged view of the object areas shown in FIG. 14. It shows the skeleton generated from the estimated crack area and the linear structure, close to the actual crack area, obtained by evaluating the crack width. FIG. 15 also shows numerical information on the length and width of each segment constituting the crack, obtained as a result of the analysis by the crack shape analysis unit 125.
  • Next, the crack estimation unit 121 executes the process of reconstructing the single original detection target image from the images of the divided regions (S107). Based on the information stored in the information/image storage unit 131 indicating which part of the whole pre-division image each divided region corresponds to, the crack estimation unit 121 reconstructs the divided regions into the original detection target image. Among the divided regions in which object areas were estimated, in those in which a crack area was also estimated, the crack whose shape was analyzed and identified by the above processing is superimposed on the estimated object areas.
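A counterpart sketch to the grid division shown earlier, reassembling the pieces from their stored offsets (same assumed data layout as the split sketch):

```python
import numpy as np

def reconstruct(tiles, full_shape):
    """Rebuild the original detection target image from divided regions,
    using the stored offsets (counterpart of step S107)."""
    out = np.zeros(full_shape, dtype=tiles[0]["image"].dtype)
    for t in tiles:
        y0, x0 = t["offset"]
        h, w = t["image"].shape[:2]
        out[y0:y0 + h, x0:x0 + w] = t["image"]
    return out
```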
  • FIG. 16 shows the single original detection target image reconstructed from the divided regions in which object areas were estimated. In the illustrated example, the tiles shown in low-brightness (near-black) colors are the object areas estimated by the object area estimation unit 123. The crack estimation unit 121 stores the reconstructed image and the various data related to it in the information/image storage unit 131 in association with each other.
  • The various data related to the reconstructed image include the data obtained through the above processing: the estimated object areas (which may, for example, be assigned IDs for management), the position and shape of each object, the number of estimated object areas, the object areas in which cracks exist (likewise identifiable and manageable by ID), the shape (length and width) of each crack, and the number of object areas containing cracks (in particular, information on the number of object areas containing cracks whose length or width exceeds a reference value).
  • Part or all of the reconstructed image and the various data related to it may be transmitted to the terminal 2 in response to a request from the terminal 2, and may be made viewable by the user on a predetermined user interface via the input/output unit 24 (e.g., a display) of the terminal 2. Being able to grasp, for example, the number of object areas containing cracks makes the preparation of the new objects needed at repair time smoother.
  • The number of object areas containing cracks whose length or width exceeds a reference value may be extracted by the processing unit 120 by comparing at least one of the length and the width of each crack with the reference value; alternatively, after the crack data has been received on the terminal 2, a similar comparison may be performed on the terminal 2 to extract it.
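One plausible reading of this comparison as code; the record layout and the parameter names are assumptions of the example:

```python
def count_objects_needing_repair(crack_records, max_length, max_width):
    """Count the object areas containing at least one crack whose length
    or width exceeds the reference values. crack_records is assumed to
    map an object ID to a list of (length, width) pairs produced by the
    crack shape analysis."""
    return sum(
        1 for cracks in crack_records.values()
        if any(length > max_length or width > max_width
               for length, width in cracks)
    )
```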
  • As described above, in this embodiment the crack area in the original image is colored according to at least the color around the estimated crack area, and the object areas are then estimated on the colored image. This prevents an object with a crack, such as a tile, from being detected in a divided state.
  • In the embodiment above, it was explained that the superimposition unit 124 of the server 1 identifies the crack area within the object areas by superimposing the crack area, as estimated by the crack estimation unit 121 before the coloring process by the crack coloring unit 122, onto the object areas estimated by the object area estimation unit 123 in the colored image (divided region) (step S105 in FIG. 8). This modification provides an alternative to that superimposition process.
  • In this modification, the crack estimation unit 121 re-executes the crack area estimation described at step S102 in FIG. 8 on the original image associated with the object areas estimated by the object area estimation unit 123 at step S104, and thereby identifies the crack area within the object areas. Like the superimposition process by the superimposition unit 124, this also makes it possible to estimate not only the position of the crack area in the original image but also which object in the object areas contains it.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • General Health & Medical Sciences (AREA)
  • Health & Medical Sciences (AREA)
  • Multimedia (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Biochemistry (AREA)
  • Signal Processing (AREA)
  • Immunology (AREA)
  • Pathology (AREA)
  • Geometry (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Closed-Circuit Television Systems (AREA)

Abstract

[Problem] To provide an information processing system and a program that make it possible to prevent a cracked object from being detected in a split state. [Solution] According to an embodiment of the present invention, an information processing system is provided that comprises: a crack estimation unit that estimates, with respect to an original image containing one or more objects in a structure, a cracked region in an object; a crack coloring unit that colors the estimated cracked region in the original image at least according to the color of surroundings of the cracked region; and an object region estimation unit that estimates, with respect to an image resulting from the coloring, an object region where the object is present.

Description

情報処理システム及びプログラム、情報処理方法、サーバInformation processing system and program, information processing method, server
 本発明は、ヒビ割れを検出する情報処理システム及びプログラム、情報処理方法、サーバに関する。 The present invention relates to an information processing system and program for detecting cracks, an information processing method, and a server.
 建築物や土木構造物の壁におけるヒビ割れを検出するヒビ割れ検出装置が、特許文献1に開示されている。特許文献1に開示されたヒビ割れ検出装置によれば、ヒビ割れ検出のための前処理の一例として、対象画像からタイルを表す画像領域(タイル領域)を抽出し、タイル領域以外の画像領域をヒビ割れ検出の対象範囲から除外する処理が実行される(引用文献1の段落0022参照)。 A crack detection device that detects cracks in the walls of buildings and civil engineering structures is disclosed in Patent Document 1. According to the crack detection device disclosed in Patent Document 1, as an example of preprocessing for crack detection, an image region representing a tile (tile region) is extracted from a target image, and an image region other than the tile region is extracted. A process of excluding it from the crack detection target range is executed (see paragraph 0022 of Cited Document 1).
特許第6894339号公報Patent No. 6894339
 しかしながら、このようにヒビ割れ検出処理の前にタイル領域を抽出する処理を行うと、ヒビ割れがあるタイルは、そのヒビ割れにより、本来は1つであるタイルが分割された2つのタイルとして誤検出されたり、あるいは、分割されたタイルの一方の部分が欠損した状態で誤検出されたりする可能性がある。この場合は、ヒビ割れ箇所が分割されたタイルの境界(目地)として認識されてしまうなどにより、ヒビ割れ検出が正しくなされなくなるおそれがある。 However, if the process of extracting the tile area is performed before the crack detection process, tiles with cracks may be mistakenly treated as two tiles that were originally one tile due to the cracks. Otherwise, one part of the divided tile may be erroneously detected as missing. In this case, the crack may be recognized as a boundary (joint) between divided tiles, and the crack may not be detected correctly.
 本発明はこのような背景を鑑みてなされたものであり、特に、ヒビ割れのあるタイル等の対象物が分割された状態で検出されることを防止することが可能な情報処理システム及びプログラム、情報処理方法、サーバを提供することを一つの目的とする。 The present invention was made in view of this background, and particularly provides an information processing system and a program that can prevent objects such as tiles with cracks from being detected in a divided state. One purpose is to provide an information processing method and server.
 本発明の一態様によれば、構造物における一以上の対象物が映る原画像に対して対象物におけるヒビ割れ領域を推定するヒビ割れ推定部と、推定したヒビ割れ領域の周囲の色に少なくとも応じて、原画像におけるヒビ割れ領域を着色するヒビ割れ着色部と、着色後の画像に対して対象物が存在する対象物領域を推定する対象物領域推定部と、を備えることを特徴とする情報処理システムが提供される。 According to one aspect of the present invention, there is provided a crack estimating unit that estimates a crack area in an object with respect to an original image in which one or more objects in a structure are reflected; Accordingly, the present invention is characterized by comprising a crack coloring unit that colors a crack area in the original image, and an object area estimating unit that estimates an object area where the object exists in the colored image. An information processing system is provided.
 本発明によれば、特に、ヒビ割れのあるタイル等の対象物が分割された状態で検出されることを防止することが可能な情報処理システム及びプログラム、情報処理方法、サーバを提供することができる。 According to the present invention, it is possible to provide an information processing system, a program, an information processing method, and a server that can prevent an object such as a cracked tile from being detected in a divided state. can.
本発明の実施の形態の全体構成を示す図である。1 is a diagram showing the overall configuration of an embodiment of the present invention. 本発明の実施の形態にかかる情報処理システムのシステム構成を示す図である。1 is a diagram showing a system configuration of an information processing system according to an embodiment of the present invention. 図2のサーバのハードウェア構成を示すブロック図である。3 is a block diagram showing the hardware configuration of the server in FIG. 2. FIG. 図2の端末のハードウェア構成を示すブロック図である。3 is a block diagram showing the hardware configuration of the terminal in FIG. 2. FIG. 図2の飛行体のハードウェア構成を示すブロック図である。FIG. 3 is a block diagram showing the hardware configuration of the aircraft shown in FIG. 2. FIG. 図2のサーバ、端末の機能を示すブロック図である。3 is a block diagram showing the functions of the server and terminal in FIG. 2. FIG. ヒビ割れ形状解析部によるヒビ割れ領域の形状解析処理を説明する図である。It is a figure explaining the shape analysis process of the crack area|region by a crack shape analysis part. 本実施形態にかかる情報処理システムによるヒビ割れ領域検出方法を実施する処理を示すフローチャートである。7 is a flowchart illustrating a process for implementing a crack area detection method by the information processing system according to the present embodiment. ヒビ割れ領域を検出する対象の画像の一例である。This is an example of an image in which a crack area is to be detected. 検出対象の画像を格子状に複数の領域に分割した状態を概念的に示す図である。FIG. 3 is a diagram conceptually showing a state in which an image to be detected is divided into a plurality of regions in a grid pattern. ヒビ割れ領域が存在すると推定された分割領域の上に、推定されたヒビ割れ領域を示した状態の画像である。This is an image showing an estimated crack area on a divided area where it is estimated that the crack area exists. 図(a)は、画像中に推定されたヒビ割れ領域と、そのヒビ割れ領域を含む拡大領域とを示す図であり、図(b)は、ヒビ割れ領域を着色した状態を示す図である。Figure (a) is a diagram showing a crack area estimated in an image and an enlarged area including the crack area, and Figure (b) is a diagram showing a state in which the crack area is colored. . 対象物領域推定部によって推定された対象物領域を示す図である。FIG. 3 is a diagram showing a target object area estimated by a target object area estimation unit. 推定された対象物領域に着色前の元のヒビ割れ領域が重畳された状態を示す図である。FIG. 7 is a diagram showing a state in which the original crack region before coloring is superimposed on the estimated object region. 図11に示した対象物領域を拡大して示す図である。FIG. 12 is a diagram showing an enlarged view of the object region shown in FIG. 11; 対象物領域が推定された各分割領域によって再構築された元の1つの検出対象画像を示す図である。FIG. 6 is a diagram showing one original detection target image reconstructed from each divided region in which the target object region has been estimated.
 本発明の実施形態の内容を列記して説明する。本発明の実施の形態による情報処理システム及びプログラム、情報処理方法、サーバは、以下のような構成を備える。
[項目1]
 構造物における一以上の対象物が映る原画像に対して当該対象物におけるヒビ割れ領域を推定するヒビ割れ推定部と、
 推定した前記ヒビ割れ領域の周囲の色に少なくとも応じて、前記原画像における前記ヒビ割れ領域を着色するヒビ割れ着色部と、
 当該着色後の画像に対して前記対象物が存在する対象物領域を推定する対象物領域推定部と、を備える、
 ことを特徴とする情報処理システム。
[項目2]
 推定した前記対象物領域と前記ヒビ割れ領域の位置を少なくとも重畳して前記対象物領域におけるヒビ割れ領域を特定する重畳部をさらに備える、
 ことを特徴とする項目1に記載の情報処理システム。
[項目3]
 前記ヒビ割れ推定部は、推定した前記対象物領域を対応づけた前記原画像に対してヒビ割れ領域の推定を再実行し、前記対象物領域におけるヒビ割れ領域を特定する、
 ことを特徴とする項目1に記載の情報処理システム。
[項目4]
 特定された前記対象物領域における前記ヒビ割れ領域に対して、ヒビ割れ形状の解析を実行するヒビ割れ形状解析部をさらに備える、
 ことを特徴とする項目2または3に記載の情報処理システム。
[項目5]
 前記ヒビ割れ推定部は、前記原画像を二以上の分割画像に分割し、各分割画像に対してヒビ割れ領域を推定する、
 ことを特徴とする項目1ないし3のいずれかに記載の情報処理システム。
[項目6]
 前記ヒビ割れ着色部は、推定された前記ヒビ割れ領域を拡大させた拡大領域を生成し、当該拡大領域内の色情報に基づき統計的に決定された色で前記ヒビ割れ領域を着色する、
 ことを特徴とする項目1ないし3のいずれかに記載の情報処理システム。
[項目7]
 前記ヒビ割れ着色部は、前記対象物の色情報を取得し、当該色情報に一致または略一致した色で前記ヒビ割れ領域を着色する、
 ことを特徴とする項目1ないし3のいずれかに記載の情報処理システム。
[項目8]
 処理部を有するコンピュータに情報処理を実行させるプログラムであって、
 前記プログラムは、前記処理部に、
 構造物における一以上の対象物が映る原画像に対して当該対象物におけるヒビ割れ領域を推定することと、
 推定した前記ヒビ割れ領域の周囲の色に少なくとも応じて、前記原画像における前記ヒビ割れ領域を着色することと、
 当該着色後の画像に対して前記対象物が存在する対象物領域を推定することと、
を実行させる、プログラム。
[項目9]
 ヒビ割れ推定部により、構造物における一以上の対象物が映る原画像に対して当該対象物におけるヒビ割れ領域を推定するステップと、
 ヒビ割れ着色部により、推定した前記ヒビ割れ領域の周囲の色に少なくとも応じて、前記原画像における前記ヒビ割れ領域を着色するステップと、
 対象物領域推定部により、当該着色後の画像に対して前記対象物が存在する対象物領域を推定するステップと、
をコンピュータにおいて実行する、情報処理方法。
[項目10]
 構造物における一以上の対象物が映る原画像に対して当該対象物におけるヒビ割れ領域を推定するヒビ割れ推定部と、
 推定した前記ヒビ割れ領域の周囲の色に少なくとも応じて、前記原画像における前記ヒビ割れ領域を着色するヒビ割れ着色部と、
 当該着色後の画像に対して前記対象物が存在する対象物領域を推定する対象物領域推定部と、を備える、
 ことを特徴とするサーバ。
The contents of the embodiments of the present invention will be listed and explained. An information processing system, program, information processing method, and server according to an embodiment of the present invention has the following configuration.
[Item 1]
a crack estimation unit that estimates a crack area in one or more objects in a structure with respect to an original image in which the objects are reflected;
a crack coloring unit that colors the crack area in the original image according to at least the estimated color around the crack area;
an object area estimation unit that estimates an object area in which the object exists in the colored image;
An information processing system characterized by:
[Item 2]
further comprising a superimposing unit that specifies a crack region in the target object region by at least superimposing the estimated position of the target object region and the crack region;
The information processing system according to item 1, characterized in that:
[Item 3]
The crack estimating unit re-performs the estimation of the crack area on the original image associated with the estimated object area, and identifies the crack area in the object area.
The information processing system according to item 1, characterized in that:
[Item 4]
further comprising a crack shape analysis unit that performs a crack shape analysis on the crack region in the specified object region;
The information processing system according to item 2 or 3, characterized in that:
[Item 5]
The crack estimation unit divides the original image into two or more divided images and estimates a crack area for each divided image.
The information processing system according to any one of items 1 to 3, characterized in that:
[Item 6]
The crack coloring unit generates an enlarged area by enlarging the estimated crack area, and colors the crack area with a statistically determined color based on color information in the enlarged area.
The information processing system according to any one of items 1 to 3, characterized in that:
[Item 7]
The crack coloring unit acquires color information of the object and colors the crack area with a color that matches or substantially matches the color information.
The information processing system according to any one of items 1 to 3, characterized in that:
[Item 8]
A program that causes a computer having a processing unit to perform information processing,
The program causes the processing unit to
estimating a crack area in one or more objects in a structure with respect to an original image in which the objects are reflected;
Coloring the crack area in the original image according to at least the estimated color of the surrounding area of the crack area;
estimating an object area where the object exists in the colored image;
A program to run.
[Item 9]
estimating a crack area in one or more objects in a structure with respect to an original image in which one or more objects in the structure are reflected, by a crack estimating unit;
Coloring the crack area in the original image according to at least the estimated color of the surrounding area of the crack area using a crack coloring unit;
estimating, by a target object area estimating unit, a target object area in which the target object exists in the colored image;
An information processing method that executes on a computer.
[Item 10]
a crack estimation unit that estimates a crack area in one or more objects in a structure with respect to an original image in which the objects are reflected;
a crack coloring unit that colors the crack area in the original image according to at least the estimated color around the crack area;
an object area estimation unit that estimates an object area in which the object exists in the colored image;
A server characterized by:
<実施の形態の詳細>
 以下、本発明の実施の形態による情報処理システムを説明する。添付図面において、同一または類似の要素には同一または類似の参照符号及び名称が付され、各実施形態の説明において同一または類似の要素に関する重複する説明は省略することがある。また、各実施形態で示される特徴は、互いに矛盾しない限り他の実施形態にも適用可能である。
<Details of embodiment>
An information processing system according to an embodiment of the present invention will be described below. In the accompanying drawings, the same or similar elements are given the same or similar reference numerals and names, and redundant description of the same or similar elements may be omitted in the description of each embodiment. Furthermore, features shown in each embodiment can be applied to other embodiments as long as they do not contradict each other.
<本実施形態の概要>
 図1に示されるように、本実施の形態における情報処理システムは、例えば建物や土木建造物などの構造物の壁面を撮像した画像を基に、そのような壁面に存在するヒビ割れを検出するものである。構造物の壁面は、一例として、ユーザ自身がカメラを操作して撮像してもよいし、あるいは、自律飛行もしくは遠隔操作により飛行する図1に示すような無人飛行体4に搭載したカメラを遠隔操作して撮像してもよい。
<Overview of this embodiment>
As shown in FIG. 1, the information processing system according to the present embodiment detects cracks existing in a wall surface of a structure such as a building or a civil engineering structure based on an image taken of such a wall surface. It is something. For example, the wall surface of the structure may be imaged by the user himself or herself by operating a camera, or by a camera mounted on an unmanned flying vehicle 4 as shown in FIG. 1 that flies autonomously or by remote control. You may operate it to take an image.
 本実施形態における情報処理システムは、後述するように、ヒビ割れの無い状態の対象物が存在する対象物領域を推定する処理を行うことにより、検査対象の画像中におけるヒビ割れ箇所の有無を単に検出するだけでなく、検査対象の画像中において対象物(例えば、壁面のタイルやパネルなどの区画された領域をなすもの)が存在する領域を特定し、さらには、どの対象物のどの部分にヒビ割れ箇所が存在するかについても検出することを可能にするものである。 As will be described later, the information processing system in this embodiment simply detects the presence or absence of a crack in an image to be inspected by performing processing to estimate an object area where an object without cracks exists. In addition to detection, it also identifies the area in the image to be inspected where an object (for example, a wall tile or panel that forms a partitioned area) is present, and also determines which part of which object is present. It also makes it possible to detect the presence of cracks.
<System Configuration>
As shown in FIG. 2, the information processing system according to the present embodiment includes a server 1, a terminal 2, and an unmanned aerial vehicle 4, which may be communicably connected to one another via a network NW. The illustrated configuration is an example; the unmanned aerial vehicle 4 need not, for instance, be connected to the network NW. In that case, the unmanned aerial vehicle 4 may be operated with a transmitter (a so-called radio controller) operated by the user, and the image data acquired by its camera may be stored in an auxiliary storage device connected to the unmanned aerial vehicle 4 (for example, a memory card such as an SD card, or a USB memory) and later read out from that device into the server 1 or the terminal 2 by the user. Alternatively, the unmanned aerial vehicle 4 may be connected to the network NW for only one of the two purposes, operation or image-data storage.
<Hardware Configuration of Server 1>
FIG. 3 shows the hardware configuration of the server 1 in the present embodiment. The illustrated configuration is an example, and other configurations may be used.
The server 1 includes at least a processor 10, a memory 11, a storage 12, a transmitter/receiver 13, and an input/output unit 14, which are electrically connected to one another via a bus 15. The server 1 may be a general-purpose computer such as a workstation or personal computer, or may be implemented logically by cloud computing.
The processor 10 is an arithmetic device that controls the overall operation of the server 1, controls the exchange of data between elements, and performs the information processing required for application execution and authentication. For example, the processor 10 is a CPU (Central Processing Unit) and/or a GPU (Graphics Processing Unit), and carries out each information process by executing programs stored in the storage 12 and loaded into the memory 11.
The memory 11 includes a main memory composed of a volatile storage device such as DRAM (Dynamic Random Access Memory) and an auxiliary memory composed of a non-volatile storage device such as flash memory or an HDD (Hard Disk Drive). The memory 11 is used as a work area for the processor 10 and also stores the BIOS (Basic Input/Output System) executed when the server 1 starts up, various setting information, and the like.
The storage 12 stores various programs such as application programs. A database storing the data used in each process may be built in the storage 12, and the storage unit 130 described later may be provided in part of its storage area.
The transmitter/receiver 13 is a communication interface through which the server 1 communicates with external devices (not shown), the unmanned aerial vehicle 4, and the like via the communication network. It may further include short-range communication interfaces such as Bluetooth (registered trademark) and BLE (Bluetooth Low Energy), a USB (Universal Serial Bus) terminal, and so on.
The input/output unit 14 comprises information input devices such as a keyboard and mouse and output devices such as a display.
The bus 15 is connected in common to the above elements and carries, for example, address signals, data signals, and various control signals.
<Terminal 2>
The terminal 2 shown in FIG. 4 likewise includes a processor 20, a memory 21, a storage 22, a transmitter/receiver 23, and an input/output unit 24, electrically connected to one another via a bus 25. Since these elements can be configured in the same manner as in the server 1 described above, their detailed description is omitted.
<Unmanned Aerial Vehicle 4>
FIG. 5 is a block diagram showing the hardware configuration of the unmanned aerial vehicle 4. The flight controller 41 can have one or more processors, such as a programmable processor (for example, a central processing unit (CPU)).
The flight controller 41 also has, and can access, a memory 411. The memory 411 stores logic, code, and/or program instructions that the flight controller can execute to perform one or more steps. The flight controller 41 may also include sensors 412 such as inertial sensors (acceleration sensors, gyro sensors), a GPS sensor, and proximity sensors (for example, LiDAR).
The memory 411 may include, for example, a removable medium such as an SD card or random access memory (RAM), or an external storage device. Data acquired from the cameras/sensors 42 may be transferred directly to and stored in the memory 411. For example, still-image and video data captured by a camera or the like may be recorded in the internal or external memory; alternatively, such data may be recorded from the camera/sensors 42 or the internal memory, via the network NW, to at least one of the server 1 and the terminal 2. The camera 42 is mounted on the unmanned aerial vehicle 4 via a gimbal 43.
The flight controller 41 includes a control module (not shown) configured to control the state of the unmanned aerial vehicle 4. For example, to adjust the spatial arrangement, velocity, and/or acceleration of the unmanned aerial vehicle 4, which has six degrees of freedom (translational motions x, y, and z and rotational motions θx, θy, and θz), the control module controls the propulsion mechanism of the unmanned aerial vehicle 4 (motors 45 and the like) via an ESC 44 (Electric Speed Controller). A propeller 46 is rotated by a motor 45 powered by a battery 48, generating the lift of the unmanned aerial vehicle 4. The control module can also control one or more of the states of the mounted equipment and sensors.
The flight controller 41 can communicate with a transmitter/receiver 47 configured to send data to and/or receive data from one or more external devices (for example, a transceiver (radio controller) 49, a terminal, a display device, or another remote controller). The transceiver 49 can use any suitable means of communication, such as wired or wireless communication.
For example, the transmitter/receiver 47 can use one or more of a local area network (LAN), a wide area network (WAN), infrared, radio, WiFi, point-to-point (P2P) networks, telecommunication networks, cloud communication, and so on.
The transmitter/receiver 47 can transmit and/or receive one or more of the data acquired by the sensors 42, processing results generated by the flight controller 41, predetermined control data, user commands from a terminal or remote controller, and the like.
The sensors 42 according to the present embodiment may include inertial sensors (acceleration sensors, gyro sensors), a GPS sensor, proximity sensors (for example, LiDAR), or vision/image sensors (for example, cameras).
<Functions of Server 1>
FIG. 6 is a block diagram illustrating the functions implemented in the server 1 and the terminal 2. In the present embodiment, the server 1 includes an image acquisition unit 115, a processing unit 120, and a storage unit 130. The processing unit 120 includes a crack estimation unit 121, a crack coloring unit 122, an object area estimation unit 123, a superimposition unit 124, and a crack shape analysis unit 125. The storage unit 130 includes an information/image storage unit 131, a crack estimation learning model 132, and an object area estimation learning model 133. Although these functional units are illustrated as functional units of the processor 10 of the server 1, some or all of them may instead be realized in any of the processor 10 of the server 1, the processor 20 of the terminal 2, or the controller 41 of the unmanned aerial vehicle 4, according to the capabilities of each.
The communication unit 110 communicates with the terminal 2 and the unmanned aerial vehicle 4 via the network NW. The communication unit 110 also functions as a reception unit that accepts various requests, data, and the like from the terminal 2, the unmanned aerial vehicle 4, and so on.
The image acquisition unit 115 acquires images captured by a digital camera mounted on the unmanned aerial vehicle 4 or by a digital camera used by the user, for example via wireless communication through a communication interface or wired communication through a USB terminal or the like. The image acquisition unit 115 may also be configured to acquire images via a storage medium such as a USB memory or SD memory.
The processing unit 120 includes the functional units 121 to 125, which execute a series of processes for detecting cracks in the images acquired by the image acquisition unit 115 and determining which part of which object contains a crack.
The crack estimation unit 121 executes a process of estimating, from an original image showing one or more objects of a structure (tiles, panels, or the like on the wall surface of a building or civil engineering structure), the crack areas present in those objects. The crack estimation unit 121 of the present embodiment estimates crack areas using the crack estimation learning model 132 in the storage unit 130; the details of this model are described later.
The crack estimation unit 121 may execute the crack-area estimation on the original image as a whole, or may first divide the original image into a plurality of regions and then estimate crack areas in each divided region. Dividing the original image subdivides the computation required for the estimation, so the computational load on the crack estimation unit 121 can be kept lower than when the crack-area estimation is executed on the entire original image at once. When the crack estimation unit 121 has divided the original image into a plurality of regions, it executes each of the processes described below on each divided region and then reconstructs the divided regions to generate an image corresponding to the original single image.
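A minimal sketch of this divide-and-reassemble step is given below in Python with NumPy. The helper names, the fixed grid, and the absence of tile overlap are assumptions for illustration; the embodiment does not prescribe an implementation.

```python
import numpy as np

def split_into_tiles(image: np.ndarray, rows: int, cols: int):
    """Split an H x W x C image into rows x cols sub-images,
    remembering where each one came from."""
    h, w = image.shape[:2]
    tiles = []
    for r in range(rows):
        for c in range(cols):
            y0, y1 = r * h // rows, (r + 1) * h // rows
            x0, x1 = c * w // cols, (c + 1) * w // cols
            # Keep the offset so the region can be put back later.
            tiles.append(((y0, x0), image[y0:y1, x0:x1].copy()))
    return tiles

def reassemble(tiles, shape):
    """Inverse of split_into_tiles: paste each region back at its offset."""
    out = np.zeros(shape, dtype=tiles[0][1].dtype)
    for (y0, x0), tile in tiles:
        th, tw = tile.shape[:2]
        out[y0:y0 + th, x0:x0 + tw] = tile
    return out
```

Each tile is carried together with its offset, which plays the role of the "which part of the original image" information that the embodiment stores in the information/image storage unit 131.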
The crack coloring unit 122 executes a process of coloring the crack areas estimated by the crack estimation unit 121 according to the colors of the surrounding objects and the like, generating an image in which the crack areas have been painted over.
As one example of this coloring process, the crack coloring unit 122 defines in the image an enlarged region that contains the estimated crack area and is wider than it, statistically determines a color from the color information of the objects contained in that enlarged region (objects such as wall tiles and panels, and the joints between them), and colors the crack area with the determined color. The color can be determined statistically from the color information in the enlarged region by, for example, taking a weighted average of the colors present there, weighted by the area each color occupies within the region. In the resulting image, the crack area blends into the surrounding objects, so that the wall surface appears to contain almost no cracks.
As another example, the crack coloring unit 122 acquires the color information of an object such as a wall tile or panel and colors the crack area with a color that matches, or substantially matches, that color information. The object's color can be acquired, for example, by taking the most frequent color in the entire image as the fill color, or by taking the mode or mean of the colors in a predetermined region adjacent to the crack area (for example, pixels within a given distance of the crack area). Alternatively, when the color of an object such as a wall tile or panel is already known, the user may specify it or set it in advance via the input/output unit 24 of the terminal 2 (see FIG. 4), allowing the crack coloring unit 122 to acquire the color information that way.
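As an illustration only, the area-weighted variant of the coloring step could be sketched as follows; the binary-mask convention and the helper name are assumptions, not part of the embodiment.

```python
import numpy as np

def fill_crack_with_local_color(image: np.ndarray,
                                crack_mask: np.ndarray,
                                margin: int = 10) -> np.ndarray:
    """Paint the pixels in crack_mask with the area-weighted mean color
    of an enlarged bounding region around the crack (crack pixels excluded)."""
    ys, xs = np.nonzero(crack_mask)
    y0 = max(ys.min() - margin, 0)
    y1 = min(ys.max() + margin + 1, image.shape[0])
    x0 = max(xs.min() - margin, 0)
    x1 = min(xs.max() + margin + 1, image.shape[1])

    region = image[y0:y1, x0:x1]
    region_mask = crack_mask[y0:y1, x0:x1].astype(bool)
    # The mean over the non-crack pixels of the enlarged region is exactly
    # an average of the colors occurring there, weighted by their area.
    fill_color = region[~region_mask].mean(axis=0)

    out = image.copy()
    out[crack_mask.astype(bool)] = fill_color.astype(image.dtype)
    return out
```

The mode-based variant described above would differ only in how `fill_color` is computed (mode instead of mean, or a user-supplied value).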
The object area estimation unit 123 executes a process of estimating, in the image generated by the crack coloring unit 122 with the crack areas painted over, the object areas in which objects (wall tiles, panels, and the like) exist. The object area estimation unit 123 of the present embodiment estimates object areas using the object area estimation learning model 133 in the storage unit 130; the details of this model are described later.
Through this object-area estimation, the region of the image in which objects (wall tiles, panels, and the like) exist is detected, and the position, shape, and so on of each individual object in that region are also detected. In other words, the position and shape of each object are recognized individually. Taking tiles as an example, the object-area estimation detects the region of the image in which tiles exist and, within that region, individually detects the position and shape of each tile separated by joints.
The superimposition unit 124 superimposes the crack areas estimated by the crack estimation unit 121, as they were before the coloring by the crack coloring unit 122, onto the object areas estimated by the object area estimation unit 123 in the colored image, and executes a process of identifying the crack areas within the object areas. As described above, the object-area estimation detects both the region of the image in which objects (wall tiles, panels, and the like) exist and the position and shape of each individual object. Consequently, by superimposing the crack areas estimated by the crack estimation unit 121 onto the regions where the (now colored-over) crack areas exist, the superimposition unit 124 can identify not merely the position of a crack area on the wall surface, but which object, such as a tile, the crack area lies on. When a crack area lies within a single object, the superimposition unit 124 identifies that object as the object containing the crack; when a crack area spans multiple objects, it identifies all of those objects as containing the crack.
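A hedged sketch of this identification step, assuming each estimated object is available as a binary instance mask (a representation the embodiment does not mandate):

```python
import numpy as np

def objects_with_cracks(object_masks: list[np.ndarray],
                        crack_mask: np.ndarray) -> list[int]:
    """Return the indices of all object instance masks that the
    (pre-coloring) crack mask overlaps; a crack spanning several
    objects reports every one of them."""
    crack = crack_mask.astype(bool)
    return [i for i, m in enumerate(object_masks)
            if np.any(m.astype(bool) & crack)]
```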
The crack shape analysis unit 125 executes a process of analyzing the shape (length, width, and the like) of a crack area based on the crack area estimated by the crack estimation unit 121. Various known techniques can be used for this shape analysis; as one example, the crack shape analysis unit 125 of the present embodiment analyzes the shape of a crack using a technique based on the Hessian matrix.
Here, the Hessian-based technique for analyzing the shape of a crack area is described with reference to FIG. 7.
Treating the luminance value f(x, y) of each pixel (x, y) as a height and (x, y) as continuous variables, the image can be interpreted as a three-dimensional surface. The Hessian matrix at a pixel (x, y) is the square matrix whose elements are the second-order partial derivatives of the image luminance in the x and y directions:

  H(x, y) = | ∂²f/∂x²    ∂²f/∂x∂y |
            | ∂²f/∂y∂x   ∂²f/∂y²  |    … Expression (1)

Based on the relationship between the eigenvalues λ1 and λ2 of this Hessian matrix, pixels satisfying the eigenvalue condition of Expression (2) are regarded as linear structures and are emphasized. Intuitively, at a pixel on a line-like structure one eigenvalue, corresponding to the strong curvature across the line, is large in magnitude, while the other, along the line, is close to zero.
FIG. 7(a) is a conceptual diagram of the crack area A estimated by the crack estimation unit 121 and the actual, ground-truth crack area B; FIG. 7(b) shows a skeleton C, one pixel in width, formed from the pixels regarded as linear structures on the basis of the estimated crack area A. The skeleton C indicates the direction of extension and the length of the crack.
The linear structure is then evaluated on the skeleton C while varying the scale of the Hessian matrix, and the crack width can be evaluated by finding, for each pixel judged to belong to a linear structure, the scale at which the line-likeness score is maximal (see FIG. 7(c)). By scaling the Hessian matrix and evaluating the crack width in this way, the crack shape analysis unit 125 obtains, as the shape of the crack, a linear structure D close to the actual crack area B, as shown in FIG. 7(c).
Thus, by applying this Hessian-based shape analysis to the crack area estimated by the crack estimation unit 121 (and by the crack estimation learning model 132 described later), the direction of extension and the length of the crack are obtained from the skeleton C, and the width of the crack is obtained from the Hessian matrix.
The Hessian-based technique described above is disclosed in the paper "Classification of Crack Width in Concrete Structures by Image Processing" (Proceedings of the Japan Concrete Institute, Vol. 34, No. 1, 2012).
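For orientation, a multi-scale Hessian analysis of this general kind can be sketched with scikit-image as below. This is not the embodiment's implementation: the scale range, the line-likeness score, and the use of the largest-magnitude eigenvalue are all assumptions.

```python
import numpy as np
from skimage.feature import hessian_matrix, hessian_matrix_eigvals
from skimage.morphology import skeletonize

def crack_width_map(gray: np.ndarray, crack_mask: np.ndarray,
                    scales=(1, 2, 3, 4, 5)):
    """On the crack skeleton, find the Hessian scale at which the
    line-likeness response peaks; that scale serves as a width proxy."""
    gray = gray.astype(float)
    skeleton = skeletonize(crack_mask.astype(bool))  # 1-pixel-wide centerline
    best_score = np.full(gray.shape, -np.inf)
    best_scale = np.zeros(gray.shape)
    for s in scales:
        elems = hessian_matrix(gray, sigma=s, order='rc')
        l1, l2 = hessian_matrix_eigvals(elems)   # eigenvalues, |l1| ordering varies
        score = np.abs(l1) * s ** 2              # scale-normalized response (assumed)
        better = score > best_score
        best_score[better] = score[better]
        best_scale[better] = s
    return skeleton, np.where(skeleton, best_scale, 0.0)
```

The skeleton gives the direction of extension and length; the per-pixel peak scale stands in for the crack width, mirroring the two outputs described above.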
Turning to the storage unit 130, its information/image storage unit 131 stores, at least temporarily, the images acquired by the image acquisition unit 115, the images generated by the crack coloring unit 122 with the crack areas painted over, and the information and data generated by the processes of the functional units 121 to 125 of the processing unit 120.
The crack estimation learning model 132 is a learning model generated by machine learning using crack images of various kinds of cracks as training data. It can be created using, for example, any external computer device (not shown) as a learning machine and stored in the storage unit 130. The crack estimation learning model 132 may also be generated by machine learning on crack images for each different kind of object, such as tiles or panels; in that case, a plurality of crack estimation learning models, each specialized for one kind of object, are generated and stored in the storage unit 130.
The crack estimation learning model 132 is generated by performing machine learning with a neural network composed of multiple layers, each containing neurons. A deep neural network such as a convolutional neural network (CNN) can be used as such a network.
In the present embodiment in particular, Mask R-CNN (Region-based Convolutional Neural Network) is used, which, in addition to object detection (estimating what appears where in an image), can also estimate the shape of what it detects. Mask R-CNN extracts candidate object regions with a CNN and estimates region positions and class probabilities simultaneously, placing a bounding box on each object and estimating which class the object belongs to (what the object is); by additionally classifying each pixel within the bounding box, it can also estimate the object's shape. Using the crack estimation learning model 132 of the present embodiment therefore makes it possible to estimate not only the position of a crack area in an image but also its shape.
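For orientation only, instance segmentation of this general kind can be run with an off-the-shelf Mask R-CNN. The sketch below uses torchvision's pretrained model and is not the embodiment's trained crack or tile model; the file name, confidence threshold, and weights are assumptions.

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Pretrained COCO weights stand in for the specialized models of the embodiment.
model = torchvision.models.detection.maskrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = to_tensor(Image.open("wall.jpg").convert("RGB"))
with torch.no_grad():
    pred = model([image])[0]            # dict with boxes, labels, scores, masks

keep = pred["scores"] > 0.5             # assumed confidence threshold
masks = pred["masks"][keep, 0] > 0.5    # per-instance binary masks
```

The per-instance masks are what make it possible to recover not just bounding boxes but the shape of each detected region, as described above.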
The object area estimation learning model 133 is a learning model generated by machine learning using images of various objects, such as tiles and panels, as training data. It too can be created using, for example, any external computer device (not shown) as a learning machine and stored in the storage unit 130. The object area estimation learning model 133 may be generated by machine learning on training images for each of various kinds of object, such as tiles and panels; in that case, a plurality of object area estimation learning models, each specialized for one kind of object, are generated and stored in the storage unit 130.
In the present embodiment, the object area estimation learning model 133 is likewise generated by machine learning with Mask R-CNN (Region-based Convolutional Neural Network). Using the object area estimation learning model 133 of the present embodiment therefore makes it possible to estimate not only the region of an image in which objects exist, but also the position and shape of each individual object in that region.
<Example of Crack Area Detection Method>
Next, a crack area detection method performed by the information processing system according to the present embodiment is described with reference to FIG. 8 and the subsequent figures. FIG. 8 is a flowchart showing the processing that implements the crack area detection method.
First, the image acquisition unit 115 of the server 1 acquires an original image captured by the camera mounted on the unmanned aerial vehicle 4 or by a camera used by the user (S101).
The acquired original image shows the object whose crack areas are to be detected, for example on the wall surface of a building or civil engineering structure. FIG. 9 shows an example of an original image to be inspected.
Next, the crack estimation unit 121 of the server 1 executes the process of estimating, from the original image showing one or more objects (tiles, panels, or the like on the wall surface of a building or civil engineering structure), the crack areas in those objects (S102).
This crack-area estimation may include a process in which the crack estimation unit 121 divides the original image into a plurality of regions.
FIG. 10 conceptually shows the image to be inspected divided into a grid of regions. In the example of FIG. 10, the image is divided into nine regions in total, three vertically by three horizontally. The crack estimation unit 121 stores the image of each divided region in the information/image storage unit 131, associated with information indicating which part of the undivided original image it corresponds to. Dividing the image in this way is optional; the subsequent processing may instead be executed on the whole image without division.
In the crack-area estimation, the crack estimation unit 121 then estimates crack areas in each of the divided regions of the original image using the crack estimation learning model 132.
FIG. 11 shows, as a result of this estimation, the estimated crack area drawn over the divided region in which a crack area was estimated to exist. The crack estimation unit 121 stores information about the estimated crack areas (the divided region in which a crack area exists, its position within that region, its size and shape, and so on) in the information/image storage unit 131.
Next, the crack coloring unit 122 of the server 1 executes the process of coloring the crack areas estimated by the crack estimation unit 121 according to the colors of the surrounding objects and the like (S103).
As one example of the coloring process, the crack coloring unit 122 defines in the image an enlarged region that contains the estimated crack area and is wider than it (see FIG. 12(a)). The crack coloring unit 122 then statistically determines a color based on the color information of the objects contained in that enlarged region (in the illustrated example, wall tiles and the joints between them), colors the crack area with the determined color, and generates an image with the crack area painted over (see FIG. 12(b)). As FIG. 12(b) shows, in the colored image the crack area blends into the surrounding objects, so the wall surface appears to contain almost no cracks. The crack coloring unit 122 stores the generated image in the information/image storage unit 131.
Next, the object area estimation unit 123 of the server 1 executes the process of estimating, in the image generated by the crack coloring unit 122 with the crack areas painted over, the object areas in which the objects (in the illustrated example, wall tiles) exist, using the object area estimation learning model 133 (S104).
FIG. 13 shows the object areas estimated by the object area estimation unit 123. Through this estimation, even in the image of a divided region containing a colored-over crack area, the region in which the objects (in the illustrated example, wall tiles) exist is detected, and the position, shape, and so on of each individual object (tile) in that region are detected individually. In step S104, the object area estimation unit 123 likewise estimates, using the object area estimation learning model 133, the object areas in divided regions for which no crack area was estimated. The object area estimation unit 123 stores the information on the object areas estimated for each divided region in the information/image storage unit 131.
Next, the superimposition unit 124 of the server 1 superimposes the crack areas estimated by the crack estimation unit 121, as they were before the coloring by the crack coloring unit 122, onto the object areas estimated by the object area estimation unit 123 in the colored image (divided region), and executes the process of identifying the crack areas within the object areas (S105).
This superimposition is performed so that the original, pre-coloring crack area overlaps the position of the colored-over crack area in the estimated object areas. Because the object-area estimation of step S104 has detected the region in which objects (tiles in the illustrated example) exist and the position and shape of each individual object (tile), the superimposition does not merely detect where on the wall surface a crack area exists, but identifies which object (tile) on the wall surface the crack area lies on. The superimposition unit 124 stores information indicating the relationship between the object areas and crack areas identified by this process (which object (tile) contains a crack area, and so on) in the information/image storage unit 131. FIG. 14 shows the original, pre-coloring crack area superimposed on the estimated object areas. In the example of FIG. 14, the two tiles containing crack areas have been identified by the superimposition and are shown with a brightness different from that of the other tiles.
Next, the crack shape analysis unit 125 of the server 1 executes the process of analyzing the shape (length, width, and the like) of the crack areas based on the crack areas estimated by the crack estimation unit 121 (S106).
As one example, the crack shape analysis unit 125 performs the shape analysis using the Hessian-based technique described above; through this analysis it obtains the direction of extension, the length, and the width of the crack as information on the shape of the crack area, and stores this information in the information/image storage unit 131. FIG. 15 is an enlarged view of the object areas shown in FIG. 14; it shows the skeleton generated from the estimated crack area and the linear structure, close to the actual crack area, obtained by evaluating the crack width. FIG. 15 also shows the numerical information on the length and width of each segment making up the crack, obtained as a result of the analysis by the crack shape analysis unit 125.
Finally, if the original image was divided into a plurality of regions in step S102, the crack estimation unit 121 executes the process of reconstructing the images of the divided regions into the original single inspection image (S107).
The crack estimation unit 121 reconstructs the divided regions into the original single inspection image based on the information, stored in the information/image storage unit 131, indicating which part of the undivided whole image each divided region corresponds to. Among the images of the divided regions in which object areas were estimated, in those regions where a crack area was estimated, the crack whose shape was analyzed and identified by the processing described above is superimposed on the estimated object areas.
FIG. 16 shows the original single inspection image reconstructed from the divided regions in which object areas were estimated. In FIG. 16, the tiles shown in dark (near-black) colors are the object areas estimated by the object area estimation unit 123. FIG. 16 also shows that, of the inspected wall surface, the two objects (tiles) to the left of the window frame in the figure contain cracks whose shape, length, and width have been identified. The crack estimation unit 121 stores the reconstructed image and the various data related to it in the information/image storage unit 131, associated with one another. The data related to the reconstructed image include the data obtained in the processes above: the estimated object areas (which may, for example, be assigned and managed by ID), the position and shape of each object, the number of estimated object areas, the object areas containing cracks (which may likewise be identified and managed by ID), the shape (length and width) of each crack, and the number of object areas containing cracks (in particular, the number of object areas containing cracks whose length and/or width exceed reference values). Some or all of these images and related data may be transmitted to the terminal 2 in response to a request from the terminal 2, and may be viewable by the user in a predetermined user interface via the input/output unit 24 (for example, a display) of the terminal 2. In particular, being able to confirm the number of object areas containing cracks (especially cracks whose length and/or width exceed the reference values) makes it easier to prepare the right number of new objects for repairs, as the sketch after this paragraph illustrates. The extraction of the number of object areas containing cracks exceeding the reference values may be performed by the processing unit 120 comparing the length and/or width of each crack with the corresponding reference value; alternatively, after the crack data are received on the terminal 2, the same comparison may be performed on the terminal 2.
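A small hedged sketch of this counting step follows; the data layout, field names, and reference values are assumptions for illustration only.

```python
def count_tiles_needing_repair(cracks, max_len_mm=300.0, max_width_mm=0.3):
    """Count distinct object (tile) IDs that contain at least one crack
    whose length or width exceeds the reference values."""
    # cracks: iterable of records such as
    # {"tile_id": 17, "length_mm": 412.0, "width_mm": 0.4}
    flagged = {c["tile_id"] for c in cracks
               if c["length_mm"] > max_len_mm or c["width_mm"] > max_width_mm}
    return len(flagged)
```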
As described above, according to the server 1 of the present embodiment, coloring the crack areas of the original image in accordance with at least the colors around the estimated crack areas, and then estimating the object areas on the colored image, prevents an object with a crack (a tile, panel, or the like) from being detected as if split by that crack, so the objects can be detected more accurately. Furthermore, by superimposing at least the estimated object areas and the positions of the crack areas to identify the crack areas within the object areas, it is possible to estimate not only the positions of the crack areas within the object areas estimated in the original image, but also which object in the object areas each crack area lies on.
(Modification)
Next, a modification of the server 1 of the present embodiment is described.
In the embodiment described above, the superimposition unit 124 of the server 1 superimposes the pre-coloring crack areas estimated by the crack estimation unit 121 onto the object areas estimated by the object area estimation unit 123 in the colored image (divided region), thereby identifying the crack areas within the object areas (step S105 in FIG. 8). This modification provides a process that can substitute for that superimposition.
In this modification, instead of the superimposition process by the superimposition unit 124 of the server 1 (step S105 in FIG. 8), the crack estimation unit 121 re-executes the crack-area estimation described with reference to step S102 of FIG. 8 on the original image associated with the object areas estimated by the object area estimation unit 123 in step S104 of FIG. 8, thereby identifying the crack areas within the object areas. Like the superimposition process by the superimposition unit 124, this process also makes it possible to estimate not only the positions of the crack areas within the object areas estimated in the original image, but also which object in the object areas each crack area lies on.
The embodiment described above is merely an example to facilitate understanding of the present invention and is not to be interpreted as limiting it. It goes without saying that the present invention may be changed and improved without departing from its spirit, and that the present invention includes its equivalents.
1 Information processing system
2 Unmanned aerial vehicle
Claims (10)

1.  An information processing system comprising:
    a crack estimation unit that estimates a crack area in one or more objects of a structure from an original image in which the objects appear;
    a crack coloring unit that colors the crack area in the original image in accordance with at least the color surrounding the estimated crack area; and
    an object area estimation unit that estimates, in the colored image, an object area in which the objects exist.
2.  The information processing system according to claim 1, further comprising a superimposition unit that identifies the crack area within the object area by superimposing at least the estimated object area and the position of the crack area.
3.  The information processing system according to claim 1, wherein the crack estimation unit re-executes the crack-area estimation on the original image associated with the estimated object area and identifies the crack area within the object area.
4.  The information processing system according to claim 2 or 3, further comprising a crack shape analysis unit that performs a crack-shape analysis on the crack area identified within the object area.
5.  The information processing system according to any one of claims 1 to 3, wherein the crack estimation unit divides the original image into two or more divided images and estimates a crack area in each divided image.
6.  The information processing system according to any one of claims 1 to 3, wherein the crack coloring unit generates an enlarged region by enlarging the estimated crack area and colors the crack area with a color statistically determined from the color information within the enlarged region.
7.  The information processing system according to any one of claims 1 to 3, wherein the crack coloring unit acquires color information of the object and colors the crack area with a color that matches or substantially matches that color information.
8.  A program for causing a computer having a processing unit to execute information processing, the program causing the processing unit to execute:
    estimating a crack area in one or more objects of a structure from an original image in which the objects appear;
    coloring the crack area in the original image in accordance with at least the color surrounding the estimated crack area; and
    estimating, in the colored image, an object area in which the objects exist.
9.  An information processing method executed on a computer, the method comprising:
    a step in which a crack estimation unit estimates a crack area in one or more objects of a structure from an original image in which the objects appear;
    a step in which a crack coloring unit colors the crack area in the original image in accordance with at least the color surrounding the estimated crack area; and
    a step in which an object area estimation unit estimates, in the colored image, an object area in which the objects exist.
10.  A server comprising:
    a crack estimation unit that estimates a crack area in one or more objects of a structure from an original image in which the objects appear;
    a crack coloring unit that colors the crack area in the original image in accordance with at least the color surrounding the estimated crack area; and
    an object area estimation unit that estimates, in the colored image, an object area in which the objects exist.



Publications (1)

Publication Number Publication Date
WO2024029026A1 2024-02-08

Family

ID=85283414

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/029915 WO2024029026A1 (en) 2022-08-04 2022-08-04 Information processing system, program, information processing method, and server

Country Status (2)

Country Link
JP (2) JP7228310B1 (en)
WO (1) WO2024029026A1 (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20110044509A1 (en) * 2009-08-24 2011-02-24 National Applied Research Laboratories Bridge structural safety monitoring system and method thereof
JP2012211815A (en) * 2011-03-31 2012-11-01 Aisin Aw Co Ltd Stop line recognition device, stop line recognition method and program
JP2015001406A (en) * 2013-06-13 2015-01-05 富士通株式会社 Surface inspection method, surface inspection device, and surface inspection program
JP2017002658A (en) * 2015-06-15 2017-01-05 阪神高速技術株式会社 Bridge inspection method
JP2018004308A (en) * 2016-06-28 2018-01-11 富士フイルム株式会社 Measurement assist device and measurement assist method
JP2020085869A (en) * 2018-11-30 2020-06-04 キヤノン株式会社 Information processing apparatus, information processing method, and program
JP2021081953A (en) * 2019-11-19 2021-05-27 富士通株式会社 Computation program, computation device, and computation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
HE LILI, ZHU HAN, GAO ZHANXU: "A novel asphalt pavement crack detection algorithm based on multi-feature test of cross-section image", TRAITEMENT DU SIGNAL., CENTRALE DES REVUES, MONTROUGE., FR, vol. 35, no. 3-4, 28 December 2018 (2018-12-28), FR , pages 289 - 302, XP093135199, ISSN: 0765-0019, DOI: 10.3166/ts.35.289-302 *

Also Published As

Publication number Publication date
JP7228310B1 (en) 2023-02-24
JP2024022449A (en) 2024-02-16
JPWO2024029026A1 (en) 2024-02-08

Similar Documents

Publication Publication Date Title
WO2021093240A1 (en) Method and system for camera-lidar calibration
Pandey et al. Automatic extrinsic calibration of vision and lidar by maximizing mutual information
US11120280B2 (en) Geometry-aware instance segmentation in stereo image capture processes
JP6906471B2 (en) Target information estimation device, program and method for estimating the direction of a target from a point cloud
Kim et al. Robotic sensing and object recognition from thermal-mapped point clouds
CN115147328A (en) Three-dimensional target detection method and device
JP7118490B1 (en) Information processing system, information processing method, program, mobile object, management server
CN115496923A (en) Multi-modal fusion target detection method and device based on uncertainty perception
Agyemang et al. Lightweight real-time detection of components via a micro aerial vehicle with domain randomization towards structural health monitoring
US11899750B2 (en) Quantile neural network
JP6807093B1 (en) Inspection system and management server, program, crack information provision method
WO2024029026A1 (en) Information processing system, program, information processing method, and server
JP7347651B2 (en) Aircraft control device, aircraft control method, and program
JP7081720B2 (en) Foreign object detector, foreign object detection method, and program
JP7149569B2 (en) Building measurement method
WO2024029046A1 (en) Information processing system and program, information processing method, and server
JP7385332B1 (en) Information processing system and program, information processing method, server
WO2021087785A1 (en) Terrain detection method, movable platform, control device and system, and storage medium
JP2022052779A (en) Inspection system and management server, program, and crack information providing method
JP7228298B1 (en) Information processing system, information processing method, program, mobile object, management server
WO2024069669A1 (en) Information processing system, program, information processing method, terminal, and server
JP7487900B1 (en) Information processing method, information processing system, and program
JP7370045B2 (en) Dimension display system and method
US20240054612A1 (en) Iinformation processing apparatus, information processing method, and program
JP7401068B1 (en) Information processing system, information processing method and program

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 22954015

Country of ref document: EP

Kind code of ref document: A1