WO2024084673A1 - Reinforcement inspection device, learning device, reinforcement inspection system, and reinforcement inspection method


Info

Publication number
WO2024084673A1
WO2024084673A1
Authority
WO
WIPO (PCT)
Prior art keywords
learning
unit
image information
reinforcement
learning model
Prior art date
Application number
PCT/JP2022/039215
Other languages
French (fr)
Japanese (ja)
Inventor
健 宮本
友哉 澤田
理 高橋
高明 宮本
Original Assignee
三菱電機株式会社
Application filed by 三菱電機株式会社 filed Critical 三菱電機株式会社
Priority to PCT/JP2022/039215 priority Critical patent/WO2024084673A1/en
Publication of WO2024084673A1 publication Critical patent/WO2024084673A1/en

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01B: MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00: Measuring arrangements characterised by the use of optical techniques
    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00: Image analysis

Definitions

  • This disclosure relates to a reinforcing bar inspection device, a learning device, a reinforcing bar inspection system, and a reinforcing bar inspection method.
  • Patent Document 1 describes a reinforcement as-built management system that uses at least one of pattern matching and machine learning techniques to generate data such as the type of reinforcing bars, number of reinforcing bars, reinforcing bar pitch, length of reinforcing bars, thickness of reinforcing bars, shape and position of joints, etc. within a set shooting range.
  • An actual reinforced concrete structure is composed not only of the reinforcing bars that serve as the main reinforcement, but also of various other components that are installed in addition to the main reinforcement.
  • Components installed in reinforced concrete structures other than the main reinforcing bars include components installed as part of the main reinforcing bars, such as joints, and also components installed as separate components from the main reinforcing bars, such as shear reinforcement bars.
  • the reinforcement as-built management system described in Patent Document 1 automatically detects main reinforcement and the joints installed in some of the main reinforcement, but does not anticipate the detection of components separate from the main reinforcement, such as shear reinforcement. For this reason, even in the reinforcement as-built management system described in Patent Document 1, it is expected that inspectors will still manually inspect components installed separately from the main reinforcement in reinforcement structures.
  • the present disclosure aims to solve the above problem by providing a reinforcement inspection device, learning device, reinforcement inspection system, and reinforcement inspection method that can automatically detect secondary components other than the main reinforcement in a reinforcement structure.
  • the reinforcing steel inspection device disclosed herein is a reinforcing steel inspection device that inspects a reinforcing steel structure in which multiple reinforcing steel bars are arranged as main reinforcement and which includes secondary components other than the main reinforcement, and includes an acquisition unit that acquires image information, a selection unit that selects a specified learning model from multiple learning models provided for each external appearance feature of the secondary component, including the type, color, and shape, for inferring the secondary component shown in the image information, and an inference unit that infers the secondary component from the image information using the selected learning model.
  • a specified learning model is selected from among multiple learning models for inferring secondary components shown in image information, each of which is provided for a different external characteristic including the type, color, and shape of the secondary component, and the secondary component is inferred from the image information using the selected learning model.
  • FIG. 1 is a block diagram showing a configuration of a bar arrangement inspection system according to a first embodiment.
  • FIG. 2 is a block diagram showing a hardware configuration for realizing the functions of a bar arrangement inspection device according to the first embodiment.
  • FIG. 3 is a screen diagram showing an example of an operation screen.
  • FIG. 4 is a diagram showing an example of registered contents of a selection candidate database (hereinafter referred to as DB).
  • FIG. 5 is a screen diagram showing an example of image information obtained by photographing a reinforcement structure.
  • FIG. 6 is a flowchart showing a reinforcement bar inspection method according to the first embodiment.
  • FIG. 7 is a diagram illustrating an example of learning data.
  • FIG. 8 is a block diagram showing a configuration of a learning unit.
  • FIG. 9 is a diagram showing evaluation results of a learning model.
  • FIG. 10 is an explanatory diagram illustrating a schematic of a matching determination of an embedding vector in a learning model.
  • FIG. 11 is a flowchart showing a process for creating a preset model.
  • FIG. 12 is a flowchart showing a process for creating a custom model.
  • Fig. 1 is a block diagram showing the configuration of a reinforcement inspection system 1 according to a first embodiment.
  • the reinforcement inspection system 1 is a system in which a reinforcement inspection device 2 and a learning device 3 are connected by communication, and inspects a reinforcement structure before concrete is poured.
  • the reinforcement inspection device 2 acquires image information showing the reinforcement structure, and detects secondary components in the reinforcement structure using a learning model for inferring secondary components in the reinforcement structure shown in the acquired image information.
  • the learning device 3 creates a learning model used for the detection of secondary components by the reinforcement inspection device 2.
  • the reinforcing bar inspection device 2 uses image information of the reinforcing bar structure captured by a camera device to inspect at least the secondary components in the reinforcing bar structure, and outputs the inspection results to the display unit 23. For example, the reinforcing bar inspection device 2 inspects how many of each of the multiple types of secondary components contained in the reinforcing bar structure are arranged, or inspects whether the secondary components have been constructed as designed.
  • the reinforcing bar inspection device 2 is, for example, a tablet terminal, a smartphone, or a personal computer (PC).
  • the learning device 3 is, for example, a server that provides the reinforcing bar inspection device 2 with a learning model used to detect secondary components.
  • a reinforced concrete structure is composed of multiple main bars, which are arranged reinforcing bars, and secondary components other than the main bars.
  • the main bars are the main reinforcing bars that bear the load of a building or other structure that is based on a reinforced concrete structure, and are also called distribution bars.
  • some reinforced concrete structures have a structure in which multiple layers of flat surfaces are provided with multiple main bars arranged in a lattice pattern.
  • a secondary component is a component provided in a reinforcement structure to supplement the main reinforcement or to give the reinforcement structure a function different from that of the main reinforcement, and is a part of the main reinforcement or a component completely separate from the main reinforcement.
  • secondary components include shear reinforcement, lap joints, spacer blocks, sheath tubes, and compression joints.
  • the reinforcement structure inspected by the reinforcement inspection system 1 is provided with at least one of shear reinforcement, lap joints, spacer blocks, sheath tubes, and compression joints.
  • Shear reinforcement is a reinforcing bar that restrains and reinforces the main bars. By attaching shear reinforcement to the main bars, the shear force acting on the main bars is suppressed.
  • Shear reinforcement can be U-shaped, V-shaped, C-shaped, or other shapes.
  • U-shaped or V-shaped shear reinforcement has longer reinforcing bars on both sides, and may be installed to surround the main bars that make up columns, etc.
  • C-shaped shear reinforcement has shorter reinforcing bars on both sides than U-shaped or V-shaped reinforcing bars, and for example, one end is connected to one of the main bars spaced apart, and the other end is connected to the other main bar.
  • the ends of the shear reinforcement are formed into hooks for connection to the main bars, with hook shapes such as right-angle hooks, acute-angle hooks, or semicircular hooks.
  • some reinforcing bars that make up the main reinforcement and shear reinforcement are coated with resin to improve corrosion resistance, and their appearance is a different color from the base metal.
  • reinforcing bars coated with epoxy resin are blue or green, and some reinforcing bars that have been specially surface-treated are gray.
  • shear reinforcement bars have various appearance characteristics, including color and shape.
  • the appearance characteristics of shear reinforcement bars vary depending on the manufacturer and the model.
  • a lap joint is a joint formed by overlapping the ends of two reinforcing bars, which are the main reinforcing bars. Secondary components also include components that are provided as part of such main reinforcing bars. In other words, the reinforcing bars that make up a lap joint may also be coated with a resin to improve corrosion resistance, and their appearance is a different color from the base metal.
  • lap joints come in a variety of shapes depending on the type of rebar and the application of the reinforced structure. For example, the length over which the ends of two rebars in a lap joint overlap varies depending on the application of the reinforced structure.
  • the shape of a lap joint differs depending on whether or not the end of the rebar has a hook, and also differs depending on the shape of the hook. Examples of hook shapes include right-angle hooks, acute-angle hooks, and semicircular hooks.
  • lap joints come in a variety of appearance features, including color and shape.
  • Spacer blocks are components used to maintain the cover of rebar and prevent disturbance of the rebar arrangement during work. "Cover" is the minimum distance from the concrete surface to the rebar. Spacer blocks consist of spacers and bar supports. Spacers ensure the cover of rebar on the sides, and bar supports ensure the cover of rebar in the horizontal direction.
  • Spacer blocks are made of concrete, steel, or plastic, and come in colors that correspond to the material they are made of. Spacer blocks are also available in dice shapes, grooved shapes with grooves for passing rebar through, and inverted V shapes. Thus, spacer blocks come in a variety of external features, including color and shape. Furthermore, the external features of spacer blocks vary depending on the manufacturer and the model.
  • Sheath tubes are metal tubes through which steel wires are passed.
  • sheath tubes are made of galvanized steel sheets and have a silver appearance.
  • Sheath tubes are installed at various positions in the reinforcement structure depending on their use, and their lengths also vary.
  • Sheath tubes are also colored according to their material.
  • sheath tubes have various appearance characteristics, including color and shape. Furthermore, the appearance characteristics of sheath tubes differ depending on the manufacturer and the model.
  • a compression joint is a joint formed by butting together the ends of two reinforcing bars, which are the main reinforcement.
  • a compression joint is a secondary component that is installed as part of the main reinforcement.
  • the reinforcing bars that make up a compression joint may also be coated with a resin to improve corrosion resistance, giving them a different color than the base metal.
  • the shape of the compression joint is subject to the reinforcement inspection.
  • the law requires that the diameter of the bulge at the joint where the ends of the reinforcing bars are butted together is at least 1.4 times the diameter of the reinforcing bars, and that the length of the joint where the ends of the reinforcing bars are butted together is at least 1.1 times the diameter of the reinforcing bars.
  • the eccentricity of the compression joint from the central axis of the reinforcing bars is at most 1/5 of the diameter of the reinforcing bars, and the deviation of the compression surface from the top of the bulge is at most 1/4 of the diameter of the reinforcing bars.
  • compression joints come in a variety of appearance features, including color and shape.
  • the reinforcing bar inspection device 2 includes a communication unit 21, a calculation unit 22, a display unit 23, an operation input unit 24, and a memory unit 25.
  • the learning device 3 includes a communication unit 31, a calculation unit 32, and a memory unit 33.
  • the communication units 21 and 31 may be communication devices that exchange data within a common device.
  • the memory units 25 and 33 may be memory areas constructed in a common storage device.
  • the reinforcement inspection system 1, or the reinforcement inspection device 2 equipped with the learning device 3, may provide a reinforcement inspection service in the form of SaaS (Software as a Service) to a user terminal (not shown in FIG. 1) that can communicate with the reinforcement inspection device 2.
  • a reinforcement inspection application for providing the reinforcement inspection service is executed by the reinforcement inspection device 2, and the user terminal can receive the reinforcement inspection service on a web browser without having to install a dedicated application for the service.
  • the user terminal transmits image information of the reinforcement structure captured by a camera provided on the terminal to the reinforcement inspection device 2.
  • the reinforcement inspection device 2 uses the image information received from the user terminal to inspect secondary components and the like shown in the image information and returns the inspection results to the user terminal.
  • the user terminal receives the inspection results from the reinforcement inspection device 2 and can display the inspection results in an appropriate manner on a display unit (not shown) of the user terminal.
  • the communication unit 21 communicates with the learning device 3 via a communication line.
  • the communication unit 21 can communicate via a communication line with the learning device 3 capable of communication using a communication method such as LTE, 3G, 4G, or 5G.
  • the communication unit 31 communicates with the reinforcing bar inspection device 2 via a communication line.
  • the communication unit 31 can communicate via a communication line with the reinforcing bar inspection device 2 capable of communication using a communication method such as LTE, 3G, 4G, or 5G.
  • the calculation unit 22 controls the overall operation of the reinforcement inspection device 2.
  • the calculation unit 22 includes an acquisition unit 221, a preprocessing unit 222, a selection unit 223, and an inference unit 224.
  • the calculation unit 22 executes a reinforcement inspection application to realize the functions of the acquisition unit 221, the preprocessing unit 222, the selection unit 223, and the inference unit 224.
  • the calculation unit 32 controls the overall operation of the learning device 3.
  • the calculation unit 32 includes a data acquisition unit 321, a learning unit 322, and a search unit 323.
  • the calculation unit 32 executes a learning application to realize the functions of the data acquisition unit 321, the learning unit 322, and the search unit 323.
  • the display unit 23 is a display device provided in the reinforcing bar inspection device 2.
  • the display unit 23 is, for example, an LCD (Liquid Crystal Display) or an organic EL (Electroluminescence) display device.
  • the operation input unit 24 is an input device that accepts operations on an operation screen (described below) displayed on the display unit 23.
  • the operation input unit 24 is, for example, a touch panel that is integrated with the screen of the display unit 23.
  • the operation input unit 24 is, for example, a mouse or keyboard.
  • the storage unit 25 stores, for example, a reinforcement inspection application and information used in the calculation process by the calculation unit 22.
  • the storage unit 25 stores, as information used in the calculation process, for example, image information, position information of objects reflected in the image information, and a learning model acquired from the learning device 3.
  • the storage unit 25 is a storage device provided in a computer functioning as the reinforcement inspection device 2, and includes, for example, storage such as an HDD (Hard Disk Drive) or SSD (Solid State Drive), or the memory 103 in FIG. 2 described later.
  • the storage unit 33 stores, for example, a selection candidate DB 331, a pre-learning model DB 332, and a learning DB 333 in addition to a learning application.
  • the storage unit 33 is a storage device provided in a computer that functions as the learning device 3, and includes, for example, a storage device such as an HDD or SSD, or a memory 103 in FIG. 2 described later.
  • the selection candidate DB 331 stores one or more preset models.
  • a preset model is a first learning model that is provided for each external appearance feature, including the type, color, and shape of a secondary component, and is used to infer a secondary component shown in image information acquired by the acquisition unit 221.
  • the preset models are created in advance by the learning device 3 before being specified by the selection unit 223. These preset models are linked to each external appearance feature, including the type, color, and shape of a secondary component, and stored in the selection candidate DB 331.
  • the pre-learning model DB 332 stores pre-learning models.
  • a pre-learning model is a learning model that has been trained to infer objects depicted in image information.
  • for example, a pre-learning model is a learning model that has been trained using a large training data set such as COCO (Common Objects in Context).
  • As a learning method, for example, the parameters of the neural network can be set using the stochastic gradient descent method.
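  • As an illustration of this learning step only (the disclosure does not name a framework, so the toy network, loss, and synthetic data below are assumptions), a stochastic gradient descent update could be sketched in Python as follows.

```python
import torch
from torch import nn

# Minimal, self-contained sketch: a toy classifier and synthetic data stand in
# for the actual detection network and the COCO-style training data set.
model = nn.Sequential(nn.Flatten(), nn.Linear(3 * 32 * 32, 2))
loss_fn = nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01, momentum=0.9)

images = torch.rand(8, 3, 32, 32)        # synthetic image batch
targets = torch.randint(0, 2, (8,))      # synthetic labels

for _ in range(10):                      # a few SGD iterations
    optimizer.zero_grad()                # clear gradients from the previous step
    loss = loss_fn(model(images), targets)  # compare predictions with correct data
    loss.backward()                      # compute gradients
    optimizer.step()                     # update parameters by stochastic gradient descent
```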
  • the learning DB 333 stores learning data including image information and positional information of objects shown in the image information.
  • the positional information of objects shown in the image information is set, for example, using the operation input unit 24 in the reinforcement inspection device 2.
  • the learning device 3 receives designation information for a learning model from the reinforcement inspection device 2, it reads out a pre-learning model from the pre-learning model DB 332 and reads out learning data from the learning DB 333.
  • the learning device 3 inputs the learning data into the pre-learning model to create a custom model for inferring secondary components shown in the image information.
  • the custom model is created for each external appearance feature including the type, color, and shape of the secondary component, and is a second learning model for inferring secondary components shown in the image information acquired by the acquisition unit 221.
  • FIG. 2 is a block diagram showing the hardware configuration that realizes the functions of the reinforcement inspection device 2.
  • the reinforcement inspection device 2 has a communication interface 100, an input/output interface 101, a processor 102, and a memory 103 as its hardware configuration.
  • the functions of the acquisition unit 221, preprocessing unit 222, selection unit 223, and inference unit 224 provided in the reinforcement inspection device 2 are realized by executing a reinforcement inspection application in this hardware configuration.
  • the communication interface 100 outputs the learning model received from the learning device 3 via the communication line to the processor 102, and transmits designation information for the learning model generated by the processor 102 to the learning device 3 via the communication line.
  • the processor 102 reads and writes data from the memory unit 25 in FIG. 1 via the input/output interface 101. Furthermore, the processor 102 acquires image information from an external device via the input/output interface 101.
  • the external device is, for example, a camera device that photographs the reinforcement structure, or an external storage device that stores image information photographed by the camera device.
  • the programs constituting the reinforcement inspection application for realizing the functions of the acquisition unit 221, preprocessing unit 222, selection unit 223, and inference unit 224 are stored in the storage unit 25.
  • the processor 102 reads the programs stored in the storage unit 25 via the input/output interface 101, loads them into the memory 103, and executes the programs loaded into the memory 103. In this way, the processor 102 realizes the functions of the acquisition unit 221, preprocessing unit 222, selection unit 223, and inference unit 224.
  • the memory 103 is, for example, a RAM (Random Access Memory).
  • the functions of the learning device 3 are also realized by the hardware configuration shown in FIG. 2. The functions of the data acquisition unit 321, learning unit 322, and search unit 323 of the learning device 3 are realized by executing a learning application in this hardware configuration.
  • the communication interface 100 outputs the learning model specification information received from the bar arrangement inspection device 2 via the communication line to the processor 102, and transmits the learning model searched by the processor 102 to the bar arrangement inspection device 2 via the communication line.
  • the processor 102 reads and writes data from the memory unit 33 in FIG. 1 via the input/output interface 101.
  • the programs constituting the learning application for realizing the functions of the data acquisition unit 321, the learning unit 322, and the search unit 323 are stored in the storage unit 33.
  • the processor 102 reads out the programs stored in the storage unit 33 via the input/output interface 101, loads them into the memory 103, and executes the programs loaded into the memory 103. In this way, the processor 102 realizes the functions of the data acquisition unit 321, the learning unit 322, and the search unit 323.
  • the acquisition unit 221 acquires image information.
  • the acquisition unit 221 is connected to a camera device via wireless communication or wired communication, and receives image information (still images or video) of the reinforcement structure from the camera device.
  • the information acquired by the acquisition unit 221 is output to the pre-processing unit 222.
  • the acquisition unit 221 may also output the acquired information to the memory 103 shown in FIG. 2 for storage.
  • the camera device is assumed to be, for example, a monocular camera, but may also be a stereo camera or an infrared camera. With a stereo camera or infrared camera, distance information between the reinforcement structure and the camera device can also be obtained.
  • the acquisition unit 221 may acquire this distance information in addition to the image information. If the reinforcement inspection device 2 is a smartphone or tablet terminal, the camera device may be a camera equipped on the smartphone or tablet terminal.
  • the preprocessing unit 222 preprocesses the image information acquired by the acquisition unit 221 so that the image information is in a form suitable for the inference process performed by the inference unit 224.
  • the preprocessing unit 222 normalizes the image information. Normalization is a process of adjusting pixel values on the screen of the display unit 23 that displays the image information to values within a certain range.
  • When the image information is a color image, the color value of the i-th pixel in the image can be expressed as (r_i, g_i, b_i), where r_i (red), g_i (green), and b_i (blue) each take a value from 0 to 255.
  • When a learning model is created by deep learning (hereinafter referred to as DL), DL generally handles the 0 to 255 color values in the range of 0 to 1. Therefore, the preprocessing unit 222 calculates normalized color values (r_i', g_i', b_i') from the color value (r_i, g_i, b_i) of the i-th pixel according to the following formula (1): (r_i', g_i', b_i') = (r_i/255, g_i/255, b_i/255). Note that when the image information is a moving image consisting of frame images shot at a constant frame rate, normalization is performed for each frame image. This allows the inference unit 224 to smoothly infer the secondary components.
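  • A minimal sketch of this normalization step (the function name and the stand-in frame are illustrative assumptions, not part of the disclosure):

```python
import numpy as np

def normalize_image(image: np.ndarray) -> np.ndarray:
    """Scale 8-bit color values (0 to 255) to the 0 to 1 range handled by DL,
    i.e. (r_i', g_i', b_i') = (r_i/255, g_i/255, b_i/255) for every pixel i."""
    return image.astype(np.float32) / 255.0

frame = np.random.randint(0, 256, size=(480, 640, 3), dtype=np.uint8)  # stand-in frame
normalized = normalize_image(frame)   # for a moving image, apply this to every frame
```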
  • the pre-processing unit 222 may also perform normalization after converting the image indicated by the image information into a directly facing image.
  • A directly facing image is an image in which the distance between the camera device and the reinforcement structure is constant and the reinforcement structure directly faces the camera device.
  • For example, the pre-processing unit 222 specifies the four corner vertices of any rectangle on the reinforcing bars arranged in a lattice pattern in the image information, and estimates a transformation matrix using the position coordinates of the specified four vertices. Then, based on the estimated transformation matrix, the pre-processing unit 222 converts the image indicated by the image information into a directly facing image in which the plane of the inspection target directly faces the camera device.
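  • A sketch of this conversion, assuming OpenCV is used for the perspective transform; the function name, output size, and corner coordinates are illustrative assumptions:

```python
import numpy as np
import cv2

def rectify_to_front_view(image: np.ndarray, corner_pts, out_w: int = 800, out_h: int = 600) -> np.ndarray:
    """Warp the image so that the rectangle spanned by four vertices chosen on the
    rebar lattice is mapped to a rectangle that directly faces the camera."""
    src = np.float32(corner_pts)                        # four specified corner vertices
    dst = np.float32([[0, 0], [out_w, 0],
                      [out_w, out_h], [0, out_h]])      # target directly facing rectangle
    transform = cv2.getPerspectiveTransform(src, dst)   # estimated transformation matrix
    return cv2.warpPerspective(image, transform, (out_w, out_h))

image = np.zeros((1080, 1920, 3), dtype=np.uint8)                 # stand-in photograph
corners = [(410, 220), (1510, 240), (1490, 870), (430, 860)]      # assumed vertex picks
front_view = rectify_to_front_view(image, corners)
```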
  • the selection unit 223 is provided for each external appearance feature including the type, color, and shape of the secondary component, and selects a specified learning model from among multiple learning models for inferring the secondary component shown in the image information. For example, the selection unit 223 outputs display control information for displaying an operation screen to the display unit 23.
  • the display unit 23 displays the operation screen according to the display control information from the selection unit 223. This operation screen is a screen on which an operation to specify a learning model is performed.
  • the selection unit 223 creates specification information for the learning model and transmits the specification information to the learning device 3 via the communication line using the communication unit 21.
  • the learning device 3 receives the specification information from the reinforcement inspection device 2, it searches for the learning model indicated by the specification information from among the multiple learning models stored in the memory unit 33, and returns information indicating the learning model of the search result to the reinforcement inspection device 2.
  • the learning models to be selected include preset models and custom models.
  • the preset model is a learning model for inferring secondary components in a reinforcement structure from image information, and is created in advance by the learning device 3.
  • for example, a large learning data set such as COCO is used to create the preset model.
  • the selection candidate DB 331 stores preset models whose evaluation results are equal to or above a lower tolerance limit. The evaluation indices are, for example, the precision (the matching rate with respect to the correct data) and the recall (the recall rate of the inference results).
  • a custom model, like a preset model, is a learning model for inferring secondary components in a reinforcement structure from image information.
  • creation of a custom model is initiated by the learning device 3 when the selection unit 223 accepts the designation of the custom model.
  • image information captured on-site and in which the positions of secondary components are identified is used as learning data. This allows the reinforcement inspection device 2 to infer secondary components using a highly rated preset model, and to infer secondary components using a custom model suited to the situation on-site.
  • FIG. 3 is a screen diagram showing an example of operation screen 23A.
  • Selection unit 223 outputs display control information to display unit 23 for displaying an operation screen on which an operation for specifying a preset model and a custom model is performed.
  • Display unit 23 displays operation screen 23A as shown in FIG. 3 in accordance with the display control information from selection unit 223.
  • Operation screen 23A displays selection buttons 23A-1, 23A-2, 23A-3, ... for selecting each of a plurality of preset models.
  • selection button 23A-1 is a selection button for specifying a preset model for inferring shear reinforcement having the external characteristics of brown and U-shape.
  • Selection button 23A-2 is a selection button for specifying a preset model for inferring shear reinforcement having the external characteristics of blue and C-shape.
  • Selection button 23A-3 is a selection button for specifying a preset model for inferring a rebar lap joint having the external characteristics of blue and cylindrical shape.
  • an inspector of a reinforcement structure refers to design data for a structure that includes the reinforcement structure to identify a secondary component that is to be used in the reinforcement structure, and uses the operation input unit 24 to press a selection button for a learning model that corresponds to the identified secondary component. Operation information indicating which selection button was pressed is output from the operation input unit 24 to the selection unit 223.
  • the selection unit 223 creates specification information for a preset model that corresponds to the operation information, and the communication unit 21 transmits the specification information to the learning device 3 via a communication line.
  • FIG. 4 is a diagram showing an example of the registered contents of the selection candidate DB 331.
  • the memory unit 33 of the learning device 3 stores the selection candidate DB 331 in which multiple preset models are stored and linked to appearance features including the type, color, and shape of the secondary component.
  • the learning device 3 receives the above-mentioned specification information from the reinforcement inspection device 2, it searches for the preset model indicated by the specification information from the multiple preset models stored in the selection candidate DB 331, and returns information indicating the preset model as a search result to the reinforcement inspection device 2.
  • the learning device 3 searches the selection candidate DB 331 for models A1, A2, and B1 shown in FIG. 4, and returns information indicating the search results for models A1, A2, and B1 to the reinforcement inspection device 2.
  • Information indicating a learning model such as model A1 is, for example, a parameter required to construct a neural network that functions as a learning model, such as a weighting coefficient for a node.
  • the selection unit 223 receives information indicating models A1, A2, and B1 via the communication line from the communication unit 21, the selection unit 223 outputs the received information to the inference unit 224 and further stores it in the memory unit 25. This allows the selection unit 223 to accurately select a preset model for inferring a secondary component.
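  • A minimal sketch of a selection-candidate lookup keyed by appearance features, in the spirit of FIG. 4. The assignment of colors and shapes to models A1 and A2, as well as the dictionary layout, are assumptions for illustration (the source only confirms that B1 infers a blue, cylindrical lap joint):

```python
from typing import Optional

# Preset models linked to appearance features (type, color, shape), as in FIG. 4.
SELECTION_CANDIDATE_DB = {
    ("shear reinforcement", "brown", "U-shape"):     "model A1",
    ("shear reinforcement", "blue",  "C-shape"):     "model A2",
    ("lap joint",           "blue",  "cylindrical"): "model B1",
}

def search_preset_model(component_type: str, color: str, shape: str) -> Optional[str]:
    """Return the preset model linked to the specified appearance features,
    or None when no matching preset model is registered."""
    return SELECTION_CANDIDATE_DB.get((component_type, color, shape))

print(search_preset_model("lap joint", "blue", "cylindrical"))  # -> "model B1"
```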
  • Figure 4 shows model B1, which infers a lap joint of rebars with the external characteristics of being blue and cylindrical, but the shape of the lap joint is not limited to the shape of the rebar in which the lap joint is formed.
  • the shape of the lap joint includes the length over which the ends of the two rebars overlap.
  • the shape of the lap joint also includes the presence or absence of a hook at the end of the rebar.
  • the shape of the hook also includes, for example, a right-angle hook, an acute-angle hook, or a semicircular hook.
  • the shape of a compression joint includes the shape of the rebar in which the compression joint is formed, as well as the shape of the bulge at the part where the ends of the rebar are butted together and joined.
  • the shape of the bulge is determined, for example, by the diameter of the bulge, the length of the bulge where the ends of the rebar are butted together and joined, the amount of eccentricity of the bulge from the central axis of the rebar, and the deviation of the compression surface from the top of the bulge.
  • the selection unit 223 may also automatically specify and select a preset model from a plurality of preset models. For example, design data of a structure including a reinforcement structure is stored in the storage unit 25. When an inspector of the reinforcement structure issues an instruction to start inspecting the reinforcement structure using the operation input unit 24, the selection unit 223 automatically identifies secondary components included in the reinforcement structure from the design data stored in the storage unit 25.
  • the design data may be, for example, a three-dimensional model that realizes Building Information Modeling (BIM) for a structure that includes a reinforcement structure, or design drawing data for a structure that includes a reinforcement structure.
  • the selection unit 223 creates designation information that specifies a preset model for inferring the identified secondary component, and acquires the preset model searched for by the learning device 3 based on the designation information. This enables the selection unit 223 to accurately select a preset model for inferring the secondary component.
  • the selection unit 223 may select all preset models stored in the selection candidate DB 331. For example, when an instruction to start an inspection of a reinforcement structure is given using the operation input unit 24, the selection unit 223 selects all preset models stored in the selection candidate DB 331.
  • the inference unit 224 infers secondary components using each of all preset models selected by the selection unit 223, and outputs the result obtained from the model with the highest evaluation as the final inference result.
  • selection button 23Ac is a selection button for creating a custom model.
  • selection unit 223 creates designation information indicating the designation of a custom model, and further creates learning data including image information used to create the custom model.
  • Selection unit 223 transmits the custom model designation information and learning data including image information to learning device 3 via communication line through communication unit 21.
  • the learning device 3 uses the learning data received from the reinforcement inspection device 2 to create a custom model corresponding to the specified information, and transmits the created custom model to the reinforcement inspection device 2 via the communication line by the communication unit 31.
  • the selection unit 223 receives information indicating a custom model via the communication line by the communication unit 21, it outputs the received information to the inference unit 224 and further stores it in the memory unit 25.
  • Fig. 5 is a screen diagram showing an example of image information of a reinforcement structure.
  • the reinforcement structure shown in the image information shown in Fig. 5 has a plurality of reinforcing bars 11 arranged in a lattice pattern as main reinforcements, and includes shear reinforcement bars 12, spacer blocks 13, lap joints 14, sheath tubes 15, and compression joints 16 as secondary components.
  • When the selection button 23Ac is pressed using the operation input unit 24, the selection unit 223 causes the display unit 23 to display the screen shown in FIG. 5.
  • the inspector uses the operation input unit 24 to identify each of the secondary components on the screen. For example, the inspector uses the operation input unit 24 to surround the area on the screen shown in FIG. 5 where the shear reinforcement 12 is displayed with a bounding box 23B-1, and inputs the color and name (in this case, "shear reinforcement") of the shear reinforcement 12 surrounded by the bounding box 23B-1.
  • the inspector also uses the operation input unit 24 to enclose the area on the screen shown in Figure 5 where the spacer block 13 appears with a bounding box 23B-2, and inputs the color and name of the spacer block 13 enclosed by the bounding box 23B-2 (in this case, "spacer block”).
  • the inspector also uses the operation input unit 24 to enclose the area on the screen shown in Figure 5 where the lap joint 14 appears with a bounding box 23B-3, and inputs the color and name of the lap joint 14 enclosed by the bounding box 23B-3 (in this case, "lap joint").
  • the inspector uses the operation input unit 24 to surround the area on the screen shown in Figure 5 where the sheath tube 15 is displayed with a bounding box 23B-4, and inputs the color and name (in this case, "sheath tube") of the sheath tube 15 surrounded by the bounding box 23B-4. Furthermore, the inspector uses the operation input unit 24 to surround the area on the screen shown in Figure 5 where the compression joint 16 is displayed with a bounding box 23B-5, and inputs the color and name (in this case, "compression joint") of the compression joint 16 surrounded by the bounding box 23B-5.
  • the selection unit 223 extracts, for example, the position coordinates of the upper left vertex and the lower right vertex of the rectangular bounding box, and creates position information including the extracted position coordinates.
  • This position information is linked to the color and name of the secondary component within the bounding box.
  • the position information is information for identifying the bounding box that represents the correct label of the secondary component.
  • Data including the image information identifying the secondary component by the bounding box and the above position information created by the selection unit 223 is used as learning data for creating a custom model.
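  • One learning-data record of the kind described above might be organized as follows: image information plus position information (bounding box vertices) linked to the color and name of each secondary component. The field names and coordinate values are assumptions for illustration only:

```python
learning_record = {
    "image": "reinforcement_structure_001.jpg",   # image information of the reinforcement structure
    "annotations": [
        {"bbox": [120, 340, 260, 480],            # upper-left (x1, y1) and lower-right (x2, y2) vertices
         "color": "blue", "name": "shear reinforcement"},
        {"bbox": [500, 610, 590, 700],
         "color": "gray", "name": "spacer block"},
    ],
}
```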
  • In this example, the shear reinforcement 12 is linked to the bounding box 23B-1, the spacer block 13 to the bounding box 23B-2, the lap joint 14 to the bounding box 23B-3, the sheath tube 15 to the bounding box 23B-4, and the compression joint 16 to the bounding box 23B-5.
  • the learning device 3 creates custom models for each of the shear reinforcement 12, the spacer block 13, the lap joint 14, the sheath tube 15, and the compression joint 16.
  • the inspector uses the operation input unit 24 to draw a line along the longitudinal direction of the sheath tube 15 on the screen shown in Fig. 5.
  • the selection unit 223 extracts the position coordinates of the start point and the end point of the line, and creates position information that links the extracted position coordinates with the sheath tube 15.
  • the selection unit 223 creates learning data including this position information.
  • the inspector also uses the operation input unit 24 to mask everything except the sheath tube 15 on the screen shown in FIG. 5.
  • the selection unit 223 extracts the position coordinates of the area on the screen shown in FIG. 5 where the unmasked sheath tube 15 is displayed, and creates position information that links the extracted position coordinates with the sheath tube 15.
  • the selection unit 223 creates learning data that includes this position information.
  • the selection unit 223 may automatically set a bounding box surrounding an object present on the screen of FIG. 5 by performing image analysis such as pattern matching on the image information or by using a learning model that roughly detects objects shown in the image information. In this case, the inspector uses the operation input unit 24 to check whether a secondary component is present within the automatically set bounding box. The selection unit 223 will create the above-mentioned position information for a bounding box that is confirmed to contain a secondary component.
  • the reinforcement inspection device 2 may also infer secondary components from image information using at least one of a preset model or a custom model. For example, if the reinforcement inspection device 2 infers secondary components using only a custom model, the selection unit 223 automatically specifies a custom model when an instruction to start reinforcement inspection is given using the operation input unit 24, and proceeds to the above-mentioned custom model creation process.
  • the selection unit 223 may also select multiple preset models for a common secondary component.
  • the selection unit 223 selects multiple preset models for the common secondary component by specifying a preset model using the operation input unit 24 or by automatically specifying a preset model. For example, if the type of secondary component is "shear reinforcement", the selection unit 223 selects all preset models whose type is "shear reinforcement”.
  • the inference unit 224 infers the secondary component using each of all the preset models selected by the selection unit 223, and outputs the result obtained from the model with the highest evaluation as the final inference result.
  • the inference unit 224 infers the secondary components from the preprocessed image information using the learning model selected as described above. For example, the inference unit 224 inputs the image information preprocessed by the preprocessing unit 222 for the preset model or custom model selected by the selection unit 223. The preset model or custom model infers the secondary components appearing in the input image information. For example, the positions and external features of the secondary components in the image are inferred by these learning models.
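  • A sketch of the inference step described above. The detection record fields and the model interface are assumptions, not an API defined by the disclosure; a dummy model stands in for the selected preset or custom model:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class Detection:
    name: str          # e.g. "shear reinforcement"
    color: str         # inferred appearance feature
    shape: str         # inferred appearance feature
    bbox: List[int]    # position on the image as (x1, y1, x2, y2)
    score: float       # confidence of the inference

class DummyModel:
    """Stand-in for a selected preset or custom model (illustration only)."""
    def predict(self, image) -> List[Detection]:
        return [Detection("shear reinforcement", "blue", "U-shape", [120, 340, 260, 480], 0.93)]

selected_model = DummyModel()
for det in selected_model.predict(object()):   # object() stands in for preprocessed image information
    print(det.name, det.bbox, det.score)
```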
  • the pre-processing of the image information may be performed by an external device provided separately from the reinforcement inspection device 2. In this case, the reinforcement inspection device 2 does not need to include the pre-processing unit 222.
  • the acquisition unit 221 acquires the pre-processed image information from the external device, and the inference unit 224 uses the image information acquired by the acquisition unit 221 as it is to infer the secondary components shown in the image information. Furthermore, the inference unit 224 may infer the secondary components shown in the image information by directly using the image information of the reinforcement structure that has not been preprocessed. In this case as well, the reinforcement inspection device 2 does not need to be equipped with the preprocessing unit 222. For example, the learning device 3 creates a learning model using the image information that has not been preprocessed as learning data. By using this learning model, the inference unit 224 can infer the secondary components using the image information that has not been preprocessed.
  • the inference unit 224 inspects the secondary components based on the inference result, creates display control information for displaying the inspection result, and outputs the created display control information to the display unit 23 .
  • the inference unit 224 inspects the number of shear reinforcement bars in a reinforcement structure based on the positions and appearance features of the shear reinforcement bars on the image inferred using the learning model.
  • the inference unit 224 then creates display control information for superimposing an electronic whiteboard on which the inspection results are described on image information showing the reinforcement structure.
  • the display unit 23 superimposes an electronic whiteboard on which the number, color, and shape of the shear reinforcement bars are described on the image showing the reinforcement structure based on the display control information.
  • the electronic whiteboard is electronic image data on which the inspection results are described.
  • the inference unit 224 also inspects the number of lap joints in the reinforcement structure and the degree of deviation of the shape from the design value based on the position and appearance features of the lap joints on the image inferred using the learning model. For example, the inference unit 224 may inspect the amount of deviation of the overlapping length of the ends of two reinforcing bars from the design value for each lap joint, and display the inspection results for each lap joint on an electronic whiteboard.
  • the inference unit 224 checks the number of compression joints in the reinforcement structure and the degree of deviation of the shape from the design value based on the position and appearance features of the compression joints on the image inferred using the learning model. For example, the inference unit 224 may check the deviation from the design value for at least one of the diameter of the bulge in the inferred compression joint, the length of the bulge, the eccentricity of the bulge from the central axis of the rebar, and the deviation of the compression surface from the top of the bulge, and display this as the inspection result on an electronic whiteboard.
  • the inspection results are displayed on the electronic whiteboard, indicating whether the diameter of the bulge at the part where the ends of the reinforcing bars are butt-jointed is 1.4 times or more the diameter of the reinforcing bars, whether the length of the part where the ends of the reinforcing bars are butt-jointed is 1.1 times or more the diameter of the reinforcing bars, whether the eccentricity from the central axis of the reinforcing bars is less than one-fifth of the diameter of the reinforcing bars, and whether the deviation of the compression surface from the top of the bulge is less than one-fourth of the diameter of the reinforcing bars.
  • the inference unit 224 may use the measurement results of the secondary components to determine whether the inference result is incorrect.
  • the acquisition unit 221 acquires point cloud data that represents the reinforcement structure as a three-dimensional point cloud.
  • the point cloud data is data indicating the distance to the reinforcement structure detected by a sensor such as a stereo camera, an infrared camera, or a LIDAR.
  • the inference unit 224 reads out the image information acquired by the acquisition unit 221 and stored in the storage unit 25, and assigns the distance d_i between the three-dimensional point on the object and the sensor to the pixel value (r_i, g_i, b_i) in the image area in which the object is reflected in the image information.
  • As a result, a pixel value (r_i, g_i, b_i, d_i) having four elements is obtained as the pixel value in the image area in which the object is reflected.
  • the inference unit 224 estimates the size of the image area in which the secondary component is shown, calculates the size (1) of the secondary component based on this estimated value, and further calculates the size (2) of the secondary component using a distance d i between a three-dimensional point in the image area in which the secondary component is shown and the pixel value of a pixel corresponding to the three-dimensional point. Then, the inference unit 224 determines whether the secondary component has been erroneously inferred based on the result of comparing the size (1) and the size (2) of the secondary component.
  • the size (2) corresponds to the actual size of the secondary component.
  • If the comparison shows that the size (1) differs significantly from the size (2), the inference unit 224 determines that the inference of the secondary component by the learning model is erroneous.
  • the inference unit 224 may also calculate the inference accuracy of the learning model using the determination result of whether the inference result is incorrect. For example, the inference unit 224 causes each of the multiple learning models to perform inference using common image information, and the inference unit 224 determines whether the inference result is incorrect for each of the multiple learning models, and calculates the ratio of the number of correct inference results to the number of inferences as the inference accuracy. By determining whether the inference result is an error in this manner, the reinforcement inspection device 2 can accurately infer the secondary components.
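  • A sketch of this error check and accuracy calculation under a simple pinhole-camera assumption; the helper names, focal length, and tolerance value are all assumed for illustration:

```python
def physical_size(pixel_extent: float, distance_m: float, focal_length_px: float) -> float:
    """Estimate the real-world extent of an image region from its pixel extent and
    the sensor distance d_i, under a pinhole-camera assumption."""
    return pixel_extent * distance_m / focal_length_px

def is_erroneous_inference(size_1_m: float, size_2_m: float, tolerance: float = 0.3) -> bool:
    """Judge the inference erroneous when size (1), estimated from the image area,
    and size (2), derived from the distance information, differ by more than the
    tolerance ratio (the threshold value is an assumed example)."""
    return abs(size_1_m - size_2_m) > tolerance * size_2_m

def inference_accuracy(num_correct: int, num_inferences: int) -> float:
    """Ratio of the number of correct inference results to the number of inferences."""
    return num_correct / num_inferences

# Example: a 150 px region at 2.0 m with a 1200 px focal length gives size (2) = 0.25 m.
print(is_erroneous_inference(size_1_m=0.26, size_2_m=physical_size(150, 2.0, 1200)))
```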
  • the reinforcement inspection device 2 detects secondary components in a reinforcement structure, but it may also automatically detect main reinforcement in addition to secondary components.
  • the preset models and custom models include models related to secondary components as well as models related to main reinforcement.
  • a model related to main reinforcement is a third learning model that is provided for each external appearance feature including the type, color, and shape of the main reinforcement, and is used to infer the main reinforcement reflected in the image information.
  • the third learning model may be a preset model or a custom model.
  • the selection unit 223 also displays a selection button for specifying a preset model for inferring the main reinforcement on the operation screen 23A shown in FIG. 3.
  • the selection button for a preset model related to the main reinforcement is operated using the operation input unit 24, the selection unit 223 creates specification information for the preset model related to the main reinforcement.
  • the inference unit 224 acquires a learning model indicated by the specification information from the learning device 3, it uses the acquired learning model to infer the main reinforcement from the image information preprocessed by the preprocessing unit 222. This allows the reinforcement inspection device 2 to automatically detect the main reinforcement in addition to secondary components in a reinforcement structure.
  • the data acquisition unit 321 acquires learning data including image information and positional information of objects in the image information. For example, when the data acquisition unit 321 receives designation information of a custom model from the reinforcement inspection device 2 via the communication line by the communication unit 31, the data acquisition unit 321 acquires learning data including image information of a captured reinforcement structure and positional information of secondary components shown in the image information from the reinforcement inspection device 2. The data acquisition unit 321 stores the learning data acquired from the reinforcement inspection device 2 in the learning DB 333.
  • the learning unit 322 uses the learning data to create and store a learning model for inferring secondary components shown in the image information. For example, the learning unit 322 creates a preset model or a custom model using a pre-learning model stored in the pre-learning model DB 332 and learning data stored in the learning DB 333. Furthermore, the learning unit 322 evaluates the created learning model, and determines the learning model whose evaluation value is equal to or greater than the allowable value as the learning model to be used by the reinforcement inspection device 2. For example, the matching rate with respect to the correct data and the recall rate of the inference result are used as indices to evaluate the learning model.
  • the search unit 323 searches for a learning model specified in the reinforcement inspection device 2 from the learning models created for each external appearance feature including the type, color, and shape of the secondary component, and outputs the learning model obtained by the search to the reinforcement inspection device 2. For example, when the search unit 323 acquires designation information of a preset model from the reinforcement inspection device 2, it searches the selection candidate DB 331 based on information about the secondary component included in the acquired designation information. The search unit 323 then transmits information indicating the preset model of the search result to the reinforcement inspection device 2 via the communication line by the communication unit 31. This allows the reinforcement inspection device 2 to acquire the designated learning model.
  • FIG. 6 is a flowchart showing the reinforcement bar inspection method according to the first embodiment.
  • the acquisition unit 221 acquires image information (step ST1). For example, the acquisition unit 221 acquires image information of a reinforcement structure captured by a monocular camera or a stereo camera provided in the reinforcement inspection device 2.
  • the pre-processing unit 222 pre-processes the image information (step ST2). For example, the pre-processing unit 222 calculates image information by normalizing the color values of pixels by the maximum color value.
  • the process may proceed to step ST3 without performing step ST2.
  • the selection unit 223 selects a specified learning model from among multiple learning models held by the learning device 3 (step ST3).
  • the learning device 3 manages multiple preset models provided for each external appearance feature including the type, color, and shape of the secondary component, and further creates a custom model specified by the reinforcement inspection device 2.
  • the selection unit 223 selects the learning model to be used by the inference unit 224 by specifying a preset model or a custom model to the learning device 3.
  • the inference unit 224 infers the secondary component from the preprocessed image information using the learning model selected by the selection unit 223 (step ST4). For example, the inference unit 224 causes the display unit 23 to display the inference result of the secondary component and the inspection result of the secondary component using the inference result.
  • the data acquisition unit 321 acquires learning data used to create a preset model.
  • the learning data includes, for example, image information in which an object on an image is specified using the operation input unit 24.
  • the learning data is stored in the learning DB 333 by the data acquisition unit 321.
  • Fig. 7 is a diagram showing an example of the learning data. As shown in Fig. 7, the learning data is data provided for each external appearance feature including the type, color, and shape of the secondary component.
  • bounding box positions A, B, and C are set in the image areas in image A where blue, U-shaped shear reinforcement bars are shown.
  • bounding box positions D and E are set in the image areas in image B where blue, cylindrical lap joints on reinforcing bars are shown.
  • bounding box position F is set in the image area in image C where a silver, cylindrical sheath tube is shown.
  • Fig. 8 is a block diagram showing the configuration of the learning unit 322.
  • the learning unit 322 includes a data classification unit 3221, a model creation unit 3222, and an evaluation unit 3223.
  • the data classification unit 3221 extracts a plurality of pieces of image information showing secondary components from the image information stored in the learning DB 333 by the data acquisition unit 321, and divides the extracted image information into pieces for learning and pieces for evaluation.
  • the data classification unit 3221 reads out the learning data No. 1 shown in FIG. 7 from the learning DB 333 and separates this data into data for learning and data for evaluation. This allows image information relating to the same shear reinforcement to be classified into data for learning and data for evaluation.
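  • A minimal sketch of this division into learning data and evaluation data; the 80/20 ratio, random split, and record format are assumed examples:

```python
import random
from typing import List, Tuple

def split_learning_data(records: List[dict], eval_ratio: float = 0.2,
                        seed: int = 0) -> Tuple[List[dict], List[dict]]:
    """Divide the extracted image information into pieces for learning and
    pieces for evaluation."""
    shuffled = records[:]
    random.Random(seed).shuffle(shuffled)
    n_eval = max(1, int(len(shuffled) * eval_ratio))
    return shuffled[n_eval:], shuffled[:n_eval]   # (learning data, evaluation data)

records = [{"image": f"image_{i}.jpg"} for i in range(10)]   # stand-in learning DB entries
learning_data, evaluation_data = split_learning_data(records)
```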
  • the model creation unit 3222 uses the image information for learning to create a learning model for inferring a secondary component shown in the image information.
  • the model creation unit 3222 uses a pre-learning model extracted from the pre-learning model DB 332 to learn the secondary component shown in the image information for learning to create a learning model for inferring the secondary component.
  • a Siamese network is used as the learning model including the pre-learning model.
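The specification does not fix a particular detector architecture, so the following sketch only illustrates the general idea of starting from a pre-learning (pre-trained) model and fine-tuning it to detect secondary components. A torchvision Faster R-CNN pre-trained on COCO is used here as one possible stand-in rather than the network described above; the learning rate and class count are illustrative.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_finetune_model(num_classes: int) -> torch.nn.Module:
    """Start from a detector pre-trained on a large generic dataset (e.g. COCO)
    and replace its classification head so it can be fine-tuned to detect
    secondary components such as shear reinforcement bars."""
    # weights="DEFAULT" requires a recent torchvision; older versions use pretrained=True.
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    # num_classes includes the background class, e.g. 2 = background + shear reinforcement.
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

model = build_finetune_model(num_classes=2)
# Stochastic gradient descent, as mentioned for the learning method; values are illustrative.
optimizer = torch.optim.SGD(model.parameters(), lr=0.005, momentum=0.9)
```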
  • the evaluation unit 3223 evaluates the learning model using the image information for evaluation, and saves the learning model that satisfies the evaluation conditions as a preset model. For example, the evaluation unit 3223 evaluates the learning model created by the model creation unit 3222 using the image information for evaluation obtained from the data classification unit 3221.
  • FIG. 9 shows the evaluation results of the learning model, and is the evaluation result for the learning data No. 1 shown in FIG. 7.
  • the learning data and evaluation data in FIG. 9 are image information classified by the data classification unit 3221 into learning data and evaluation data.
  • the evaluation unit 3223 creates data sets, each of which associates the type of pre-learning model used when the model creation unit 3222 created the learning model and the learning method (for example, stochastic gradient descent) with the learning data and the evaluation data.
  • the evaluation unit 3223 uses model A, which is a pre-learning model, to infer the shear reinforcement, with images A, I, and J and the position information of the shear reinforcement in these images serving as the learning data. The evaluation unit 3223 then uses the same model A to infer the shear reinforcement, with images K, L, and M and the position information of the shear reinforcement in these images serving as the evaluation data. From these inference results, the evaluation unit 3223 calculates precision A and recall A.
  • the evaluation unit 3223 calculates the precision and recall of the data sets after data set No. 2 in the same manner. Based on the evaluation condition of selecting the learning model with the highest precision and recall, the evaluation unit 3223 compares the precision and recall of all data sets related to the learning data No. 1 shown in FIG. 7. The evaluation unit 3223 determines the learning model with the highest precision and recall from the comparison results as the preset model and stores this model in the selection candidate DB 331. Note that, although the evaluation condition of selecting the learning model with the highest precision and recall has been shown, the present invention is not limited to this. For example, the evaluation condition may be to select a learning model with a precision and recall higher than a certain threshold. The evaluation condition may also be to select a learning model with either the precision or recall highest or higher than a threshold.
  • as an evaluation index for the learning model, MIOU (Mean Intersection Over Union) may also be used. MIOU is the ratio of the area of overlap to the area of union between the bounding box surrounding the secondary component in the correct data (the correct rectangle) and the bounding box surrounding the inferred secondary component (the inferred rectangle).
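A minimal sketch of how precision, recall, and the IoU underlying MIOU could be computed for bounding-box inferences is shown below; the IoU threshold of 0.5 and the greedy matching rule are assumptions made for illustration.

```python
from typing import List, Tuple

Box = Tuple[float, float, float, float]  # (x_min, y_min, x_max, y_max)

def iou(a: Box, b: Box) -> float:
    """Intersection over Union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, ix2 - ix1) * max(0.0, iy2 - iy1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0

def precision_recall(pred: List[Box], truth: List[Box], thr: float = 0.5):
    """Count a prediction as correct if it overlaps an unmatched ground-truth box with IoU >= thr."""
    matched = set()
    tp = 0
    for p in pred:
        best, best_j = 0.0, -1
        for j, t in enumerate(truth):
            if j in matched:
                continue
            v = iou(p, t)
            if v > best:
                best, best_j = v, j
        if best >= thr:
            tp += 1
            matched.add(best_j)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(truth) if truth else 0.0
    return precision, recall

pred = [(10, 10, 50, 50), (100, 100, 140, 140)]
truth = [(12, 12, 52, 52)]
print(precision_recall(pred, truth))  # -> (0.5, 1.0)
```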
  • FIG. 10 is an explanatory diagram that shows an outline of the matching judgment of embedded vectors in a learning model.
  • the network to which image information (1) is input and the network to which image information (2) is input are a common network.
  • Image information (1) is image information to which a correct answer label has been assigned using a bounding box or the like.
  • Image information (2) is data that has been classified for learning purposes by the data classification unit 3221 from among the image information stored in the learning DB 333.
  • image information (2) is a partial image in which an image area showing any of the secondary components is extracted.
  • a Siamese Network may be used as the machine learning model that is the common network.
  • the model creation unit 3222 inputs the correct answer data, which is image information (1), into the network, and calculates an embedding vector corresponding to the image information (1). For example, in PaDiM, the embedding vector is calculated by combining the outputs from the first to third layers. This embedding vector is stored in the memory unit 33 by the model creation unit 3222.
  • the model creation unit 3222 calculates an embedding vector corresponding to the image information using the learning image information (2), and creates a learning model as a preset model if it determines that the calculated embedding vector matches the embedding vector stored in the memory unit 33. In this way, by using an embedding vector that can be created before calculating the final inference result, the time required to create a learning model can be reduced.
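The following sketch illustrates the general idea of a shared (Siamese-style) encoder that produces embedding vectors which are compared against a stored embedding; the tiny architecture and the distance threshold are illustrative assumptions and not the network or the PaDiM procedure described above.

```python
import torch
import torch.nn as nn

class SharedEncoder(nn.Module):
    """A small shared encoder that maps an image patch to a normalized embedding
    vector; both image information (1) and (2) pass through this common network."""
    def __init__(self, dim: int = 64):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(3, 16, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(16, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(32, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return nn.functional.normalize(self.features(x), dim=1)

def embeddings_match(e1: torch.Tensor, e2: torch.Tensor, threshold: float = 0.2) -> bool:
    """Treat two embeddings as matching when their Euclidean distance is small."""
    return torch.dist(e1, e2).item() < threshold

encoder = SharedEncoder()
with torch.no_grad():
    emb_labeled = encoder(torch.rand(1, 3, 64, 64))    # image information (1), stored
    emb_candidate = encoder(torch.rand(1, 3, 64, 64))  # image information (2), for learning
print(embeddings_match(emb_labeled, emb_candidate))
```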
  • FIG. 11 is a flowchart showing the process of creating a preset model.
  • the data classification unit 3221 extracts multiple pieces of image information showing secondary components from the image information acquired by the data acquisition unit 321 and stored sequentially in the learning DB 333, and separates the extracted image information into pieces for learning and pieces for evaluation (step ST1A).
  • the model creation unit 3222 uses the image information for learning to create a learning model for inferring the secondary component shown in the image information (step ST2A).
  • the evaluation unit 3223 evaluates the learning model using the image information for evaluation, and saves the learning model that satisfies the evaluation conditions as a preset model (step ST3A).
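Steps ST1A to ST3A can be summarized as the following sketch, in which the training and evaluation routines are injected as callables; the split ratio and the precision/recall thresholds are assumptions made for illustration and are not taken from the specification.

```python
import random
from typing import Callable, List, Sequence, Tuple

def split_learning_evaluation(samples: Sequence, eval_ratio: float = 0.3,
                              seed: int = 0) -> Tuple[List, List]:
    """Data classification unit 3221 (step ST1A): split labelled images into
    a learning set and an evaluation set (the ratio is an assumption)."""
    items = list(samples)
    random.Random(seed).shuffle(items)
    n_eval = max(1, int(len(items) * eval_ratio))
    return items[n_eval:], items[:n_eval]

def create_preset_model(samples: Sequence,
                        train_fn: Callable[[Sequence], object],
                        evaluate_fn: Callable[[object, Sequence], Tuple[float, float]],
                        min_precision: float = 0.8, min_recall: float = 0.8):
    """Steps ST1A-ST3A: split, train, evaluate, and keep the model only if it
    satisfies the evaluation condition (thresholds are illustrative)."""
    learning, evaluation = split_learning_evaluation(samples)    # ST1A
    model = train_fn(learning)                                   # ST2A
    precision, recall = evaluate_fn(model, evaluation)           # ST3A
    if precision >= min_precision and recall >= min_recall:
        return model    # would then be stored in the selection candidate DB 331
    return None
```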
  • FIG. 12 is a flowchart showing the process of creating a custom model.
  • the data classification unit 3221 acquires multiple pieces of image information showing secondary components from the image information stored in the learning DB 333, classifies the acquired multiple pieces of image information into those for learning and those for evaluation, and outputs the image information for learning to the model creation unit 3222 as learning data (step ST1B).
  • the model creation unit 3222 acquires a pre-training model from the pre-training model DB 332 (step ST2B).
  • the model creation unit 3222 uses the pre-learning model and the learning data to create a learning model for inferring the secondary component shown in the image information (step ST3B).
  • the evaluation unit 3223 evaluates the learning model using the evaluation image information from the data classification unit 3221, and transmits the learning model that satisfies the evaluation conditions, as a custom model, to the reinforcement inspection device 2 by the communication unit 31 via the communication line (step ST4B). If the inspector is not satisfied with the inference accuracy of the custom model, the inspector may instruct re-creation using the operation input unit 24; in that case, the bar arrangement inspection system 1 repeats the creation of learning data, the creation of the custom model shown in FIG. 12, and the performance evaluation of the custom model. The performance evaluation of the custom model may use precision and recall, or may use MIOU.
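The re-creation cycle described above could look like the following sketch, which repeats data collection, training, and evaluation until target accuracies are reached; the target values and the round limit are illustrative assumptions.

```python
from typing import Callable, Sequence, Tuple

def create_custom_model(collect_learning_data: Callable[[], Sequence],
                        train_fn: Callable[[Sequence], object],
                        evaluate_fn: Callable[[object], Tuple[float, float]],
                        target_precision: float = 0.9, target_recall: float = 0.9,
                        max_rounds: int = 5):
    """Repeat learning-data creation, custom-model creation, and evaluation
    until the target inference accuracy is reached (targets and round limit
    are assumptions for illustration)."""
    for round_no in range(1, max_rounds + 1):
        data = collect_learning_data()   # on-site images with labelled component positions
        model = train_fn(data)           # e.g. fine-tuned from the pre-learning model
        precision, recall = evaluate_fn(model)
        print(f"round {round_no}: precision={precision:.2f}, recall={recall:.2f}")
        if precision >= target_precision and recall >= target_recall:
            return model
    return None  # the inspector may decide whether to accept the last model
```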
  • the reinforcement inspection device 2 includes an acquisition unit 221 that acquires image information, a selection unit 223 that selects a specified learning model from among a plurality of learning models that are provided for each external appearance feature, including the type, color, and shape of the secondary component, and that are used to infer the secondary component shown in the image information, and an inference unit 224 that infers the secondary component from the image information using the selected learning model.
  • This allows the reinforcement inspection device 2 to automatically detect secondary components provided in addition to the main reinforcement in a reinforcement structure.
  • the multiple learning models include one or more preset models that are created before being specified, and a custom model that is created after being specified. This allows the reinforcement inspection device 2 to infer secondary components using preset models with high inference accuracy that are prepared in advance, and to infer secondary components using custom models that are suited to the on-site situation.
  • the reinforcement inspection device 2 includes a preprocessing unit 222 that preprocesses image information into a form suitable for the inference processing performed by the inference unit 224.
  • the inference unit 224 uses a learning model to infer secondary components from the preprocessed image information. This allows the inference unit 224 to smoothly infer secondary components.
  • the preprocessing unit 222 normalizes the image information.
  • the normalized image information can be used as input data for the learning model created by DL.
  • the selection unit 223 outputs display control information for displaying an operation screen on which an operation for specifying a learning model is performed, and selects the learning model specified by the accepted operation based on the operation screen. This enables the selection unit 223 to accurately select a learning model for inferring secondary components.
  • the reinforcing bar inspection device 2 includes a display unit 23 that displays an operation screen 23A based on the display control information. This allows the reinforcing bar inspection device 2 to display an operation screen that allows an operation to specify a learning model.
  • the bar arrangement inspection device 2 includes an operation input unit 24 that accepts an operation to specify a learning model from a plurality of learning models on an operation screen 23A displayed on the display unit 23. This allows the learning model to be specified by operating the operation input unit 24.
  • the selection unit 223 automatically specifies and selects a learning model from a plurality of learning models. This enables the selection unit 223 to accurately select a learning model for inferring secondary components.
  • the acquisition unit 221 acquires point cloud data that represents the reinforcement structure as a three-dimensional point cloud.
  • the inference unit 224 determines whether or not the secondary component has been erroneously inferred based on the result of comparing the secondary component calculated using the image information with the secondary component calculated using the point cloud data. This allows the reinforcement inspection device 2 to accurately infer the secondary component.
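One simple way to realize such a cross-check, shown here purely as an assumption-laden sketch, is to verify that each component position estimated from the image has a nearby counterpart estimated from the point cloud; the matching rule and the tolerance value are illustrative choices, not taken from the specification.

```python
from typing import List, Sequence, Tuple

Point3D = Tuple[float, float, float]

def plausibility_check(image_detections: Sequence[Point3D],
                       cloud_detections: Sequence[Point3D],
                       tolerance: float = 0.05) -> List[bool]:
    """For each component position estimated from the image, check whether a
    component estimated from the point cloud lies within `tolerance` metres;
    positions flagged False would be candidates for erroneous inference."""
    def close(a: Point3D, b: Point3D) -> bool:
        return sum((x - y) ** 2 for x, y in zip(a, b)) ** 0.5 <= tolerance

    return [any(close(d, c) for c in cloud_detections) for d in image_detections]

flags = plausibility_check([(0.0, 0.0, 1.0)], [(0.01, 0.0, 1.0)])
print(flags)  # [True] -> the image-based detection is corroborated by the point cloud
```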
  • the preset models and custom models include learning models that are provided for each external appearance feature, including the type, color, and shape of the main reinforcement, and are used to infer the main reinforcement reflected in the image information.
  • the inference unit 224 uses these learning models to infer the main reinforcement from the preprocessed image information. This allows the reinforcement inspection device 2 to automatically detect the main reinforcement in addition to the secondary components in the reinforcement structure.
  • the secondary component is at least one of a shear reinforcement bar, a lap joint, a spacer block, a sheath tube, or a compression joint.
  • the reinforcement inspection device 2 is capable of detecting various components as secondary components.
  • the learning device 3 includes a data acquisition unit 321 that acquires learning data including image information and positional information of objects in the image information, a learning unit 322 that uses the learning data to create and store a learning model for inferring secondary components shown in the image information, and a search unit 323 that searches for a learning model specified in the reinforcement inspection device 2 from the learning models created for each external appearance feature including the type, color, and shape of the secondary component, and outputs the learning model obtained by the search to the reinforcement inspection device 2.
  • This allows the learning device 3 to create a learning model for inferring secondary components shown in the image information for each external appearance feature including the type, color, and shape of the secondary component.
  • the multiple learning models include one or more preset models created before being specified, and a custom model created after being specified.
  • the data acquisition unit 321 acquires image information and stores it sequentially.
  • the learning unit 322 includes a data classification unit 3221 that extracts multiple pieces of image information showing secondary components from the stored image information and separates the extracted image information into image information for learning and image information for evaluation, a model creation unit 3222 that uses the image information for learning to create a learning model for inferring the secondary components shown in the image information, and an evaluation unit 3223 that evaluates the learning model using the image information for evaluation and stores the learning model that satisfies the evaluation conditions as a preset model. This allows the learning device 3 to create a learning model with high inference accuracy as a preset model.
  • the model creation unit 3222 calculates and stores an embedding vector corresponding to the correct data, which is image information, then calculates an embedding vector corresponding to the image information for learning, and creates the learning model when the calculated embedding vector is determined to match the stored embedding vector. This allows the model creation unit 3222 to reduce the time required to create a learning model.
  • the learning unit 322 creates a custom model specified in the reinforcement inspection device 2 using a pre-learned model that has been trained to infer objects shown in image information. This allows the learning device 3 to create a custom model with high inference accuracy even when there is a small amount of learning data.
  • the data acquisition unit 321 repeatedly acquires learning data and the learning unit 322 repeatedly creates a custom model until the custom model meets the target inference accuracy. This allows the learning device 3 to create a custom model with high inference accuracy.
  • the reinforcement inspection system 1 includes a reinforcement inspection device 2 and a learning device 3. As a result, the reinforcement inspection system 1 can provide a reinforcement inspection device 2 that can automatically detect secondary components provided in addition to the main reinforcement in a reinforcement structure.
  • the reinforcement inspection method includes a step in which the acquisition unit 221 acquires image information, a step in which the selection unit 223 selects a specified learning model from among a plurality of learning models that are provided for each external appearance feature, including the type, color, and shape of the secondary component, and that are used to infer the secondary component shown in the image information, and a step in which the inference unit 224 infers the secondary component from the image information using the selected learning model.
  • any of the components of the embodiments may be modified or omitted.
  • the reinforcing bar inspection device disclosed herein can be used, for example, to inspect reinforcing bar structures before concrete is poured.
  • 1 reinforcement inspection system, 2 reinforcement inspection device, 3 learning device, 11 steel bar, 12 shear reinforcement bar, 13 spacer block, 14 lap joint, 15 sheath tube, 16 compression joint, 21 communication unit, 22 calculation unit, 23 display unit, 23A operation screen, 23A-1 to 23A-3, 23Ac selection buttons, 23Ab slide bar, 23B-1 to 23B-5 bounding boxes, 24 operation input unit, 25 memory unit, 31 communication unit, 32 calculation unit, 33 memory unit, 100 communication interface, 101 input/output interface, 102 processor, 103 memory, 221 acquisition unit, 222 preprocessing unit, 223 selection unit, 224 inference unit, 321 data acquisition unit, 322 learning unit, 323 search unit, 3221 data classification unit, 3222 model creation unit, 3223 evaluation unit.

Landscapes

  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Theoretical Computer Science (AREA)
  • Image Analysis (AREA)

Abstract

This reinforcement inspection device (2) comprises: an acquisition unit (221) that acquires image information; a selection unit (223) that selects a designated learning model from among a plurality of learning models for inferring a sub-component member appearing in the image information, the learning models being provided with respect to each external appearance feature including the type, color, and shape of the sub-component member; and an inference unit (224) that infers a sub-component member from the image information using the selected learning model.

Description

配筋検査装置、学習装置、配筋検査システムおよび配筋検査方法Reinforcement inspection device, learning device, reinforcement inspection system, and reinforcement inspection method
 本開示は、配筋検査装置、学習装置、配筋検査システムおよび配筋検査方法に関する。 This disclosure relates to a reinforcing bar inspection device, a learning device, a reinforcing bar inspection system, and a reinforcing bar inspection method.
 鉄筋コンクリート構造物の施工では、複数の鉄筋を配筋した配筋構造体において、鉄筋が設計通りに配筋されたかどうかの検査(配筋検査)が行われる。例えば、特許文献1には、パターンマッチングまたは機械学習の少なくとも一方の手法を用いて、設定された撮影範囲内における鉄筋の種類、鉄筋の本数、配筋ピッチ、鉄筋の長さ、鉄筋の太さ、継手の形状および位置等のデータを生成する配筋出来形管理システムが記載されている。 In the construction of reinforced concrete structures, inspections (reinforcement inspections) are conducted to check whether reinforcing bars have been arranged as designed in reinforcement structures in which multiple reinforcing bars are arranged. For example, Patent Document 1 describes a reinforcement as-built management system that uses at least one of pattern matching and machine learning techniques to generate data such as the type of reinforcing bars, number of reinforcing bars, reinforcing bar pitch, length of reinforcing bars, thickness of reinforcing bars, shape and position of joints, etc. within a set shooting range.
特開2020-27058号公報JP 2020-27058 A
 実際の配筋構造体は、主要な鉄筋である主筋に加え、主筋以外に設けられる様々な部材を含んで構成されている。配筋構造体において主筋以外に設けられる部材には、継手等のように主筋の一部として設けられる部材があり、さらにせん断補強筋等のように主筋とは別の部材として設けられるものもある。しかしながら、従来の配筋検査において、主筋を自動で検出する技術は提案されているが、主筋とは別の様々な部材を自動で検出するものはなく、検査者が手動で検査するのが現状であった。  An actual reinforced concrete structure is composed of not only the main reinforcing bars, which are the main reinforcing bars, but also various other components that are installed in addition to the main reinforcing bars. Components installed in reinforced concrete structures other than the main reinforcing bars include components installed as part of the main reinforcing bars, such as joints, and also components installed as separate components from the main reinforcing bars, such as shear reinforcement bars. However, in conventional reinforcement inspections, although technology has been proposed to automatically detect the main reinforcing bars, there is nothing that can automatically detect various other components apart from the main reinforcing bars, and the current situation is that inspectors must inspect manually.
 特許文献1に記載される配筋出来形管理システムは、主筋およびその一部に設けられた継手を自動で検出するものであるが、せん断補強筋等のように主筋とは別の部材の検出は想定されていない。このため、特許文献1に記載される配筋出来形管理システムにおいても、配筋構造体において主筋とは別に設けられる部材は、検査者が手動で検査することになると予想される。 The reinforcement as-built management system described in Patent Document 1 automatically detects main reinforcement and the joints installed in some of the main reinforcement, but does not anticipate the detection of components separate from the main reinforcement, such as shear reinforcement. For this reason, even in the reinforcement as-built management system described in Patent Document 1, it is expected that inspectors will still manually inspect components installed separately from the main reinforcement in reinforcement structures.
 本開示は上記課題を解決するものであって、配筋構造体において主筋以外に設けられる副構成部材を自動で検出することができる配筋検査装置、学習装置、配筋検査システムおよび配筋検査方法を得ることを目的とする。 The present disclosure aims to solve the above problem by providing a reinforcement inspection device, learning device, reinforcement inspection system, and reinforcement inspection method that can automatically detect secondary components other than the main reinforcement in a reinforcement structure.
 本開示に係る配筋検査装置は、複数の鉄筋が主筋として配筋され、主筋以外の副構成部材を含んで構成される配筋構造体を検査する配筋検査装置であって、画像情報を取得する取得部と、副構成部材の種類、色および形状を含む外観的特徴ごとに設けられ、画像情報に映った副構成部材を推論するための複数の学習モデルのうち、指定された学習モデルを選択する選択部と、選択された学習モデルを用いて、画像情報から副構成部材を推論する推論部と、を備える。 The reinforcing steel inspection device disclosed herein is a reinforcing steel inspection device that inspects a reinforcing steel structure in which multiple reinforcing steel bars are arranged as main reinforcement and which includes secondary components other than the main reinforcement, and includes an acquisition unit that acquires image information, a selection unit that selects a specified learning model from multiple learning models provided for each external appearance feature of the secondary component, including the type, color, and shape, for inferring the secondary component shown in the image information, and an inference unit that infers the secondary component from the image information using the selected learning model.
 本開示によれば、副構成部材の種類、色および形状を含む外観的特徴ごとに設けられ、画像情報に映った副構成部材を推論するための複数の学習モデルのうち、指定された学習モデルを選択し、選択した学習モデルを用いて画像情報から副構成部材を推論する。これにより、本開示に係る配筋検査装置は、配筋構造体において主筋以外に設けられる副構成部材を自動で検出することができる。 According to the present disclosure, a specified learning model is selected from among multiple learning models for inferring secondary components shown in image information, each of which is provided for a different external characteristic including the type, color, and shape of the secondary component, and the secondary component is inferred from the image information using the selected learning model. This allows the reinforcement inspection device according to the present disclosure to automatically detect secondary components provided other than the main reinforcement in a reinforcement structure.
FIG. 1 is a block diagram showing the configuration of the reinforcement inspection system according to Embodiment 1.
FIG. 2 is a block diagram showing a hardware configuration for realizing the functions of the reinforcement inspection device according to Embodiment 1.
FIG. 3 is a screen diagram showing an example of an operation screen.
FIG. 4 is a diagram showing an example of the registered contents of a selection candidate database (hereinafter referred to as DB).
FIG. 5 is a screen diagram showing an example of image information obtained by photographing a reinforcement structure.
FIG. 6 is a flowchart showing the reinforcement inspection method according to Embodiment 1.
FIG. 7 is a diagram showing an example of learning data.
FIG. 8 is a block diagram showing the configuration of the learning unit.
FIG. 9 is a diagram showing evaluation results of a learning model.
FIG. 10 is an explanatory diagram schematically showing the matching determination of embedding vectors in a learning model.
FIG. 11 is a flowchart showing the process of creating a preset model.
FIG. 12 is a flowchart showing the process of creating a custom model.
Embodiment 1.
Fig. 1 is a block diagram showing the configuration of a reinforcement inspection system 1 according to a first embodiment. In Fig. 1, the reinforcement inspection system 1 is a system in which a reinforcement inspection device 2 and a learning device 3 are connected by communication, and inspects a reinforcement structure before concrete is poured. The reinforcement inspection device 2 acquires image information showing the reinforcement structure, and detects secondary components in the reinforcement structure using a learning model for inferring secondary components in the reinforcement structure shown in the acquired image information. The learning device 3 creates a learning model used for the detection of secondary components by the reinforcement inspection device 2.
 配筋検査装置2は、カメラ装置が撮影した配筋構造体の画像情報を用いて、配筋構造体における少なくとも副構成部材を検査して、検査結果を表示部23に出力する。例えば、配筋検査装置2は、配筋構造体に含まれる複数種類の副構成部材のそれぞれが配置された数を検査し、または、副構成部材が設計通りに作成されているかどうかを検査する。配筋検査装置2は、例えば、タブレット端末、スマートフォンまたはパーソナルコンピュータ(PC)である。また、学習装置3は、例えば、副構成部材の検出に用いる学習モデルを配筋検査装置2に与えるサーバである。 The reinforcing bar inspection device 2 uses image information of the reinforcing bar structure captured by the camera device to inspect at least the secondary components in the reinforcing bar structure, and outputs the inspection results to the display unit 23. For example, the reinforcing bar inspection device 2 inspects the number of secondary components of multiple types contained in the reinforcing bar structure that are arranged, or inspects whether the secondary components have been created as designed. The reinforcing bar inspection device 2 is, for example, a tablet terminal, a smartphone, or a personal computer (PC). The learning device 3 is, for example, a server that provides the reinforcing bar inspection device 2 with a learning model used to detect secondary components.
 配筋構造体は、配筋された複数の鉄筋である主筋と、主筋以外の副構成部材とを含んで構成される。主筋は、配筋構造体を基礎とする建物等の荷重を受ける主要な鉄筋であり、配力筋とも呼ばれる。配筋構造体には、例えば、複数の主筋が格子状に配筋された平面が複数層設けられた構造を有するものがある。 A reinforced concrete structure is composed of multiple main bars, which are arranged reinforcing bars, and secondary components other than the main bars. The main bars are the main reinforcing bars that bear the load of a building or other structure that is based on a reinforced concrete structure, and are also called distribution bars. For example, some reinforced concrete structures have a structure in which multiple layers of flat surfaces are provided with multiple main bars arranged in a lattice pattern.
 副構成部材は、配筋構造体において主筋を補助するため、または、配筋構造体に主筋とは別の機能を持たせるために設けられる部材であり、主筋の一部または主筋とは全く別の部材である。例えば、副構成部材には、せん断補強筋、重ね継手、スペーサブロック、シース管または圧縮継手がある。以下、配筋検査システム1が検査を行う配筋構造体には、せん断補強筋、重ね継手、スペーサブロック、シース管または圧縮継手の少なくとも一つが設けられているものとする。 A secondary component is a component provided in a reinforcement structure to supplement the main reinforcement or to give the reinforcement structure a function different from that of the main reinforcement, and is a part of the main reinforcement or a component completely separate from the main reinforcement. For example, secondary components include shear reinforcement, lap joints, spacer blocks, sheath tubes, and compression joints. Hereinafter, it is assumed that the reinforcement structure inspected by the reinforcement inspection system 1 is provided with at least one of shear reinforcement, lap joints, spacer blocks, sheath tubes, and compression joints.
 せん断補強筋は、主筋を拘束して補強する鉄筋である。せん断補強筋を主筋に取り付けることにより、主筋にかかるせん断力が抑制される。また、せん断補強筋には、例えば、U字形状、V字形状、またはC字形状等がある。U字形状またはV字形状のせん断補強筋は、両側の鉄筋が長く、柱などを構成する主筋を囲むように設けられる場合がある。C字形状のせん断補強筋は、U字形状またはV字形状よりも両側の鉄筋が短く、例えば、間隔を空けて配置された一方の主筋に一方の端部が接続され、他方の主筋に他方の端部が接続される。 Shear reinforcement is a reinforcing bar that restrains and reinforces the main bars. By attaching shear reinforcement to the main bars, the shear force acting on the main bars is suppressed. Shear reinforcement can be U-shaped, V-shaped, C-shaped, or other shapes. U-shaped or V-shaped shear reinforcement has longer reinforcing bars on both sides, and may be installed to surround the main bars that make up columns, etc. C-shaped shear reinforcement has shorter reinforcing bars on both sides than U-shaped or V-shaped reinforcing bars, and for example, one end is connected to one of the main bars spaced apart, and the other end is connected to the other main bar.
The ends of the shear reinforcement are hooked to connect to the main bars. There are variations in hook shape, such as right angle hooks, acute angle hooks, or semicircular hooks.
In addition, some reinforcing bars that make up the main reinforcement and shear reinforcement are coated with resin to improve corrosion resistance, and their appearance is a different color from the base metal. For example, reinforcing bars coated with epoxy resin are blue or green, and some reinforcing bars that have been specially surface-treated are gray. Thus, shear reinforcement bars have various appearance characteristics, including color and shape. Furthermore, the appearance characteristics of shear reinforcement bars vary depending on the manufacturer and the model.
 重ね継手は、主筋である2本の鉄筋の端部を重ねて接合した部分である。副構成部材には、このような主筋の一部として設けられる部材も含まれる。すなわち、重ね継手を構成する鉄筋も、防食性を向上させるため、樹脂塗装が施されたものがあり、その外観は地金と異なる色になっている。 A lap joint is a joint formed by overlapping the ends of two reinforcing bars, which are the main reinforcing bars. Secondary components also include components that are provided as part of such main reinforcing bars. In other words, the reinforcing bars that make up a lap joint may also be coated with a resin to improve corrosion resistance, and their appearance is a different color from the base metal.
 また、重ね継手には、鉄筋の種類および配筋構造体の用途に応じて様々な形状がある。例えば、配筋構造体の用途に応じて重ね継手における2本の鉄筋の端部が重なり合う長さが異なる。重ね継手の形状は、鉄筋の端部にフックがある場合およびフックがない場合で異なり、さらにフックの形状によっても異なる。フック形状には、例えば、直角フック、鋭角フック、または半円フックがある。このように、重ね継手には、色および形状を含む様々な外観的特徴がある。 Furthermore, lap joints come in a variety of shapes depending on the type of rebar and the application of the reinforced structure. For example, the length over which the ends of two rebars in a lap joint overlap varies depending on the application of the reinforced structure. The shape of a lap joint differs depending on whether or not the end of the rebar has a hook, and also differs depending on the shape of the hook. Examples of hook shapes include right-angle hooks, acute-angle hooks, and semicircular hooks. Thus, lap joints come in a variety of appearance features, including color and shape.
 スペーサブロックは、鉄筋のかぶりの保持と作業中の配筋の乱れを防止するための部材である。「かぶり」とは、コンクリート面から鉄筋までの最小距離である。また、スペーサブロックには、スペーサとバーサポートとがある。スペーサは、側面の鉄筋のかぶりを確保するものであり、バーサポートは、水平方向の鉄筋のかぶりを確保するものである。 Spacer blocks are components used to maintain the cover of rebar and prevent disturbance of the rebar arrangement during work. "Cover" is the minimum distance from the concrete surface to the rebar. Spacer blocks consist of spacers and bar supports. Spacers ensure the cover of rebar on the sides, and bar supports ensure the cover of rebar in the horizontal direction.
 また、スペーサブロックには、コンクリート製、鋼製、またはプラスチック製があり、その材質に応じた色になっている。さらに、スペーサブロックには、サイコロ形状、サイコロに鉄筋を通す溝を設けた溝付き形状、または、逆V字形状がある。このように、スペーサブロックには、色および形状を含む様々な外観的特徴がある。さらに、スペーサブロックの外観的特徴は、製造メーカごとおよびその型式ごとに異なる。 Spacer blocks are made of concrete, steel, or plastic, and come in colors that correspond to the material they are made of. Spacer blocks are also available in dice shapes, grooved shapes with grooves for passing rebar through, and inverted V shapes. Thus, spacer blocks come in a variety of external features, including color and shape. Furthermore, the external features of spacer blocks vary depending on the manufacturer and the model.
 シース管は、内部に鋼線を通して使用される金属製の管である。例えば、シース管は、亜鉛メッキ鋼板で作成され、銀色の外観を呈している。シース管は、その用途によって、配筋構造体における様々な位置に設けられ、さらに、その長さも異なる。また、シース管は、その材質に応じた色になっている。このように、シース管には、色および形状を含む様々な外観的特徴がある。さらに、シース管の外観的特徴は、製造メーカごとおよびその型式ごとに異なる。  Sheath tubes are metal tubes through which steel wires are passed. For example, sheath tubes are made of galvanized steel sheets and have a silver appearance. Sheath tubes are installed at various positions in the reinforcement structure depending on their use, and their lengths also vary. Sheath tubes are also colored according to their material. Thus, sheath tubes have various appearance characteristics, including color and shape. Furthermore, the appearance characteristics of sheath tubes differ depending on the manufacturer and the model.
 圧縮継手は、主筋である2本の鉄筋の端部を突き合わせて接合した部分である。すなわち、圧縮継手は、主筋の一部として設けられる副構成部材である。また、圧縮継手を構成する鉄筋も、防食性を向上させるため、樹脂塗装が施されたものがあり、その外観は地金と異なる色になっている。 A compression joint is a joint formed by butting together the ends of two reinforcing bars, which are the main reinforcement. In other words, a compression joint is a secondary component that is installed as part of the main reinforcement. In addition, the reinforcing bars that make up a compression joint may also be coated with a resin to improve corrosion resistance, giving them a different color than the base metal.
The shape of the compression joint is subject to the reinforcement inspection. For example, the law requires that the diameter of the bulge at the joint where the ends of the reinforcing bars are butted together is at least 1.4 times the diameter of the reinforcing bars, and that the length of the joint where the ends of the reinforcing bars are butted together is at least 1.1 times the diameter of the reinforcing bars. In addition, the eccentricity of the compression joint from the central axis of the reinforcing bars is at most 1/5 of the diameter of the reinforcing bars, and the deviation of the compression surface from the top of the bulge is at most 1/4 of the diameter of the reinforcing bars.
As such, compression fittings come in a variety of appearance features including color and shape.
 配筋検査装置2は、図1に示すように、通信部21、演算部22、表示部23、操作入力部24および記憶部25を備える。学習装置3は、通信部31、演算部32および記憶部33を備える。なお、図1では、配筋検査装置2と学習装置3が別々の装置である場合を示したが、配筋検査装置2は学習装置3を備えてもよい。この場合、通信部21および通信部31は、共通の装置内部でデータをやり取りする通信装置であってもよい。また、記憶部25および記憶部33は、共通の記憶装置に構築された記憶領域であってもよい。 As shown in FIG. 1, the reinforcing bar inspection device 2 includes a communication unit 21, a calculation unit 22, a display unit 23, an operation input unit 24, and a memory unit 25. The learning device 3 includes a communication unit 31, a calculation unit 32, and a memory unit 33. Note that while FIG. 1 shows a case in which the reinforcing bar inspection device 2 and the learning device 3 are separate devices, the reinforcing bar inspection device 2 may also include the learning device 3. In this case, the communication units 21 and 31 may be communication devices that exchange data within a common device. Furthermore, the memory units 25 and 33 may be memory areas constructed in a common storage device.
 また、配筋検査システム1、または、学習装置3を備える配筋検査装置2は、配筋検査装置2と通信接続が可能なユーザ端末(図1において不図示)に対して、SaaS(Software as a Service)の形態で配筋検査サービスを提供するものであってもよい。例えば、配筋検査サービスを提供するための配筋検査用アプリケーションは、配筋検査装置2で実行され、ユーザ端末は、サービス専用のアプリケーションをインストールすることなく、Webブラウザ上で配筋検査サービスの提供を受けることが可能である。 Furthermore, the reinforcement inspection system 1, or the reinforcement inspection device 2 equipped with the learning device 3, may provide a reinforcement inspection service in the form of SaaS (Software as a Service) to a user terminal (not shown in FIG. 1) that can communicate with the reinforcement inspection device 2. For example, a reinforcement inspection application for providing the reinforcement inspection service is executed by the reinforcement inspection device 2, and the user terminal can receive the reinforcement inspection service on a web browser without having to install a dedicated application for the service.
 ユーザ端末は、例えば、当該端末が備えるカメラで撮影した配筋構造体の画像情報を、配筋検査装置2に送信する。配筋検査装置2は、ユーザ端末から受信した画像情報を用いて、当該画像情報に映る副構成部材等の検査を行い、検査結果をユーザ端末に返信する。ユーザ端末は、配筋検査装置2から検査結果を受信し、当該ユーザ端末が有する表示部(不図示)に、適宜の態様で検査結果を表示することができる。 The user terminal transmits image information of the reinforcement structure captured by a camera provided on the terminal to the reinforcement inspection device 2. The reinforcement inspection device 2 uses the image information received from the user terminal to inspect secondary components and the like shown in the image information and returns the inspection results to the user terminal. The user terminal receives the inspection results from the reinforcement inspection device 2 and can display the inspection results in an appropriate manner on a display unit (not shown) of the user terminal.
 通信部21は、通信回線を介して学習装置3との通信を行う。例えば、通信部21は、LTE、3G、4Gまたは5G等の通信方式の通信が可能な学習装置3との間で、通信回線を介して通信可能である。通信部31は、通信回線を介して配筋検査装置2との通信を行う。例えば、通信部31は、通信部21と同様に、LTE、3G、4Gまたは5G等の通信方式の通信が可能な配筋検査装置2との間で、通信回線を介して通信可能である。 The communication unit 21 communicates with the learning device 3 via a communication line. For example, the communication unit 21 can communicate via a communication line with the learning device 3 capable of communication using a communication method such as LTE, 3G, 4G, or 5G. The communication unit 31 communicates with the reinforcing bar inspection device 2 via a communication line. For example, like the communication unit 21, the communication unit 31 can communicate via a communication line with the reinforcing bar inspection device 2 capable of communication using a communication method such as LTE, 3G, 4G, or 5G.
 演算部22は、配筋検査装置2の全体動作を制御する。演算部22は、取得部221、前処理部222、選択部223および推論部224を備える。演算部22が、配筋検査用アプリケーションを実行することにより、取得部221、前処理部222、選択部223および推論部224の各機能が実現される。 The calculation unit 22 controls the overall operation of the reinforcement inspection device 2. The calculation unit 22 includes an acquisition unit 221, a preprocessing unit 222, a selection unit 223, and an inference unit 224. The calculation unit 22 executes a reinforcement inspection application to realize the functions of the acquisition unit 221, the preprocessing unit 222, the selection unit 223, and the inference unit 224.
 演算部32は、学習装置3の全体動作を制御する。演算部32は、データ取得部321、学習部322および検索部323を備える。演算部32が、学習用アプリケーションを実行することにより、データ取得部321、学習部322および検索部323の各機能が実現される。 The calculation unit 32 controls the overall operation of the learning device 3. The calculation unit 32 includes a data acquisition unit 321, a learning unit 322, and a search unit 323. The calculation unit 32 executes a learning application to realize the functions of the data acquisition unit 321, the learning unit 322, and the search unit 323.
 表示部23は、配筋検査装置2が備える表示装置である。表示部23は、例えば、LCD(Liquid Crystal Display)または有機EL(Electroluminescence)表示装置である。 The display unit 23 is a display device provided in the reinforcing bar inspection device 2. The display unit 23 is, for example, an LCD (Liquid Crystal Display) or an organic EL (Electroluminescence) display device.
 操作入力部24は、表示部23に表示された後述の操作画面に対する操作を受け付ける入力装置である。配筋検査装置2がスマートフォンまたはタブレット端末である場合に、操作入力部24は、例えば、表示部23の画面と一体に設けられたタッチパネルである。配筋検査装置2がPCである場合、操作入力部24は、例えば、マウスまたはキーボードである。 The operation input unit 24 is an input device that accepts operations on an operation screen (described below) displayed on the display unit 23. When the reinforcing bar inspection device 2 is a smartphone or tablet terminal, the operation input unit 24 is, for example, a touch panel that is integrated with the screen of the display unit 23. When the reinforcing bar inspection device 2 is a PC, the operation input unit 24 is, for example, a mouse or keyboard.
 記憶部25は、例えば、配筋検査用アプリケーションと、演算部22による演算処理に用いられる情報を記憶する。記憶部25は、演算処理に用いられる情報として、例えば、画像情報、画像情報に映る物体の位置情報、および学習装置3から取得した学習モデルを記憶する。記憶部25は、配筋検査装置2として機能するコンピュータが備える記憶装置であり、HDD(Hard Disk Drive)もしくはSSD(Solid State Drive)等のストレージ、または後述する図2のメモリ103等を含むものである。 The storage unit 25 stores, for example, a reinforcement inspection application and information used in the calculation process by the calculation unit 22. The storage unit 25 stores, as information used in the calculation process, for example, image information, position information of objects reflected in the image information, and a learning model acquired from the learning device 3. The storage unit 25 is a storage device provided in a computer functioning as the reinforcement inspection device 2, and includes storage such as a HDD (Hard Disk Drive) or SSD (Solid State Drive), or memory 103 in FIG. 2 described later.
 記憶部33は、例えば、学習用アプリケーションに加えて、選択候補DB331、事前学習モデルDB332および学習用DB333を記憶する。記憶部33は、学習装置3として機能するコンピュータが備える記憶装置であって、例えば、HDDまたはSSD等のストレージ、または後述する図2のメモリ103等を含むものである。 The storage unit 33 stores, for example, a selection candidate DB 331, a pre-learning model DB 332, and a learning DB 333 in addition to a learning application. The storage unit 33 is a storage device provided in a computer that functions as the learning device 3, and includes, for example, a storage device such as an HDD or SSD, or a memory 103 in FIG. 2 described later.
 選択候補DB331は、一つまたは複数のプリセットモデルが格納される。プリセットモデルとは、副構成部材の種類、色および形状を含む外観的特徴ごとに設けられ、取得部221が取得した画像情報に映った副構成部材を推論するための第1学習モデルである。プリセットモデルは、選択部223において指定される前に、学習装置3によって予め作成されたものである。これらのプリセットモデルは、副構成部材の種類、色および形状を含む外観的特徴ごとに紐付けられて選択候補DB331に格納される。 The selection candidate DB 331 stores one or more preset models. A preset model is a first learning model that is provided for each external appearance feature, including the type, color, and shape of a secondary component, and is used to infer a secondary component shown in image information acquired by the acquisition unit 221. The preset models are created in advance by the learning device 3 before being specified by the selection unit 223. These preset models are linked to each external appearance feature, including the type, color, and shape of a secondary component, and stored in the selection candidate DB 331.
 事前学習モデルDB332は、事前学習モデルが格納される。事前学習モデルは、画像情報に映った物体を推論するように学習された学習モデルである。例えば、事前学習モデルは、COCO(Common Objects in COntext)のような大量の学習データセットを用いて学習された学習モデルである。学習方法としては、例えば、確率的勾配降下法を用いてニューラルネットワークのパラメータを設定することが考えられる。 The pre-training model DB332 stores pre-training models. The pre-training model is a learning model that has been trained to infer objects depicted in image information. For example, the pre-training model is a learning model that has been trained using a large amount of training data set such as COCO (Common Objects in Context). As a learning method, for example, it is possible to set the parameters of the neural network using the stochastic gradient descent method.
 学習用DB333には、画像情報と、当該画像情報に映る物体の位置情報とを含む学習データが格納される。画像情報に映る物体の位置情報は、例えば、配筋検査装置2における操作入力部24を用いて設定される。学習装置3は、配筋検査装置2からの学習モデルの指定情報を受信すると、事前学習モデルDB332から事前学習モデルを読み出し、学習用DB333から学習データを読み出す。学習装置3は、事前学習モデルに学習データを入力することにより、画像情報に映った副構成部材を推論するためのカスタムモデルを作成する。カスタムモデルは、副構成部材の種類、色および形状を含む外観的特徴ごとに作成され、取得部221が取得した画像情報に映った副構成部材を推論するための第2学習モデルである。 The learning DB 333 stores learning data including image information and positional information of objects shown in the image information. The positional information of objects shown in the image information is set, for example, using the operation input unit 24 in the reinforcement inspection device 2. When the learning device 3 receives designation information for a learning model from the reinforcement inspection device 2, it reads out a pre-learning model from the pre-learning model DB 332 and reads out learning data from the learning DB 333. The learning device 3 inputs the learning data into the pre-learning model to create a custom model for inferring secondary components shown in the image information. The custom model is created for each external appearance feature including the type, color, and shape of the secondary component, and is a second learning model for inferring secondary components shown in the image information acquired by the acquisition unit 221.
 図2は、配筋検査装置2の機能を実現するハードウェア構成を示すブロック図である。例えば、配筋検査装置2は、ハードウェア構成として通信インタフェース100、入出力インタフェース101、プロセッサ102およびメモリ103を有する。配筋検査装置2が備える取得部221、前処理部222、選択部223および推論部224の各機能は、これらのハードウェア構成において配筋検査用アプリケーションが実行されることで実現される。 FIG. 2 is a block diagram showing the hardware configuration that realizes the functions of the reinforcement inspection device 2. For example, the reinforcement inspection device 2 has a communication interface 100, an input/output interface 101, a processor 102, and a memory 103 as its hardware configuration. The functions of the acquisition unit 221, preprocessing unit 222, selection unit 223, and inference unit 224 provided in the reinforcement inspection device 2 are realized by executing a reinforcement inspection application in this hardware configuration.
 通信インタフェース100は、通信回線を介して学習装置3から受信した学習モデルをプロセッサ102へ出力し、プロセッサ102が生成した学習モデルの指定情報を、通信回線を介して学習装置3へ送信する。プロセッサ102は、入出力インタフェース101を介して、図1における記憶部25に対してデータを読み書きする。さらに、プロセッサ102は、入出力インタフェース101を介して外部装置から画像情報を取得する。外部装置は、例えば、配筋構造体を撮影するカメラ装置、または、カメラ装置が撮影した画像情報を保存する外部記憶装置である。 The communication interface 100 outputs the learning model received from the learning device 3 via the communication line to the processor 102, and transmits designation information for the learning model generated by the processor 102 to the learning device 3 via the communication line. The processor 102 reads and writes data from the memory unit 25 in FIG. 1 via the input/output interface 101. Furthermore, the processor 102 acquires image information from an external device via the input/output interface 101. The external device is, for example, a camera device that photographs the reinforcement structure, or an external storage device that stores image information photographed by the camera device.
 取得部221、前処理部222、選択部223および推論部224の各機能を実現するための配筋検査用アプリケーションを構成するプログラムは、記憶部25に記憶されている。プロセッサ102は、入出力インタフェース101を介して記憶部25に記憶されたプログラムを読み出してメモリ103にロードし、メモリ103にロードされたプログラムを実行する。これにより、プロセッサ102は、取得部221、前処理部222、選択部223および推論部224の各機能を実現する。メモリ103は、例えば、RAM(Random Access Memory)である。 The programs constituting the reinforcement inspection application for realizing the functions of the acquisition unit 221, preprocessing unit 222, selection unit 223, and inference unit 224 are stored in the storage unit 25. The processor 102 reads the programs stored in the storage unit 25 via the input/output interface 101, loads them into the memory 103, and executes the programs loaded into the memory 103. In this way, the processor 102 realizes the functions of the acquisition unit 221, preprocessing unit 222, selection unit 223, and inference unit 224. The memory 103 is, for example, a RAM (Random Access Memory).
 学習装置3の機能を図2に示したハードウェア構成で実現する場合、学習装置3が備えるデータ取得部321、学習部322および検索部323の各機能は、上記ハードウェア構成において学習用アプリケーションが実行されることで実現される。 When the functions of the learning device 3 are realized by the hardware configuration shown in FIG. 2, the functions of the data acquisition unit 321, learning unit 322, and search unit 323 of the learning device 3 are realized by executing a learning application in the above hardware configuration.
 通信インタフェース100は、通信回線を介して配筋検査装置2から受信した学習モデルの指定情報をプロセッサ102へ出力し、プロセッサ102が検索した学習モデルを、通信回線を介して配筋検査装置2へ送信する。プロセッサ102は、入出力インタフェース101を介して、図1における記憶部33に対してデータを読み書きする。 The communication interface 100 outputs the learning model specification information received from the bar arrangement inspection device 2 via the communication line to the processor 102, and transmits the learning model searched by the processor 102 to the bar arrangement inspection device 2 via the communication line. The processor 102 reads and writes data from the memory unit 33 in FIG. 1 via the input/output interface 101.
 データ取得部321、学習部322および検索部323の各機能を実現するための学習用アプリケーションを構成するプログラムは、記憶部33に記憶されている。プロセッサ102は、入出力インタフェース101を介して記憶部33に記憶されたプログラムを読み出してメモリ103にロードし、メモリ103にロードされたプログラムを実行する。これにより、プロセッサ102は、データ取得部321、学習部322および検索部323の各機能を実現する。 The programs constituting the learning application for realizing the functions of the data acquisition unit 321, the learning unit 322, and the search unit 323 are stored in the storage unit 33. The processor 102 reads out the programs stored in the storage unit 33 via the input/output interface 101, loads them into the memory 103, and executes the programs loaded into the memory 103. In this way, the processor 102 realizes the functions of the data acquisition unit 321, the learning unit 322, and the search unit 323.
The functional components of the bar arrangement inspection device 2 will be described.
The acquisition unit 221 acquires image information. For example, the acquisition unit 221 is connected to a camera device via wireless communication or wired communication, and receives image information (still images or video) of the reinforcement structure from the camera device. The information acquired by the acquisition unit 221 is output to the pre-processing unit 222. The acquisition unit 221 may also output the acquired information to the memory 103 shown in FIG. 2 for storage.
 カメラ装置としては、例えば、単眼カメラを想定しているが、ステレオカメラであってもよいし、赤外線カメラであってもよい。ステレオカメラまたは赤外線カメラでは、配筋構造体とカメラ装置との間の距離情報も得られる。取得部221は、この距離情報を画像情報に加えて取得してもよい。配筋検査装置2がスマートフォンまたはタブレット端末である場合、カメラ装置は、スマートフォンまたはタブレット端末が備えるカメラであってもよい。 The camera device is assumed to be, for example, a monocular camera, but may also be a stereo camera or an infrared camera. With a stereo camera or infrared camera, distance information between the reinforcement structure and the camera device can also be obtained. The acquisition unit 221 may acquire this distance information in addition to the image information. If the reinforcement inspection device 2 is a smartphone or tablet terminal, the camera device may be a camera equipped on the smartphone or tablet terminal.
The preprocessing unit 222 preprocesses the image information acquired by the acquisition unit 221 so that it is in a form suitable for the inference process performed by the inference unit 224. For example, the preprocessing unit 222 normalizes the image information. Normalization is a process of adjusting the pixel values displayed on the screen of the display unit 23 to values within a certain range. When the image information is a color image, the color value of the i-th pixel in the image can be expressed as (r_i, g_i, b_i), where r_i (red), g_i (green), and b_i (blue) take values from 0 to 255.
When the learning model is created by deep learning (hereinafter referred to as DL), DL generally handles the 0-255 color values in the range 0 to 1. The preprocessing unit 222 therefore calculates the normalized color values (r̂_i, ĝ_i, b̂_i) from the color values (r_i, g_i, b_i) of the i-th pixel according to the following formula (1). When the image information is a moving image consisting of frame images captured at a constant frame rate, normalization is performed for each frame image. This allows the inference unit 224 to smoothly infer the secondary components.
(r̂_i, ĝ_i, b̂_i) = (r_i / 255, g_i / 255, b_i / 255)   (1)
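A minimal sketch of formula (1) applied to an image array follows; NumPy is assumed, and the single-pixel example is for illustration only.

```python
import numpy as np

def normalize_frame(frame: np.ndarray) -> np.ndarray:
    """Apply formula (1): divide each colour value (0-255) by 255 so that the
    values fall in the 0-1 range expected by a DL model. For a moving image,
    this is applied once per frame image."""
    return frame.astype(np.float32) / 255.0

frame = np.array([[[255, 128, 0]]], dtype=np.uint8)   # one pixel: (r, g, b)
print(normalize_frame(frame))                         # [[[1.0, 0.50196078, 0.0]]]
```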
 また、前処理部222は、画像情報が示す画像を正対化した後に、正規化を実施してもよい。正対化画像とは、カメラ装置と配筋構造体との距離が一定で、カメラ装置に対して配筋構造体が正対した画像である。例えば、前処理部222は、画像情報における格子状に配筋された鉄筋上の任意の矩形の4隅の頂点を指定して、指定した4頂点の位置座標を用いて変換行列を推定する。そして、前処理部222は、推定した変換行列に基づいて、画像情報が示す画像を、カメラ装置に検査対象の平面が正対した正対化画像に変換する。 The pre-processing unit 222 may also perform normalization after orienting the image indicated by the image information. An orientated image is an image in which the distance between the camera device and the reinforcement structure is constant and the reinforcement structure is orientated directly relative to the camera device. For example, the pre-processing unit 222 specifies the four corner vertices of any rectangle on the reinforcing bars arranged in a lattice pattern in the image information, and estimates a transformation matrix using the position coordinates of the specified four vertices. Then, based on the estimated transformation matrix, the pre-processing unit 222 converts the image indicated by the image information into an orientated image in which the plane of the inspection target is orientated directly to the camera device.
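As a sketch of this rectification step, the following fragment assumes OpenCV, estimates the transformation matrix from the four specified corner points of a rectangle on the reinforcing-bar lattice, and warps the image so the inspected plane faces the camera; the corner coordinates and output size are illustrative.

```python
import cv2
import numpy as np

def rectify(image: np.ndarray, corners: np.ndarray, out_size=(800, 800)) -> np.ndarray:
    """corners: the four vertices of a rectangle on the bar lattice, ordered
    top-left, top-right, bottom-right, bottom-left. Returns the facing image."""
    w, h = out_size
    destination = np.array([[0, 0], [w - 1, 0], [w - 1, h - 1], [0, h - 1]],
                           dtype=np.float32)
    matrix = cv2.getPerspectiveTransform(corners.astype(np.float32), destination)
    return cv2.warpPerspective(image, matrix, (w, h))

image = np.zeros((600, 800, 3), dtype=np.uint8)                      # placeholder photo
corners = np.array([[100, 50], [700, 80], [680, 550], [120, 520]],   # assumed vertices
                   dtype=np.float32)
facing = rectify(image, corners)
```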
 選択部223は、副構成部材の種類、色および形状を含む外観的特徴ごとに設けられ、画像情報に映った副構成部材を推論するための複数の学習モデルのうち、指定された学習モデルを選択する。例えば、選択部223は、操作画面を表示させるための表示制御情報を表示部23に出力する。表示部23は、選択部223からの表示制御情報に従って操作画面を表示する。この操作画面は、学習モデルを指定する操作を行わせる画面である。 The selection unit 223 is provided for each external appearance feature including the type, color, and shape of the secondary component, and selects a specified learning model from among multiple learning models for inferring the secondary component shown in the image information. For example, the selection unit 223 outputs display control information for displaying an operation screen to the display unit 23. The display unit 23 displays the operation screen according to the display control information from the selection unit 223. This operation screen is a screen that allows an operation to specify a learning model to be performed.
 選択部223は、操作画面に基づいて、操作入力部24を用いて受け付けられた操作により学習モデルが指定された場合、学習モデルの指定情報を作成し、通信部21により、通信回線を介して学習装置3に上記指定情報を送信する。学習装置3は、配筋検査装置2から上記指定情報を受信すると、記憶部33に格納されている複数の学習モデルのうち、指定情報が示す学習モデルを検索し、検索結果の学習モデルを示す情報を配筋検査装置2に返信する。 When a learning model is specified by an operation received using the operation input unit 24 based on the operation screen, the selection unit 223 creates specification information for the learning model and transmits the specification information to the learning device 3 via the communication line using the communication unit 21. When the learning device 3 receives the specification information from the reinforcement inspection device 2, it searches for the learning model indicated by the specification information from among the multiple learning models stored in the memory unit 33, and returns information indicating the learning model of the search result to the reinforcement inspection device 2.
 さらに、選択対象の学習モデルには、プリセットモデルおよびカスタムモデルがある。プリセットモデルは、配筋構造体における副構成部材を画像情報から推論するための学習モデルであって、学習装置3が事前に作成したものである。例えば、プリセットモデルの作成には、COCOのような大量の学習データセットが用いられ、選択候補DB331には、評価結果が許容下限以上のプリセットモデルが格納される。なお、評価には、例えば正解データとの適合率(Precision)および推論結果の再現率(Recall)が用いられる。 Furthermore, the learning models to be selected include preset models and custom models. The preset model is a learning model for inferring secondary components in a reinforcement structure from image information, and is created in advance by the learning device 3. For example, a large amount of learning data set such as COCO is used to create the preset model, and the selection candidate DB 331 stores preset models whose evaluation results are equal to or above the lower limit of tolerance. For example, the accuracy rate with respect to the correct data (Precision) and the recall rate of the inference results (Recall) are used for the evaluation.
 カスタムモデルは、プリセットモデルと同様に、配筋構造体における副構成部材を画像情報から推論するための学習モデルである。ただし、カスタムモデルは、選択部223が当該カスタムモデルの指定を受け付けたことを契機として、学習装置3により作成が開始される。カスタムモデルの作成には、学習データとして大量のデータセットを用意できないが、現場で撮影され、かつ副構成部材の位置が特定された画像情報が学習データとして用いられる。これにより、配筋検査装置2は、評価が高いプリセットモデルを用いた副構成部材の推論と、現場の状況に即したカスタムモデルを用いた副構成部材の推論とが可能である。 A custom model, like a preset model, is a learning model for inferring secondary components in a reinforcement structure from image information. However, creation of a custom model is initiated by the learning device 3 when the selection unit 223 accepts the designation of the custom model. To create a custom model, it is not possible to prepare a large data set as learning data, but image information captured on-site and in which the positions of secondary components are identified is used as learning data. This allows the reinforcement inspection device 2 to infer secondary components using a highly rated preset model, and to infer secondary components using a custom model suited to the situation on-site.
 図3は、操作画面23Aの例を示す画面図である。選択部223は、プリセットモデルとカスタムモデルを指定する操作を行わせる操作画面を表示させるための表示制御情報を表示部23に出力する。表示部23は、選択部223からの表示制御情報に従って、図3に示すような操作画面23Aを表示する。操作画面23Aには、複数のプリセットモデルをそれぞれ選択するための選択ボタン23A-1,23A-2,23A-3,・・・が表示されている。 FIG. 3 is a screen diagram showing an example of operation screen 23A. Selection unit 223 outputs display control information to display unit 23 for displaying an operation screen on which an operation for specifying a preset model and a custom model is performed. Display unit 23 displays operation screen 23A as shown in FIG. 3 in accordance with the display control information from selection unit 223. Operation screen 23A displays selection buttons 23A-1, 23A-2, 23A-3, ... for selecting each of a plurality of preset models.
In FIG. 3, the selection button 23A-1 is for designating a preset model for inferring shear reinforcement bars having the external features of a brown color and a U shape. The selection button 23A-2 is for designating a preset model for inferring shear reinforcement bars having the external features of a blue color and a C shape. The selection button 23A-3 is for designating a preset model for inferring rebar lap joints having the external features of a blue color and a cylindrical shape. These selection buttons slide into a visible and selectable position on the operation screen 23A when the slide bar 23Ab is operated using the operation input unit 24.
For example, an inspector of a reinforcement structure refers to the design data of the structure that includes the reinforcement structure, identifies the secondary components to be used in the reinforcement structure, and presses, via the operation input unit 24, the selection button of the learning model corresponding to each identified secondary component. Operation information indicating which selection button was pressed is output from the operation input unit 24 to the selection unit 223. The selection unit 223 creates designation information for the preset model corresponding to the operation information, and the communication unit 21 transmits the designation information to the learning device 3 via the communication line.
FIG. 4 is a diagram showing an example of the contents registered in the selection candidate DB 331. As shown in FIG. 4, the storage unit 33 of the learning device 3 stores the selection candidate DB 331, in which multiple preset models are stored in association with external features including the type, color, and shape of the secondary component. Upon receiving the above designation information from the reinforcement inspection device 2, the learning device 3 searches the preset models stored in the selection candidate DB 331 for the preset model indicated by the designation information and returns information indicating the retrieved preset model to the reinforcement inspection device 2.
For example, when the selection buttons 23A-1, 23A-2, and 23A-3 shown in FIG. 3 are pressed, the learning device 3 searches the selection candidate DB 331 for models A1, A2, and B1 shown in FIG. 4 and returns information indicating each of the retrieved models A1, A2, and B1 to the reinforcement inspection device 2. The information indicating a learning model such as model A1 is, for example, the set of parameters required to construct the neural network that functions as the learning model, such as the weighting coefficients of its nodes. When the selection unit 223 receives the information indicating models A1, A2, and B1 via the communication line through the communication unit 21, it outputs the received information to the inference unit 224 and further stores it in the storage unit 25. This allows the selection unit 223 to accurately select the preset models for inferring the secondary components.
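For reference, the search described above can be pictured as a lookup keyed by the external features of the secondary component. The sketch below is illustrative only; the dictionary layout, field values, and model identifiers are assumptions chosen to be consistent with the FIG. 4 example, not the patent's data format.

```python
# Illustrative contents of a selection-candidate table: external features -> model identifier.
SELECTION_CANDIDATES = {
    ("shear reinforcement bar", "brown", "U-shape"): "modelA1",
    ("shear reinforcement bar", "blue", "C-shape"): "modelA2",
    ("lap joint", "blue", "cylinder"): "modelB1",
}

def search_preset_model(component_type, color, shape):
    """Return the identifier of the preset model matching the designated external features."""
    return SELECTION_CANDIDATES.get((component_type, color, shape))

print(search_preset_model("lap joint", "blue", "cylinder"))  # modelB1
```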
Note that FIG. 4 shows model B1, which infers rebar lap joints having the external features of a blue color and a cylindrical shape, but the shape of a lap joint is not limited to the shape of the rebars in which the lap joint is formed. For example, the shape of a lap joint includes the length over which the ends of the two rebars overlap. It also includes the presence or absence of a hook at the end of a rebar and, when a rebar has a hook, the shape of that hook, such as a right-angle hook, an acute-angle hook, or a semicircular hook.
The shape of a compression joint includes, in addition to the shape of the rebars in which the compression joint is formed, the shape of the bulge at the portion where the rebar ends are butted and joined. The shape of the bulge is determined, for example, by the diameter of the bulge, the length of the bulged portion where the rebar ends are butted and joined, the eccentricity of the bulge from the central axis of the rebar, and the deviation of the compression plane from the top of the bulge.
The selection unit 223 may also automatically designate and select a preset model from the multiple preset models. For example, design data of the structure including the reinforcement structure is stored in the storage unit 25 in advance. When an inspector of the reinforcement structure instructs the start of an inspection using the operation input unit 24, the selection unit 223 automatically identifies the secondary components included in the reinforcement structure from the design data stored in the storage unit 25.
The design data may be, for example, a three-dimensional model realizing Building Information Modeling (BIM) for the structure including the reinforcement structure, or design drawing data of that structure. The selection unit 223 creates designation information specifying the preset model for inferring each identified secondary component and acquires the preset model retrieved by the learning device 3 based on that designation information. This enables the selection unit 223 to accurately select the preset models for inferring the secondary components.
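Purely as a sketch of the automatic designation flow described above, and assuming the secondary components have already been extracted from the BIM model or drawing data into simple records (the record fields and helper names below are hypothetical), the designation information could be built as follows.

```python
# Hypothetical list of secondary components extracted from design data (BIM or drawings).
planned_components = [
    {"type": "shear reinforcement bar", "color": "brown", "shape": "U-shape"},
    {"type": "lap joint", "color": "blue", "shape": "cylinder"},
]

def build_designation_info(components):
    """Create designation information, one entry per secondary component, for the learning device."""
    return [{"designated_model_features": (c["type"], c["color"], c["shape"])}
            for c in components]

print(build_designation_info(planned_components))
```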
Furthermore, the selection unit 223 may select all preset models stored in the selection candidate DB 331. For example, when the start of an inspection of the reinforcement structure is instructed using the operation input unit 24, the selection unit 223 selects all preset models stored in the selection candidate DB 331. The inference unit 224 infers the secondary components using each of the preset models selected by the selection unit 223 and outputs the result obtained from the model with the highest evaluation as the final inference result.
In FIG. 3, the selection button 23Ac is a selection button for creating a custom model. When the selection button 23Ac is pressed using the operation input unit 24, the selection unit 223 creates designation information indicating the designation of a custom model and further creates training data including the image information used to create the custom model. The selection unit 223 transmits the custom model designation information and the training data including the image information to the learning device 3 via the communication line through the communication unit 21.
The learning device 3 uses the training data received from the reinforcement inspection device 2 to create the custom model corresponding to the designation information and transmits the created custom model to the reinforcement inspection device 2 via the communication line through the communication unit 31. When the selection unit 223 receives the information indicating the custom model via the communication line through the communication unit 21, it outputs the received information to the inference unit 224 and further stores it in the storage unit 25.
The training data used to create a custom model is image information that is captured on site and in which the positions of the secondary components have been identified.
FIG. 5 is a screen diagram showing an example of image information obtained by photographing a reinforcement structure. In the reinforcement structure shown in the image information of FIG. 5, a plurality of reinforcing bars 11, which are the main reinforcement, are arranged in a lattice pattern, and the structure includes shear reinforcement bars 12, spacer blocks 13, lap joints 14, sheath tubes 15, and compression joints 16 as secondary components.
When the selection button 23Ac is pressed using the operation input unit 24, the selection unit 223 causes the display unit 23 to display the screen shown in FIG. 5. The inspector uses the operation input unit 24 to identify each secondary component on the screen. For example, the inspector surrounds the area of the screen in FIG. 5 where the shear reinforcement bar 12 appears with a bounding box 23B-1 and inputs the color and name of the shear reinforcement bar 12 surrounded by the bounding box 23B-1 (in this case, "shear reinforcement bar").
The inspector likewise surrounds the area of the screen in FIG. 5 where the spacer block 13 appears with a bounding box 23B-2 and inputs the color and name of the spacer block 13 surrounded by the bounding box 23B-2 (in this case, "spacer block"). The inspector further surrounds the area where the lap joint 14 appears with a bounding box 23B-3 and inputs the color and name of the lap joint 14 surrounded by the bounding box 23B-3 (in this case, "lap joint").
Furthermore, the inspector surrounds the area of the screen in FIG. 5 where the sheath tube 15 appears with a bounding box 23B-4 and inputs the color and name of the sheath tube 15 surrounded by the bounding box 23B-4 (in this case, "sheath tube").
Similarly, the inspector surrounds the area of the screen in FIG. 5 where the compression joint 16 appears with a bounding box 23B-5 and inputs the color and name of the compression joint 16 surrounded by the bounding box 23B-5 (in this case, "compression joint").
When the input of the bounding boxes is completed, the selection unit 223 extracts, for example, the position coordinates of the upper-left and lower-right vertices of each rectangular bounding box and creates position information including the extracted coordinates. This position information is linked to the color and name of the secondary component inside the bounding box; in other words, the position information identifies the bounding box that represents the correct label of the secondary component. Data containing the image information in which the secondary components are identified by bounding boxes and the position information created by the selection unit 223 is used as training data for creating the custom model.
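As an illustration only, a single annotation of the kind described above might be represented as follows; the field names, file name, and coordinates are placeholders and assumptions, not the data format of the disclosure.

```python
# One annotation: a bounding box (upper-left and lower-right vertices in pixels)
# linked to the color and name of the secondary component it encloses.
annotation = {
    "image_file": "rebar_site_001.jpg",   # placeholder file name
    "bbox": {"x_min": 120, "y_min": 80, "x_max": 260, "y_max": 210},
    "color": "blue",
    "name": "shear reinforcement bar",
}

# Training data for the custom model pairs the image information with such records.
training_sample = {"image": annotation["image_file"], "labels": [annotation]}
print(training_sample)
```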
In the example of FIG. 5, the shear reinforcement bar 12 is linked to the bounding box 23B-1, the spacer block 13 to the bounding box 23B-2, the lap joint 14 to the bounding box 23B-3, the sheath tube 15 to the bounding box 23B-4, and the compression joint 16 to the bounding box 23B-5. The learning device 3 creates a custom model for each of the shear reinforcement bar 12, the spacer block 13, the lap joint 14, the sheath tube 15, and the compression joint 16.
Although the correct labels of the secondary components are shown here in bounding-box form, the labels are not limited to this form and may instead be represented by lines or mask images.
For example, the inspector uses the operation input unit 24 to draw a line along the longitudinal direction of the sheath tube 15 on the screen shown in FIG. 5. The selection unit 223 extracts the position coordinates of the start and end points of the line, creates position information linking the extracted coordinates to the sheath tube 15, and creates training data including this position information.
Alternatively, the inspector uses the operation input unit 24 to mask everything except the sheath tube 15 on the screen shown in FIG. 5. The selection unit 223 extracts the position coordinates of the unmasked area in which the sheath tube 15 appears, creates position information linking the extracted coordinates to the sheath tube 15, and creates training data including this position information.
The selection unit 223 may also automatically set bounding boxes around the objects present on the screen of FIG. 5, either by applying image analysis such as pattern matching to the image information or by using a learning model that roughly detects objects in the image information. In this case, the inspector uses the operation input unit 24 to confirm whether a secondary component is present within each automatically set bounding box, and the selection unit 223 creates the above position information only for the bounding boxes confirmed to contain a secondary component.
The reinforcement inspection device 2 may also infer secondary components from image information using at least one of a preset model and a custom model. For example, when the reinforcement inspection device 2 infers secondary components using only a custom model, the selection unit 223 automatically designates a custom model when the start of a reinforcement inspection is instructed using the operation input unit 24 and then proceeds to the custom-model creation process described above.
The selection unit 223 may also select multiple preset models for a common secondary component, either by designating preset models using the operation input unit 24 or by designating them automatically. For example, when the type of the secondary component is "shear reinforcement bar", the selection unit 223 selects all preset models whose type is "shear reinforcement bar". The inference unit 224 infers the secondary component using each of the selected preset models and outputs the result obtained from the model with the highest evaluation as the final inference result.
The inference unit 224 infers the secondary components from the preprocessed image information using the learning model selected as described above. For example, the inference unit 224 inputs the image information preprocessed by the preprocessing unit 222 into the preset model or custom model selected by the selection unit 223, and the preset model or custom model infers the secondary components appearing in the input image information. For example, these learning models infer the positions and external features of the secondary components in the image.
The preprocessing of the image information may be performed by an external device provided separately from the reinforcement inspection device 2. In this case, the reinforcement inspection device 2 need not include the preprocessing unit 222; the acquisition unit 221 acquires the preprocessed image information from the external device, and the inference unit 224 uses the acquired image information as it is to infer the secondary components shown in the image information.
Furthermore, the inference unit 224 may infer the secondary components by directly using image information of the reinforcement structure that has not been preprocessed. In this case as well, the reinforcement inspection device 2 need not include the preprocessing unit 222. For example, the learning device 3 creates a learning model using image information that has not been preprocessed as training data, and by using this learning model the inference unit 224 can infer the secondary components from unprocessed image information.
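The inference step itself can be sketched as follows. This is not the disclosed implementation: the model is assumed to be a callable returning (bounding box, label, score) tuples, and the score threshold is an assumed post-processing choice.

```python
def infer_secondary_components(model, image, score_threshold=0.5):
    """Run the selected preset or custom model on (preprocessed) image information and
    keep detections whose confidence is at or above the threshold.

    `model` is assumed to be a callable returning (bbox, label, score) tuples;
    this interface is an assumption, not the patent's API.
    """
    return [(bbox, label, score) for bbox, label, score in model(image)
            if score >= score_threshold]

# Dummy stand-in model for illustration: always reports one shear reinforcement bar.
dummy_model = lambda img: [((10, 20, 50, 80), "shear reinforcement bar", 0.87)]
print(infer_secondary_components(dummy_model, image=None))
```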
The inference unit 224 inspects the secondary components based on the inference result, creates display control information for displaying the inspection result, and outputs the created display control information to the display unit 23.
For example, the inference unit 224 inspects the number of shear reinforcement bars in the reinforcement structure based on the positions and external features of the shear reinforcement bars inferred in the image using the learning model. The inference unit 224 then creates display control information for superimposing an electronic whiteboard describing the inspection result on the image information showing the reinforcement structure. Based on this display control information, the display unit 23 superimposes an electronic whiteboard describing the number, color, and shape of the shear reinforcement bars on the image of the reinforcement structure. The electronic whiteboard is electronic image data on which the inspection result is described.
The inference unit 224 also inspects the number of lap joints in the reinforcement structure and the degree to which their shapes deviate from the design values, based on the positions and external features of the lap joints inferred in the image using the learning model. For example, the inference unit 224 may inspect, for each lap joint, the deviation of the overlapping length of the two rebar ends from the design value and display the inspection result for each lap joint on the electronic whiteboard.
Furthermore, the inference unit 224 inspects the number of compression joints in the reinforcement structure and the degree to which their shapes deviate from the design values, based on the positions and external features of the compression joints inferred in the image using the learning model. For example, the inference unit 224 may inspect the deviation from the design value for at least one of the diameter of the bulge of the inferred compression joint, the length of the bulged portion, the eccentricity of the bulge from the central axis of the rebar, and the deviation of the compression plane from the top of the bulge, and display this as the inspection result on the electronic whiteboard.
For example, for each compression joint provided in the reinforcement structure under inspection, the electronic whiteboard displays the results of checking whether the diameter of the bulge at the portion where the rebar ends are butted and joined is at least 1.4 times the rebar diameter, whether the length of that butted and joined portion is at least 1.1 times the rebar diameter, whether the eccentricity from the central axis of the rebar is no more than one fifth of the rebar diameter, and whether the deviation of the compression plane from the top of the bulge is no more than one quarter of the rebar diameter.
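A minimal sketch of that pass/fail check follows, using the numeric limits quoted above. The function name, the assumption that all quantities have already been measured from the inference result, and the example values (in millimetres) are illustrative only.

```python
def check_compression_joint(rebar_diameter, bulge_diameter, bulge_length,
                            eccentricity, compression_face_offset):
    """Return pass/fail results for one compression joint against the quoted limits."""
    return {
        "bulge_diameter_ok": bulge_diameter >= 1.4 * rebar_diameter,
        "bulge_length_ok": bulge_length >= 1.1 * rebar_diameter,
        "eccentricity_ok": eccentricity <= rebar_diameter / 5.0,
        "compression_face_ok": compression_face_offset <= rebar_diameter / 4.0,
    }

# Example with hypothetical measurements for a 32 mm rebar.
print(check_compression_joint(32.0, 46.0, 36.0, 5.0, 7.0))  # all four checks pass
```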
The inference unit 224 may determine whether an inference result is erroneous using measurement results of the secondary component. For example, the acquisition unit 221 acquires point cloud data that represents the reinforcement structure as a three-dimensional point cloud. The point cloud data indicates the distance to the reinforcement structure detected by a sensor such as a stereo camera, an infrared camera, or a LIDAR. The inference unit 224 reads the image information acquired by the acquisition unit 221 and stored in the storage unit 25, and assigns, to each pixel value (r_i, g_i, b_i) in the image region in which an object appears, the distance d_i between the corresponding three-dimensional point on the object and the sensor. This yields a four-element pixel value (r_i, g_i, b_i, d_i) for the image region in which the object appears.
The inference unit 224 estimates the extent of the image region in which the secondary component appears and calculates a size (1) of the secondary component from this estimate. It further calculates a size (2) of the secondary component using the three-dimensional points in that image region and the distances d_i contained in the pixel values of the corresponding pixels. The inference unit 224 then determines whether the secondary component was inferred erroneously based on a comparison between size (1) and size (2). Here, size (2) corresponds to the actual size of the secondary component. When the value of size (1) exceeds the upper limit of an allowable threshold range relative to the value of size (2), or falls below its lower limit, the inference unit 224 determines that the inference of that secondary component by the learning model is erroneous.
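The error judgment can be pictured as a simple threshold comparison, as in the sketch below; the 20 % tolerance and the function name are assumptions, since the disclosure only states that an allowable threshold range is used.

```python
def inference_is_erroneous(size_from_image, size_from_point_cloud, tolerance=0.2):
    """Judge the inference erroneous when the image-based size estimate deviates from
    the point-cloud-based (actual) size by more than the allowable range."""
    lower = size_from_point_cloud * (1.0 - tolerance)
    upper = size_from_point_cloud * (1.0 + tolerance)
    return not (lower <= size_from_image <= upper)

print(inference_is_erroneous(0.55, 0.40))  # True: 0.55 exceeds the upper limit of 0.48
```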
The inference unit 224 may also calculate the inference accuracy of a learning model using these determinations of whether inference results are erroneous. For example, multiple learning models each perform inference on common image information, the inference unit 224 determines for each result whether it is erroneous, and it calculates the ratio of correct inference results to the number of inferences as the inference accuracy.
By judging errors in the inference results in this way, the reinforcement inspection device 2 can infer the secondary components accurately.
So far, the case where the reinforcement inspection device 2 detects secondary components in a reinforcement structure has been described, but the device may also automatically detect the main reinforcement in addition to the secondary components. For example, the preset models and custom models include models for the main reinforcement as well as models for the secondary components. A model for the main reinforcement is a third learning model, provided for each set of external features including the type, color, and shape of the main reinforcement, for inferring the main reinforcement shown in the image information. The third learning model may be a preset model or a custom model.
The selection unit 223 also displays, on the operation screen 23A shown in FIG. 3, selection buttons for designating preset models for inferring the main reinforcement. When a selection button for a main-reinforcement preset model is operated using the operation input unit 24, the selection unit 223 creates designation information for that preset model. When the inference unit 224 acquires from the learning device 3 the learning model indicated by the designation information, it uses the acquired learning model to infer the main reinforcement from the image information preprocessed by the preprocessing unit 222. This allows the reinforcement inspection device 2 to automatically detect the main reinforcement in addition to the secondary components in the reinforcement structure.
The functional components of the learning device 3 will now be described.
The data acquisition unit 321 acquires training data including image information and position information of objects in the image information. For example, when the data acquisition unit 321 receives custom model designation information from the reinforcement inspection device 2 via the communication line through the communication unit 31, it acquires from the reinforcement inspection device 2 training data including image information of the photographed reinforcement structure and position information of the secondary components shown in that image information. The data acquisition unit 321 stores the training data acquired from the reinforcement inspection device 2 in the learning DB 333.
The learning unit 322 uses the training data to create and store a learning model for inferring the secondary components shown in the image information. For example, the learning unit 322 creates a preset model or a custom model using a pre-trained model stored in the pre-trained model DB 332 and the training data stored in the learning DB 333. The learning unit 322 further evaluates the created learning model and determines a learning model whose evaluation value is at or above an allowable value as the learning model to be used by the reinforcement inspection device 2. The evaluation of a learning model uses, for example, the precision with respect to the correct data and the recall of the inference results as indices.
The search unit 323 searches the learning models, created for each set of external features including the type, color, and shape of the secondary component, for the learning model designated by the reinforcement inspection device 2 and outputs the retrieved learning model to the reinforcement inspection device 2. For example, when the search unit 323 acquires preset model designation information from the reinforcement inspection device 2, it searches the selection candidate DB 331 based on the information about the secondary component contained in the acquired designation information. The search unit 323 then transmits information indicating the retrieved preset model to the reinforcement inspection device 2 via the communication line through the communication unit 31. This allows the reinforcement inspection device 2 to acquire the designated learning model.
FIG. 6 is a flowchart showing the reinforcement inspection method according to the first embodiment.
The acquisition unit 221 acquires image information (step ST1). For example, the acquisition unit 221 acquires image information of the photographed reinforcement structure from a monocular camera or a stereo camera provided in the reinforcement inspection device 2.
The preprocessing unit 222 preprocesses the image information (step ST2). For example, the preprocessing unit 222 calculates image information in which the color value of each pixel is normalized by the maximum color value.
When the acquisition unit 221 acquires preprocessed image information, or when the inference unit 224 infers the secondary components using image information that has not been preprocessed, the process may skip step ST2 and proceed directly to step ST3.
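As an illustration of the normalization in step ST2, a common convention is to divide each color value by the maximum representable value; the 8-bit assumption below is an example, not a requirement of the method.

```python
import numpy as np

def normalize_image(image_u8):
    """Normalize 8-bit color values to the range [0, 1] by the maximum color value (255)."""
    return image_u8.astype(np.float32) / 255.0

# Example: a dummy 4x4 RGB image.
dummy = np.random.randint(0, 256, size=(4, 4, 3), dtype=np.uint8)
print(normalize_image(dummy).max() <= 1.0)  # True
```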
The selection unit 223 selects a designated learning model from among the multiple learning models held by the learning device 3 (step ST3). For example, the learning device 3 manages multiple preset models provided for each set of external features including the type, color, and shape of the secondary component, and further creates custom models designated by the reinforcement inspection device 2. The selection unit 223 selects the learning model to be used by the inference unit 224 by designating a preset model or a custom model to the learning device 3.
The inference unit 224 infers the secondary components from the preprocessed image information using the learning model selected by the selection unit 223 (step ST4). For example, the inference unit 224 causes the display unit 23 to display the inference result for the secondary components and the inspection result obtained from the inference result.
By executing this method, the reinforcement inspection device 2 can automatically detect the secondary components provided in the reinforcement structure in addition to the main reinforcement.
Next, the advance creation of preset models by the learning device 3 will be described.
First, the data acquisition unit 321 acquires the training data used to create a preset model. The training data includes, for example, image information in which objects in the image have been identified using the operation input unit 24. This training data is stored in the learning DB 333 by the data acquisition unit 321. FIG. 7 is a diagram showing an example of the training data. As shown in FIG. 7, the training data is provided for each set of external features including the type, color, and shape of the secondary component.
For example, in the training data No. 1 in FIG. 7, the bounding box positions A, B, and C are set for the image regions of image A in which blue, U-shaped shear reinforcement bars appear. In the training data No. 2, the bounding box positions D and E are set for the image regions of image B in which lap joints formed on blue, cylindrical rebars appear. Furthermore, in the training data No. 3, the bounding box position F is set for the image region of image C in which a silver, cylindrical sheath tube appears.
FIG. 8 is a block diagram showing the configuration of the learning unit 322. As shown in FIG. 8, the learning unit 322 includes a data classification unit 3221, a model creation unit 3222, and an evaluation unit 3223.
The data classification unit 3221 extracts, from the image information stored in the learning DB 333 by the data acquisition unit 321, multiple pieces of image information in which secondary components appear, and divides the extracted image information into a training set and an evaluation set.
For example, when creating a preset model for inferring blue, U-shaped shear reinforcement bars, the data classification unit 3221 reads the training data No. 1 shown in FIG. 7 from the learning DB 333 and divides this data into a training set and an evaluation set. Image information relating to the same shear reinforcement bars is thus classified into training data and evaluation data.
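One straightforward way to perform such a division is a random split, sketched below. The 80/20 ratio, the random shuffling, and the fixed seed are assumptions for illustration; the disclosure only states that the data is divided into a training set and an evaluation set.

```python
import random

def split_for_training(image_items, train_ratio=0.8, seed=0):
    """Shuffle the extracted image information and divide it into training and evaluation sets."""
    items = list(image_items)
    random.Random(seed).shuffle(items)
    cut = int(len(items) * train_ratio)
    return items[:cut], items[cut:]

train_set, eval_set = split_for_training(
    ["imageA", "imageI", "imageJ", "imageK", "imageL", "imageM"])
print(len(train_set), len(eval_set))  # 4 2
```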
The model creation unit 3222 uses the training image information to create a learning model for inferring the secondary components shown in the image information. For example, the model creation unit 3222 creates a learning model for inferring a secondary component by learning the secondary component shown in the training image information, using a pre-trained model retrieved from the pre-trained model DB 332.
For the learning models, including the pre-trained model, a Siamese network, for example, is used.
The evaluation unit 3223 evaluates the learning models using the evaluation image information and stores a learning model that satisfies the evaluation condition as a preset model. For example, the evaluation unit 3223 evaluates each learning model created by the model creation unit 3222 using the evaluation image information acquired from the data classification unit 3221.
FIG. 9 is a diagram showing evaluation results of learning models, namely the evaluation results for the training data No. 1 shown in FIG. 7. The training data and evaluation data in FIG. 9 are the image information classified into training and evaluation sets by the data classification unit 3221. For the training data and evaluation data, the evaluation unit 3223 creates data sets in which the type of pre-trained model used by the model creation unit 3222 to create each learning model is associated with its training method (for example, stochastic gradient descent).
For data set No. 1 shown in FIG. 9, the evaluation unit 3223 infers shear reinforcement bars using model A, a pre-trained model, with images A, I, and J and the position information of the shear reinforcement bars in these images as training data. Next, the evaluation unit 3223 uses the same model A to infer shear reinforcement bars with images K, L, and M and the position information of the shear reinforcement bars in these images as evaluation data. Using these results, the evaluation unit 3223 calculates precision A and recall A.
The evaluation unit 3223 likewise calculates precision and recall for data set No. 2 and the subsequent data sets. Based on the evaluation condition of selecting the learning model with the highest precision and recall, the evaluation unit 3223 compares the precision and recall of all data sets relating to the training data No. 1 shown in FIG. 7. From these comparison results, the evaluation unit 3223 determines the learning model with the highest precision and recall as the preset model and stores this model in the selection candidate DB 331. The evaluation condition of selecting the learning model with the highest precision and recall has been described here, but the condition is not limited to this. For example, the evaluation condition may be to select learning models whose precision and recall are higher than certain thresholds, or to select a learning model for which either the precision or the recall is the highest or exceeds a threshold.
Although precision and recall have been described as evaluation indices for the learning models, other evaluation indices may be used. For example, the MIoU (Mean Intersection over Union), based on the ratio of the overlapping area between the bounding box surrounding the secondary component in the correct data (ground-truth rectangle) and the bounding box surrounding the estimated secondary component (inferred rectangle), may be used as an evaluation index.
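For reference, the IoU of one ground-truth rectangle and one inferred rectangle can be computed as below, and averaging it over the evaluated detections gives the MIoU mentioned above. The (x_min, y_min, x_max, y_max) box format and the one-to-one pairing of boxes are assumptions for this sketch.

```python
def iou(box_a, box_b):
    """Intersection over Union of two boxes given as (x_min, y_min, x_max, y_max)."""
    ax1, ay1, ax2, ay2 = box_a
    bx1, by1, bx2, by2 = box_b
    iw = max(0.0, min(ax2, bx2) - max(ax1, bx1))
    ih = max(0.0, min(ay2, by2) - max(ay1, by1))
    inter = iw * ih
    union = (ax2 - ax1) * (ay2 - ay1) + (bx2 - bx1) * (by2 - by1) - inter
    return inter / union if union > 0 else 0.0

def mean_iou(ground_truth_boxes, inferred_boxes):
    """Mean IoU over paired correct and inferred rectangles."""
    scores = [iou(g, p) for g, p in zip(ground_truth_boxes, inferred_boxes)]
    return sum(scores) / len(scores) if scores else 0.0

print(mean_iou([(0, 0, 10, 10)], [(5, 0, 15, 10)]))  # 0.333...
```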
FIG. 10 is an explanatory diagram schematically showing the matching judgment of embedding vectors in a learning model. In FIG. 10, the network into which image information (1) is input and the network into which image information (2) is input are a common network. Image information (1) is image information to which a correct label has been assigned using a bounding box or the like. Image information (2) is the data classified for training by the data classification unit 3221 from the image information stored in the learning DB 333; for example, image information (2) is a partial image obtained by extracting an image region in which one of the secondary components appears. A Siamese network, for example, may be used as the machine learning model serving as the common network.
By inputting the correct data, namely image information (1), into the network, the model creation unit 3222 calculates an embedding vector corresponding to image information (1). In PaDiM, for example, the embedding vector is calculated by concatenating the outputs of the first to third layers. This embedding vector is stored in the storage unit 33 by the model creation unit 3222.
Next, the model creation unit 3222 calculates an embedding vector corresponding to the training image information (2) and creates, as a preset model, a learning model for which the calculated embedding vector is judged to match the stored embedding vector.
By using embedding vectors in this way, which can be created before the final inference result is calculated, the time required to create a learning model can be reduced.
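The matching judgment on embedding vectors can be sketched as a similarity test between the two feature vectors produced by the shared network. The cosine-similarity criterion and the threshold below are assumptions for illustration; they are not necessarily the criterion used by PaDiM or by the Siamese network of the disclosure.

```python
import numpy as np

def embeddings_match(vec_a, vec_b, threshold=0.9):
    """Judge two embedding vectors as matching when their cosine similarity
    is at or above the threshold (an assumed criterion)."""
    a = np.asarray(vec_a, dtype=np.float64)
    b = np.asarray(vec_b, dtype=np.float64)
    cos = float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b)))
    return cos >= threshold

print(embeddings_match([0.2, 0.9, 0.4], [0.25, 0.85, 0.45]))  # True (nearly parallel vectors)
```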
FIG. 11 is a flowchart showing the process of creating a preset model.
The data classification unit 3221 extracts multiple pieces of image information in which secondary components appear from the image information acquired by the data acquisition unit 321 and sequentially stored in the learning DB 333, and divides the extracted image information into a training set and an evaluation set (step ST1A).
The model creation unit 3222 uses the training image information to create a learning model for inferring the secondary components shown in the image information (step ST2A).
The evaluation unit 3223 evaluates the learning models using the evaluation image information and stores a learning model that satisfies the evaluation condition as a preset model (step ST3A).
FIG. 12 is a flowchart showing the process of creating a custom model.
The data classification unit 3221 acquires multiple pieces of image information in which secondary components appear from the image information stored in the learning DB 333, classifies the acquired image information into a training set and an evaluation set, and outputs the training image information to the model creation unit 3222 as training data (step ST1B).
Next, the model creation unit 3222 acquires a pre-trained model from the pre-trained model DB 332 (step ST2B).
The model creation unit 3222 uses the pre-trained model and the training data to create a learning model for inferring the secondary components shown in the image information (step ST3B).
The evaluation unit 3223 evaluates the learning model using the evaluation image information from the data classification unit 3221 and transmits a learning model that satisfies the evaluation condition to the reinforcement inspection device 2 as a custom model via the communication line through the communication unit 31 (step ST4B).
If the inspector is not satisfied with the inference accuracy of the custom model, the inspector may instruct re-creation using the operation input unit 24, so that the reinforcement inspection system 1 repeatedly executes the creation of training data, the creation of the custom model shown in FIG. 12, and the performance evaluation of the custom model. The performance of the custom model may be evaluated using precision and recall, or using the MIoU.
As described above, the reinforcement inspection device 2 according to the first embodiment includes the acquisition unit 221 that acquires image information, the selection unit 223 that selects a designated learning model from among multiple learning models provided for each set of external features including the type, color, and shape of a secondary component and used to infer the secondary components shown in the image information, and the inference unit 224 that infers the secondary components from the image information using the selected learning model. This allows the reinforcement inspection device 2 to automatically detect the secondary components provided in a reinforcement structure in addition to the main reinforcement.
In the reinforcement inspection device 2 according to the first embodiment, the multiple learning models include one or more preset models created in advance of being designated and a custom model created after being designated. This allows the reinforcement inspection device 2 to infer secondary components both with a preset model of high inference accuracy prepared in advance and with a custom model suited to the conditions on site.
The reinforcement inspection device 2 according to the first embodiment includes the preprocessing unit 222, which preprocesses the image information into a form suitable for the inference processing performed by the inference unit 224. The inference unit 224 infers the secondary components from the preprocessed image information using the learning model. This allows the inference unit 224 to infer the secondary components smoothly.
In the reinforcement inspection device 2 according to the first embodiment, the preprocessing unit 222 normalizes the image information. The normalized image information can be used as input data for a learning model created by DL.
In the reinforcement inspection device 2 according to the first embodiment, the selection unit 223 outputs display control information for displaying an operation screen on which an operation for designating a learning model is performed, and selects the learning model designated by the operation received on that operation screen. This enables the selection unit 223 to accurately select a learning model for inferring the secondary components.
The reinforcement inspection device 2 according to the first embodiment includes the display unit 23, which displays the operation screen 23A based on the display control information. This allows the reinforcement inspection device 2 to display an operation screen on which an operation for designating a learning model is performed.
The reinforcement inspection device 2 according to the first embodiment includes the operation input unit 24, which accepts an operation for designating a learning model from among the multiple learning models on the operation screen 23A displayed on the display unit 23.
This allows a learning model to be designated by an operation using the operation input unit 24.
In the reinforcement inspection device 2 according to the first embodiment, the selection unit 223 automatically designates and selects a learning model from among the multiple learning models. This enables the selection unit 223 to accurately select a learning model for inferring the secondary components.
In the reinforcement inspection device 2 according to the first embodiment, the acquisition unit 221 acquires point cloud data that represents the reinforcement structure as a three-dimensional point cloud. The inference unit 224 determines whether a secondary component was inferred erroneously based on a comparison between the secondary component calculated using the image information and the secondary component calculated using the point cloud data. This allows the reinforcement inspection device 2 to infer the secondary components accurately.
In the reinforcement inspection device 2 according to the first embodiment, the preset models and custom models include learning models provided for each set of external features including the type, color, and shape of the main reinforcement, for inferring the main reinforcement shown in the image information. The inference unit 224 uses these learning models to infer the main reinforcement from the preprocessed image information. This allows the reinforcement inspection device 2 to automatically detect the main reinforcement in addition to the secondary components in a reinforcement structure.
In the reinforcement inspection device 2 according to the first embodiment, a secondary component is at least one of a shear reinforcement bar, a lap joint, a spacer block, a sheath tube, and a compression joint. The reinforcement inspection device 2 can thus detect various members as secondary components.
The learning device 3 according to the first embodiment includes the data acquisition unit 321 that acquires training data including image information and position information of objects in the image information, the learning unit 322 that uses the training data to create and store learning models for inferring the secondary components shown in the image information, and the search unit 323 that searches the learning models created for each set of external features including the type, color, and shape of a secondary component for the learning model designated by the reinforcement inspection device 2 and outputs the retrieved learning model to the reinforcement inspection device 2. This allows the learning device 3 to create a learning model for inferring the secondary components shown in image information for each set of external features including the type, color, and shape of the secondary component.
 In the learning device 3 according to the first embodiment, the plurality of learning models include one or more preset models created in advance, before being specified, and a custom model created after being specified. The data acquisition unit 321 acquires image information and stores it sequentially. The learning unit 322 includes a data classification unit 3221 that extracts, from the stored image information, multiple pieces of image information showing secondary components and divides the extracted image information into image information for learning and image information for evaluation, a model creation unit 3222 that uses the image information for learning to create a learning model for inferring the secondary components shown in the image information, and an evaluation unit 3223 that evaluates the learning model using the image information for evaluation and stores, as a preset model, a learning model that satisfies the evaluation conditions. This allows the learning device 3 to create learning models with high inference accuracy as preset models.
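 One possible realization of this classification and evaluation flow is sketched below, assuming an accuracy threshold as the evaluation condition; the 20 % evaluation split, the 0.90 threshold, and the preset_store list are illustrative assumptions.

```python
import random
from typing import Callable, List, Sequence, Tuple

preset_store: List = []  # hypothetical storage for models promoted to presets

def split_samples(samples: Sequence, eval_ratio: float = 0.2,
                  seed: int = 0) -> Tuple[List, List]:
    """Separate the stored samples into a learning set and an evaluation set."""
    shuffled = list(samples)
    random.Random(seed).shuffle(shuffled)
    n_eval = max(1, int(len(shuffled) * eval_ratio))
    return shuffled[n_eval:], shuffled[:n_eval]

def promote_if_good(model, eval_set: Sequence,
                    evaluate: Callable[[object, Sequence], float],
                    threshold: float = 0.90) -> bool:
    """Store the model as a preset only when it satisfies the evaluation condition."""
    accuracy = evaluate(model, eval_set)
    if accuracy >= threshold:
        preset_store.append(model)
        return True
    return False

train_set, eval_set = split_samples(list(range(100)))
ok = promote_if_good("dummy-model", eval_set, evaluate=lambda m, s: 0.93)
print(ok, len(train_set), len(eval_set))  # True 80 20
```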
 In the learning device 3 according to the first embodiment, the model creation unit 3222 calculates and stores an embedding vector corresponding to the correct-answer data, which is image information, calculates an embedding vector corresponding to the image information for learning, and creates a learning model for which the calculated embedding vector is determined to match the stored embedding vector. This allows the model creation unit 3222 to reduce the time required to create a learning model.
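 The embedding-based matching could, for example, be implemented along the following lines, using cosine similarity between a stored reference embedding and the embedding of a training image; the stand-in embed function and the 0.95 threshold are assumptions rather than the disclosed method.

```python
import numpy as np

def embed(image: np.ndarray) -> np.ndarray:
    """Stand-in feature extractor: in practice this would be the output of a
    trained backbone; here the pixels are simply flattened and normalised."""
    v = image.astype(np.float32).ravel()
    return v / (np.linalg.norm(v) + 1e-12)

def matches(query: np.ndarray, stored: np.ndarray, threshold: float = 0.95) -> bool:
    """Treat two embedding vectors as matching when their cosine similarity
    exceeds the threshold."""
    return float(np.dot(query, stored)) >= threshold

# Correct-answer image and a training image of the same component.
reference = np.full((8, 8), 200.0)
candidate = reference + np.random.default_rng(0).normal(0.0, 5.0, size=(8, 8))
stored_vec = embed(reference)            # computed once and stored
print(matches(embed(candidate), stored_vec))  # True
```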
 In the learning device 3 according to the first embodiment, the learning unit 322 creates the custom model specified in the reinforcement inspection device 2 using a pre-trained model that has been trained to infer objects shown in image information. This allows the learning device 3 to create a custom model with high inference accuracy even when only a small amount of learning data is available.
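 As one example of building a custom model on top of a pre-trained detector, the sketch below replaces the classification head of a torchvision Faster R-CNN with one sized to the secondary-component classes; the choice of Faster R-CNN and the class count are assumptions, since the disclosure does not name a specific architecture.

```python
import torch
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

def build_custom_model(num_classes: int) -> torch.nn.Module:
    """Start from a detector pre-trained on a generic dataset and replace its
    classification head so that it predicts the secondary-component classes."""
    model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
    in_features = model.roi_heads.box_predictor.cls_score.in_features
    model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes)
    return model

# Background plus, e.g., shear reinforcement, lap joint, and spacer block.
model = build_custom_model(num_classes=4)
model.eval()
with torch.no_grad():
    out = model([torch.rand(3, 480, 640)])  # one dummy RGB image
print(out[0].keys())  # dict_keys(['boxes', 'labels', 'scores'])
```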
 In the learning device 3 according to the first embodiment, acquisition of learning data by the data acquisition unit 321 and creation of the custom model by the learning unit 322 are repeated until the custom model satisfies the target inference accuracy. This allows the learning device 3 to create a custom model with high inference accuracy.
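 The retraining loop might look like the following sketch, in which data acquisition and custom-model creation repeat until a target accuracy is reached or a round limit is hit; the function arguments and the toy stand-ins are hypothetical.

```python
from typing import Callable, Sequence

def train_until_target(acquire: Callable[[], Sequence],
                       fit: Callable[[Sequence], object],
                       evaluate: Callable[[object], float],
                       target_accuracy: float = 0.90,
                       max_rounds: int = 10):
    """Repeat learning-data acquisition and custom-model creation until the
    model satisfies the target inference accuracy (or the round limit)."""
    model, samples = None, []
    for round_no in range(1, max_rounds + 1):
        samples = list(samples) + list(acquire())   # collect more learning data
        model = fit(samples)                        # re-create the custom model
        acc = evaluate(model)
        print(f"round {round_no}: accuracy={acc:.2f} with {len(samples)} samples")
        if acc >= target_accuracy:
            break
    return model

# Toy stand-ins: accuracy improves as more samples are gathered.
train_until_target(acquire=lambda: [0] * 20,
                   fit=lambda s: len(s),
                   evaluate=lambda m: min(0.5 + 0.01 * m, 0.99))
```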
 The reinforcement inspection system 1 according to the first embodiment includes a reinforcement inspection device 2 and a learning device 3. As a result, the reinforcement inspection system 1 can provide a reinforcement inspection device 2 that can automatically detect secondary components provided in addition to the main reinforcement in a reinforcement structure.
 The reinforcement inspection method according to the first embodiment includes a step in which the acquisition unit 221 acquires image information, a step in which the selection unit 223 selects a specified learning model from among a plurality of learning models that are provided for each appearance feature including the type, color, and shape of the secondary component and are used to infer the secondary components shown in the image information, and a step in which the inference unit 224 infers the secondary components from the image information using the selected learning model. By executing this method, the reinforcement inspection device 2 can automatically detect secondary components provided in addition to the main reinforcement in a reinforcement structure.
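 An end-to-end sketch of the three steps, acquisition, selection, and inference, is shown below with dummy stand-ins for the camera, the model registry, and the detector; none of these names come from the disclosure.

```python
import numpy as np

def acquire_image() -> np.ndarray:
    """Acquisition step: in practice the image comes from a camera on site;
    here a dummy frame is fabricated."""
    return np.random.default_rng(1).integers(0, 255, (480, 640, 3), dtype=np.uint8)

def select_model(feature_key, registry: dict):
    """Selection step: pick the learning model designated for the appearance
    features of the targeted secondary component."""
    return registry[feature_key]

def infer(model, image: np.ndarray):
    """Inference step: run the selected model on the image and return the
    detected secondary components (a dummy detector is used here)."""
    return model(image)

registry = {("shear_reinforcement", "blue", "hooked"):
            lambda img: [{"label": "shear_reinforcement", "box": (120, 80, 40, 200)}]}

image = acquire_image()
model = select_model(("shear_reinforcement", "blue", "hooked"), registry)
print(infer(model, image))
```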
 In addition, any of the components of the embodiments may be modified or omitted.
 The reinforcing bar inspection device disclosed herein can be used, for example, to inspect reinforcing bar structures before concrete is poured.
 1 Reinforcement inspection system, 2 Reinforcement inspection device, 3 Learning device, 11 Reinforcing bar, 12 Shear reinforcement bar, 13 Spacer block, 14 Lap joint, 15 Sheath tube, 16 Compression joint, 21 Communication unit, 22 Calculation unit, 23 Display unit, 23A Operation screen, 23A-1 to 23A-3, 23Ac Selection buttons, 23Ab Slide bar, 23B-1 to 23B-5 Bounding boxes, 24 Operation input unit, 25 Storage unit, 31 Communication unit, 32 Calculation unit, 33 Storage unit, 100 Communication interface, 101 Input/output interface, 102 Processor, 103 Memory, 221 Acquisition unit, 222 Preprocessing unit, 223 Selection unit, 224 Inference unit, 321 Data acquisition unit, 322 Learning unit, 323 Search unit, 3221 Data classification unit, 3222 Model creation unit, 3223 Evaluation unit.

Claims (18)

  1.  A reinforcement inspection device for inspecting a reinforcement structure in which a plurality of reinforcing bars are arranged as main reinforcements and which includes secondary components other than the main reinforcements, the reinforcement inspection device comprising:
     an acquisition unit that acquires image information;
     a selection unit that selects a designated learning model from among a plurality of learning models for inferring the secondary component shown in the image information, the learning models being provided for each appearance feature including the type, color, and shape of the secondary component; and
     an inference unit that infers the secondary component from the image information using the selected learning model.
  2.  The reinforcement inspection device according to claim 1, wherein the plurality of learning models include one or more first learning models created in advance before being designated and a second learning model created after being designated.
  3.  The reinforcement inspection device according to claim 1 or 2, further comprising a preprocessing unit that preprocesses the image information into a form suited to the inference processing performed by the inference unit, wherein the inference unit infers the secondary component from the preprocessed image information using the learning model.
  4.  The reinforcement inspection device according to claim 3, wherein the preprocessing unit normalizes the image information.
  5.  The reinforcement inspection device according to any one of claims 1 to 4, wherein the selection unit outputs display control information for displaying an operation screen on which an operation to designate the learning model is performed, and selects the learning model designated by an operation accepted via the operation screen.
  6.  The reinforcement inspection device according to claim 5, further comprising a display unit that displays the operation screen based on the display control information.
  7.  The reinforcement inspection device according to claim 6, further comprising an operation input unit that accepts an operation to designate the learning model from among the plurality of learning models on the operation screen displayed on the display unit.
  8.  The reinforcement inspection device according to any one of claims 1 to 4, wherein the selection unit automatically selects the learning model from among the plurality of learning models.
  9.  The reinforcement inspection device according to claim 1, wherein the acquisition unit further acquires point cloud data representing the reinforcement structure as a three-dimensional point cloud, and the inference unit determines whether the secondary component has been erroneously inferred based on a result of comparing the secondary component calculated using the image information with the secondary component calculated using the point cloud data.
  10.  The reinforcement inspection device according to claim 2, wherein the first learning models and the second learning model include a third learning model, provided for each appearance feature including the type, color, and shape of the main reinforcement, for inferring the main reinforcement shown in the image information, and the inference unit infers the main reinforcement from the preprocessed image information using the third learning model.
  11.  The reinforcement inspection device according to any one of claims 1 to 10, wherein the secondary component is at least one of a shear reinforcement bar, a lap joint, a spacer block, a sheath tube, and a compression joint.
  12.  A learning device that uses image information in which a reinforcement structure is photographed, the reinforcement structure having a plurality of reinforcing bars arranged as main reinforcements and including secondary components other than the main reinforcements, to create a learning model for inferring the secondary component shown in the image information, the learning device comprising:
     a data acquisition unit that acquires learning data including the image information and position information of an object in the image information;
     a learning unit that uses the learning data to create and store the learning model for inferring the secondary component shown in the image information; and
     a search unit that searches the learning models created for each appearance feature including the type, color, and shape of the secondary component for the learning model designated in a reinforcement inspection device, and outputs the learning model obtained by the search to the reinforcement inspection device.
  13.  The learning device according to claim 12, wherein the plurality of learning models include one or more first learning models created in advance before being designated and a second learning model created after being designated, the data acquisition unit acquires and sequentially stores the image information, and the learning unit comprises:
     a data classification unit that extracts, from the stored image information, a plurality of pieces of image information showing the secondary component and classifies the extracted image information into image information for learning and image information for evaluation;
     a model creation unit that creates, using the image information for learning, the learning model for inferring the secondary component shown in the image information; and
     an evaluation unit that evaluates the learning model using the image information for evaluation and stores, as the first learning model, a learning model that satisfies an evaluation condition.
  14.  The learning device according to claim 13, wherein the model creation unit calculates and stores an embedding vector corresponding to correct-answer data, which is the image information, calculates an embedding vector corresponding to the image information for learning using that image information, and creates the learning model for which the calculated embedding vector is determined to match the stored embedding vector.
  15.  The learning device according to claim 13 or 14, wherein the learning unit creates the second learning model designated in the reinforcement inspection device using a pre-trained model trained to infer an object shown in the image information.
  16.  The learning device according to claim 15, wherein acquisition of the learning data by the data acquisition unit and creation of the second learning model by the learning unit are repeatedly executed until the second learning model satisfies a target inference accuracy.
  17.  A reinforcement inspection system comprising:
     the reinforcement inspection device according to any one of claims 1 to 11; and
     the learning device according to any one of claims 12 to 16.
  18.  A reinforcement inspection method for a reinforcement inspection device that inspects a reinforcement structure in which a plurality of reinforcing bars are arranged as main reinforcements and which includes secondary components other than the main reinforcements, the method comprising:
     a step in which an acquisition unit acquires image information;
     a step in which a selection unit selects a designated learning model from among a plurality of learning models for inferring the secondary component shown in the image information, the learning models being provided for each appearance feature including the type, color, and shape of the secondary component; and
     a step in which an inference unit infers the secondary component from the image information using the selected learning model.
PCT/JP2022/039215 2022-10-21 2022-10-21 Reinforcement inspection device, learning device, reinforcement inspection system, and reinforcement inspection method WO2024084673A1 (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/039215 WO2024084673A1 (en) 2022-10-21 2022-10-21 Reinforcement inspection device, learning device, reinforcement inspection system, and reinforcement inspection method

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
PCT/JP2022/039215 WO2024084673A1 (en) 2022-10-21 2022-10-21 Reinforcement inspection device, learning device, reinforcement inspection system, and reinforcement inspection method

Publications (1)

Publication Number Publication Date
WO2024084673A1 true WO2024084673A1 (en) 2024-04-25

Family

ID=90737215

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/JP2022/039215 WO2024084673A1 (en) 2022-10-21 2022-10-21 Reinforcement inspection device, learning device, reinforcement inspection system, and reinforcement inspection method

Country Status (1)

Country Link
WO (1) WO2024084673A1 (en)


Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP2019114146A (en) * 2017-12-25 2019-07-11 株式会社竹中工務店 Rebar inspection support device
JP2020095009A (en) * 2018-07-02 2020-06-18 エスアールアイ インターナショナルSRI International Measurement inspection system for iron reinforcing bar by computer
JP2020027058A (en) * 2018-08-14 2020-02-20 前田建設工業株式会社 Bar arrangement make management system and bar arrangement make management method
WO2021024499A1 (en) * 2019-08-08 2021-02-11 鹿島建設株式会社 Reinforcing bar determination device and reinforcing bar determination method
JP2022025818A (en) * 2020-07-30 2022-02-10 戸田建設株式会社 Three-dimensional bar arrangement data creation method and three-dimensional bar arrangement data creation system for bar arrangement measurement
JP2022030356A (en) * 2020-08-07 2022-02-18 Jfeエンジニアリング株式会社 Information processor, information processing method, and program
JP2022164949A (en) * 2021-04-19 2022-10-31 三菱電機エンジニアリング株式会社 Bar arrangement inspection device, bar arrangement inspection method and program

Similar Documents

Publication Publication Date Title
US11657567B2 (en) Method for the automatic material classification and texture simulation for 3D models
CN110069972B (en) Automatic detection of real world objects
CN111459166B (en) Scene map construction method containing trapped person position information in post-disaster rescue environment
Bosche et al. Automated retrieval of 3D CAD model objects in construction range images
Kardovskyi et al. Artificial intelligence quality inspection of steel bars installation by integrating mask R-CNN and stereo vision
Han et al. A formalism for utilization of autonomous vision-based systems and integrated project models for construction progress monitoring
Zhou et al. Image-based onsite object recognition for automatic crane lifting tasks
US10861247B2 (en) Roof report generation
Rankohi et al. Image-based modeling approaches for projects status comparison
Hu et al. Pipe pose estimation based on machine vision
KR20230133831A (en) Device, method and program that automatically designs equipment lines within BIM design data
CN114862745A (en) Weld defect identification method, and training method and device of weld defect identification model
Hartl et al. Automated visual inspection of friction stir welds: a deep learning approach
WO2020227343A1 (en) Systems and methods for detection of anomalies in civil infrastructure using context aware semantic computer vision techniques
EP3825804A1 (en) Map construction method, apparatus, storage medium and electronic device
Higgins et al. Imaging tools for evaluation of gusset plate connections in steel truss bridges
KR102366840B1 (en) methods for detecting construction objects based on artificial intelligence and cloud platform system for providing construction supervision service and system thereof
WO2024084673A1 (en) Reinforcement inspection device, learning device, reinforcement inspection system, and reinforcement inspection method
Hoskere Developing autonomy in structural inspections through computer vision and graphics
JP2017199259A (en) Material recognition device and material recognition method
KR102603276B1 (en) Device, method and program for assisting supevision based on XR
Neeli Use of photogrammetry aided damage detection for residual strength estimation of corrosion damaged prestressed concrete bridge girders
AU2018204115B2 (en) A method for automatic material classification and texture simulation for 3d models
Li et al. Efficient assessment of window views in high-rise, high-density urban areas using 3D color City Information Models
Ercan et al. Deep learning for accurate corner detection in computer vision-based inspection