GB2620478A - A method of detecting defects in a structure and a method of estimating repair material quantity requirements - Google Patents

A method of detecting defects in a structure and a method of estimating repair material quantity requirements

Info

Publication number
GB2620478A
GB2620478A GB2306478.5A GB202306478A GB2620478A GB 2620478 A GB2620478 A GB 2620478A GB 202306478 A GB202306478 A GB 202306478A GB 2620478 A GB2620478 A GB 2620478A
Authority
GB
United Kingdom
Prior art keywords
features
image
interest
processing technique
identified
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB2306478.5A
Other versions
GB202306478D0 (en)
Inventor
Alexander Coombs Rhys
Robert Cramman John
Cramman Mark
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Cc Informatics Ltd
Original Assignee
Cc Informatics Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Cc Informatics Ltd filed Critical Cc Informatics Ltd
Publication of GB202306478D0
Publication of GB2620478A

Classifications

    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T7/00Image analysis
    • G06T7/0002Inspection of images, e.g. flaw detection
    • G06T7/0004Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06QINFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q10/00Administration; Management
    • G06Q10/20Administration of product repair or maintenance
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/10Image acquisition modality
    • G06T2207/10032Satellite or aerial image; Remote sensing
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20081Training; Learning
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20084Artificial neural networks [ANN]
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/20Special algorithmic details
    • G06T2207/20092Interactive image processing based on input by user
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30108Industrial image inspection
    • G06T2207/30132Masonry; Concrete
    • GPHYSICS
    • G06COMPUTING; CALCULATING OR COUNTING
    • G06TIMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T2207/00Indexing scheme for image analysis or image enhancement
    • G06T2207/30Subject of image; Context of image processing
    • G06T2207/30181Earth observation
    • G06T2207/30184Infrastructure

Landscapes

  • Engineering & Computer Science (AREA)
  • Business, Economics & Management (AREA)
  • Human Resources & Organizations (AREA)
  • Quality & Reliability (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Theoretical Computer Science (AREA)
  • Economics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Entrepreneurship & Innovation (AREA)
  • Marketing (AREA)
  • Operations Research (AREA)
  • Strategic Management (AREA)
  • Tourism & Hospitality (AREA)
  • General Business, Economics & Management (AREA)
  • Image Analysis (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)

Abstract

A method of detecting defects in a structure 10 involves receiving at least one image of the structure. A first processing technique, such as a generative adversarial network, is used to process the image to identify features of interest of the structure 20. A second processing technique, such as a forked convolutional neural network, is used to process portions of the image identified as features of interest by separating the portions into sub-portions and applying a grade to each sub-portion representing the quality of the associated feature of interest of the structure 32. An output based on the grades is produced, preferably by representing the grades of the features using different colours overlaid on the image 33. The grading can be used to estimate a quantity of material required to repair the structure. In one embodiment, the features are not graded and the output is based on the features instead.

Description

A Method of Detecting Defects in a Structure and a Method of Estimating Repair Material Quantity Requirements

The present invention relates to a method of detecting defects in a structure and to a method of estimating a quantity of material required to repair a structure. The invention relates particularly, but not exclusively, to defect detection and repair material estimating in large structures such as reservoir dams, viaducts, tunnels and retaining walls.
It is commonplace for large structures such as reservoir dams, large buildings, bridges and wind turbines to periodically require maintenance and repair due to deterioration of the materials used in the construction of those structures. For example, in a reservoir dam faced with stone or concrete blocks, the mortar used between the blocks deteriorates over time and requires maintenance and repair. Before undertaking such repairs it is important to understand the magnitude of the job and to know what proportion of the structure is in need of repair. The repair work itself is undertaken, on structures such as reservoir dams, by having the repair workers abseil down the structure, identifying areas where mortar requires replacement and undertaking the repair. Typically, no survey is undertaken prior to repairs in order to estimate the size of the job involved in completing the repair, and therefore price estimates can be quite inaccurate. This inaccuracy is both in the quantity of material required to undertake the repairs and in the time or number of people required to complete the repair task. Furthermore, it is only possible to see the condition of the structure close to the abseil line, meaning that areas of the structure can easily be missed and not repaired. The same approach is adopted for concrete structures where there is cracking or spalling. In particular, specialists use rope access to abseil down structures looking for defects to repair and repairing them on an ad hoc basis during the survey or, perhaps for larger repairs, noting the location of a damaged section and initiating a repair later.
Preferred embodiments of the present invention seek to overcome or alleviate the above described disadvantages of the prior art.
According to an aspect of the present invention there is provided a method of detecting defects in a structure, comprising: receiving at least one image of a structure; processing, using a first processing technique, said at least one image to identify features of interest of said structure; processing, using a second processing technique, portions of said at least one image identified as features of interest, by separating said portions into sub-portions and applying a grade to each sub-portion representing a quality of the structure of the feature of interest at that sub-portion as represented in the image; and producing an output based on said grades of said sub-portions of said features.
The above set out method provides the advantage that the level of repair required on a structure can be easily determined, which in turn allows an estimate of the materials and manpower required in order to complete a repair to the structure. In particular, identifying the features which are likely to need repair, for example mortar in a block-based structure, allows the processing to be undertaken more quickly and more accurately than if the whole structure is being reviewed. Alternatively, features of the structure can be identified and then separated for analysis using different criteria to identify the level of degradation which has occurred. Furthermore, by surveying a structure before undertaking repairs, portions of a structure which do not need to be repaired are identified and do not need to be accessed, reducing the risk of injury in dangerous environments.
The method may further comprise producing a visual representation of said identified features overlaid onto at least a portion of said at least one image; and inspecting said visual representation to determine whether the features of interest of the structure had been correctly identified.
The method may also further comprise, in the event of determining that the features of interest of the structure had not been correctly identified, undertaking a manual inspection of at least a portion of said at least one image to identify features of interest of said structure; and processing, using a third processing technique, said portion of said at least one image using said manually identified features of interest to train said first processing technique to better identify said features of interest of said structure.
Having a manual inspection or quality check of the output of the first processing technique is important, although not essential, in determining that the features of interest have been correctly identified. This step is not essential since a post repair review of the structure may be undertaken and this training technique would not be required. Similarly, if the same structure, or a similar structure, is being analysed at a later date the step of checking and training would, in all likelihood, be unnecessary.
In a preferred embodiment, producing the output comprises representing the position of features and identified grades of said features using different colours.
In another preferred embodiment the different coloured output is overlaid onto said at least one image.
By overlaying a coloured representation of the features of interest onto the original image it is easy for an operator to quickly scan an image to check that the features of interest have been correctly identified. It is particularly preferable if the two images can be separated so that, if a feature appears to have been incorrectly identified at first sight, this can be checked by removing the coloured overlay. As a result, the quality checking process is undertaken at two levels. Firstly, an initial review quickly identifies that the features appear to be correct. For example, if the structure is a block or brickwork structure and it is mortar which is being identified, it is easy to quickly identify that the features lie approximately between the overlapping bricks. Then a more detailed review can be undertaken for any features which appear to be slightly outside of the expected shape.
The method may further comprise generating said at least one image by combining a plurality of images.
The method may also further comprise generating said at least one image as a 3 dimensional point cloud from said plurality of images.
In a preferred embodiment said at least one image is orthorectified and at least one polygon is drawn around the feature of interest and displayed on said image.
The method ideally further comprises gathering said plurality of images using a camera mounted on a flying device.
By combining multiple images together, in particular as a 3D point cloud with images gathered from a drone, the advantage is provided that a quick and simple drone-based survey is able to produce an accurate representation of the whole structure in three dimensions. The resolution of the multiple images into a 3D point cloud allows greater accuracy in the analysis of the quality of the features of interest by generating from multiple sources. This also improves the training that can be provided to the processes as operators are able to manipulate the 3D point cloud image in order to make the best determination of the structure of the feature of interest.
In a preferred embodiment at least one of the processing techniques comprises a neural network.
In a preferred embodiment the first processing technique comprises a generative adversarial neural network.
In another preferred embodiment the second processing technique comprises a convolutional neural network.
In a further preferred embodiment the third processing technique comprises a discriminative neural network.
According to another aspect of the present invention there is provided a method of estimating a quantity of material required to repair a structure, comprising using a method as set out above to identify defects in said structure, wherein said step of producing an output comprises: estimating a surface area represented by at least one sub-portion; and calculating a volume of material required to repair said sub-portion of said structure by the volume of material required to repair a unit of surface area of the grade allocated to said sub-portion.
According to a further aspect of the present invention there is provided a method of repairing a structure, comprising: estimating a quantity of at least one material required to repair a structure as set out above; ordering the material or materials estimated; and using the material or materials ordered to repair the structure. According to a further aspect of the present invention there is provided a method of detecting defects in a structure, comprising: receiving at least one image of a structure; processing, using a first processing technique, said at least one image to identify features of interest of said structure; and producing an output based on said features of interest. The method may further comprise: processing, using a second processing technique, portions of said at least one image identified as features of interest, by separating said portions into sub-portions and applying a grade to each sub-portion representing a quality of the structure of the feature of interest at that sub-portion as represented in the image; and producing an output based on said grades of said sub-portions of said features.
The method may also further comprise producing a visual representation of said identified features overlaid onto at least a portion of said at least one image; and inspecting said visual representation to determine whether the features of interest of the structure had been correctly identified.
In a preferred embodiment, in the event of determining that the features of interest of the structure had not been correctly identified, the method further comprises undertaking a manual inspection of at least a portion of said at least one image to identify features of interest of said structure; and processing, using a third processing technique, said portion of said at least one image using said manually identified features of interest to train said first processing technique to better identify said features of interest of said structure.
In another preferred embodiment producing the output comprises representing the position of features and identified grades of said features using different colours.
In a further preferred embodiment, the different coloured output is overlaid onto said at least one image.
The method may further comprise generating said at least one image by combining a plurality of images as a 3 dimensional point cloud from said plurality of images.
In a preferred embodiment at least one said processing technique comprises a neural network.
In another preferred embodiment the first processing technique comprises a generative adversarial neural network.
In a further preferred embodiment, the second processing technique comprises a convolutional neural network.
In a still further preferred embodiment, the third processing technique comprises a discriminative neural network.
Preferred embodiments of the present invention will now be described, by way of example only, and not in any limitative sense, with reference to the accompanying drawings in which:
Figure 1 is a schematic representation of an image gathering step of the present invention;
Figure 2 is a schematic representation summarising the process of the present invention;
Figures 3, 4 and 5 are example images used in the steps of the present invention; and
Figure 6 is a flowchart showing a summary of the process of the present invention.
The method of the present invention is used for detecting defects in a structure and then, having identified and graded the defects, using this data to estimate the material and work required to undertake repairs. Examples of structures for which the method may be suitable include, but are not limited to, block work structures such as reservoir dams and buildings, metal structures such as bridges and wind turbines including painted surfaces, concrete and other reinforced materials such as fibre reinforced plastics, timber, roof tiles, paving and engineered embankments. The method can be summarised as including stages relating to the gathering of images, identifying features of interest, grading the features of interest in the images and producing an output based on the grades.
Looking firstly at the gathering of images, and referring to figure 1, a structure, identified with reference 10, is to be assessed to detect defects. In the illustration of figure 1 the structure is a reservoir dam formed from blocks joined with mortar therebetween. Figure 1 illustrates that images can be obtained using static cameras 12 which are held by operators either at ground level or from the top of the structure. Images gathered in this way are at an oblique angle and for very large structures will not contain sufficient detail to enable good analysis to be undertaken. Preferably the images are gathered using a drone 14 which is able to fly a known path over the structure gathering a multiplicity of images. Other mobile image gathering devices may also be used including, but not limited to, remotely controlled devices such as unmanned aerial vehicles (UAV), rovers, submersible ROVs, handheld camera rigs, cable suspended vehicles and the like. Once the multiple images are gathered, they are combined and orthorectified using photogrammetry software to produce a 3D point cloud image which is used for the analysis. This data gathering stage is illustrated schematically in figures 2 and 6 at reference numeral 16 and the combining of images is illustrated in figure 6 at reference numeral 18.
The next stage in the process is the identification of features of the structure which are of interest, also known as masking, and illustrated in figure 2 with reference numeral 20. This step of masking is undertaken using a first processing technique which is, for example, utilising the generator network from a pretrained generative adversarial network (GAN). Details of the training process are set out below. The generator network takes a three channel image portion of a pre-set size (for example 256 x 256 pixels) with bands corresponding to red, green and blue. The generator network attempts to deduce a masking output image 22 based on the input image 23.
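By way of illustration only, and not as part of the disclosed method, the following Python sketch shows one possible tile-wise masking pass of this kind: a pretrained generator is run over 256 x 256 RGB portions of the orthorectified image and each pixel is assigned a class. The PyTorch framing, the class layout (0 = other, 1 = feature of interest, 2 = vegetation) and the assumption that the generator outputs per-class scores are all assumptions of this sketch.

```python
# Illustrative sketch only: tile-wise masking with a pretrained generator network.
import numpy as np
import torch

TILE = 256  # pre-set portion size, three bands (red, green, blue)

def mask_image(image_rgb: np.ndarray, generator: torch.nn.Module) -> np.ndarray:
    """Return a per-pixel class map (0 = other, 1 = feature of interest, 2 = vegetation).

    image_rgb is an H x W x 3 uint8 array; the generator is assumed to map a
    1 x 3 x 256 x 256 tensor to a 1 x n_classes x 256 x 256 score tensor.
    """
    h, w, _ = image_rgb.shape
    mask = np.zeros((h, w), dtype=np.uint8)
    generator.eval()
    with torch.no_grad():
        # Border remainders smaller than a full tile are skipped for brevity.
        for y in range(0, h - TILE + 1, TILE):
            for x in range(0, w - TILE + 1, TILE):
                tile = image_rgb[y:y + TILE, x:x + TILE].astype(np.float32) / 255.0
                inp = torch.from_numpy(tile).permute(2, 0, 1).unsqueeze(0)  # 1 x 3 x 256 x 256
                scores = generator(inp)                                     # 1 x n_classes x 256 x 256
                mask[y:y + TILE, x:x + TILE] = scores.argmax(1).squeeze(0).cpu().numpy()
    return mask
```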
Ideally this identification process is undertaken on a portion of the image data in order to test the fitness of the generator neural network. This fitness or quality check is undertaken by an operator who compares the output of the masking function to the original image to determine whether the correct features have been identified.
The data output from the masking step 20 is illustrated in figure 4 where it has been overlaid on the original image (figure 3). Figures 3, 4 and 5 are example data using a brick wall of a building. The output illustration in figure 4 utilises two colours, with the features of interest (the mortar bead between the bricks) highlighted using the colour red and indicated on figure 4 with reference numeral 24, and vegetation (which also indicates potential damage to the structure) highlighted using the colour green and indicated with reference numeral 26. The operator is therefore able to undertake a two-stage check of the output of the masking function 20. The first stage is to look at the general shape of the masking data to determine whether it looks substantially correct. In the example shown in figure 4 the image, in figure 3, is of a brickwork wall and it is simple to check that the masking data appears to show lines of bricks by highlighting the mortar bead lines between the bricks. This is particularly easy when the features of interest are highlighted in one colour. The second step is to look in more detail at some areas of the mask data to check that they are correct. This can be done, for example, by looking at a portion of the image and perhaps a portion of the image in which the regular pattern of bricks is interrupted. An example of this is highlighted with reference numeral 28 in figure 4 and in figure 3. This highlighted portion includes a brick which appears to be differently sized to those immediately adjacent to it. This row of bricks shows the normal pattern of bricks laid end to end. However, part way along this row there is a brick which appears to be much shorter. This could be an incorrect assessment in the masking process. However, on closer inspection of figure 3 it is clear that this is a half brick.
This operator initiated checking of the masking function is indicated on figure 6 with reference 30.
If the operator is satisfied that the masking algorithm has correctly identified and masked the features of interest then the process can move on. The next step then depends on whether gradation of the identified features is required (indicated at step 31). If gradation is required the process moves on to the step of condition scoring or grading (indicated at 32) using a second processing technique. However, if the operator determines that the features of interest have not been correctly identified then a training step (indicated at 35) is undertaken in order to improve the identification of those features.
This step of training is also necessary if this method is being applied to a new type of structure for the first time. To undertake the training step the operator takes a portion of an image and manually identifies the features of interest, identifying them as either vegetation or the features to be inspected. Once the operator is satisfied that they have correctly identified the features of interest in the image, this data represents the training data which will be used by a third processing technique to train the generator neural network to correctly identify those features of interest. The third processing technique is a discriminator neural network which takes the training image and measures the "fitness" of the generator network algorithm to determine how often it correctly identifies an output. Once the discriminator network is satisfied that the generator network is able to take the original input image and produce an output identifying vegetation and features of interest that is sufficiently close to the operator generated training data, then the generator neural network is rerun using all of the input data (repeating step 20 in figure 6). A further manual check (repeating step 30) can be applied to ensure that the masking data has correctly identified the features of interest and vegetation and, if not, the training rerun or additional training produced. This combination of the first and third processing techniques, the use of a generator neural network and a discriminator neural network, can together be described as the use of a generative adversarial network.
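As a purely illustrative sketch of such an adversarial training step (not the patented implementation), the loop below updates a conditional discriminator against operator-labelled masks and then updates the generator against the discriminator's judgement. The network interfaces (a discriminator taking image and mask together), the optimisers and the loss combination are assumptions of this sketch.

```python
# Sketch of one conditional-GAN training step: the discriminator measures the
# "fitness" of the generator's masks against operator-produced training masks.
import torch
import torch.nn.functional as F

def train_step(generator, discriminator, g_opt, d_opt, image, operator_mask):
    """image: 1 x 3 x H x W tensor; operator_mask: manually labelled target mask (float)."""
    # 1. Discriminator learns to tell operator masks from generated masks.
    with torch.no_grad():
        fake_mask = generator(image)
    d_real = discriminator(image, operator_mask)
    d_fake = discriminator(image, fake_mask)
    d_loss = (F.binary_cross_entropy_with_logits(d_real, torch.ones_like(d_real)) +
              F.binary_cross_entropy_with_logits(d_fake, torch.zeros_like(d_fake)))
    d_opt.zero_grad()
    d_loss.backward()
    d_opt.step()

    # 2. Generator learns to produce masks the discriminator accepts and that
    #    stay close to the operator's labelling.
    fake_mask = generator(image)
    g_adv = discriminator(image, fake_mask)
    g_loss = (F.binary_cross_entropy_with_logits(g_adv, torch.ones_like(g_adv)) +
              F.l1_loss(fake_mask, operator_mask))
    g_opt.zero_grad()
    g_loss.backward()
    g_opt.step()
    return d_loss.item(), g_loss.item()
```

Training would stop once the generated masks are judged sufficiently close to the operator-generated training data, after which the generator is rerun on all of the input data as described above.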
With the masking process complete the method moves on to the grading or condition scoring step. This utilises a second processing technique 32 which is ideally a forked convolutional neural network. This process takes the original image data and applies a condition scoring or grade to portions of the image with these grades representing a quality of the structure of the feature of interest for that portion as represented in the image.
As set out above, the output of the masking step identifies features of interest (visually represented in red for the operator), vegetation (represented in green) and other. The data points identified as other are not processed for condition scoring since they are not of interest. The inputs to the condition scoring algorithm are fixed resolution (for example 100 x 100 pixel) three-channel images of the base photographic data and matching fixed resolution three-channel images of the masking data. The output from the condition scoring algorithm is a matching fixed resolution image of the condition scoring in multiple colour bands representing the condition, for example as follows.
Red (36): Total loss or bad condition
Blue (38): Partial loss or unknown condition
Green (40): Good condition
By using two input images the convolutional neural network uses a forked topology, such that the mask and the base image are studied and passed through the convolutional neural network before being combined for outputting, and this allows the masking data to be used to efficiently screen areas which do not correspond to target features. The output image 33, shown in figure 5, includes the features shown in the colours red 36, blue 38 and green 40, which are easily discernible in a colour image (the colours red and blue are difficult to distinguish in the black and white images used herein). It should be noted that the items identified as vegetation 26 in figure 4 are also not processed as part of the grading of the structure. As a result, the output in figure 5 shows the vegetation as it is seen in the original image data (figure 3). However, the condition scoring process described above can be used for identifying species of vegetation. That is, specific colours or shades of colours or patterns of shades of colours resulting from leaf shapes can be identified by the condition scoring.
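For illustration only, the following sketch shows one way a forked (two-branch) convolutional network of the kind described might be laid out: the photographic tile and the matching mask tile pass through separate convolutional branches and are combined before a per-pixel, three-class condition score is produced. The layer sizes and channel counts are arbitrary assumptions of this sketch.

```python
# Minimal sketch of a forked convolutional network for condition scoring.
import torch
import torch.nn as nn

class ForkedConditionScorer(nn.Module):
    def __init__(self, n_grades: int = 3):  # red / blue / green bands
        super().__init__()
        def branch():
            return nn.Sequential(
                nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            )
        self.image_branch = branch()   # base photographic data
        self.mask_branch = branch()    # masking data screens non-target areas
        self.head = nn.Sequential(
            nn.Conv2d(64, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, n_grades, 1),      # per-pixel grade logits
        )

    def forward(self, image_tile, mask_tile):
        fused = torch.cat([self.image_branch(image_tile),
                           self.mask_branch(mask_tile)], dim=1)
        return self.head(fused)  # N x n_grades x H x W

# Example: a 100 x 100 pixel tile pair produces a 100 x 100 grade map.
scores = ForkedConditionScorer()(torch.rand(1, 3, 100, 100), torch.rand(1, 3, 100, 100))
```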
The output of the method of the present invention can be in different forms including an image which simply represents the features of interest identified using the colours described above (identified with reference numeral 42). However, it can be more useful for communicating to non-specialists the nature of the degradation of the structure to use a red-amber-green colour coding and overlaying that image onto the original image of the structure. In the above example this simply means displaying the portions of the features of interest that are identified by the "blue" category with the colour orange.
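As a small illustrative sketch (not the method itself), such a red-amber-green overlay could be produced by alpha-blending grade colours over the original photograph; the colour values, the blending weight and the grade encoding (0 = good, 1 = partial/unknown shown as amber, 2 = total loss/bad) are assumptions here.

```python
# Sketch of a red-amber-green overlay of graded pixels onto the original image.
import numpy as np

RAG = {2: (255, 0, 0), 1: (255, 165, 0), 0: (0, 200, 0)}  # bad, partial -> amber, good

def overlay_grades(image_rgb: np.ndarray, grade_map: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """Blend grade colours over an H x W x 3 image using an H x W grade map."""
    out = image_rgb.astype(np.float32).copy()
    for grade, colour in RAG.items():
        sel = grade_map == grade
        out[sel] = (1 - alpha) * out[sel] + alpha * np.array(colour, dtype=np.float32)
    return out.astype(np.uint8)
```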
An alternative or additional output is to produce an estimate of the quantity of materials and/or work which is required in order to repair the structure in question (identified with reference numeral 44). In the features of interest that have been graded, each pixel which has been provided with a grading colour represents a known or calculable surface area on the structure. With the three bands of grading, it is possible to estimate how much work and how much material will be required in order to return either the total loss or bad condition (red) portions to a good condition and the same for the partial loss or unknown condition (blue) portions. It is also possible to factor in the work required in order to clear vegetation and the likely material which will be required to repair the portions of the structure which have been infested and obscured by vegetation. As a result, an estimate of the materials and/or work required in order to repair the structure can be provided based on the masking and grading steps.
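A worked sketch of this estimate, for illustration only: each graded pixel represents a known surface area (from the ground-sample distance of the survey), and per-grade repair rates convert that area into a material volume. The ground-sample distance and the litres-per-square-metre rates below are illustrative assumptions, not values from the disclosure.

```python
# Worked sketch: graded pixels -> surface area -> repair material volume.
import numpy as np

def estimate_material(grade_map: np.ndarray, metres_per_pixel: float) -> dict:
    """grade_map holds 0 = good, 1 = partial loss/unknown, 2 = total loss/bad."""
    pixel_area_m2 = metres_per_pixel ** 2
    litres_per_m2 = {1: 2.0, 2: 5.0}      # assumed repair rates per grade
    estimate = {}
    for grade, rate in litres_per_m2.items():
        area_m2 = np.count_nonzero(grade_map == grade) * pixel_area_m2
        estimate[grade] = {"area_m2": area_m2, "volume_litres": area_m2 * rate}
    return estimate

# e.g. a survey with a 2 cm ground-sample distance:
print(estimate_material(np.random.randint(0, 3, (1000, 1000)), metres_per_pixel=0.02))
```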
Returning to step 31, if no gradation of the identified features was required then steps 32 and 42 are omitted and the identified features are then measured to determine the surface area, and the estimate of the quantity of repair material required is made based on that surface area.
Once the steps of undertaking the survey, processing the images, producing an output and providing an estimate of the work and materials required have been completed, a repair can be undertaken to the structure without risk of significantly over ordering materials or over or under estimating the work required.
The above embodiment has been described with reference to masonry deterioration. However, substantially the same process can be used to identify other types of deterioration on other structures and surfaces. These applications rely on visual cues which can be picked up in photographic data collected from aerial surveys, manually collected photographic data or images collected from submersible ROVs. The following is a non-exhaustive list of examples.
* Assessing the condition of metalwork (including anodic protection systems), where red/amber/green represents missing or rusted through, surface corrosion or bubbled paint, and good condition respectively.
* Assessing the condition of concrete or other reinforced materials such as fibre reinforced plastics, where red/amber/green represents cracking and spalling, microfractures and surface loss, and good condition respectively.
* Assessing the condition of blockwork, where red/amber/green represents missing blocks, cracked blocks and good condition respectively.
* Assessing the condition of timber, where red/amber/green represents missing or rotting sections, surface issues related to rotting or pests, and good condition respectively.
* Assessing the condition of roof tiles, where red/amber/green represents missing, damaged and present respectively.
* Assessing the condition of paving, where red/amber/green represents potholes, surface losses and good condition respectively.
* Assessing the condition of engineered embankments, where red/amber/green represents sinkholes and depressions, animal burrowing or poor grass cover, and good grass cover respectively.
* Assessing rendered or painted surfaces, where red/amber/green represents loss of surface protection, signs of distress or cracks and good condition respectively.
For each of these applications, estimated cost rates can be applied for each of the determined conditions and form the basis of a bill of quantities for use with repair contractors at the pricing stage of a maintenance project.
It will be appreciated by persons skilled in the art that the above embodiments have been described by way of example only and not in any limitative sense, and that various alterations and modifications are possible without departure from the scope of the protection which is defined by the appended claims. For example, although it is described above that the images are preferably combined together it is, in principle, possible to undertake some or all of the steps described above using multiple separate images. Furthermore, for smaller structures or where a very high quality single image can be provided, a single image may be sufficient in order to gather useful data and/or provide estimates of the work or materials required in order to undertake a repair of the structure in question. The above described method utilises three different types of neural network. However, it is not necessarily the case that the processing techniques used must be neural networks and similar or the same processing techniques could be used at the different stages. For example, a support vector machine technique can be used as well as other non-neural network based statistical and machine learning techniques.
In the embodiments set out above one of the main outputs is a visual representation of the structure being surveyed. However, it should be noted that visual representations are not an essential output and a database identifying locations requiring repair and surface areas in need of repair is also an acceptable way to report the data identified by the present invention. That is, the defects are identified by location and extent (area of defect) only and, for example, recorded using x, y, z coordinates and the surface area of defect to the structure.
The method set out above uses the number of pixels as the method for determining surface area. However, it can be more effective to use a polygon boundary obtained using image segmentation as the mechanism for making that calculation.
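As an illustrative sketch of that alternative (under the assumption that the segmentation boundary has been converted to ordered vertices in ground coordinates), the area enclosed by the polygon can be computed with the shoelace formula rather than by counting pixels.

```python
# Sketch of the polygon-based area calculation using the shoelace formula.
def polygon_area_m2(vertices):
    """vertices: list of (x, y) ground coordinates in metres, in boundary order."""
    area = 0.0
    n = len(vertices)
    for i in range(n):
        x1, y1 = vertices[i]
        x2, y2 = vertices[(i + 1) % n]
        area += x1 * y2 - x2 * y1
    return abs(area) / 2.0

# e.g. a roughly 0.5 m x 0.2 m patch of missing mortar:
print(polygon_area_m2([(0.0, 0.0), (0.5, 0.0), (0.5, 0.2), (0.0, 0.2)]))  # 0.1 m^2
```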
The above embodiments have described using both the masking and condition scoring processes in the defect detection and evaluation technique. However, useful information is gained by only undertaking the masking process, in particular for structures which have a continuous single material (or uniform mixture) composition at the external surface such as concrete, painted and unpainted metals and the like. For such structures it is sufficient to use the masking process to identify defective areas in the structure and estimate a surface area (including using polygon boundaries as set out above) allowing an estimate of material volume required in order to make a repair. In this example the masking process is used for identifying cracks, spalling and the presence of vegetation as opposed to identifying construction blocks, mortar lines and distinguishing vegetation species where the condition scoring process is required in addition to the masking process. The use of the masking only process can also be useful for identifying damage to blocks within a larger structure.

Claims (25)

1. A method of detecting defects in a structure, comprising: receiving at least one image of a structure; processing, using a first processing technique, said at least one image to identify features of interest of said structure; processing, using a second processing technique, portions of said at least one image identified as features of interest, by separating said portions into sub-portions and applying a grade to each sub-portion representing a quality of the structure of the feature of interest at that sub-portion as represented in the image; and producing an output based on said grades of said sub-portions of said features.
2. A method according to claim 1, further comprising producing a visual representation of said identified features overlaid onto at least a portion of said at least one image; and inspecting said visual representation to determine whether the features of interest of the structure had been correctly identified.
3. A method according to claim 2, further comprising, in the event of determining that the features of interest of the structure had not been correctly identified, undertaking a manual inspection of at least a portion of said at least one image to identify features of interest of said structure; and processing, using a third processing technique, said portion of said at least one image using said manually identified features of interest to train said first processing technique to better identify said features of interest of said structure.
4. A method according to any preceding claim, wherein producing said output comprises representing the identified grades of said features using different colours.
5. A method according to claim 4, wherein said different coloured output is overlaid onto said at least one image.
6. A method according to any preceding claim, further comprising generating said at least one image by combining a plurality of images.
7. A method according to claim 6, further comprising generating said at least one image as a 3 dimensional point cloud from said plurality of images.
8. A method according to claim 6 or 7, further comprising gathering said plurality of images using a camera mounted on a flying device.
9. A method according to any preceding claim, wherein at least one said processing technique comprises a neural network.
10. A method according to any preceding claim, wherein said first processing technique comprises a generative adversarial neural network.
11. A method according to any preceding claim, wherein said second processing technique comprises a convolutional neural network.
12. A method according to any preceding claim, wherein said third processing technique comprises a discriminative neural network.
13. A method of estimating a quantity of material required to repair a structure, comprising using a method according to any of the preceding claims to identify defects in said structure, wherein said step of producing an output comprises: estimating a surface area represented by at least one sub-portion; and calculating a volume of material required to repair said sub-portion of said structure by the volume of material required to repair a unit of surface area of the grade allocated to said sub-portion.
14. A method of repairing a structure, comprising: estimating a quantity of at least one material required to repair a structure according to claim 13; ordering the material or materials estimated; and using the material or materials ordered to repair the structure.
15. A method of detecting defects in a structure, comprising: receiving at least one image of a structure; processing, using a first processing technique, said at least one image to identify features of interest of said structure; and producing an output based on said features of interest.
16. A method according to claim 15, further comprising: processing, using a second processing technique, portions of said at least one image identified as features of interest, by separating said portions into sub-portions and applying a grade to each sub-portion representing a quality of the structure of the feature of interest at that sub-portion as represented in the image; and producing an output based on said grades of said sub-portions of said features.
17. A method according to claim 15 or 16, further comprising producing a visual representation of said identified features overlaid onto at least a portion of said at least one image; and inspecting said visual representation to determine whether the features of interest of the structure had been correctly identified.
18. A method according to claim 2, further comprising, in the event of determining that the features of interest of the structure had not been correctly identified, undertaking a manual inspection of at least a portion of said at least one image to identify features of interest of said structure; and processing, using a third processing technique, said portion of said at least one image using said manually identified features of interest to train said first processing technique to better identify said features of interest of said structure.
19. A method according to any of claims 16 to 18, wherein producing said output comprises representing the identified grades of said features using different colours.
20. A method according to claim 19, wherein said different coloured output is overlaid onto said at least one image.
21. A method according to any preceding claim, further comprising generating said at least one image by combining a plurality of images as a 3 dimensional point cloud from said plurality of images.
22. A method according to any of claims 15 to 21, wherein at least one said processing technique comprises a neural network.
23. A method according to any of claims 15 to 22, wherein said first processing technique comprises a generative adversarial neural network.
24. A method according to any of claims 15 to 23, wherein said second processing technique comprises a convolutional neural network.
25. A method according to any of claims 15 to 24, wherein said third processing technique comprises a discriminative neural network.
GB2306478.5A 2022-04-29 2023-05-02 A method of detecting defects in a structure and a method of estimating repair material quantity requirements Pending GB2620478A (en)

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GBGB2206337.4A GB202206337D0 (en) 2022-04-29 2022-04-29 A method of detecting defects in a structure and a method of estimating repair material quantity requirements

Publications (2)

Publication Number Publication Date
GB202306478D0 GB202306478D0 (en) 2023-06-14
GB2620478A true GB2620478A (en) 2024-01-10

Family

ID=81943784

Family Applications (2)

Application Number Title Priority Date Filing Date
GBGB2206337.4A Ceased GB202206337D0 (en) 2022-04-29 2022-04-29 A method of detecting defects in a structure and a method of estimating repair material quantity requirements
GB2306478.5A Pending GB2620478A (en) 2022-04-29 2023-05-02 A method of detecting defects in a structure and a method of estimating repair material quantity requirements

Family Applications Before (1)

Application Number Title Priority Date Filing Date
GBGB2206337.4A Ceased GB202206337D0 (en) 2022-04-29 2022-04-29 A method of detecting defects in a structure and a method of estimating repair material quantity requirements

Country Status (1)

Country Link
GB (2) GB202206337D0 (en)

Citations (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180247416A1 (en) * 2017-02-27 2018-08-30 Dolphin AI, Inc. Machine learning-based image recognition of weather damage
US10354386B1 (en) * 2016-01-27 2019-07-16 United Services Automobile Association (Usaa) Remote sensing of structure damage
US20200034958A1 (en) * 2016-09-21 2020-01-30 Emergent Network Intelligence Ltd Automatic Image Based Object Damage Assessment
US20210089811A1 (en) * 2019-09-20 2021-03-25 Pictometry International Corp. Roof condition assessment using machine learning
US11392897B1 (en) * 2020-08-10 2022-07-19 United Services Automobile Association (Usaa) Intelligent system and method for assessing structural damage using aerial imagery


Also Published As

Publication number Publication date
GB202206337D0 (en) 2022-06-15
GB202306478D0 (en) 2023-06-14
