GB2590468A - Analysing surfaces of vehicles - Google Patents

Analysing surfaces of vehicles

Info

Publication number
GB2590468A
GB2590468A
Authority
GB
United Kingdom
Prior art keywords
vehicle
image
aircraft
images
image data
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB1918824.2A
Other versions
GB201918824D0 (en)
Inventor
Fu Qiang
Cornet Christophe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Airbus Operations Ltd
Original Assignee
Airbus Operations Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Airbus Operations Ltd filed Critical Airbus Operations Ltd
Priority to GB1918824.2A priority Critical patent/GB2590468A/en
Publication of GB201918824D0 publication Critical patent/GB201918824D0/en
Publication of GB2590468A publication Critical patent/GB2590468A/en
Pending legal-status Critical Current

Classifications

    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/24 Measuring arrangements characterised by the use of optical techniques for measuring contours or curvatures
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01C MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C11/00 Photogrammetry or videogrammetry, e.g. stereogrammetry; Photographic surveying
    • G01C11/02 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures
    • G01C11/025 Picture taking arrangements specially adapted for photogrammetry or photographic surveying, e.g. controlling overlapping of pictures by scanning the object
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64F GROUND OR AIRCRAFT-CARRIER-DECK INSTALLATIONS SPECIALLY ADAPTED FOR USE IN CONNECTION WITH AIRCRAFT; DESIGNING, MANUFACTURING, ASSEMBLING, CLEANING, MAINTAINING OR REPAIRING AIRCRAFT, NOT OTHERWISE PROVIDED FOR; HANDLING, TRANSPORTING, TESTING OR INSPECTING AIRCRAFT COMPONENTS, NOT OTHERWISE PROVIDED FOR
    • B64F5/00 Designing, manufacturing, assembling, cleaning, maintaining or repairing aircraft, not otherwise provided for; Handling, transporting, testing or inspecting aircraft components, not otherwise provided for
    • B64F5/40 Maintaining or repairing aircraft
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B64 AIRCRAFT; AVIATION; COSMONAUTICS
    • B64F GROUND OR AIRCRAFT-CARRIER-DECK INSTALLATIONS SPECIALLY ADAPTED FOR USE IN CONNECTION WITH AIRCRAFT; DESIGNING, MANUFACTURING, ASSEMBLING, CLEANING, MAINTAINING OR REPAIRING AIRCRAFT, NOT OTHERWISE PROVIDED FOR; HANDLING, TRANSPORTING, TESTING OR INSPECTING AIRCRAFT COMPONENTS, NOT OTHERWISE PROVIDED FOR
    • B64F5/00 Designing, manufacturing, assembling, cleaning, maintaining or repairing aircraft, not otherwise provided for; Handling, transporting, testing or inspecting aircraft components, not otherwise provided for
    • B64F5/60 Testing or inspecting aircraft components or systems
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01B MEASURING LENGTH, THICKNESS OR SIMILAR LINEAR DIMENSIONS; MEASURING ANGLES; MEASURING AREAS; MEASURING IRREGULARITIES OF SURFACES OR CONTOURS
    • G01B11/00 Measuring arrangements characterised by the use of optical techniques
    • G01B11/002 Measuring arrangements characterised by the use of optical techniques for measuring two or more coordinates
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M11/00 Testing of optical apparatus; Testing structures by optical methods not otherwise provided for
    • G01M11/08 Testing mechanical properties
    • G01M11/081 Testing mechanical properties by using a contact-less detection method, i.e. with a camera
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M5/00 Investigating the elasticity of structures, e.g. deflection of bridges or aircraft wings
    • G01M5/0033 Investigating the elasticity of structures, e.g. deflection of bridges or aircraft wings by determining damage, crack or wear
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01M TESTING STATIC OR DYNAMIC BALANCE OF MACHINES OR STRUCTURES; TESTING OF STRUCTURES OR APPARATUS, NOT OTHERWISE PROVIDED FOR
    • G01M5/00 Investigating the elasticity of structures, e.g. deflection of bridges or aircraft wings
    • G01M5/0091 Investigating the elasticity of structures, e.g. deflection of bridges or aircraft wings by using electromagnetic excitation or detection
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/9515 Objects of complex shape, e.g. examined with use of a surface follower device
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/8851 Scan or image signal processing specially adapted therefor, e.g. for scan signal adjustment, for detecting different kinds of defects, for compensating for structures, markings, edges
    • G01N2021/8854 Grading and classifying of flaws
    • G01N2021/8861 Determining coordinates of flaws
    • G01N2021/8864 Mapping zones of defects
    • G PHYSICS
    • G01 MEASURING; TESTING
    • G01N INVESTIGATING OR ANALYSING MATERIALS BY DETERMINING THEIR CHEMICAL OR PHYSICAL PROPERTIES
    • G01N21/00 Investigating or analysing materials by the use of optical means, i.e. using sub-millimetre waves, infrared, visible or ultraviolet light
    • G01N21/84 Systems specially adapted for particular applications
    • G01N21/88 Investigating the presence of flaws or contamination
    • G01N21/95 Investigating the presence of flaws or contamination characterised by the material or shape of the object to be examined
    • G01N21/9515 Objects of complex shape, e.g. examined with use of a surface follower device
    • G01N2021/9518 Objects of complex shape, e.g. examined with use of a surface follower device using a surface follower, e.g. robot

Landscapes

  • Physics & Mathematics (AREA)
  • Engineering & Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Chemical & Material Sciences (AREA)
  • Analytical Chemistry (AREA)
  • Pathology (AREA)
  • Transportation (AREA)
  • Biochemistry (AREA)
  • General Health & Medical Sciences (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Immunology (AREA)
  • Health & Medical Sciences (AREA)
  • Manufacturing & Machinery (AREA)
  • Signal Processing (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Electromagnetism (AREA)
  • Multimedia (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Investigating Materials By The Use Of Optical Means Adapted For Particular Applications (AREA)
  • Image Processing (AREA)

Abstract

An apparatus to analyse the surface of a vehicle is disclosed. In one form, the apparatus comprises a coordinate acquisition interface 708 to acquire data representing three-dimensional coordinate data of a vehicle. The apparatus also comprises an image acquisition interface 706 to acquire image data representing a captured image of the vehicle, an image processor to process the image data to identify a surface defect of a surface of the vehicle represented by the image data, and a mapping engine 714 to map the captured image data to the three-dimensional coordinate data to determine the location on the surface of the vehicle of the surface defect. The defect may be indicative of corrosion, impact damage or a structural defect. The vehicle may be an aircraft (aeroplane). The classifier may be trained using images of undamaged vehicles. A method of preparing to repair an aircraft is disclosed.

Description

ANALYSING SURFACES OF VEHICLES
TECHNICAL FIELD
[0001] The present invention relates to analysing a surface of a vehicle. Particularly, but not exclusively, the present invention relates to identifying a surface defect of a surface of the vehicle and mapping the surface defect to three-dimensional coordinate data of the vehicle.
BACKGROUND
[0002] Modern aircraft, particularly airliners, are routinely inspected when in service to assess whether repairs or maintenance are required. Typically, such inspections are performed by maintenance engineers who use a variety of methods including visual inspections. If the condition of the aircraft is deemed to warrant repair or maintenance, then the observations are typically recorded in a maintenance log for the respective aircraft, which may be stored in a maintenance database. Corresponding repairs or maintenance can then be commissioned and conducted as required.
SUMMARY
[0003] A first aspect of the present invention provides apparatus to analyse the surface of a vehicle, comprising: a coordinate acquisition interface to acquire data representing three-dimensional coordinate data of a vehicle; an image acquisition interface to acquire image data representing a captured image of the vehicle; an image processor to process the image data to identify a surface defect of a surface of the vehicle represented by the image data; and a mapping engine to map the captured image data to the three-dimensional coordinate data to determine the location on the surface of the vehicle of the surface defect. By determining the location of surface defects, maintenance crews can easily classify and quantify occurrences of defects across vehicle fleets, or across vehicles of the same type.
[0004] Optionally, the apparatus further comprises a database manager to update a database for the vehicle represented by the image data with respective data relating surface regions of the vehicle and associated levels of identified defects. This may provide useful information to users of the database without their having to view the vehicle itself.
[0005] Optionally, the identified surface defect is indicative of at least one of: surface corrosion, impact damage, and a defect in the underlying structure of the vehicle. This information will be updated into the database and as such made more freely available to maintenance crews. Common problems that are identifiable as surface defects across vehicle types may be recorded and classified.
[0006] Optionally, the image processor identifies a surface defect in a captured image by reference to a library of undamaged vehicle images. A number of libraries already exist and therefore this may reduce the time and complexity of training the image processor.
[0007] Optionally, the image processor comprises a trained classifier, which is trained to identify surface defects by reference to images of undamaged vehicles. The image processor in effect may be trained to identify or 'know' what the condition of a vehicle is supposed to look like and therefore is capable of spotting surface damage, as it appears to be different from expectation, without necessarily being trained using images of damage. Over time, as images are captured that are assessed and confirmed to show damage, the trained classifier may be further trained, or enhanced, by using those respective captured images.
[0008] Optionally, the vehicle may be an aircraft.
[0009] A second aspect of the present invention provides a method of analysing the surface of a vehicle, comprising: providing data representing three-dimensional coordinate data of a vehicle; providing image data representing a captured image of the vehicle; processing the image data to identify a surface defect of a surface of the vehicle represented by the image data; and matching the captured image data to the three-dimensional coordinate data to determine the location on the surface of the vehicle of the surface defect.
[0010] Optionally, the method further comprises updating a database for the vehicle represented by the image data with respective data relating surface regions of the vehicle and associated levels of identified defects. This may provide information to users of the database without their having to view the vehicle itself.
[0011] Optionally, the surface defect is indicative of at least one of: surface corrosion, impact damage, and a defect in the underlying structure of the vehicle. This information will be updated into the database and as such made more freely available to maintenance crews. Common problems that are identifiable as surface defects across vehicle types may be recorded and classified.
[0012] Optionally, processing the image identifies a surface defect in a captured image by reference to images of undamaged vehicles. In effect, processing the image may include training to identify or 'know' what the condition of a vehicle is supposed to look like and therefore is capable of spotting surface damage, as it appears to be different from expectation, without necessarily being trained using images of damage. Over time, as images are captured that are assessed and confirmed to show damage, the trained classifier may be further trained, or enhanced, by using those respective captured images.
[0013] Optionally, the vehicle may be an aircraft.
[0014] A third aspect of the present invention provides a method of preparing to repair an aircraft, comprising: providing a model of the three-dimensional locations of surface features on the aircraft; providing images of the surface of the aircraft; locating surface features in the images and mapping the images to the three-dimensional model by comparing the locations of the surface features in the images to the locations of surface features in the three-dimensional model; identifying surface defects needing repair in the images; determining the location of the identified surface defects on the aircraft based on the above mapping of the images; and preparing repair resources for the aircraft based on the determined location. By identifying the location and classification of surface defects, the repair resources required may be prepared in advance. This may save time for the maintenance crew and reduce the time in which an aircraft is in for repair.
[0015] Optionally, preparing repair resources comprises providing an appropriate section of a structure repair manual. This may speed up identifying the relevant section in a repair manual, and as such reduce the time the maintenance crew needs to fix a problem.
[0016] Optionally, preparing repair resources comprises readying repair parts required for repairing the determined location of the surface defects. The maintenance crew may be able to order parts and/or resources required for the repair in advance, and thus this may further reduce the time that an aircraft is in for repairs and/or maintenance. The resources may be paint, sanding supplies or body filler.
BRIEF DESCRIPTION OF THE DRAWINGS
[0001] Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
[0002] Figure 1 is a schematic view of an aircraft and a UAV for use in capturing images of the aircraft, according to an example;
[0003] Figure 2A is an illustrative view of a part of an aircraft when viewed from a first direction;
[0004] Figure 2B is an illustrative view of a part of an aircraft when viewed from a second direction;
[0005] Figure 2C is an illustrative view of a part of an aircraft when viewed from a third direction;
[0006] Figure 3 is an illustrative view of a part of a surface of an aircraft;
[0007] Figure 4A is a process flow diagram according to an example;
[0008] Figure 4B is a process flow diagram according to an example;
[0009] Figure 4C is three-dimensional coordinate data according to an example;
[0010] Figure 5 is a functional block diagram according to an example;
[0011] Figure 6 is a functional block diagram according to an alternative example;
[0012] Figure 7 is a functional block diagram according to another example;
[0013] Figure 8 is a flowchart according to an example;
[0014] Figure 9 is a flowchart according to another example;
[0015] Figure 10 is a flowchart according to another example; and
[0016] Figure 11 is a flowchart according to an example.
DETAILED DESCRIPTION
[0017] Between flights, when on the ground, aircraft are inspected for damage and in-service wear and tear by maintenance crews. Visual inspections may for example reveal surface corrosion or impact damage. Visual inspections can be time-consuming and issues may be missed due to human error.
[0018] Certain examples described herein relate to the use of computer vision and machine learning techniques to identify and characterise faults with an aircraft in a more automated way. Images can be captured by maintenance crew using an imaging device, such as a camera. Cameras can even be mounted on unmanned robots or aerial vehicles, which may be programmed or controlled to capture images of an entire aircraft in a relatively short period of time. The captured images can then be analysed and used to identify issues reliably and thereby augment the capabilities of the maintenance crew.
[0019] Figure 1 illustrates an example of a scenario 100 where an unmanned aerial vehicle (UAV) is used to inspect an aircraft. In particular, a capture device 102 mounted on the UAV 104 captures image data in the direction 106 of aircraft 108. In this example, the image capture device 102 comprises one or more digital cameras capable of capturing images in the visible and/or IR wavelengths. Any other suitable kind of camera or imaging device may be used. While in the present example the capture device 102 is mounted on the UAV 104 to capture images, it will be appreciated that the image capture device 102 may instead be mounted on any other vehicle, device or structure, or may be simply handheld. For example, the maintenance crew may use handheld image capture devices to capture image data of the interior and exterior of the aircraft during routine maintenance inspections. The interior may be inside the aircraft or within compartments or areas, such as landing gear bays, which are only accessible from outside of the aircraft and thus can be examined while the aircraft is stationary and under inspection.
[0020] The UAV 104 and image capture device 102 are programmed or controlled to capture image data, 'images', of the surfaces of the aircraft 108 from a plurality of different angles, aspects and positions. The captured images are for use in the computer vision applications described hereafter. The image capture device 102 may capture a sequence of individual images or a video as it travels around the aircraft 108 during the inspection. The captured images or videos are recorded onto storage devices and may in addition be communicated or streamed to a maintenance engineer for live inspection.
[0021] Figures 2A, 2B, and 2C illustrate examples of image data captured from three different positions, denoted as 200, 202 and 206, relative to an aircraft. Figure 2A is an image captured from a first position by the image capture device 102 mounted on the UAV 104. Similarly, Figure 2B is an image captured from a second position and Figure 2C is an image captured from a third position. All images captured depict the same surface regions and features of the aircraft 108 when viewed from different positions. For example, the wing surface 208 is visible and is currently under inspection. In all three illustrative examples 200, 202, and 206 it can be seen that the captured image shows a surface defect 210 on wing surface 208.
[0022] The captured images illustrated in Figures 2A, 2B, and 2C also show different features on the wing surface 208. Such surface features may include, but are not limited to, trailing edges 212, seams 214, leading edges 216, and fasteners such as rivets and bolts (not pictured).
[0023] While the examples in Figures 2A, 2B, and 2C show a wing surface, other surfaces may be similarly captured from a variety of orientations and locations and analysed. For example, images of the internal surfaces of the aircraft may be captured and processed. Moreover, while in the present example the images are from the same aircraft 108, it will be appreciated that different images 200, 202, and 206 may be captured from one or more different aircraft, for example, so that comparisons may be made or trends established between different aircraft of the same make.
[0024] Figure 3 illustrates an example of another captured image 300 of a surface 302 of an aircraft, when viewed from a given position. The captured image 300 depicts surface features such as the fasteners 304, which may be rivets, bolts, screws, weld spots or other forms of structural fastener. Moreover, the image 300 shows a strut 310 and a surface defect 308 that may be required to be repaired. As previously mentioned, surface defects may be, but are not limited to, surface corrosion, impact damage, and other forms of material wear. It may be beneficial to capture images of an aircraft in this manner for the purposes of monitoring and documenting the surface condition of the aircraft, and to identify common problems that arise. The images may be inspected by ground crews and/or compared to other existing images of the same aircraft. Beneficially, if there are changes to a surface area over time, then they will be recorded and directly comparable. Additionally, comparing similar images of the same surface areas across a type of aircraft can identify trends in surface damage and/or conditions.
[0025] Figure 4A illustrates a capture device. Example 400 shows a capture device 102 arranged to capture and store image data 404. The form of the image data 404 is dependent on the type of capture device 102. For example, if capture device 102 is a digital camera, the image data 404 may be captured by a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) chip, and formatted and stored in an appropriate format, such as .bmp, .jpg or the like.
[0026] Figure 4B illustrates multiple images 406, 408, 410 stored as image data 412. The image data contains images of surfaces of an aircraft or number of aircraft. Images may depict the same surface of a single aircraft captured from different locations or may comprise images of the aircraft taken at different times but from the same capture location.
[0027] In general, the captured image data 404, 406, 408, 410 represents a two-dimensional image of a three-dimensional scene or object (i.e. an aircraft). It may not be immediately evident to maintenance crew which part of an aircraft an image depicts. It is therefore perceived to be beneficial to provide means to correlate or map captured images to a respective location on the aircraft.
[0028] Figure 4C illustrates an example of three-dimensional coordinate data 414. The three-dimensional coordinate data 414 represents a structure 416 comprising the surface features that are present on the structure. The structure 416 may be part of an aircraft or vehicle. In this example, the surface features are fasteners 304 and fastener holes 312. The three-dimensional coordinate data 414 also comprises the known three-dimensional location information 418 of each of the fasteners 304 and fastener holes 312 within the structure. The three-dimensional location information 418 of the surface features of the aircraft is obtained or determined from, for example, the original aircraft designs, which may be encapsulated in CAD files, text files, or the like. Such files often capture a complex design such as for an aircraft in 'layers', and one layer may describe the locations in three dimensions of the fasteners. In this example, the three-dimensional location information 418 is defined using Cartesian coordinates (x, y, z) although other coordinate systems can be used. In some examples, the three-dimensional coordinate data may comprise only the coordinates of the fasteners. In other examples, certain other structural elements of the aircraft may be included, for example, to provide context for later image processing. The degree or level of inclusion of features in the three-dimensional coordinate data may be varied according to need, for example, depending on which features are used, according to examples, to identify locations on the surfaces of the aircraft. Features may include any visible features such as fasteners, fastener holes, seams, edges, windows, struts, cables, cable conduits, or any other discernible surface feature. Features may include normally-obscured features, which may be revealed and become discernible by removing a panel or service hatch.
[0029] Figure 5 illustrates an example of an apparatus 500 for mapping captured image data 504 to pre-generated, three-dimensional coordinate data of a respective aircraft. Beneficially, surface features such as fasteners are used as locators, or points of reference, for mapping an image to the three-dimensional coordinate data.
[0030] The apparatus 500 comprises an image acquisition interface 506 and a coordinate acquisition interface 508. The image acquisition interface 506 is arranged to obtain image data 504, for example, from an image capture device 402 or 102. Image data 504, for example, represents an image of a surface or surfaces of an aircraft. The image acquisition interface 506 may comprise a wired interface, such as a USB port, or a wireless interface. The image capture device according to the present example is directly coupled to the apparatus 500 and transfers captured image data 504 to the apparatus 500 through the image acquisition interface 506. In other examples, the image acquisition interface 506 may receive previously captured image data 504 from an external storage device, and, as such, is arranged to read and transfer such data.
[0031] The coordinate acquisition interface 508 is arranged to obtain the three-dimensional coordinate data 510, for example, from a data store. As with the image acquisition interface 506, the coordinate acquisition interface 508 may comprise a wired or wireless interface. In some examples, the coordinate data 510, including respective 3D coordinates of each fastener, may reside in a text file or a CSV file. For instance, each row in such a text or CSV file may provide details of a fastener and its respective location.
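By way of illustration, a minimal Python sketch of reading such a CSV file into fastener coordinates might look as follows; the identifier, x, y, z row layout is an assumption for the example, as the description only states that each row gives a fastener and its location.

```python
import csv

def load_fastener_coordinates(path):
    """Load fastener 3D coordinates from a CSV file.

    Assumes one fastener per row in the form: identifier, x, y, z
    (a hypothetical layout chosen for illustration).
    """
    fasteners = {}
    with open(path, newline="") as f:
        for row in csv.reader(f):
            name = row[0]
            x, y, z = float(row[1]), float(row[2]), float(row[3])
            fasteners[name] = (x, y, z)
    return fasteners
```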
[0032] The apparatus 500 also comprises an image processor 512 and a mapping engine 514. The image processor 512 is configured to receive the image data 504 from the image acquisition interface 506 and output processed image data 516. The mapping engine 514 is arranged to receive the three-dimensional coordinate data 510 and processed image data 516 and output an image data surface location 518. The image data surface location 518 is the data that provides the information relating to the position on the aircraft that the image data 504 represents. For example, if image data relating to the image in Figure 2A is used, the apparatus 500 outputs the location of the image as the respective wing surface.
[0033] Figure 6 illustrates an example of an apparatus 600 for locating surface defects present in captured image data 604 according to a second example. For brevity, components with similar functions to those described in relation to Figure 5 above are labelled with the same reference numerals but increased by 100. For example, image acquisition interfaces 506 and 606 perform the same function in the respective systems.
[0034] Apparatus 600 comprises an image acquisition interface 606 and a coordinate acquisition interface 608, respectively, to receive image data 604 and three-dimensional coordinate data 610.
[0035] Additionally, the apparatus 600 comprises an image processor 612 and a mapping engine 614. The image processor 612 is configured to receive image data 604 from the image acquisition interface 606 and output the surface defects 620 and surface features present in the image data 604. Mapping engine 614 is arranged to receive three-dimensional coordinate data 610, processed image data 620, 605 and captured image data 604 to output locations of the image data surface defects 618. The surface defect location is the data that provides the information relating to the position on the aircraft of the surface defects present in the image data 604. For example, if image data containing the surface defect 210 shown in Figure 2A is used, the apparatus 600 will output the location of the surface defect.
[0036] Figure 7 illustrates an example of an apparatus 700 for locating surface defects present in multiple image data 704, captured from multiple instances of a type or class of aircraft, and associating the surface defect with a surface region across a type or class of aircraft, according to a further example. For brevity, components with similar functions to those described in relation to Figure 5 and Figure 6 above are labelled with the same reference numerals but increased by 200 and 100 respectively.
[0037] Apparatus 700 also comprises correlation engine 724. The correlation engine 724 correlates a surface identified in the mapping engine with the surface defects 720 that have been identified by the image processor 712. If, for example, the captured images are of the same location on multiple different aircraft of a particular type, and all images (or at least a statistically-significant number or threshold, for example >10% or 20% or 50%, of the images) contain a similar surface defect 210, then the apparatus 700 will correlate the surface defect 210 with the respective location on that particular type of aircraft and, for example, highlight the respective location as being prone to surface defects. In some examples, the occurrence of corrosion or similar may be given a statistical likelihood and associated with all or at least some areas of a type of aircraft by using the present approaches. For example, if the surface defect is corrosion of a particular surface across multiple instances of a type of aircraft, then the apparatus 700 is arranged to highlight this information so that aircraft designers can investigate and perhaps modify a design or a maintenance schedule to address or at least reduce future occurrences of that particular type of surface defect. For example, if it is apparent that certain aircraft operating in relatively humid or high-salinity environments experience corrosion in certain surface areas, those areas, at least for aircraft that may operate in those environments in future, may be given additional anti-corrosion treatment before being entered into service.
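A minimal sketch of such a fleet-level correlation, in which the tail numbers, region names and input structure are purely illustrative assumptions, might be:

```python
from collections import defaultdict

def correlate_defects(observations, threshold=0.10):
    """Flag surface regions showing a defect in more than a given
    fraction of inspected aircraft of one type.

    `observations` maps an aircraft identifier to the set of surface
    regions in which a defect was found (an assumed input format).
    """
    counts = defaultdict(int)
    for regions in observations.values():
        for region in regions:
            counts[region] += 1
    n_aircraft = len(observations)
    return {region: count / n_aircraft
            for region, count in counts.items()
            if count / n_aircraft > threshold}

# With a 50% threshold, only the region seen on 2 of 3 aircraft is flagged.
flagged = correlate_defects({
    "G-ABCD": {"left-wing-upper"},
    "G-EFGH": {"left-wing-upper", "tailcone"},
    "G-IJKL": set(),
}, threshold=0.5)
```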
[0038] Figure 8 illustrates a method 800 of mapping an image of a vehicle to a surface region of the vehicle, for example, using the apparatus of Figure 5. At block 802 the method acquires captured image data that represents an image of an aircraft. At block 804 the captured image data is processed to identify the locations of surface features depicted in the image data using feature identification, as will be described. At block 806 three-dimensional coordinate data is acquired, comprising the three-dimensional coordinates of surface features on the surface of the aircraft. At block 808 identified surface features of the captured image are matched with surface features of the three-dimensional coordinate data to determine a surface region on the vehicle that is represented by the captured image.
[0039] An example applying the process of Figure 8, using the apparatus of Figure 5, to the captured image of Figure 3, will now be described.
[0040] First, captured image data 504 is acquired by the image acquisition interface 506. The captured image data 504 is then processed by the image processor 512 to identify surface features 516 within the image. The image processor 512 locates the fasteners 304 and fastener holes 312 present in the image. Next, the coordinate acquisition interface 508 acquires and provides to the mapping engine 514 the three-dimensional coordinate data 510. In this example, the coordinate data comprises the coordinates of all fasteners and fastener holes on all surfaces of the aircraft. The coordinate data may be divided into sections, for example: the front, rear, left and right of the aircraft. The image data 504 may contain location metadata that indicates to the coordinate acquisition interface 508 which section of the three-dimensional coordinate data 510 to use, in order to reduce processing time.
[0041] The mapping engine 514 receives the processed captured image 516, which contains the location information of the surface features present in the image data 504, and the three-dimensional coordinate data 510. Using a 2D to 3D mapping function, the mapping engine 514 maps the identified surface features of the processed captured image data 516 to the corresponding features in the three-dimensional coordinate data 510.
[0042] In another example, the image processor 512 locates other surface features, such as the surface features 212, 214, or 216 identified in Figures 2A-2C. For example, the leading wing edge 216 may be mapped to the corresponding feature in the three-dimensional coordinate data.
[0043] Figure 9 is a flow chart of a method 900 for mapping an image of an aircraft to a surface region of the aircraft and locating a surface defect according to an example. At block 902 captured image data that represents an image of an aircraft is acquired. At block 904 the captured image data is processed to identify the locations of surface features and surface defects depicted in the image data using a feature identification method, as will be described. At block 906 three-dimensional coordinate data that comprises the coordinates of surface features on the surface of the aircraft is acquired. Finally, at block 908 identified surface features of the captured image are mapped to surface features of the three-dimensional coordinate data to determine the location on the surface of the vehicle of the surface defect.
[0044] An example of applying the process of Figure 9, using the apparatus of Figure 6, to the captured image of Figure 3, will now be described. First, captured image data 604 is acquired by the image acquisition interface 606. The captured image data 604 is then processed by the image processor 612 to identify surface features within the image. In this example, the processor locates the fasteners 304 and fastener holes 312, and the surface defect 308. However, it will be appreciated that the surface features may also be those described in relation to Figure 2A. The coordinate acquisition interface 608 acquires and provides to the mapping engine 614 the three-dimensional coordinate data 610. In this example, the coordinate data comprises the coordinates of the fasteners and fastener holes on the surfaces of the whole aircraft. As before, the coordinate data may be divided into sections such that the three-dimensional coordinate data only comprises those surface features present on the exterior or interior of the aircraft; the front or rear of the aircraft; or the left or right of the aircraft. The image data 604 may contain location metadata that indicates to the coordinate acquisition interface 608 which section of the three-dimensional coordinate data 610 to use.
[0045] The mapping engine 614 receives the processed captured image 620, 605, which contains the location information of the surface features 605 present in the image data 604, and the three-dimensional coordinate data 610. Using a 2D to 3D mapping function, the mapping engine 614 maps the identified surface features of the processed captured image data 620, 605 to the corresponding features in the three-dimensional coordinate data 610. Finally, the mapping engine 614 provides the location of the surface defect 210 that it has identified in the captured image data 604. The surface defect location may be used to identify a location in a structure repair manual to assist the maintenance crew during the repairs. If the manual is in an electronic form, such as a document file opened within a document reader application, the application may include an application programming interface (API), which receives the location information and opens the manual at the appropriate page corresponding to the location of the surface defect. Moreover, the location may be communicated to a maintenance repair organisation (MRO) before the aircraft is brought in for such repairs. The surface defect location and information may also be used to ready parts needed for repairs. This may include ordering parts in advance of the scheduled repair and, as such, may reduce the time that the maintenance crew has the aircraft in for repairs.
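A minimal sketch of relating a defect location to a manual section is given below; the region names and section numbers are purely hypothetical, as the description only states that the reader application exposes an API accepting location information.

```python
# Hypothetical mapping from surface regions to structure repair manual
# sections; real section numbers would come from the manual's index.
SRM_SECTIONS = {
    "left-wing-upper": "57-20-01",
    "fuselage-belly": "53-30-02",
}

def manual_section_for(region):
    """Return the repair manual section covering a surface region, if known."""
    return SRM_SECTIONS.get(region)

# A document-reader API could then be asked to open the manual at
# manual_section_for(defect_region), e.g. reader.open_section(...).
```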
[0046] Figure 10 shows a flow chart of a method 1000 for mapping images of an aircraft to a surface region of the aircraft according to an example. At block 1002 multiple captured image data 704, representing more than one image of an aircraft, are acquired. At block 1004 the captured multiple image data are processed to identify the locations of surface features and surface defects depicted in the multiple image data using a feature identification method. At block 1006 a three-dimensional representation of the vehicle is acquired. At block 1008 the captured multiple image data 704 are matched to the three-dimensional representation of the vehicle. In some examples, the three-dimensional representation comprises coordinates of surface features, such as fasteners, on the surface of the aircraft, and matching can then be performed in a similar fashion to Figure 9. Other ways of matching may be used instead. At block 1010 the correlation engine 724 associates a surface region of the type of aircraft, determined from the captured multiple image data of multiple instances of the type of aircraft and any respective identified defects, with a level of identified defects.
[0047] Figures 8-10 show flow charts of methods that use the mapping engines 514, 614, and 714. The methods that a mapping engine may apply to map the features present in a captured image to the three-dimensional coordinate data will now be briefly described. In some examples, the mapping engines determine a surface region on the vehicle that is depicted by the captured image, including by estimating from the image data a three-dimensional pose of the camera relative to the three-dimensional coordinate data. For example, the two-dimensional features may be mapped to three-dimensional features using a perspective-n-point approach, which estimates the pose of a calibrated camera given a set of n 3D points in the world and their corresponding 2D projections in the image. The variable, n, may be as low as three or four (identified points), with increasing confidence of pose estimation being attainable with a greater number of identified points, such as five, six or more. The perspective-n-point method is described, for example, in: Fischler, M. A.; Bolles, R. C. (1981). "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography". Communications of the ACM. 24 (6): 381-395.
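A pose-estimation sketch along these lines, using OpenCV's RANSAC-based perspective-n-point solver with placeholder fastener correspondences and assumed camera intrinsics, might be:

```python
import numpy as np
import cv2

# Known 3D fastener coordinates (from the coordinate data) and their
# detected 2D pixel positions in the captured image; the values here
# are placeholders for illustration only.
object_points = np.array([[0.00, 0.00, 0.00],
                          [0.50, 0.00, 0.00],
                          [0.50, 0.30, 0.00],
                          [0.00, 0.30, 0.00],
                          [0.25, 0.15, 0.02],
                          [0.10, 0.05, 0.00]], dtype=np.float64)
image_points = np.array([[410.0, 220.0], [870.0, 230.0], [860.0, 520.0],
                         [405.0, 510.0], [640.0, 370.0], [500.0, 270.0]],
                        dtype=np.float64)

# Intrinsics of a calibrated camera (assumed known from calibration).
camera_matrix = np.array([[1000.0, 0.0, 640.0],
                          [0.0, 1000.0, 360.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)

# RANSAC-based PnP, in the spirit of Fischler & Bolles: estimate the
# camera pose while rejecting mismatched fastener correspondences.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, camera_matrix, dist_coeffs)
```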
[0048] Another approach the mapping engines may use is to deploy calibrated image capture devices, by calibrating the image capture device to the coordinate system of the three-dimensional coordinate data. In this respect, the mapping of the locations of the surface features from the two-dimensional image data to the three-dimensional coordinate data reduces to applying a transformation matrix, and is more reliable than approaches using uncalibrated image capture devices.
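Continuing the sketch above, with a calibrated camera the 3D fastener coordinates can be projected back into the image under the estimated (or pre-calibrated) pose, for example to check the reliability of the mapping:

```python
# Project the 3D fastener coordinates into the image using the pose
# (rvec, tvec) and intrinsics from the previous sketch.
projected, _ = cv2.projectPoints(object_points, rvec, tvec,
                                 camera_matrix, dist_coeffs)

# The mean distance between predicted and detected fastener positions
# (the reprojection error) indicates how reliable the mapping is.
reprojection_error = np.linalg.norm(
    projected.reshape(-1, 2) - image_points, axis=1).mean()
```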
[0049] According to an example, the image processors of Figures 5 to 7 identify surface defects in the captured image data by reference to a library of images including surface features and using an image comparison algorithm. In one example, the image processor is trained with images of defect-free aircraft, for example, in various lighting conditions, states of cleanliness and/or states of normal surface deterioration such as faded paintwork. This may be an unsupervised training procedure, which trains the image processor to identify an image that looks 'out of the ordinary', for example as containing a surface defect, when the captured image is sufficiently different from a defect-free image. It may not matter if this approach leads to false positives, because images that are flagged as having potential surface defects will all be reviewed by a member of the maintenance crew. Matching reliability may be improved over time as more images including surface defects are captured, classified and used for additional training. In other examples, an image processor may comprise two classifiers: a first trained to identify a normal surface (and thereby to spot a surface defect when an image sufficiently differs from the norm) and another trained to identify surface defects. Both classifiers can be retrained or enhanced as more images are captured. In any event, the image processor is arranged to identify surface damage, which can then be mapped to the respective location or region on the respective aircraft.
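A minimal novelty-detection sketch in this spirit, assuming per-patch feature vectors rather than any particular classifier architecture, might be:

```python
import numpy as np

def fit_normal_model(feature_vectors):
    """Fit a simple model of 'normal' surface appearance from feature
    vectors extracted from defect-free images (e.g. patch statistics).
    A minimal sketch only; the description leaves the classifier open.
    """
    features = np.asarray(feature_vectors, dtype=np.float64)
    return features.mean(axis=0), features.std(axis=0) + 1e-8

def looks_out_of_ordinary(feature_vector, mean, std, z_threshold=3.0):
    """Flag a patch whose features deviate strongly from the defect-free
    distribution; false positives are acceptable because flagged images
    are reviewed by the maintenance crew."""
    z = np.abs((np.asarray(feature_vector) - mean) / std)
    return bool(z.max() > z_threshold)
```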
[0050] With respect to surface defects, each kind of defect may have a certain distinctive signature that can be identified. For example, corrosion may be identifiable by colour variation or variations in surface roughness compared to the surrounding area. While certain kinds of corrosion, for example associated with steel, may be a deep red colour and relatively easy to spot, other indicators, such as an uneven or rough painted surface (e.g. associated with non-ferrous surfaces such as aluminium) or a change in the colour of paint, may also be used as an indicator of corrosion of an underlying surface. The image processor may be trained to identify the changes in colour and flag this as a surface defect such as corrosion. Another example method of identifying corrosion may be to determine the classification based on the pixel pattern of a greyscale image to detect changes in the pattern. Thus, when identifying corrosion on non-ferrous surfaces, the identification may be based on a change in texture. Other kinds of surface defects may be identifiable in other ways. For example, impact damage may be identifiable by regions of high contrast (e.g. dark or light regions on what should be a flat surface) relative to the surroundings.
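An illustrative hue-based filter for red/brown regions is sketched below; the hue band is an assumption that would need tuning per paint scheme, and texture cues would be added in practice.

```python
import cv2
import numpy as np

def corrosion_colour_mask(bgr_image):
    """Highlight red/brown hues that may indicate corrosion.

    The hue band below is an assumed, illustrative range only.
    """
    hsv = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2HSV)
    # OpenCV hue runs 0-179; reds and browns sit near the low end.
    lower = np.array([0, 60, 40], dtype=np.uint8)
    upper = np.array([18, 255, 200], dtype=np.uint8)
    return cv2.inRange(hsv, lower, upper)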
[0051] Figure 11 is a flow chart of a method 1100 that image processors 512, 612, or 712 may use to process the images and identify the surface features, such as fasteners, and surface defects. At block 1102 the image is pre-processed to prepare it for feature recognition. Pre-processing is used to highlight or isolate various shapes or patterns of the digital image. Pre-processing may take various forms. For example, the image may be digitally filtered to perform edge detection, which identifies areas in the image where the brightness changes sharply. Another example is to convert a colour image to greyscale for certain image processing steps. Other pre-processing methods such as ridge detection, blob detection, corner detection, thresholding, spotting unexpected pixel patterns (or expected pixel patterns for known surface defects) or template matching may additionally or alternatively be used in the pre-processing step 1102. In any event, according to some examples, step 1102 produces a pre-processed image for both spotting the fasteners and for spotting surface defects. In other examples, multiple pre-processed images may be produced, where one may be more suitable for detecting fasteners and another may be more suitable for detecting surface defects.
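A sketch of such a pre-processing step, producing one image suited to fastener detection (edges) and one suited to defect detection (thresholded intensity), with illustrative parameter values, might be:

```python
import cv2

def preprocess(bgr_image):
    """Produce two pre-processed images for feature recognition.

    Filter parameters are illustrative assumptions, not prescribed values.
    """
    grey = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2GRAY)
    smoothed = cv2.GaussianBlur(grey, (5, 5), 0)
    # Edge detection: areas where the brightness changes sharply.
    edges = cv2.Canny(smoothed, 50, 150)
    # Otsu thresholding as a simple intensity-based defect cue.
    _, binary = cv2.threshold(smoothed, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    return edges, binary
```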
[0052] At block 1104 the fastener locations are identified using the pre-processed image(s). By identifying the locations of the features, the image data is reduced to a set of coordinates, for example pixel coordinates. In some examples, a shape formed by the fasteners may be determined. For example, the shape may be an "X" shape (or any other particular shape), which may be advantageous, as will be described.
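A sketch of reducing a pre-processed image to fastener pixel coordinates is given below, using simple blob detection as a stand-in for the trained classifier described in the next paragraph; the detector parameters are illustrative assumptions.

```python
import cv2

def fastener_pixel_coordinates(grey_image):
    """Reduce an image to a set of fastener pixel coordinates using
    blob detection (a stand-in for a trained fastener detector).
    """
    params = cv2.SimpleBlobDetector_Params()
    params.filterByCircularity = True
    params.minCircularity = 0.7          # fastener heads are roughly round
    detector = cv2.SimpleBlobDetector_create(params)
    keypoints = detector.detect(grey_image)
    return [kp.pt for kp in keypoints]   # list of (x, y) pixel coordinates
```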
[0053] In the example of block 1104, the fasteners in the pre-processed image are identified by a classifier, which has undergone supervised training using images of types of fasteners and other surface features. In one example, the identified fasteners are used by the mapping engines 514, 614, and 714 to determine the location on the aircraft that the image represents.
[0054] Furthermore, block 1104 requires digital definitions of the shapes that describe the pattern of fasteners so that the matching stage can compare the shape with the three-dimensional coordinate data. In a simple example, the image may be represented as a simple bitmap on which the shape of the fasteners is identified or plotted, so that the shape can be identified in the three-dimensional coordinate data.
[0055] In the example of block 1106, which may occur in parallel with block 1104, surface defects in the image are identified using a classifier that has undergone unsupervised training, as has been previously described. In one example, the surface defect identification may follow the known Mask R-CNN method for object instance segmentation. Namely, the image is first classified, i.e. it is determined that there are surface defects in the image. Then objects within the image are localised. Following that, semantic segmentation may be performed, in which every pixel is classified; in this example, the pixels that correspond to surface damage may be identified. Finally, object outlines at the pixel level are identified (instance segmentation); that is, bounding boxes and segmentation masks for each instance of an object in the image are generated. In another example, another filter stage may be used to identify colour or pattern variations in the image, which can be used for spotting corrosion. Such a filter may apply a weighting so that areas in the image that have red/brown hues are highlighted. In these and in other examples, various other image processing techniques may be applied in addition, or instead, to identify surface features and/or surface defects.
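As a lightweight stand-in for the instance-segmentation output described above, a binary defect mask can be split into per-instance outlines with connected-components analysis, sketched as follows:

```python
import cv2

def defect_instances(defect_mask):
    """Split a binary 8-bit defect mask into per-instance bounding boxes
    and centroids, approximating the per-instance output that a full
    Mask R-CNN pipeline would produce.
    """
    n, labels, stats, centroids = cv2.connectedComponentsWithStats(defect_mask)
    instances = []
    for i in range(1, n):                # label 0 is the background
        x, y, w, h, area = stats[i]
        instances.append({"box": (int(x), int(y), int(w), int(h)),
                          "area": int(area),
                          "centroid": tuple(centroids[i])})
    return instances
```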
[0056] As has been described, corrosion can take many different forms, shapes and textures. As such, to identify the presence of corrosion in an image, classifiers may be used in step 1106. In some examples, the classifier may comprise a neural network. In other examples, the classifier implements at least one of a random forest algorithm, a naive Bayes classifier, a support vector machine, a linear regression machine learning algorithm, or any other algorithm or classifier which is suitable for the function described herein. For example, a supervised learning algorithm may be used to analyse the training data (comprising input values and respective output values) to infer a reasonable, learned function which maps the inputs to the outputs. The learned function may be represented in a neural network comprising an input layer, an output layer, and at least one hidden layer, wherein nodes of the at least one hidden layer are associated with one or more weights. Training the neural network comprises using the input values in the input layer and the output values in the output layer to generate and/or update the one or more weights. The learned function may then be tested on a subset of the training data which was not used to train the learned function. This may allow the system to be validated before being applied to test data.
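A sketch of training one of the named classifier types (a random forest) with a held-out validation subset, as described, might be; the feature representation and split ratio are assumptions.

```python
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

def train_defect_classifier(features, labels):
    """Train a random forest on feature vectors labelled defect /
    no-defect, holding out a subset of the training data to validate
    the learned function before it is applied to test data.
    """
    X_train, X_held_out, y_train, y_held_out = train_test_split(
        features, labels, test_size=0.2, random_state=0)
    clf = RandomForestClassifier(n_estimators=100, random_state=0)
    clf.fit(X_train, y_train)
    return clf, clf.score(X_held_out, y_held_out)   # held-out accuracy
```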
[0057] Once the fastener locations and surface defects are identified, the method 1100 creates a digital definition at block 1108. The digital definition describes elementary low-level characteristics such as the shape, the colour, or the texture, among others. There may be more specific descriptors such as the name and location of objects in the image. This could be the name and location of the fasteners in an image, which in the example of Figure 3 would be the bolts 304 and bolt holes 312.
[0058] Mapping the identified features in the image to the three-dimensional coordinate data may be simplified by assuming that the images are taken substantially perpendicular to the surface of the aircraft. This could be achieved by controlling the UAV on which the image capture device 102 is mounted such that the images captured are perpendicular to the surface of the aircraft, thus greatly reducing the number of possible combinations of mappings between the image and the three-dimensional coordinate data. However, it may also be beneficial, or indeed only practical in areas with restricted room, to capture images at an angle relative to the surface to improve the identification of the fastener locations.
[0059] All the previously described methods 800, 900, 1000 may include an extra step of image rendering to generate at least a portion of a three-dimensional model of the aircraft by rendering captured image data onto the three-dimensional model at the determined surface region. Successive images may be rendered onto the same three-dimensional model to provide a "digital mock-up" (DMU) of the aircraft. There are many ways that this can be achieved. One way that may be implemented in the present examples is to map the captured image data from the two-dimensional representation to a three-dimensional representation for rendering the captured image data onto the three-dimensional model. Conformal mapping may also be used to render the two-dimensional image onto the DMU.
[0060] It is to be noted that the term "or" as used herein is to be interpreted to mean "and/or", unless expressly stated otherwise. It is also noted that the term "aircraft" is used throughout, but other types of vehicle or structure may be used, such as cars, lorries, bridges, and ships.

Claims (15)

  1. An apparatus to analyse the surface of a vehicle, comprising: a coordinate acquisition interface to acquire data representing three-dimensional coordinate data of a vehicle; an image acquisition interface to acquire image data representing a captured image of the vehicle; an image processor to process the image data to identify a surface defect of a surface of the vehicle represented by the image data; and a mapping engine to map the image data to the three-dimensional coordinate data to determine the location of the identified surface defect on the surface of the vehicle.
  2. An apparatus according to claim 1, further comprising a database manager to update a database for the vehicle represented by the image data with respective data relating surface regions of the vehicle and associated levels of identified defects.
  3. An apparatus according to either preceding claim, wherein an identified surface defect is indicative of at least one of: surface corrosion, impact damage, and a defect in the underlying structure of the vehicle.
  4. An apparatus according to any preceding claim, wherein the image processor identifies a surface defect in a captured image by reference to a library of undamaged vehicle images.
  5. An apparatus according to any one of the preceding claims, wherein the image processor comprises a trained classifier, which is trained to identify surface defects by reference to images of undamaged vehicles.
  6. An apparatus according to any one of the preceding claims, wherein the vehicle is an aircraft.
  7. A method of analysing the surface of a vehicle, comprising: providing data representing three-dimensional coordinate data of a vehicle; providing image data representing a captured image of the vehicle; processing the image data to identify a surface defect of a surface of the vehicle represented by the image data; and mapping the captured image data to the three-dimensional coordinate data to determine the location of the surface defect on the surface of the vehicle.
  8. A method according to claim 7, further comprising the step of updating a database for the vehicle represented by the image data with respective data relating surface regions of the class of vehicle and associated levels of identified defects.
  9. A method according to claim 7, wherein the identified surface defect is indicative of at least one of: surface corrosion, impact damage, and a defect in the underlying structure of the vehicle.
  10. A method according to claim 7 or claim 9, wherein the image is processed to identify a surface defect in a captured image by reference to a library of undamaged vehicle images.
  11. A method according to any one of claims 7 to 10, wherein the image is processed by a trained classifier, which is trained to identify surface defects by reference to images of undamaged vehicles.
  12. A method according to any one of claims 7 to 11, wherein the vehicle is an aircraft.
  13. A method of preparing to repair an aircraft, comprising: providing a model of the three-dimensional locations of surface features on the aircraft; providing images of the surface of the aircraft; locating surface features in the images and mapping the images to the three-dimensional model by comparing the locations of the surface features in the images to the locations of surface features in the three-dimensional model; identifying surface defects needing repair in the images; determining the location of the identified surface defects on the aircraft based on the above mapping of the images; and preparing repair resources for the aircraft based on the determined location.
  14. A method according to claim 13, wherein preparing repair resources comprises providing an appropriate section of a structure repair manual.
  15. A method according to claim 13 or claim 14, wherein preparing repair resources comprises readying repair parts and/or resources required for repairing the determined location of the surface defects.
GB1918824.2A 2019-12-19 2019-12-19 Analysing surfaces of vehicles Pending GB2590468A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1918824.2A GB2590468A (en) 2019-12-19 2019-12-19 Analysing surfaces of vehicles

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
GB1918824.2A GB2590468A (en) 2019-12-19 2019-12-19 Analysing surfaces of vehicles

Publications (2)

Publication Number Publication Date
GB201918824D0 GB201918824D0 (en) 2020-02-05
GB2590468A true GB2590468A (en) 2021-06-30

Family

ID=69322752

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1918824.2A Pending GB2590468A (en) 2019-12-19 2019-12-19 Analysing surfaces of vehicles

Country Status (1)

Country Link
GB (1) GB2590468A (en)

Families Citing this family (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111340796B (en) * 2020-03-10 2023-07-21 创新奇智(成都)科技有限公司 Defect detection method and device, electronic equipment and storage medium

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120300984A1 (en) * 2010-02-23 2012-11-29 Lee Dann Recording the location of a point of interest on an object
WO2017153912A1 (en) * 2016-03-10 2017-09-14 Wpweb S.R.L. Method for analysing an aircraft, corresponding system of analysis of an aircraft, and de-icing and/or anti- icing system
GB2552092A (en) * 2017-07-04 2018-01-10 Daimler Ag Inspection system and method for automatic visual inspection of a motor vehicle
US20190185186A1 (en) * 2017-12-19 2019-06-20 Panton, Inc. Image recognition for vehicle safety and damage inspection

Patent Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20120300984A1 (en) * 2010-02-23 2012-11-29 Lee Dann Recording the location of a point of interest on an object
WO2017153912A1 (en) * 2016-03-10 2017-09-14 Wpweb S.R.L. Method for analysing an aircraft, corresponding system of analysis of an aircraft, and de-icing and/or anti- icing system
GB2552092A (en) * 2017-07-04 2018-01-10 Daimler Ag Inspection system and method for automatic visual inspection of a motor vehicle
US20190185186A1 (en) * 2017-12-19 2019-06-20 Panton, Inc. Image recognition for vehicle safety and damage inspection

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Fischler, M. A.; Bolles, R. C.: "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography", Communications of the ACM, vol. 24, no. 6, 1981, pages 381-395, XP001149167, DOI: 10.1145/358669.358692

Also Published As

Publication number Publication date
GB201918824D0 (en) 2020-02-05
