GB2591445A - Image mapping to vehicle surfaces - Google Patents

Image mapping to vehicle surfaces

Info

Publication number
GB2591445A
GB2591445A
Authority
GB
United Kingdom
Prior art keywords
image
vehicle
image data
aircraft
captured
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
GB1918822.6A
Other versions
GB201918822D0 (en)
Inventor
Fu Qiang
Cornet Christophe
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Airbus Operations Ltd
Original Assignee
Airbus Operations Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Airbus Operations Ltd filed Critical Airbus Operations Ltd
Priority to GB1918822.6A priority Critical patent/GB2591445A/en
Publication of GB201918822D0 publication Critical patent/GB201918822D0/en
Publication of GB2591445A publication Critical patent/GB2591445A/en
Pending legal-status Critical Current

Classifications

    • G: PHYSICS
    • G06: COMPUTING; CALCULATING OR COUNTING
    • G06T: IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00: Image analysis
    • G06T 7/0002: Inspection of images, e.g. flaw detection
    • G06T 7/0004: Industrial image inspection
    • G06T 7/001: Industrial image inspection using an image reference approach
    • G06T 7/30: Determination of transform parameters for the alignment of images, i.e. image registration
    • G06T 7/33: Image registration using feature-based methods
    • G06T 7/344: Image registration using feature-based methods involving models
    • G06T 2207/00: Indexing scheme for image analysis or image enhancement
    • G06T 2207/10: Image acquisition modality
    • G06T 2207/10032: Satellite or aerial image; Remote sensing
    • G06T 2207/20: Special algorithmic details
    • G06T 2207/20084: Artificial neural networks [ANN]
    • G06T 2207/30: Subject of image; Context of image processing
    • G06T 2207/30108: Industrial image inspection
    • G06T 2207/30136: Metal

Abstract

An apparatus for mapping an image of a vehicle, e.g. an aircraft, to a surface region of the vehicle is disclosed. In one form, the apparatus comprises a coordinate acquisition interface to acquire three-dimensional coordinate data of a vehicle, the coordinate data comprising coordinates of surface features on surfaces of the vehicle, e.g. fasteners such as rivets, bolts or screws, or corresponding holes. The apparatus also comprises an image acquisition interface to acquire image data representing a captured image of the vehicle, an image processor to process the image data to identify one or more surface features in the captured image, and a mapping engine to match identified surface features of the captured image with surface features of the three-dimensional coordinate data to determine a surface region on the vehicle that is depicted by the captured image. The mapping engine may determine the pose of the image capture device, and the image data may comprise calibrated image data generated by a calibrated image capture device.

Description

IMAGE MAPPING TO VEHICLE SURFACES
TECHNICAL FIELD
[0001] The present invention relates to mapping surface features in a two-dimensional image to three-dimensional coordinate data comprising coordinates of surface features of a vehicle such as an aircraft.
BACKGROUND
[0002] Modern aircraft, particularly airliners, are routinely inspected when in service to assess whether repairs or maintenance are required. Typically, such inspections are performed by maintenance engineers who use a variety of methods including visual inspections. If the condition of an aircraft is deemed to warrant repair or maintenance, then the observations are typically recorded in a maintenance log for the respective aircraft, which may be stored in a maintenance database. Corresponding repairs or maintenance can then be commissioned and conducted as required.
SUMMARY
[0003] A first aspect of the present invention provides an apparatus for determining a surface region depicted in an image of a vehicle, comprising: a coordinate acquisition interface to acquire three-dimensional coordinate data of a vehicle, the coordinate data comprising coordinates of surface features on surfaces of the vehicle; an image acquisition interface to acquire image data representing a captured image of the vehicle; an image processor to process the image data to identify one or more surface features in the captured image; and a mapping engine to match identified surface features of the captured image with surface features of the three-dimensional coordinate data to determine a surface region on the vehicle that is depicted by the captured image. Such an apparatus can be used to efficiently identify features within an image of a vehicle and map them to three-dimensional coordinate data. By mapping the surface features in the image to the three-dimensional coordinate data, the maintenance crew can track, quantify, and analyse any changes in the surface features over time.
[0004] Optionally, the apparatus may identify and map surface features that are fasteners and/or fastener holes. The fasteners may be bolts, rivets, or screws. The precise location of every fastener in a vehicle is known and thus existing three-dimensional coordinate data used in the designing of the vehicle can be used. Using existing three-dimensional coordinate data minimises the requirements for additional modelling to map the captured image to the coordinate data. Moreover, fasteners are distinct features that can also be relatively easily identified during the processing of the image.
[0005] Optionally, the mapping engine determines a surface region on the vehicle that is depicted by the captured image, including by estimating from the image data a pose of the image capture device.
[0006] Optionally, the image data comprises calibrated image data generated by a calibrated image capture device. For example, the image capture device may comprise a location system, such as an on-board GPS receiver and/or an orientation sensor, so that the location at which each image is captured, and/or the orientation of the image capture device when the image was captured, relative to the position and orientation of the vehicle, may be recorded with each respective captured image. Additionally, or alternatively, the image capture device may be calibrated with respect to a known world co-ordinate system. The absolute position of the captured image relative to the coordinate system of the three-dimensional coordinate data may therefore be determined. Thus, mapping features identified in the captured image may require fewer computational resources.
[0007] Optionally, the apparatus may further comprise a rendering engine, to generate at least a portion of a three-dimensional model of the vehicle by rendering captured image data onto the three-dimensional model at the determined surface region.
[0008] Optionally, the rendering engine maps the captured image data from a two-dimensional representation to a three-dimensional representation for rendering the captured image data onto the three-dimensional model.
[0009] Rendering the image onto the three-dimensional model shows the present state of the surface of the vehicle to a person who is not in proximity to the vehicle. The image, and identified features, can also be stored in a database to allow comparison to other vehicles of the same type. Moreover, it becomes possible to record changes over time and identify trends in surface defects that may be derived from such images.
[0010] Optionally, the vehicle may be an aircraft.
[0011] A second aspect of the present invention provides a method of determining a surface region depicted in an image of a vehicle, comprising: providing three-dimensional coordinate data of a vehicle, the coordinate data comprising co-ordinates of surface features that are discernible on surfaces of the vehicle; providing image data representing a captured image of the vehicle; processing the image data to identify one or more surface features present in the captured image; and matching identified surface features of the captured image with surface features of the three-dimensional coordinate data to determine a surface region on the vehicle that is represented by the captured image.
[0012] Optionally, the method may identify and map surface features that are fasteners and/or fastener holes. The fasteners may be bolts, rivets, or screws. The precise location of every fastener in a vehicle is known, and thus existing three-dimensional coordinate data used in the designing of the vehicle can be used to map the fasteners located in the captured image data. Using existing three-dimensional coordinate data minimises the requirements for additional modelling to map the captured image to the coordinate data. Moreover, fasteners are distinct features that can also be easily identified during the processing of the image.
[0013] Optionally, the mapping determines a surface region on the vehicle that is depicted by the captured image, including by estimating from the image data a pose of the image capture device. The pose information provides a more accurate and efficient way to map the image data.
[0014] Optionally, the image data comprises calibrated image data generated by a calibrated image capture device. This is advantageous as the absolute position of the captured image relative to the coordinate system of the three-dimensional coordinate data is known. Thus, mapping features identified in the captured image requires fewer computational resources.
[0015] Optionally, the method may further comprise rendering to generate at least a portion of a three-dimensional model of the vehicle by rendering captured image data onto the three-dimensional model at the determined surface region. This is advantageous because it shows the present state of the surface of the vehicle to a person who is not in proximity to the vehicle. Moreover, it may be beneficial as it will be possible to record changes over time and identify trends in surface defects that may be present in the image.
[0016] Optionally, the rendering maps the captured image data from a two-dimensional representation to a three-dimensional representation for rendering the captured image data onto the three-dimensional model.
[0017] Rendering the image onto the three-dimensional model shows the present state of the surface of the vehicle to a person who is not in proximity to the vehicle. The image, and identified features, can also be stored in a database to allow comparison to other vehicles of the same type. Moreover, it may be beneficial as it will be possible to record changes over time and identify trends in surface defects that may be present in the image.
[0018] Optionally, the vehicle may be an aircraft.
[0019] A third aspect of the present invention provides a method of mapping an image of an aircraft onto a model of the aircraft, comprising: providing a model of the three-dimensional coordinates of fasteners and fastener holes used on a surface of the aircraft; providing image data representing a captured image of the aircraft; identifying the locations of fasteners and fastener holes in the image; comparing the identified locations of fasteners and fastener holes in the image to the three-dimensional model; and projecting the image from its pose position onto the surface of the three-dimensional model in the imaged area.
BRIEF DESCRIPTION OF THE DRAWINGS
[0020] Embodiments of the invention will now be described, by way of example only, with reference to the accompanying drawings, in which:
[0021] Figure 1 is a schematic view of an aircraft and a UAV for use in capturing images of the aircraft, according to an example;
[0022] Figure 2A is an illustrative view of a part of an aircraft when viewed from a first direction;
[0023] Figure 2B is an illustrative view of a part of an aircraft when viewed from a second direction;
[0024] Figure 2C is an illustrative view of a part of an aircraft when viewed from a third direction;
[0025] Figure 3 is an illustrative view of a part of a surface of an aircraft;
[0026] Figure 4A is a process flow diagram according to an example;
[0027] Figure 4B is a process flow diagram according to an example;
[0028] Figure 4C is three-dimensional coordinate data according to an example;
[0029] Figure 5 is a functional block diagram according to an example;
[0030] Figure 6 is a functional block diagram according to an alternative example;
[0031] Figure 7 is a functional block diagram according to another example;
[0032] Figure 8 is a flowchart according to an example;
[0033] Figure 9 is a flowchart according to another example;
[0034] Figure 10 is a flowchart according to another example; and
[0035] Figure 11 is a flow chart according to an example.
DETAILED DESCRIPTION
[0036] Between flights, when on the ground, aircraft are inspected for damage and in-service wear and tear by maintenance crews. Visual inspections may, for example, reveal surface corrosion or impact damage. Visual inspections can be time-consuming, and issues may be missed due to human error.
[0037] Certain examples described herein relate to the use of computer vision and machine learning techniques to identify and characterise faults with an aircraft in a more automated way. Images can be captured by maintenance crew using an imaging device, such as a camera. Cameras can even be mounted on unmanned robots or aerial vehicles, which may be programmed or controlled to capture images of an entire aircraft in a relatively short period of time. The captured images can then be analysed and used to identify issues reliably and thereby augment the capabilities of the maintenance crew.
[0038] Figure 1 illustrates an example of a scenario 100 where an unmanned aerial vehicle (UAV) is used to inspect an aircraft. In particular, a capture device 102 mounted on the UAV 104 captures image data in the direction 106 of aircraft 108. In this example, the image capture device 102 comprises one or more digital cameras capable of capturing images in the visible and/or IR wavelengths. Any other suitable kind of camera or imaging device may be used. While in the present example the capture device 102 is mounted on the UAV 104 to capture images, it will be appreciated that the image capture device 102 may instead be mounted on any other vehicle, device, or structure, or may be simply handheld. For example, the maintenance crew may use handheld image capture devices to capture image data of the interior and exterior of the aircraft during routine maintenance inspections. The interior may be inside the aircraft or within compartments or areas, such as landing gear bays, which are only accessible from outside the aircraft and thus can be inspected while the aircraft is stationary.
[0039] The UAV 104 and image capture device 102 are programmed or controlled to capture image data, 'images', of the surfaces of the aircraft 108 from a plurality of different angles, aspects and positions. The captured images are for use in the computer vision applications described hereafter. The image capture device 102 may capture a sequence of individual images or a video as it travels around the aircraft 108 during the inspection. The captured images or videos are recorded onto storage devices and may in addition be communicated or streamed to a maintenance engineer for live inspection.
[0040] Figures 2A, 2B, and 2C illustrate examples of image data captured from three different positions, denoted as 200, 202 and 206, relative to an aircraft. Figure 2A is an image captured from a first position by the image capture device 102 mounted on the UAV 104. Similarly, Figure 2B is an image captured from a second position and Figure 2C is an image captured from a third position. All images captured depict the same surface regions and features of the aircraft 108 when viewed from different positions. For example, the wing surface 208 is visible and is currently under inspection. In all three illustrative examples 200, 202, and 206 it can be seen that the captured image shows a surface defect 210 on wing surface 208.
[0041] The captured images illustrated in Figures 2A, 2B, and 2C also show different features on the wing surface 208. Such surface features may include, but are not limited to, trailing edges 212, seams 214, leading edges 216, and fasteners such as rivets and bolts (not pictured).
[0042] While the examples in Figures 2A, 2B, and 2C show a wing surface, other surfaces may be similarly captured from a variety of orientations and locations and analysed. For example, images of the internal surfaces of the aircraft may be captured and processed. Moreover, while in the present example the images are from the same aircraft 108, it will be appreciated that different images 200, 202, and 206 may be captured from one or more different aircraft, for example, so that comparisons may be made or trends established between different aircraft of the same make.
[0043] Figure 3 illustrates an example of another captured image 300 of a surface 302 of an aircraft, when viewed from a given position. The captured image 300 depicts surface features such as the fasteners 304, which may be rivets, bolts, screws, weld spots or other forms of structural fastener. Moreover, the image 300 shows a strut 310 and a surface defect 308 that may need to be repaired. As previously mentioned, surface defects may be, but are not limited to, surface corrosion, impact damage, and other forms of material wear. It may be beneficial to capture images of an aircraft in this manner for the purposes of monitoring and documenting the surface condition of the aircraft, and to identify common problems that arise. The images may be inspected by ground crews and/or compared to other existing images of the same aircraft. Beneficially, if there are changes to a surface area over time, then they will be recorded and directly comparable. Additionally, comparing similar images of the same surface areas across a type of aircraft can identify trends in surface damage and/or conditions.
[0044] Figure 4A illustrates a capture device. Example 400 shows a capture device 102 arranged to capture and store image data 404. The form of the image data 404 is dependent on the type of capture device 102. For example, if capture device 102 is a digital camera, the image data 404 may be captured by a charge-coupled device (CCD) or a complementary metal-oxide-semiconductor (CMOS) chip, and formatted and stored in an appropriate format, such as a .bmp or .jpg format, or the like.
[0045] Figure 4B illustrates multiple images 406, 408, 410 stored as image data 412. The image data contains images of surfaces of an aircraft or a number of aircraft. Images may depict the same surface of a single aircraft captured from different locations, or may comprise images of the aircraft taken at different times but from the same capture location.
[0046] In general, the captured image data 404, 406, 408, 410 represents a two-dimensional image of a three-dimensional scene or object (i.e. an aircraft). It may not be immediately evident to maintenance crew which part of an aircraft an image depicts. It is therefore perceived to be beneficial to provide means to correlate or map captured images to a respective location on the aircraft.
[0047] Figure 4C illustrates an example of three-dimensional coordinate data 414. The three-dimensional coordinate data 414 represents a structure 416 comprising the surface features that are present on the structure. The structure 416 may be part of an aircraft or vehicle. In this example, the surface features are fasteners 304 and fastener holes 312. The three-dimensional coordinate data 414 also comprises the known three-dimensional location information 418 of each of the fasteners 304 and fastener holes 312 within the structure. The three-dimensional location information 418 of the surface features of the aircraft is obtained or determined from, for example, the original aircraft designs, which may be encapsulated in CAD files, text files, or the like. Such files often capture a complex design such as for an aircraft in 'layers', and one layer may describe the locations in three dimensions of the fasteners. In this example, the three-dimensional location information 418 is defined using Cartesian coordinates (x, y, z) although other coordinate systems can be used. In some examples, the three-dimensional coordinate data may comprise only the co-ordinates of the fasteners. In other examples, certain other structural elements of the aircraft may be included, for example, to provide context for later image processing. The degree or level of inclusion of features in the three-dimensional coordinate data may be varied according to need, for example, depending on which features are used, according to examples, to identify locations on the surfaces of the aircraft. Features may include any visible features such as fasteners, fastener holes, seams, edges, windows, struts, cables, cable conduits, or any other discernible surface feature. Features may include normally-obscured features, which may be revealed and become discernible by removing a panel or service hatch.
[0048] Figure 5 illustrates an example of an apparatus 500 for mapping captured image data 504 to pre-generated, three-dimensional coordinate data of a respective aircraft.
Beneficially, surface features such as fasteners are used as locators, or points of reference, for mapping an image to the three-dimensional coordinate data.
[0049] The apparatus 500 comprises an image acquisition interface 506 and a coordinate acquisition interface 508. The image acquisition interface 506 is arranged to obtain image data 504, for example, from an image capture device 402 or 102. Image data 504, for example, represents an image of a surface or surfaces of an aircraft. The image acquisition interface 506 may comprise a wired interface, such as a USB port, or a wireless interface. The image capture device according to the present example is directly coupled to the apparatus 500 and transfers captured image data 504 to the apparatus 500 through the image acquisition interface 506. In other examples, the image acquisition interface 506 may receive previously captured image data 504 from an external storage device, and, as such, is arranged to read and transfer such data.
[0050] The coordinate acquisition interface 508 is arranged to obtain the three-dimensional coordinate data 510, for example, from a data store. As with the image acquisition interface 506, the coordinate acquisition interface 508 may comprise a wired or wireless interface. In some examples, the coordinate data 510 including respective 3D co-ordinates of each fastener may reside in a text file or a CSV file. For instance, each row in such a text or CSV file may provide details of a fastener and its respective location.
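By way of illustration only, a minimal sketch of reading such a text or CSV file into a lookup table follows; the file name and the column names ('id', 'x', 'y', 'z') are assumptions made for the example rather than part of the disclosure:

```python
import csv

def load_fastener_coordinates(path):
    """Parse a CSV of fastener records into {fastener_id: (x, y, z)}.

    Assumes one row per fastener with columns 'id', 'x', 'y' and 'z';
    a real data file may use different column names, units or layout.
    """
    coordinates = {}
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            coordinates[row["id"]] = (float(row["x"]),
                                      float(row["y"]),
                                      float(row["z"]))
    return coordinates

# e.g. fasteners = load_fastener_coordinates("wing_fasteners.csv")
```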
[0051] The apparatus 500 also comprises an image processor 512 and a mapping engine 514. The image processor 512 is configured to receive the image data 504 from the image acquisition interface 506 and output processed image data 516. The mapping engine 514 is arranged to receive the three-dimensional coordinate data 510 and processed image data 516 and output an image data surface location 518. The image data surface location 518 is the data that provides the information relating to the position on the aircraft that the image data 504 represents. For example, if image data relating to the image in Figure 2A is used, the apparatus 500 outputs the location of the image as the respective wing surface.
[0052] Figure 6 illustrates an example of an apparatus 600 for locating surface defects present in captured image data 604 according to a second example. For brevity, components with similar functions to those described in relation to Figure 5 above are labelled with the same reference numerals but increased by 100. For example, image acquisition interfaces 506 and 606 perform the same function in the respective systems.
[0053] Apparatus 600 comprises an image acquisition interface 606 and coordinate acquisition interface 608, respectively, to receive image data 604 and three-dimensional coordinate data 610.
[0054] Additionally, the apparatus 600 comprises an image processor 612 and a mapping engine 614. The image processor 612 is configured to receive image data 604 from the image acquisition interface 606 and output the surface defects 620 and surface features present in the image data 604. Mapping engine 614 is arranged to receive three-dimensional coordinate data 610, processed image data 620, 605 and captured image data 604 to output locations of the image data surface defects 618. The surface defect location is the data that provides the information relating to the position on the aircraft of the surface defects present in the image data 604. For example, if image data containing the surface defect 210 corresponding to the image in Figure 2A is used, the apparatus 600 will output the location of the surface defect.
[0055] Figure 7 illustrates an example of an apparatus 700 for locating surface defects present in multiple image data 704, captured from multiple instances of a type or class of aircraft, and associating the surface defect with a surface region across a type or class of aircraft, according to a further example. For brevity, components with similar functions to those described in relation to Figure 5 and Figure 6 above are labelled with the same reference numerals but increased by 200 and 100 respectively.
[0056] Apparatus 700 also comprises a correlation engine 724. The correlation engine 724 correlates a surface identified in the mapping engine with the surface defects 720 that have been identified by the image processor 712. If, for example, the captured images are of the same location on multiple different aircraft of a particular type, and all images (or at least a statistically significant number or threshold, for example >10%, 20% or 50%, of the images) contain a similar surface defect 210, then the apparatus 700 will correlate the surface defect 210 with the respective location on that particular type of aircraft and, for example, highlight the respective location as being prone to surface defects. In some examples, the likelihood of corrosion or similar may be given a statistical likelihood and associated with all or at least some areas of a type of aircraft by using the present approaches. For example, if the surface defect is corrosion of a particular surface across multiple instances of a type of aircraft, then the apparatus 700 is arranged to highlight this information so that aircraft designers can investigate and perhaps modify a design or a maintenance schedule to address or at least reduce future occurrences of that particular type of surface defect. For example, if it is apparent that certain aircraft operating in relatively humid or high-salinity environments experience corrosion in certain surface areas, those areas, at least for aircraft that may operate in those environments in future, may be given additional anti-corrosion treatment before being entered into service.
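A minimal sketch of such a correlation follows, assuming inspection results have already been reduced to a mapping from aircraft to defective surface regions; the data structure, tail numbers and region names are illustrative, and the default threshold matches the >10% figure mentioned above:

```python
from collections import defaultdict

def flag_defect_prone_regions(inspections, threshold=0.10):
    """Flag surface regions where the fraction of inspected aircraft
    showing a defect there exceeds `threshold`."""
    counts = defaultdict(int)
    for regions in inspections.values():  # one entry per aircraft
        for region in regions:
            counts[region] += 1
    total = len(inspections)
    return {region: n / total for region, n in counts.items()
            if n / total > threshold}

# Hypothetical fleet data: tail number -> regions with identified defects.
fleet = {"G-AAAA": {"wing-panel-12"},
         "G-BBBB": {"wing-panel-12", "door-3"},
         "G-CCCC": set()}
print(flag_defect_prone_regions(fleet))  # wing-panel-12 flagged at 2/3
```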
[0057] Figure 8 illustrates a method 800 of mapping an image of a vehicle to a surface region of the vehicle, for example, using the apparatus of Figure 5. At block 802 the method acquires captured image data that represents an image of an aircraft. At block 804 the captured image data is processed to identify the locations of surface features depicted in the image data using feature identification, as will be described. At block 806 three-dimensional coordinate data is acquired, comprising the three-dimensional coordinates of surface features on the surface of the aircraft. At block 808 identified surface features of the captured image are matched with surface features of the three-dimensional coordinate data to determine a surface region on the vehicle that is represented by the captured image.
[0058] An example applying the process of Figure 8, using the apparatus of Figure 5, to the captured image of Figure 3, will now be described.
[0059] First, captured image data 504 is acquired by the image acquisition interface 506. The captured image data 504 is then processed by the image processor 512 to identify surface features 516 within the image. The image processor 512 locates the fasteners 304 and fastener holes 312 present in the image. Next, the coordinate acquisition interface 508 acquires and provides to the mapping engine 514 the three-dimensional coordinate data 510. In this example, the coordinate data comprises the coordinates of all fasteners and fastener holes on all surfaces of the aircraft. The coordinate data may be divided into sections, for example: the front, rear, left and right of the aircraft. The image data 504 may contain location metadata information that indicates to the coordinate acquisition interface 508 which section of the three-dimensional coordinate data 510 to use, in order to reduce processing time.
[0060] The mapping engine 514 receives the processed captured image 516, which contains the location information of the surface features present in the image data 504, and the three-dimensional coordinate data 510. Using a 2D-to-3D mapping function, the mapping engine 514 maps the identified surface features of the processed captured image data 516 to the corresponding features in the three-dimensional coordinate data 510.
[0061] In another example, the image processor 512 locates other surface features, such as the surface features 212, 214, or 216 identified in Figures 2A-2C. For example, the leading wing edge 216 may be mapped to the corresponding feature in the three-dimensional coordinate data.
[0062] Figure 9 is a flow chart of a method 900 for mapping an image of an aircraft to a surface region of the aircraft and locating a surface defect according to an example. At block 902 captured image data that represents an image of an aircraft is acquired. At block 904 the captured image data is processed to identify the locations of surface features and surface defects depicted in the image data using a feature identification method, as will be described. At block 906 three-dimensional coordinate data that comprises the coordinates of surface features on the surface of the aircraft is acquired. Finally, at block 908 identified surface features of the captured image are mapped with surface features of the three-dimensional coordinate data to determine the location on the surface of the vehicle of the surface defect.
[0063] An example applying the process of Figure 9, using the apparatus of Figure 6, to the captured image of Figure 3, will now be described. First, captured image data 604 is acquired by the image acquisition interface 606. The captured image data 604 is then processed by the image processor 612 to identify surface features within the image. In this example, the processor locates the fasteners 304 and fastener holes 312, and the surface defect 308. However, it will be appreciated that the surface features may also be those described in relation to Figure 2A. The coordinate acquisition interface 608 acquires and provides to the mapping engine 614 the three-dimensional coordinate data 610. In this example, the coordinate data comprises the coordinates of the fasteners and fastener holes on the surfaces of the whole aircraft. As before, the coordinate data may be divided into sections, such that the three-dimensional coordinate data only comprises those surface features present on the exterior or interior of the aircraft; the front or rear of the aircraft; or the left or right of the aircraft. The image data 604 may contain location metadata information that indicates to the coordinate acquisition interface 608 which section of the three-dimensional coordinate data 610 to use.
[0064] The mapping engine 614 receives the processed captured image 620, 605, which contains the location information of the surface features 605 present in the image data 604, and the three-dimensional coordinate data 610. Using a 2D-to-3D mapping function, the mapping engine 614 maps the identified surface features of the processed captured image data 620, 605 to the corresponding features in the three-dimensional coordinate data 610. Finally, the mapping engine 614 provides the location of the surface defect 210 that it has identified in the captured image data 604. The surface defect location may be used to identify a location in a structure repair manual to assist the maintenance crew during the repairs. If the manual is in an electronic form, such as a document file opened within a document reader application, the application may include an application programming interface (API), which receives the location information and opens the manual at the appropriate page corresponding to the location of the surface defect. Moreover, the location may be communicated to a maintenance repair organisation (MRO) before the aircraft is brought in for such repairs. The surface defect location and information may also be used to ready parts needed for repairs. This may include ordering parts in advance of the scheduled repair, and, as such, may reduce the time that the maintenance crew has the aircraft in for repairs.
[0065] Figure 10 shows a flow chart of a method 1000 for mapping an image of an aircraft to a surface region of the aircraft according to an example. At block 1002 captured multiple image data 704 that represents more than one image of an aircraft is acquired. At block 1004 the captured multiple image data is processed to identify the locations of surface features and surface defects depicted in the multiple image data using a feature identification method. At block 1006 a three-dimensional representation of the vehicle is acquired. At block 1008 captured multiple image data 704 is matched to the three-dimensional representation of the vehicle. In some examples, the three-dimensional representation comprises coordinates of surface features, such as fasteners, on the surface of the aircraft, and matching can then be performed in a similar fashion to Figure 9. Other ways of matching may be used instead. At block 1010 the correlation engine 724 associates a surface region of the type of aircraft, determined from captured multiple image data of multiple instances of the type of aircraft and any respective identified defects, with a level of identified defects.
[0066] Figures 8-10 show flow charts of methods that use the mapping engines 514, 614, and 714. The methods that a mapping engine may apply to map the features present in a captured image to the three-dimensional coordinate data will now be briefly described. In some examples, the mapping engines determine a surface region on the vehicle that is depicted by the captured image, including by estimating from the image data a three-dimensional pose of the camera relative to the three-dimensional coordinate data. For example, the two-dimensional features may be mapped to three-dimensional features using a perspective-n-point approach, which estimates the pose of a calibrated camera given a set of n 3D points in the world and their corresponding 2D projections in the image. The variable, n, may be as low as three or four (identified points), with increasing confidence of pose estimation being attainable with a greater number of identified points, such as five, six or more. The perspective-n-point method is described, for example, in: Fischler, M. A.; Bolles, R. C. (1981). "Random Sample Consensus: A Paradigm for Model Fitting with Applications to Image Analysis and Automated Cartography". Communications of the ACM. 24(6): 381-395.
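As an illustration of the perspective-n-point approach, the sketch below uses OpenCV's RANSAC-based solver (in the spirit of Fischler and Bolles) to recover a camera pose from six matched fastener points; all coordinates and camera intrinsics are invented for the example, not taken from the disclosure:

```python
import cv2
import numpy as np

# 3D fastener coordinates (aircraft frame, metres) from the coordinate data...
object_points = np.array([[1.20, 0.35, 0.00], [1.45, 0.35, 0.02],
                          [1.70, 0.35, 0.05], [1.20, 0.60, 0.00],
                          [1.45, 0.60, 0.02], [1.70, 0.60, 0.05]])
# ...and their 2D pixel locations identified in the captured image.
image_points = np.array([[412.0, 310.0], [598.0, 305.0], [781.0, 298.0],
                         [418.0, 502.0], [604.0, 498.0], [788.0, 490.0]])

# Intrinsics of the (assumed calibrated) image capture device.
camera_matrix = np.array([[1000.0, 0.0, 640.0],
                          [0.0, 1000.0, 480.0],
                          [0.0, 0.0, 1.0]])
dist_coeffs = np.zeros(5)  # lens distortion assumed already corrected

# RANSAC-based PnP tolerates a few mismatched fastener correspondences.
ok, rvec, tvec, inliers = cv2.solvePnPRansac(
    object_points, image_points, camera_matrix, dist_coeffs)
if ok:
    # Reproject the 3D points: small residuals indicate a good pose fit.
    reproj, _ = cv2.projectPoints(object_points, rvec, tvec,
                                  camera_matrix, dist_coeffs)
    err = np.linalg.norm(reproj.reshape(-1, 2) - image_points, axis=1)
    print("rotation:", rvec.ravel(), "translation:", tvec.ravel())
    print("mean reprojection error (px):", err.mean())
```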
[0067] Another approach the mapping engines may use is to deploy calibrated image capture devices, calibrated to the coordinate system of the three-dimensional coordinate data. In this case, the mapping of the locations of the surface features from the two-dimensional image data to the three-dimensional coordinate data reduces to a transformation matrix, and is more reliable than approaches using uncalibrated image capture devices.
[0068] According to an example, the image processors of Figures 5 to 7 identify surface defects in the captured image data by reference to a library of images including surface features and using an image comparison algorithm. In one example, the image processor is trained with images of defect-free aircraft, for example, in various lighting conditions, states of cleanliness and/or normal surface deterioration such as faded paintwork. This may be an unsupervised training procedure, which trains the image processor to identify an image that looks 'out of the ordinary', for example as containing a surface defect, when the captured image is sufficiently different from a defect-free image. It may not matter if this approach leads to false positives, because images that are flagged as having potential surface defects will all be reviewed by a member of the maintenance crew. Matching reliability may be improved over time as more images including surface defects are captured, classified, and used for additional training. In other examples, an image processor may comprise two classifiers, a first trained to identify a normal surface (and thereby to spot a surface defect when an image sufficiently differs from the norm) and another trained to identify surface defects. Both classifiers can be retrained or enhanced as more images are captured. In any event, the image processor is arranged, in particular, to identify surface damage, which can then be mapped to the respective location or region on the respective aircraft.
[0069] With respect to surface defects, each kind of defect may have a certain distinctive signature that can be identified. For example, corrosion may be identifiable by colour variation or variations in surface roughness compared to the surrounding area. While certain kinds of corrosion, for example that associated with steel, may be a deep red colour and relatively easy to spot, other indicators, such as an uneven or rough painted surface (e.g. associated with non-ferrous surfaces such as aluminium) or a change in the colour of paint, may also be used as an indicator of corrosion of an underlying surface. The image processor may be trained to identify the changes in colour and flag this as a surface defect such as corrosion. Another example method of identifying corrosion may be to determine the classification based on the pixel pattern of a greyscale image to detect changes in the pattern. Thus, when identifying corrosion on non-ferrous surfaces, the identification may be based on a change in texture. Other kinds of surface defects may be identifiable in other ways. For example, impact damage may be identifiable by regions of high contrast (e.g. dark or light regions on what should be a flat surface) relative to the surroundings.
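Purely as an illustration of texture-based detection on painted surfaces, local greyscale variance can serve as a crude roughness score; the window size and threshold below are assumptions that would need tuning against real imagery, not values from the disclosure:

```python
import cv2
import numpy as np

def rough_patch_mask(image_bgr, window=15, threshold=12.0):
    """Return a boolean mask of patches whose local greyscale standard
    deviation is unusually high, a possible sign of an uneven or rough
    painted surface. Uses var(x) = E[x^2] - E[x]^2 over a box window."""
    grey = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY).astype(np.float32)
    mean = cv2.blur(grey, (window, window))
    mean_of_sq = cv2.blur(grey * grey, (window, window))
    std = np.sqrt(np.maximum(mean_of_sq - mean * mean, 0.0))
    return std > threshold
```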
[0070] Figure 11 is a flow chart of a method 1100 that image processors 512, 612, or 712 may use to process the images and identify the surface features, such as fasteners, and surface defects. At block 1102 the image is pre-processed to prepare it for feature recognition. Pre-processing is used to highlight or isolate various shapes or patterns of the digital image. Pre-processing may take various forms. For example, the image may be digitally filtered to perform edge detection, which identifies areas in the image where the brightness changes sharply. Another example is to convert a colour image to greyscale for certain image processing steps. Other pre-processing methods, such as ridge detection, blob detection, corner detection, thresholding, spotting unexpected pixel patterns (or expected pixel patterns for known surface defects) or template matching, may additionally or optionally be used in the pre-processing step 1102. In any event, according to some examples, step 1102 produces a pre-processed image for both spotting the fasteners and for spotting surface defects. In other examples, multiple pre-processed images may be produced, where one may be more suitable for detecting fasteners and another may be more suitable for detecting surface defects.
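A minimal OpenCV sketch of the pre-processing steps described above (greyscale conversion, edge detection and thresholding); the input file name is hypothetical and the Canny thresholds are illustrative:

```python
import cv2

image = cv2.imread("wing_panel.jpg")             # hypothetical input image
grey = cv2.cvtColor(image, cv2.COLOR_BGR2GRAY)   # colour -> greyscale
smooth = cv2.GaussianBlur(grey, (5, 5), 0)       # suppress sensor noise
edges = cv2.Canny(smooth, 50, 150)               # sharp brightness changes
_, binary = cv2.threshold(smooth, 0, 255,        # Otsu thresholding
                          cv2.THRESH_BINARY + cv2.THRESH_OTSU)
```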
[0071] At block 1104 the fastener locations are identified using the pre-processed image(s). By identifying the locations of the features, the image data is reduced to a set of coordinates, for example pixel coordinates. In some examples, a shape formed by the fasteners may be determined. For example, the shape may be an "X" shape (or any other particular shape), which may be advantageous, as will be described.
[0072] In the example of block 1104, the fasteners in the pre-processed image are identified by a classifier, which has undergone supervised training using images of types of fasteners and other surface features. In one example, the identified fasteners are used by the mapping engines 514, 614, and 714 to determine the location on the aircraft that the image represents.
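The disclosure describes a trained classifier for this step; purely as a classical stand-in for illustration, roughly circular rivet or bolt heads could also be located with a circle Hough transform, as in the sketch below (the radii and vote thresholds are assumptions):

```python
import cv2

def find_fastener_candidates(grey):
    """Return candidate fastener centres as (x, y) pixel coordinates,
    found as small circles in an 8-bit pre-processed greyscale image.
    A stand-in for the supervised classifier described in the text."""
    circles = cv2.HoughCircles(grey, cv2.HOUGH_GRADIENT, dp=1.2,
                               minDist=20, param1=120, param2=30,
                               minRadius=3, maxRadius=15)
    if circles is None:
        return []
    return [(int(x), int(y)) for x, y, _radius in circles[0]]
```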
[0073] Furthermore, block 1104 requires digital definitions of the shapes that describe the pattern of fasteners so that the matching stage can compare the shape with the three-dimensional coordinate data. In a simple example, the image may be represented as a simple bitmap on which the shape of the fasteners is identified or plotted, so that the shape can be identified in the three-dimensional coordinate data.
[0074] In the example of block 1106, which may occur in parallel to block 1104, surface defects in the image are identified using a classifier that has undergone unsupervised training, as has been previously described. In one example, the surface defect identification may follow the known Mask R-CNN method for object instance segmentation. Namely, the image may be classified, i.e. determining that there are surface defects in the image. Then objects are localised within the image. Following that, semantic segmentation may be performed, in which every pixel is classified. In this example, the pixels that correspond to surface damage may be identified. Finally, object outlines at the pixel level are identified (instance segmentation). That is, bounding boxes and segmentation masks for each instance of an object in the image are generated. In another example, another filter stage may be used to identify colour or pattern variations in the image, which can be used for spotting corrosion. Such a filter may apply a weighting so that areas in the image that have red/brown hues are highlighted. In these and in other examples, various other image processing techniques may be applied in addition, or instead, to identify surface features and/or surface defects.
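The red/brown weighting filter mentioned above might be sketched, in simplified binary-mask form rather than a graded weighting, using an HSV range test; in OpenCV the hue channel runs 0-179, so red wraps around both ends of the range, and the bounds below are illustrative assumptions:

```python
import cv2

def red_brown_mask(image_bgr):
    """Highlight red/brown hues as a crude indicator of ferrous
    corrosion. HSV bounds are assumptions, not tuned values."""
    hsv = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2HSV)
    lower = cv2.inRange(hsv, (0, 60, 40), (12, 255, 200))     # red-orange
    upper = cv2.inRange(hsv, (170, 60, 40), (179, 255, 200))  # wrapped red
    return cv2.bitwise_or(lower, upper)
```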
[0075] As has been described, corrosion can take many different forms, shapes and textures. As such, to identify the presence of corrosion in an image, classifiers may be used in step 1106. In some examples, the classifier may comprise a neural network. In other examples, the classifier implements at least one of a random forest algorithm, a naive Bayes classifier, a support vector machine, a linear regression machine learning algorithm, or any other algorithm or classifier which is suitable for the function described herein. For example, a supervised learning algorithm may be used to analyse the training data (comprising input values and respective output values) to infer a reasonable, learned function which maps the inputs to the outputs. The learned function may be represented in a neural network comprising an input layer, an output layer, and at least one hidden layer, wherein nodes of the at least one hidden layer are associated with one or more weights. Training the neural network comprises using the input values in the input layer and the output values in the output layer to generate and/or to update the one or more weights. The learned function may then be tested on a subset of the training data which was not used to train the learned function. This may allow the system to be validated before being applied to test data.
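A minimal sketch of the hold-out validation described above, using one of the named classifier families (a random forest) on synthetic stand-in features; real inputs would be feature vectors extracted from labelled image patches, not the generated data used here:

```python
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for labelled image-patch features (1 = defect).
X, y = make_classification(n_samples=500, n_features=16, random_state=0)

# Hold out a subset of the training data that is never used for training.
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, random_state=0)

clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(X_train, y_train)

# Validate the learned function on the held-out subset.
print("validation accuracy:", accuracy_score(y_test, clf.predict(X_test)))
```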
[0076] Once the fastener locations and surface defects are identified, the method 1100 creates a digital definition at block 1108. The digital definition describes elementary low-level characteristics such as the shape, the colour, or the texture, among others. There may be more specific descriptors such as the name and location of objects in the image. This could be the name and location of the fasteners in an image, which in the example of Figure 3 would be the bolts 304 and bolt holes 312.
[0077] Mapping the identified features in the image to the three-dimensional coordinate data may be simplified by applying the assumption that the images are taken substantially perpendicular to the surface of the aircraft. This could be achieved by ensuring the UAV on which the image capture device 102 is mounted is controlled such that the images captured are perpendicular to the surface of the aircraft, thus greatly reducing the number of possible combinations of mappings between the image and the three-dimensional coordinate data. However, it may also be beneficial, or indeed only practical in areas with restricted room, to capture images at an angle relative to the surface to improve the identification of the fastener locations.
[0078] All the previously described methods 800, 900, 1000 may include an extra step of image rendering to generate at least a portion of a three-dimensional model of the aircraft by rendering captured image data onto the three-dimensional model at the determined surface region. Successive images may be rendered onto the same three-dimensional model to provide a "digital mock-up" (DMU) of the aircraft. There are many ways that this can be achieved. One way that may be implemented in the present examples is to map the captured image data from the two-dimensional representation to a three-dimensional representation for rendering the captured image data onto the three-dimensional model. Conformal mapping may also be used to render the two-dimensional image onto the DMU.
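One crude way to sketch this 2D-to-3D rendering step is to project each model vertex into the captured image using the recovered pose and sample a colour there; a production renderer would use proper texture mapping, so treat this as illustrative only:

```python
import cv2
import numpy as np

def colour_vertices_from_image(vertices, image_bgr, rvec, tvec,
                               camera_matrix, dist_coeffs):
    """Assign each 3D model vertex the colour of the image pixel it
    projects to, given the camera pose (rvec, tvec) recovered by the
    mapping engine. Vertices projecting outside the image stay black."""
    projected, _ = cv2.projectPoints(np.asarray(vertices, dtype=np.float64),
                                     rvec, tvec, camera_matrix, dist_coeffs)
    h, w = image_bgr.shape[:2]
    colours = np.zeros((len(vertices), 3), dtype=np.uint8)
    for i, (u, v) in enumerate(projected.reshape(-1, 2)):
        if 0 <= int(u) < w and 0 <= int(v) < h:
            colours[i] = image_bgr[int(v), int(u)]
    return colours
```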
[0079] It is to be noted that the term "or" as used herein is to be interpreted to mean "and/or", unless expressly stated otherwise. It is also noted that the term "aircraft" is used throughout, but other types of vehicle or structure may be used, such as cars, lorries, bridges, and ships.

Claims (16)

  1. An apparatus for determining a surface region depicted in an image of a vehicle, comprising: a coordinate acquisition interface to acquire three-dimensional coordinate data of a vehicle, the coordinate data comprising coordinates of surface features on surfaces of the vehicle; an image acquisition interface to acquire image data representing a captured image of the vehicle; an image processor to process the image data to identify one or more surface features in the captured image; and a mapping engine to match identified surface features of the captured image with surface features of the three-dimensional coordinate data to determine a surface region on the vehicle that is depicted by the captured image.
  2. An apparatus according to claim 1, wherein the surface features are fasteners and/or fastener holes.
  3. An apparatus according to claim 2, wherein the fasteners are at least one of: bolts, rivets, or screws.
  4. An apparatus according to any one of the preceding claims, wherein the mapping engine determines a surface region on the vehicle that is depicted by the captured image including by estimating from the image data a pose of an image capture device that captured the image data.
  5. An apparatus according to claim 4, wherein the image data comprises calibrated image data generated by a calibrated image capture device.
  6. An apparatus according to any one of the preceding claims, further comprising a rendering engine, to generate at least a portion of a three-dimensional model of the vehicle by rendering captured image data onto the three-dimensional model at the determined surface region.
  7. An apparatus according to claim 6, wherein the rendering engine maps the captured image data from a two-dimensional representation to a three-dimensional representation for rendering the captured image data onto the three-dimensional model.
  8. An apparatus according to any one of the preceding claims, wherein the vehicle is an aircraft.
  9. A method of determining a surface region depicted in an image of a vehicle, comprising: providing three-dimensional coordinate data of a vehicle, the coordinate data comprising co-ordinates of surface features on surfaces of the vehicle; providing image data representing a captured image of the vehicle; processing the image data to identify one or more surface features in the captured image; and matching identified surface features of the captured image with surface features of the three-dimensional coordinate data to determine a surface region on the vehicle that is depicted by the captured image.
  10. A method according to claim 9, wherein the surface features are fasteners and/or fastener holes.
  11. A method according to claim 10, wherein the fasteners are at least one of: bolts, rivets, or screws.
  12. A method according to any one of claims 9 to 11, wherein a mapping engine determines a surface region on the vehicle that is depicted by the captured image including by estimating from the image data a pose of an image capture device that captured the image data.
  13. A method according to claim 12, wherein the image data comprises calibrated image data generated by a calibrated image capture device.
  14. A method according to any one of claims 9 to 13, further comprising generating at least a portion of a three-dimensional model of the vehicle by rendering captured image data onto the three-dimensional model at the determined surface region.
  15. A method according to claim 14, further comprising mapping the captured image data from a two-dimensional representation to a three-dimensional representation for rendering the captured image data onto the three-dimensional model.
  16. A method of mapping an image of an aircraft onto a model of the aircraft, comprising: providing a model of the three-dimensional coordinates of fasteners and/or fastener holes on a surface of the aircraft; providing image data representing a captured image of the aircraft, the image captured by an imaging device having a pose relative to the aircraft; identifying the locations of fasteners and/or fastener holes in the image; comparing the identified locations of fasteners and/or fastener holes in the image to the fasteners and/or fastener holes of the three-dimensional model; and projecting the image from its pose position onto the surface of the three-dimensional model in the imaged area.
GB1918822.6A 2019-12-19 2019-12-19 Image mapping to vehicle surfaces Pending GB2591445A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
GB1918822.6A GB2591445A (en) 2019-12-19 2019-12-19 Image mapping to vehicle surfaces


Publications (2)

Publication Number Publication Date
GB201918822D0 GB201918822D0 (en) 2020-02-05
GB2591445A true GB2591445A (en) 2021-08-04

Family

ID=69322865

Family Applications (1)

Application Number Title Priority Date Filing Date
GB1918822.6A Pending GB2591445A (en) 2019-12-19 2019-12-19 Image mapping to vehicle surfaces

Country Status (1)

Country Link
GB (1) GB2591445A (en)

Citations (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2005059813A1 (en) * 2003-12-16 2005-06-30 Young-Chan Moon Method of scanning an image using surface coordinate values and device using thereof
US20180322698A1 (en) * 2016-08-22 2018-11-08 Pointivo, Inc. Methods and systems for wireframes of a structure or element of interest and wireframes generated therefrom
CN110570513A (en) * 2018-08-17 2019-12-13 阿里巴巴集团控股有限公司 method and device for displaying vehicle damage information
EP3627185A1 (en) * 2018-09-24 2020-03-25 Faro Technologies, Inc. Quality inspection system and method of operation


Also Published As

Publication number Publication date
GB201918822D0 (en) 2020-02-05
