WO2019053468A1 - Damage detection and repair system - Google Patents
- Publication number: WO2019053468A1
- Application number: PCT/GB2018/052644
- Authority
- WO
- WIPO (PCT)
- Prior art keywords
- damage
- depth
- repair
- image
- frame
- Prior art date: 2017-09-15
Classifications
- G—PHYSICS; G06—COMPUTING; CALCULATING OR COUNTING; G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis; G06T7/0002—Inspection of images, e.g. flaw detection; G06T7/0004—Industrial image inspection; G06T7/001—Industrial image inspection using an image reference approach
- G06T7/30—Determination of transform parameters for the alignment of images, i.e. image registration
- G06T2207/00—Indexing scheme for image analysis or image enhancement; G06T2207/10—Image acquisition modality; G06T2207/10024—Color image; G06T2207/10028—Range image; Depth image; 3D point clouds
- G06T2207/20—Special algorithmic details; G06T2207/20092—Interactive image processing based on input by user; G06T2207/20104—Interactive definition of region of interest [ROI]
- G06T2207/30—Subject of image; Context of image processing; G06T2207/30108—Industrial image inspection; G06T2207/30164—Workpiece; Machine component
Definitions
- the present disclosure relates to the field of damage detection, and embodiments of the disclosure include aspects relating to image processing, damage detection, configuration management, and repair systems.
- Maintenance of an object can occur either as part of a routine procedure or otherwise scheduled maintenance (planned maintenance) or due to unforeseen sudden damage to the object (unplanned maintenance).
- planned maintenance typically occurs when an aircraft returns to a hangar to undergo routine checks and/or repair.
- Types of damage that can be expected to be identified in a routine check of the aircraft can be general wear to the different parts of the craft (fuselage, wing, tail, tyres etc.), corrosion, scratches, dents etc. This list of damage types is by no means exhaustive. Although planned maintenance of vehicles is routinely scheduled, vehicles can equally require somewhat urgent attention if they become damaged.
- aspects of the present disclosure provide a system for identifying damage to an object and then generating data representative of a course of repair for the damage.
- the generated data is representative data to inform the course of a repair.
- This data is then sent to an operator who can repair the object based on the generated data.
- the object in question is a vehicle.
- the vehicle is an aircraft.
- damage detection and repair is usually performed by an operator at the site of the aircraft. An operator (who may, for example, be a trained engineer) assesses the damage and determines a course of repair to repair the damage so that the aircraft will be fit for flying.
- the engineer will diagnose the damage which can involve taking measurements (for example in the case of a dent) to assess the 'severity' of the damage.
- the damage could also be a missing part or component of a part in which case the diagnosis is simply that this component is missing.
- the operator can then determine a course of repair which may involve looking up a course of repair in a structural repair manual (SRM).
- the SRM can advise as to the specific parts, processes and procedures required to repair the damage.
- it may be clear that the aircraft has suffered a dent, but the actual extent (e.g. depth, width and height) of the dent needs to be determined so that it can be precisely repaired.
- the damaged area may need to be cut out and repaired with a structural repair doubler, for example, to repair the dent but the type and size of the repair will depend on the dimensions of the dent.
- the type of aircraft needs to be known so that the part can be ordered if necessary since specific types of parts are not interchangeable between different types of aircraft.
- This method necessarily involves the operator being at the aircraft for the length of time required to survey the damage, and often longer so that the operator can view the damage while assessing how to repair the aircraft.
- damage is scanned or photographed and assessed by, for example, a 3D sensor. A course of repair is then generated and the damage is repaired based on the course of repair.
- an image of the damage is taken on the basis of which the extent of the damage is assessed. Based on the assessment a database is interpreted to generate data representative of a course of repair which is then automatically documented and sent to a user for an approval. On the basis of the user approval the course of repair is then put into action to repair the damage.
- a method of detecting damage of an object and repairing the damage comprising: capturing an image of the damage; generating data representative of a course of repair for the damage based on the image of the damage and historical repair data of that type of object; and then repairing the object based on the generated data.
- Using historical repair data means that, if the object has suffered from that particular type of damage previously, the course of repair used to repair that damage at that time can be either replicated in full to repair the current damage, or otherwise used to generate the current course of repair. It may be for example that the earlier course of repair was sufficient to fully repair the damage, or alternatively it may be that it is determined that the damage has occurred too recently having regard to the last time this damage occurred. In the latter instance the same repair may not be used exactly but may be used to facilitate an improved course of repair so that the same type of damage will not occur so frequently.
- the step of generating data representative of a course of repair optionally comprises generating a damage assessment report which contains data representative of the location, type, extent (or any combination thereof) of the damage. Having the damage represented in a damage assessment report allows a user to assess the extent and type of the damage without actually being at or near the damaged object. This saves operator and repair skill and time since the assessment can be carried out remotely, as opposed to only at the object itself.
- the historical database can comprise SRM data, in particular all or part of an SRM manual and/or other existing repair data, for example approved repair data.
- the historical database can also be populated with previously generated one-off (bespoke) repairs to address specific damage that is not covered by the existing SRM. Continual updates of the historical database may also be performed so that up-to-date/recent repairs are also stored in the database.
- the damage assessment report can be a 2D image of the object via which the dimensions of the damage can be ascertained. An example of generating such a 2D image is also part of this disclosure.
- the step of generating data representative of a course of repair optionally comprises accessing a historical database containing historical repair data and performing a comparison between entries in the damage assessment report and entries in the historical database.
- the comparison optionally comprises defining a tolerance and comparing whether an entry in the damage assessment report matches an entry in the database within the tolerance, a match being reported if the entries are within the tolerance.
- the data representative of a course of repair may be equal to the entry in the database.
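For illustration, a minimal Python sketch of such a tolerance comparison is given below. The field names (`depth_mm`, `width_mm`, `repair`) and the tolerance value are assumptions made for the example only; the disclosure does not prescribe a particular schema.

```python
def find_match(report_entry, database_entries, tolerance=0.5):
    """Return the first database entry matching the report entry to within
    the tolerance (same units assumed for both), or None if nothing matches."""
    for entry in database_entries:
        if (abs(entry["depth_mm"] - report_entry["depth_mm"]) <= tolerance
                and abs(entry["width_mm"] - report_entry["width_mm"]) <= tolerance):
            return entry   # match reported: its course of repair can be reused
    return None

historical = [
    {"depth_mm": 2.1, "width_mm": 14.8, "repair": "install doubler type B"},
    {"depth_mm": 0.4, "width_mm": 3.0, "repair": "blend and repaint"},
]
match = find_match({"depth_mm": 2.3, "width_mm": 15.0}, historical)
print(match["repair"] if match else "no match within tolerance")
```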
- the historical repair data may comprise a full history of all damage suffered by and repairs performed on the object or similar comparable objects.
- the step of generating data representative of a course of repair can optionally further comprise accessing a repair database containing repair data for that object or similar comparable objects and performing a comparison between entries in the damage assessment report and entries in the repair database.
- the comparison may comprise defining a tolerance and comparing whether an entry in the damage assessment report matches an entry in the repair database within the tolerance, a match being reported if the entries are within the tolerance.
- Holding all information e.g. standard repair data and historical values, in either a single database or separate databases, facilitates sharing of such data with users that are part of a network.
- historical values of all types of repairs performed on and all types of courses of repair generated for each object that is part of a network may be stored in a database. This allows the course of repair to be generated not only based on historical data of repairs and/or courses of repairs in relation to that specific object; but also other objects of the same or similar type.
- Repairing the object may comprise generating a repair report comprising an augmented reality view of the object and the data representative of the course of repair. This advantageously allows the end-engineer, to whom the course of repair may be sent, to conveniently view, for example, any parts that the course of repair suggests would repair the damage. This, in turn, reduces the amount of time the engineer spends reviewing the damage and the course of repair.
- a method of producing a 2D image such that dimensions of an object within the image can be determined from the image. This method can be used to determine the extent of the damage to the object since the dimensions of the damage can be determined and used to populate a damage assessment report.
- When used to determine the extent of damage to an object, this method has the advantage that anyone (e.g. a layperson) can photograph the object and send the photograph to another user (who can also be a layperson) who can determine the dimensions of the damage.
- The damage assessment report can therefore be generated remotely, since it can be generated based on the 2D image.
- An engineer need only then be involved at the end of the method - where they need to approve the course of repair that has been generated so that the damage is actually repaired - and therefore this saves the amount of skilled time required to assess and repair any damage.
- a method of producing a processed image of an object comprising the steps of: capturing a first depth frame around the object and taking an image of the object; transforming the first depth frame into a 3D point cloud, the 3D point cloud including a set of data points; transforming the 3D point cloud so that it is substantially aligned with the image; converting the transformed 3D point cloud to fit a pixel grid; smoothing the converted 3D point cloud using all available depth values and aligning the result with the image to produce a processed image of the object.
- the processed image of the object is displayed, printed, or otherwise outputted to visualize the object and any potential damage.
- the image is preferably a colour image.
- the image may be a colour photograph.
- Use of a colour image is advantageous as it means that a colour component can be added to the point cloud values. This makes it possible to visualise the 3D point cloud in colour (the components being the RGB values of the colour photograph pixel corresponding to the transformed point cloud vertex).
- Capturing the first depth frame may comprise defining a first shape around the object, the first depth frame being captured so that the object is inside the first shape.
- the first shape may be any suitable shape, e.g. rectangular, circular, triangular etc. However, in a preferred embodiment it is a rectangle.
- a preferred first depth frame is a 640x480 array of values. However, this is exemplary only and the depth frame may be an array of values of any size.
- Transforming the depth frame into a 3D point cloud may comprise defining 3D coordinates of the point cloud in terms of the coordinates of the depth frame, and the intrinsic values of the camera that captured the depth frame and the image of the object. The intrinsic values may be the optical centres and the focal lengths of the camera.
- the intrinsic values that may be used may be any intrinsic value of the camera and are not limited to only a combination of the optical centres and focal lengths.
- the coordinates of the point cloud may be defined using any relationship of the original coordinates of the depth frame and any intrinsic values of the camera. For example, the relationship may be linear.
- Transforming the 3D point cloud to align it with the image may comprise multiplying each vector in the 3D point cloud with a transformation matrix, wherein the transformation matrix is a function of the position and orientation (POSE) matrices of the camera that captured the depth frame and the image of the object.
- the transformation matrix may be a function of the inverse of one or more of the POSE matrices of the camera.
- the step of smoothing the pixel grid optionally comprises defining smoothed coordinates as a function of the coordinates of the 3D point cloud aligned with the image.
- the method optionally further comprises the step of defining a second shape around the object, and the method optionally comprises the step of capturing a second depth frame within the second shape.
- the method optionally further comprises the step of aligning the second depth frame with the first depth frame.
- the second shape contains the first shape.
- the first shape contains the second shape.
- portion(s) of the first shape can overlap with portion(s) of the second shape.
- Aligning the second depth frame with the first depth frame optionally comprises multiplying each vector in the second depth frame with a transformation matrix, which is a function of the POSE matrices of the camera that captured the depth frame and the image of the object.
- the transformation matrix may be a function of the inverse of one or more of the POSE matrices of the camera.
- Aligning the second depth frame with the first depth frame optionally comprises rounding each point to the nearest coordinate in the first depth frame.
- the method optionally comprises capturing a subsequent depth frame, or subsequent depth frames.
- the method can further comprise the step of aligning the subsequent depth frame(s) with the first depth frame.
- the alignment of the subsequent depth frame(s) with the first depth frame can comprise multiplying each vector in the subsequent depth frame(s) with a transformation matrix which is a function of one or more of the POSE matrices of the camera. This alignment can optionally comprise rounding each point to the nearest coordinate in the first depth frame.
- the method can further comprise the step of smoothing the frame resulting from the alignment of the first and second and/or the first and subsequent depth frames.
- twenty depth frames can be captured. It will be clear to a skilled person that more, or fewer, depth frames could be used however.
- subsequent frames are captured at different angles.
- different frames are captured having a 3D angle of at least 15 degrees between them.
- capturing additional frames from different angles can reduce the number of frames required to be captured. For example, in some instances 9 depth frames can be captured at different angles. It will be clear to a skilled person that more, or fewer, depth frames could be used however.
- aligning a first frame with any subsequent frame includes any of the steps referred to herein and additionally or alternatively includes: utilizing the camera POSE matrices for gross angle correction, translating the point clouds so that the points nearest the centres of the frames are aligned or are on top of each other, and applying an ICP (iterative closest point) algorithm to ensure that the point clouds overlap as closely as possible.
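A condensed Python sketch of this alignment pipeline is shown below, assuming numpy and scipy are available, that the POSE matrices are 4x4 homogeneous transforms, and that "nearest the centre of the frame" means nearest the optical axis in camera coordinates. The hand-rolled ICP loop is illustrative only, not the patent's implementation.

```python
import numpy as np
from scipy.spatial import cKDTree

def align_frames(src, dst, pose_src, pose_dst, iterations=20):
    """Align a source point cloud (Nx3) to a destination point cloud (Mx3)."""
    # 1. Gross angle correction: map the source into the destination camera frame.
    T = np.linalg.inv(pose_dst) @ pose_src                    # 4x4 POSE matrices
    src = (np.c_[src, np.ones(len(src))] @ T.T)[:, :3]
    # 2. Translate so the points nearest each frame centre lie on top of each other.
    centre = lambda pts: pts[np.argmin(np.linalg.norm(pts[:, :2], axis=1))]
    src = src + (centre(dst) - centre(src))
    # 3. Basic ICP: pair nearest neighbours, then solve the best rigid transform.
    tree = cKDTree(dst)
    for _ in range(iterations):
        matched = dst[tree.query(src)[1]]                     # nearest neighbours
        cs, cm = src.mean(axis=0), matched.mean(axis=0)
        U, _, Vt = np.linalg.svd((src - cs).T @ (matched - cm))
        if np.linalg.det(Vt.T @ U.T) < 0:                     # avoid reflections
            Vt[-1] *= -1
        R = Vt.T @ U.T
        src = (src - cs) @ R.T + cm                           # apply the transform
    return src
```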
- the second shape can be substantially the first shape or exactly the first shape.
- the method may further comprise the step of smoothing the frame resulting from the alignment of the first and second depth frames.
- the smoothing step may comprise defining smoothed coordinates as a function of the coordinates of the frame resulting from the alignment of the first and second depth frames.
- the method optionally further comprises aligning the smoothed frame with the image.
- the step of smoothing the frame resulting from the alignment of the first and second depth frames may further comprise, for each pixel in the frame to be smoothed, calculating a mean value of the depth of the pixel and the depth of a set number of surrounding pixels, discarding all depth values that are not within one standard deviation from the mean, and calculating a mean depth for all depth values not discarded, and assigning that mean depth to the pixel as the smoothed value of that pixel.
- Aligning the smoothed frame with the image may comprise multiplying each vector in the smoothed frame with a transformation matrix, which is a function of the POSE matrices of the camera that captured the depth frame and the image of the object.
- the transformation matrix may be a function of the inverse of one or more of the POSE matrices of the camera.
- the above method of producing a processed image of an object can be used in the above method of detecting damage of an object and repairing the damage.
- the captured image of the damage in the method of detecting damage of an object and repairing the damage can be a processed image, processed according to the above method of producing a processed image of an object. Accordingly, there is provided a method according to any one of the appended claims 25-36, wherein the image of the damage is a processed image processed according to any one of appended claims 1-24.
- a system for damage detection and repair of an object comprising: an image capture device for capturing an image of an object; a historical database comprising historical data; and generating means for generating data representative of a course of repair of the object based on the image captured by the image capture device and the historical data in the historical database.
- the system optionally further comprises a standard repair database comprising data from a standard repair manual, and wherein the system is configured to access the standard repair database and to communicate with the generating means.
- the system optionally further comprises a computing device configured to access images captured by the image capture device and to display an image of the type of object captured.
- the system optionally further comprises an anonymiser configured to anonymise the captured image and/or stored information.
- Digital information captured during the above-mentioned process can be used to populate a "configuration management system" to provide a truly digital record for the history of any given object. In this way, this configuration management system is a digital record for any changes made to the object and can be accessed by any user having the required permissions.
- Fig. 1 is a flowchart of a method of detecting damage of an object and repairing the damage according to a first aspect of the present invention
- Fig. 2 is a flowchart of a method of generating data representative of a course of repair according to a first aspect of the present invention
- FIG. 3 is a flowchart of a method of generating data representative of a course of repair according to a second aspect of the present invention
- Fig. 4 is a flowchart of a method of producing a 2D image of an object from a photograph taken of that object;
- FIG. 5 is a schematic diagram of the method according to the first aspect of the present invention.
- Fig. 6 is a schematic diagram of a system for damage detection and repair of an object according to the first aspect of the present invention
- Fig. 7 is a schematic diagram of how the method according to Fig. 4 can be used to assess damage
- Fig. 8 is a flowchart of a method of smoothing a combined depth frame according to the first aspect of the present invention.
- FIGs. 9A and 9B are schematic diagrams of parts of the method according to the first aspect of the present invention.
- Fig. 1 shows a method 100 of detecting damage of an object and repairing the damage.
- an image of an object is captured.
- This could be, for example, in the form of a 3D scan by a 3D scanner attached to a tablet or smartphone.
- the image of the object depicts damage to the object.
- the image may be centred on a point of the object that is damaged.
- the image could be captured according to the system described below. Specifically, a model of the object (or the same/similar type of object) could be displayed to the user, and the user can select on the model the location of the damage on the object. Thereafter the user can take the image, and the image is then linked to the model of the object.
- where the object is an aircraft, the user knows the make/model of the aircraft.
- the user can then input the make/model into the system, which can prompt the system to display a model (e.g. a 3D model); the user can then locate the damage on the model of the aircraft so that the image corresponds to the location of the damage.
- In a further step 104 the extent of the damage is assessed. This can comprise, for example, determining and capturing the geometrical size of the damage and the position of the damage relative to other features (e.g. adjacent features of the object).
- this step can comprise determining the location of the damage relative to a feature of the aircraft, e.g. the fuselage and/or wing and/or frame and/or stringer etc.
- a damage assessment report is generated that represents the extent of the damage.
- the damage assessment report could contain information about the type of the damage or the dimensions of the damage (such as when the damage is a missing part of the object or a dent in the object).
- the extent of the damage is considered relative to the underlying support structure.
- Steps 104 and 106 can be done automatically.
- a computer could recursively generate cross-sections within a defined area (the defined area including the damage) and then the values of the cross-sections could be evaluated to find minimum and maximum values (e.g. of damage depths at various points on the cross-sections), and an extent of the damage can be automatically determined from the minimum and maximum values.
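A minimal sketch of that automatic determination, assuming the defined area has already been extracted as a 2D numpy array of depth deviations from the undamaged surface; the threshold separating damaged from undamaged points is an illustrative assumption.

```python
import numpy as np

def damage_extent(deviation, threshold=0.2):
    """deviation: 2D array of depths relative to the undamaged surface.
    Scans the defined area and returns min/max deviation and a bounding box."""
    rows, cols = np.where(np.abs(deviation) > threshold)   # points beyond threshold
    return {
        "min_deviation": float(deviation.min()),           # e.g. deepest point of a dent
        "max_deviation": float(deviation.max()),
        "width_px": int(cols.max() - cols.min() + 1) if cols.size else 0,
        "height_px": int(rows.max() - rows.min() + 1) if rows.size else 0,
    }
```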
- the image capture can be in response to a prompt generated by a computing device.
- step 108 of generating data comprises step 200 in which a comparison is performed between the damage assessment report and a database 109 comprising historical data of damage done to the object or similar objects and known approved repair data from sources such as the SRM, and then generating repairs for that instance of damage.
- the database can comprise historical data of damage done to that aircraft or one of a fleet of aircraft of the same type. If it is determined that the damage assessment report is equal to a historical damage entry in the database 109 (or within a tolerance) then the method proceeds to step 202 in which the course of repair for the historical damage is retrieved from the database.
- In step 204 a database 111 is accessed and the damage assessment report is compared to entries in the database 111 for matches within a tolerance.
- the database 111 can comprise entries in a standard repair manual or structural repair manual for the object. The comparison step is to determine if a known repair exists from the SRM for the type of damage to the object.
- In step 202 a course of repair for the damage is retrieved from the database based on the SRM entry for that type of damage. If there is no match then the method proceeds to step 206, in which the course of repair is generated but it is indicated that there was no match in either the historical database 109 or the SRM database 111.
- The method then proceeds to step 110, in which the object is repaired based on the generated data.
- the method thus provides an autonomous way of generating a course of repair for damage to an object.
- In an alternative form of step 108, the method first (step 300) accesses the database 111 of known repairs, which for example could be the SRM, to compare the damage assessment report to database entries to seek a match. Whether or not there is a match the method proceeds to step 302, in which a historical database 109 is accessed and a comparison is performed between the damage assessment report and entries in the historical database 109 to check if the same type of damage has been previously suffered by the object.
- In step 304 a course of repair is generated.
- the generated data for a course of repair is then based on a combination of results from each comparison, if applicable. For example, if each step produces a match then the course of repair from the SRM database is generated together with the historic course of repair from the database 109. If neither step produces a match the course of repair is generated, but it is indicated that there was no match in either the historical database 109 or the SRM database 111.
- the databases 109 and 111 can be the same database. Alternatively, the order of accessing the database as described with reference to and depicted in Figure 3 could be reversed.
- the historical data can include data pertaining to all variations of a particular type of object.
- the data can comprise data pertaining to each model/type of aircraft. This allows the data to be generated based on a past repair done not to the aircraft that has suffered the damage, but to the same or similar type of aircraft. In this way, information is effectively shared between users.
- Step 110 of repairing the object can comprise generating a further report for sending to an engineer.
- the report can comprise the damage assessment report and the data representative of a course of repair for the engineer to consider whether the course of repair is appropriate given the damage.
- Step 110 can comprise generating an augmented-reality (AR) display of the object and the damage, in addition to the components necessary to conduct the repair.
- Fig. 5 shows a schematic of the method 100.
- An object 150 has suffered damage 151 and an image capture device 152 is used to capture an image of the damage (step 102).
- the damage is assessed and in step 106 a damage assessment report is generated.
- the captured image may be sent to an operator for processing in order to generate the damage assessment report or alternatively the user that operated the image capture device 152 to capture the image may generate the damage assessment report.
- a database 113 is accessed that comprises the historical database 109 and the repair database 111.
- data is generated that represents a course of repair for the damage 151 of the object 150.
- This data representative of the course of repair can be sent to a user (e.g. a skilled operator/engineer) for approval 109 before the damage is repaired.
- Figure 6 shows a system 101 for damage detection and repair of an object.
- An image capture device 152 is configured to capture an image of an object. In use the image capture device 152 will be used in the field by an engineer and also potentially non-expert users such as pilots and ground staff (in the case of the object being an aircraft). Images taken by the image capture device 152 are synchronised to a cloud database 163 that can be used across multiple sites (i.e. by a different engineer in a different location looking at a different object etc.).
- a cloud-based web service 161 and a cloud-based web application 162 provide a mechanism to enable sharing of information (for example captured images) between "base and field operations" (e.g. information may be shared between an operator on the ground, directly taking an image of the damage, and another party who is not on the ground).
- the web service 161 provides the mechanism for sharing data across devices while the web application 162 provides access to the information to desk-based engineers (e.g. at computing device 160).
- computing device 160 is illustrated as a desktop computer but this is by way of illustration only.
- a computing device can be a personal computer, server, mobile computing device, and/or a combination of one or more computing resources in a cloud or distributed computing environment.
- the computing device 160 includes at least one processor for transforming, converting, and smoothing depth frames and point clouds; generating data representative of a course of repair; and performing any of the steps described herein and/or any other processing activity.
- the computing device 160 includes memory(ies) and/or storage device(s) for storing data including image data, depth frame data, point cloud data, pixel grids and/or any other data.
- the computing device 160 includes at least one I/O interface, and at least one network interface for interfacing with imaging devices, databases, input devices, displays and/or other devices and/or components.
- Computing device 160 can receive images captured by image capture device 152.
- a desk-based engineer may view an image of the damage and carry out an assessment of the damage (steps 104 and 106 of the method) before the comparison is made between the damage and the database entries.
- the web application 162 interfaces with computing device 160 so that the web application 162 can provide access to information captured by the image capture device 152 to a remote user who is at the computing device 160. Any images captured by device 152 can be viewed via computing device 160. The web application 162 therefore allows information contained in the image to be checked via another user.
- Data from the repairs database 111 is accessed for a comparison with any damage noted by an engineer at computing device 160 (from accessing the image captured by device 152). This therefore enables engineering decisions to be made more quickly.
- Other business data may also be stored either in this database or a separate database and also used to generate data representative of a course of repair.
- Historical database 109 contains historical information, such as the history of damage suffered by, and repairs performed on, the objects in the network.
- Although the anonymiser 165 is depicted as being 'ahead' of the historical database 109, it is to be understood that their positions may be reversed. E.g. information captured can be compared with entries in a historical database and thereafter anonymised. It is also to be understood that the anonymiser 165 can be provided anywhere in the system 101 and can, at any stage in the associated method, anonymise data.
- the object in question could be an aircraft and it may have undergone impact damage. Whilst the aircraft in question may have not suffered impact damage before, another aircraft of the same type may have. With the system of the present invention this damage, its assessment, and its repair are known and are used to generate data representative of a course of repair to repair the damage. This saves operator skill and time because the skills of an engineer are only utilised when they have to be.
- All information concerning a wide range of situations and object types is placed in data platform 166 for data analytics.
- Third party databases 170, 171 can also be accessed and used to inform insight into the causes of damage as well as to assist in determining the effect of different scenarios of the object.
- database 170 is a flight database
- database 171 is a weather database. These can be used to help inform the cause of damage to an aircraft and what effects different scenarios will have on the aircraft.
- the data platform 166 is used to gather information and insight from all of the gathered data (particularly but not exclusively concerning the damage and defects). This stored information can be used to inform insights that will assist in devising new and improved parts or repairs.
- computing device 160 or image capture device 152 may display an image of the specific object type and prompt a user to start capturing damage (e.g. with image capture device 152).
- the object 150 is an aircraft.
- the system is configured to generate a prompt requesting a user to enter the type of the aircraft that is damaged.
- the system will access an aircraft database containing aircraft information and interactive images.
- the system is configured to display an image of the aircraft type in response to the type of aircraft that has been indicated.
- Image capture may only begin when a user indicates, via a simple tap gesture, what part of the aircraft has suffered damage or is defective. In some embodiments only once the user has done so is image capture allowed. The user may then be prompted to take an image of the damage or defect once the location on the aircraft has been indicated. As the model is interactive the displayed aircraft may be rotated and/or scaled and/or zoomed until the area of damage is in view. The damage assessment report may be stored (e.g. in any one of the aforementioned databases) with the tagged location on the aircraft model.
- the method 1000 comprises capturing a depth frame of an object.
- By 'depth frame' is meant an array of values which can be represented by pixels on an image. The value of each pixel corresponds to the distance from a depth sensor of the camera taking the image to the object in the real world.
- the method begins with step 1001 in which a first shape is defined around an object that has suffered damage.
- the first shape is a rectangle defined around the object, but it will be understood that the first shape could be any suitable shape.
- the rectangle corresponds to where the object will be in the sense that the rectangle contains all of the object.
- the first shape can be defined by a user, and may be scalable in both height and width.
- the first shape encompasses the object.
- the first shape controls the area for a point cloud, or multiple point clouds to be captured to increase the resolution and accuracy of the final image.
- a depth frame is captured and an image (for example a colour photograph) is taken of the object.
- the depth frame 701 is captured such that it contains the first shape 702, while the object, which comprises damage 700, is maintained in the first shape 702 (e.g. a rectangle).
- the depth frame contains the first shape which itself contains the object.
- the depth frame is a 640x480 array of depth values.
- the depth frame can be the entire view of the camera/sensor.
- the damage 700 could be all or part of the object (e.g. the entire object and not just the point of the object that has suffered damage).
- In step 1003 the 640x480 depth frame is transformed into a 3D point cloud.
- the 3D point cloud is generated by processing multiple depth frames from a 3D scanner.
- a point cloud is a set of data points in a coordinate system. Each point in the 3D point cloud represents the location of each 'pixel' (in the 640x480 depth frame) in 3D space.
- the 3D point cloud is generated using knowledge of the camera's intrinsics.
- the intrinsic parameters of a camera can include focal length, the optical centre (or principal offset), and a skew coefficient. These can be represented by an intrinsic camera matrix which is usually responsible for transforming 3D camera coordinates to 2D homogeneous coordinates of the image.
- the coordinates of the 3D point cloud are defined as functions of the original coordinates of the 640x480 depth frame, the x- and y- optical centres, and the x- and y- focal lengths of the camera's intrinsics, as follows:
- x3D = f(x, y, z, xc, yc, fx, fy)
- y3D = g(x, y, z, xc, yc, fx, fy)
- z3D = h(x, y, z, xc, yc, fx, fy)
- x3D, y3D and z3D are the coordinates of the 3D point cloud; x, y, and z are the coordinates (row, column and depth) of the 640x480 array; xc and yc are the optical centres in pixels (camera intrinsics) and fx and fy are the focal lengths in pixels (camera intrinsics); and f, g, and h are functions whose purpose here is to denote that x3D, y3D and z3D are functions of the variables x, y, z, xc, yc, fx and fy. Note that there may be zero dependence on one of these variables, e.g. z3D may not be dependent on x etc.
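For concreteness, below is a sketch of this transformation with the standard pinhole model chosen as a particular form of f, g and h (the disclosure leaves the functions general, so the pinhole equations are an assumption):

```python
import numpy as np

def depth_frame_to_point_cloud(depth, xc, yc, fx, fy):
    """depth: 480x640 array of depth values; returns an (N, 3) 3D point cloud."""
    rows, cols = np.indices(depth.shape)      # row index = y, column index = x
    z = depth
    x3d = (cols - xc) * z / fx                # x3D = (x - xc) * z / fx
    y3d = (rows - yc) * z / fy                # y3D = (y - yc) * z / fy
    pts = np.stack([x3d, y3d, z], axis=-1).reshape(-1, 3)
    return pts[pts[:, 2] > 0]                 # assume zero depth means no reading
```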
- In step 1004 the 3D point cloud is aligned with the image. This comprises transforming the 3D point cloud to be aligned with the poses (position and orientation) of the camera that took the image.
- Each vector in the 3D point cloud is multiplied by a transformation matrix T formed from the camera POSE matrix (Equation 2); xalign, yalign and zalign are the coordinates of the transformed (aligned) 3D point cloud.
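A minimal sketch of this step (Equation 2), assuming T is a 4x4 homogeneous transform derived from the camera POSE matrix and the cloud is an (N, 3) array:

```python
import numpy as np

def align_point_cloud(points, T):
    """Multiply each vector in the point cloud by the 4x4 transform T."""
    homogeneous = np.c_[points, np.ones(len(points))]   # (N, 4): append w = 1
    return (homogeneous @ T.T)[:, :3]                   # each row is T @ [x, y, z, 1]
```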
- the aligned point cloud is smoothed or flattened in order to determine which pixel locations in the original 640x480 coordinate system lie within the defined rectangle on the colour photograph.
- the 3D point cloud that is aligned with the camera poses is transformed so that it 'fits' the original 640x480 pixel grid (the original depth frame).
- the inverses of the functions f, g and h are applied to each vector in the aligned 3D point cloud (Equation 3), i.e. xsmooth = f⁻¹(xalign), ysmooth = g⁻¹(yalign) and zsmooth = h⁻¹(zalign).
- xsmooth, ysmooth and zsmooth denote the smoothed values of the aligned 3D point cloud. Any values (points in the original 640x480 pixel grid) which lie outside the first shape are discarded (in another embodiment, any values that lie outside the original 640x480 depth range are discarded). In the event that there are multiple values per pixel location in the 640x480 grid, the value farthest away from the camera is discarded.
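A sketch of this fit-and-discard step, assuming the pinhole inverses of the earlier functions f and g and using NaN to mark pixel locations with no depth value:

```python
import numpy as np

def fit_to_pixel_grid(points, xc, yc, fx, fy, shape=(480, 640)):
    """Project an (N, 3) cloud back onto the depth frame's 640x480 pixel grid."""
    grid = np.full(shape, np.nan)
    x = np.rint(points[:, 0] * fx / points[:, 2] + xc).astype(int)  # inverse of f
    y = np.rint(points[:, 1] * fy / points[:, 2] + yc).astype(int)  # inverse of g
    inside = (x >= 0) & (x < shape[1]) & (y >= 0) & (y < shape[0])  # discard outside
    for xi, yi, zi in zip(x[inside], y[inside], points[inside, 2]):
        if np.isnan(grid[yi, xi]) or zi < grid[yi, xi]:  # keep the nearest value,
            grid[yi, xi] = zi                            # discard the farthest
    return grid
```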
- a second shape in the form of a new rectangle is defined in the original 640x480 depth frame's coordinate system (the second shape will hereafter be referred to as the "capture rectangle").
- the capture rectangle 703 is defined such that it contains the original defined rectangle (first shape) 704 that was defined in the camera's coordinate system; the first shape 704 containing the damage 700.
- the second shape ("capture rectangle") can be of substantially the same dimension, or even the same dimension, as the first shape.
- the second shape can be substantially the same as the first shape; or exactly the same as the first shape.
- the second shape whilst described and depicted as a rectangle, could be any suitable shape.
- the second and first shape could be rectangles, and they could be the same rectangle.
- depth frames may be captured encompassing the damage, and the first shape whose coordinates have undergone the alignment/smoothing/re-alignment etc.
- In step 1007 a finite number of depth frames are captured, but the pixels captured are limited to only those which lie within the capture rectangle.
- In step 1008 the captured depth frames are aligned with the original 640x480 depth frame as per Equation 2 above. This aligns the captured depth frames with the original depth frame using the camera's POSE matrix.
- In step 1009 the realigned captured depth frames are smoothed/flattened as per Equation 3 above.
- the multiple depth frames are then of a single coordinate system/frame, hereafter referred to as the "combined depth frame".
- each point in the combined depth frame is rounded to the nearest coordinate in the original 640x480 depth frame.
- In step 1011 the rounded, combined depth frame is itself smoothed/flattened using all available depth values.
- Step 1011 of smoothing the combined depth frame comprises method 1012, in which each pixel in the combined depth frame is assigned a "most accurate depth"; method 1012 is repeated for each pixel in the combined depth frame.
- Method 1012 comprises step 1012a in which a single pixel is identified (the "target pixel").
- In step 1012b a "pixel set" is defined, the pixel set comprising the target pixel and a set number of surrounding pixels.
- In step 1012c a "depth set" is defined, the depth set comprising the depths of all pixels in the pixel set.
- In step 1012d the mean of the depth set is calculated (i.e. the mean of all the depths in the depth set).
- In step 1012e the standard deviation of the depth set is calculated (i.e. the standard deviation of all the depths in the pixel set).
- In step 1012f a "modified depth set" is defined, the modified depth set comprising all depths in the depth set that are within one standard deviation from the mean of the depth set. I.e. the modified depth set is calculated by discarding all depths which fall outside one standard deviation from the mean.
- In step 1012g the mean of the modified depth set is calculated.
- In step 1012h the mean of the modified depth set is assigned to the target pixel as the depth value for that pixel. The method then returns to step 1012a for the next target pixel. This method is repeated, forming step 1011, for each pixel in the combined depth frame.
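Putting method 1012 together, a direct Python sketch, assuming the combined depth frame is a 2D numpy array with NaN where no depth is available and taking a 3x3 neighbourhood as the "set number of surrounding pixels":

```python
import numpy as np

def smooth_combined_depth_frame(frame, radius=1):
    """Assign each pixel the mean of neighbouring depths within one standard
    deviation of the local mean (steps 1012a-1012h, repeated per pixel)."""
    smoothed = frame.copy()
    rows, cols = frame.shape
    for r in range(rows):
        for c in range(cols):                                # 1012a: target pixel
            window = frame[max(r - radius, 0):r + radius + 1,
                           max(c - radius, 0):c + radius + 1]
            depths = window[~np.isnan(window)]               # 1012b/c: depth set
            if depths.size == 0:
                continue
            mean, std = depths.mean(), depths.std()          # 1012d/e
            kept = depths[np.abs(depths - mean) <= std]      # 1012f: discard outliers
            if kept.size:
                smoothed[r, c] = kept.mean()                 # 1012g/h: assign mean
    return smoothed
```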
- In a final step the smoothed combined depth frame is aligned with the colour photograph as per Equation 2.
- This method results in a 2D 640x480 array of depth values that correspond to equivalent locations on the colour photograph. It is a 2D colour image of the captured object from which height, width and depth measurements can be taken interactively by a user through simple tap gestures.
- This method can be used in conjunction with the first aspect of the invention.
- an image is captured of an object that has been damaged.
- image capture generates a processed image from which height, width and depth measurements can be taken by any user (i.e. a layperson). It is from these values that the damage assessment report may be generated and compared with the historical and optionally the standard repair database to generate data representative of the course of repair.
- the damage assessment report may be generated and compared with the historical and optionally the standard repair database to generate data representative of the course of repair.
- the 'depth' of the damage may need to be determined.
- the 'depth' of the damage can be defined as the length of a 'surface normal' taken relative to the surface of the object as it was prior to receiving the impact damage.
- a user looking at the object and the damage cannot accurately make this determination without physically measuring it; in this case the user must be a trained engineer or otherwise suitably qualified person. However, even manual measurements would likely differ from engineer to engineer.
- the processed image allows information to be determined about the undamaged surface surrounding the damage which can be used to determine the general curvature of the surface before (and after) the damage, thereby allowing a calculation of the surface normal. This, in turn, will allow for a more accurate damage assessment report and more accurate data generated for the repair.
- estimating the curvature includes selecting points from the first frame from just outside the shape (e.g. rectangle) used for capturing all frames except the first frame.
- the selected points are used in conjunction with standard polynomial regression to estimate the curvature of a set of vertical lines and horizontal lines passing through the rectangular or other area.
- the curvatures of those lines are then used to give an estimate (from closest vertical and horizontal line predictions) of the depth of each point within the shape (rectangle or other shape) if it continued the curvature found outside of that shape.
- points on the edge of the shape used for additional depth frame captures are used to estimate an approximate flat plane equation that passes as closely as possible through the object at the points where the shape is defined.
- the system displays the depths within the rectangle either as displacements from that flat plane, or from the predicted curvature points.
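A sketch of the per-line curvature prediction, with numpy's least-squares polynomial fit standing in for the "standard polynomial regression"; the polynomial degree is an illustrative choice:

```python
import numpy as np

def predict_undamaged_depth(positions, depths, query_positions, degree=2):
    """positions/depths: depth samples taken just outside the shape along one
    horizontal or vertical line; query_positions: points inside the shape.
    Returns the depth each query point would have if the curvature continued."""
    coeffs = np.polyfit(positions, depths, degree)   # least-squares polynomial fit
    return np.polyval(coeffs, query_positions)

# Dent depth at a point is then the measured depth minus the depth predicted
# from the closest fitted vertical and horizontal lines.
```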
- a straight line cross section A may be defined by the user that defines the extent of the damage being assessed. As will be readily apparent, the cross section starts and ends at the points where the object goes from being undamaged to damaged. The start and end points of the cross section are then extended to points A' and A".
- A1 is the point of greatest depth from cross section A and A2 is the distance between surface normal B and cross section A. The sum of A1 and A2 is the "damage depth".
- the dimensions A1 and A2 should be aligned, and the maximum combined deviation A1+A2 is the maximum damage depth.
- the extended cross section (from A' to A") is then overlaid with the point cloud and interpolated to determine the surface normal B.
- a depth calculation may then be performed (e.g. using Heron's formula) to determine the depth of the damage.
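A sketch of that calculation via Heron's formula, assuming the deepest point and the extended cross-section endpoints A' and A" are available as 3D points from the processed image: with triangle sides a, b and c, the area is sqrt(s(s-a)(s-b)(s-c)) and the depth is twice the area divided by the chord length.

```python
import math

def damage_depth(a_prime, a_double_prime, deepest):
    """Perpendicular distance from the deepest point to the chord A'-A''."""
    a = math.dist(a_double_prime, deepest)
    b = math.dist(a_prime, deepest)
    c = math.dist(a_prime, a_double_prime)        # the extended cross section
    s = (a + b + c) / 2                           # semi-perimeter
    area = math.sqrt(max(s * (s - a) * (s - b) * (s - c), 0.0))
    return 2 * area / c                           # triangle height above the chord

print(damage_depth((0, 0, 0), (10, 0, 0), (5, 0, -1.5)))  # -> 1.5
```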
- This information is contained in the damage assessment report and then, by means of a lookup table, a standard repair may be recommended in the form of generated data. The data is based on historical data, since the object may have suffered equivalent or similar damage in the past and therefore the past data can be used exactly or substantially again. Additionally, standard repair data may be used to generate the data for a course of repair. In either case, a comparison may be made between the 'damage depth' being an entry in the damage assessment report and individual entries of the historical and standard repair database(s).
- the database entries can be used to generate data representing a course of repair to repair that damage.
- the determined damage depth may have a historical match whose damage was within allowable limits or was repaired using an approved SRM repair.
- the data representative of the repair may utilise this data (and may, in fact, be substantially the same).
Abstract
A method of producing a processed image of an object comprises the steps of capturing a first depth frame around an object and taking an image of the object, transforming the first depth frame into a 3D point cloud, the 3D point cloud including a set of data points, transforming the 3D point cloud so that it is substantially aligned with the image, converting the transformed 3D point cloud to fit a pixel grid, smoothing the converted 3D point cloud using all available depth values and aligning the result with the image to produce a processed image of the object.
Description
Damage Detection and Repair System CROSS-REFERENCE TO RELATED APPLICATIONS
[0001] The present application claims all benefit, including priority, of GB Patent Application 1714853.7, entitled "Damage Detection and Repair System" and filed on September 15, 2017, which is hereby incorporated by reference in its entirety.
FIELD
[0002] The present disclosure relates to the field of damage detection, and embodiments of the disclosure include aspects relating to image processing, damage detection, configuration management, and repair systems.
BACKGROUND
[0003] Maintenance of an object, such as a vehicle, can occur either as part of a routine procedure or otherwise scheduled maintenance (planned maintenance) or due to unforeseen sudden damage to the object (unplanned maintenance). In the case of aircraft, planned maintenance typically occurs when an aircraft returns to a hangar to undergo routine checks and/or repair. Alternatively, if an aircraft has been struck by a foreign object during flight or on the ground or otherwise needs attention there will be a need for inspection of any damage that has not been scheduled.
[0004] Types of damage that can be expected to be identified in a routine check of the aircraft can be general wear to the different parts of the craft (fuselage, wing, tail, tyres etc.), corrosion, scratches, dents etc. This list of damage types is by no means exhaustive. Although planned maintenance of vehicles is routinely scheduled, vehicles can equally require somewhat urgent attention if they become damaged.
SUMMARY
[0005] Aspects of the present disclosure provide a system for identifying damage to an object and then generating data representative of a course of repair for the damage. The generated data is representative data to inform the course of a repair. This data is then sent to an operator who can repair the object based on the generated data. Particularly, but not exclusively, the object in question is a vehicle. In an exemplary embodiment the vehicle is an aircraft.
[0006] Conventionally, damage detection and repair is usually performed by an operator at the site of the aircraft. An operator (who may, for example, be a trained engineer) assesses the damage and determines a course of repair to repair the damage so that the aircraft will be fit for flying. More specifically, the engineer will diagnose the damage, which can involve taking measurements (for example in the case of a dent) to assess the 'severity' of the damage. The damage could also be a missing part or component of a part, in which case the diagnosis is simply that this component is missing. The operator can then determine a course of repair which may involve looking up a course of repair in a structural repair manual (SRM). For example, the SRM can advise as to the specific parts, processes and procedures required to repair the damage.
[0007] For example, it may be clear that the aircraft has suffered a dent but the actual extent of the damage (e.g. depth, width and height) of the dent needs to be determined so that it can be precisely repaired. The damaged area may need to be cut out and repaired with a structural repair doubler, for example, to repair the dent but the type and size of the repair will depend on the dimensions of the dent. In the case where a component part is missing, the type of aircraft needs to be known so that the part can be ordered if necessary since specific types of parts are not interchangeable between different types of aircraft.
[0008] This method necessarily involves the operator being at the aircraft for the length of time required to survey the damage, and often longer so that the operator can view the damage while assessing how to repair the aircraft.
[0009] Moreover, the method relies heavily on the skill and knowledge of the operator, who must draw from his experience with repairs in general and the types of aircraft that the operator encounters. Any delay in repairing the aircraft is undesirable since a delay of even a few hours can cost an airline hundreds of thousands of pounds in compensation.
[0010] There is therefore a need for a way to determine if an object is ready for normal use again (e.g. if a car is safe to be taken out, if an aircraft is ready to fly etc.) that is quick and efficient and minimises the amount of operator/engineer time required.
[0011] According to the present invention damage is scanned or photographed and assessed by, for example, a 3D sensor. A course of repair is then generated and the damage is repaired based on the course of repair.
[0012] More specifically, an image of the damage is taken on the basis of which the extent of the damage is assessed. Based on the assessment a database is interpreted to generate data representative of a course of repair which is then automatically documented and sent to a user for approval. On the basis of the user approval the course of repair is then put into action to repair the damage.
[0013] According to the present invention there is provided a method of detecting damage of an object and repairing the damage, the method comprising: capturing an image of the damage; generating data representative of a course of repair for the damage based on the image of the damage and historical repair data of that type of object; and then repairing the object based on the generated data.
[0014] Using historical repair data means that, if the object has suffered from that particular type of damage previously, the course of repair used to repair that damage at that time can be either replicated in full to repair the current damage, or otherwise used to generate the current course of repair. It may be for example that the earlier course of repair was sufficient to fully repair the damage, or alternatively it may be that it is determined that the damage has occurred too recently having regard to the last time this damage occurred. In the latter instance the same repair may not be used exactly but may be used to facilitate an improved course of repair so that the same type of damage will not occur so frequently.
[0015] The step of generating data representative of a course of repair optionally comprises generating a damage assessment report which contains data representative of the location, type, extent (or any combination thereof) of the damage. Having the damage represented in a damage assessment report allows a user to assess the extent and type of the damage without actually being at or near the damaged object. This saves operator and repair skill and time since the assessment can be carried out remotely, as opposed to only at the object itself.
[0016] The historical database can comprise SRM data, in particular all or part of an SRM manual and/or other existing repair data, for example approved repair data. The historical database can also be populated with previously generated one-off (bespoke) repairs to address specific damage that is not covered by the existing SRM. Continual updates of the historical database may also be performed so that up-to-date/recent repairs are also stored in the database.
[0017] The damage assessment report can be a 2D image of the object via which the dimensions of the damage can be ascertained. An example of generating such a 2D image is also part of this disclosure.
[0018] The step of generating data representative of a course of repair optionally comprises accessing a historical database containing historical repair data and performing a comparison between entries in the damage assessment report and entries in the historical database.
[0019] The comparison optionally comprises defining a tolerance and comparing whether an entry in the damage assessment report matches an entry in the database within the tolerance, a match being reported if the entries are within the tolerance. In the event of a match between an entry in the damage assessment report and an entry in the database, the data representative of a course of repair may be equal to the entry in the database.
[0020] The historical repair data may comprise a full history of all damage suffered by and repairs performed on the object or similar comparable objects.
[0021] The step of generating data representative of a course of repair can optionally further comprise accessing a repair database containing repair data for that object or similar comparable objects and performing a comparison between entries in the damage assessment report and entries in the repair database.
[0022] The comparison may comprise defining a tolerance and comparing whether an entry in the damage assessment report matches an entry in the repair database within the tolerance, a match being reported if the entries are within the tolerance.
[0023] Holding all information, e.g. standard repair data and historical values, in either a single database or separate databases, facilitates sharing of such data with users that are part of a network. For example, historical values of all types of repairs performed on and all types of courses of repair generated for each object that is part of a network may be stored in a database. This allows the course of repair to be generated not only based on historical data of repairs and/or courses of repairs in relation to that specific object; but also other objects of the same or similar type.
[0024] Repairing the object may comprise generating a repair report comprising an augmented reality view of the object and the data representative of the course of repair. This advantageously allows the end engineer, to whom the course of repair may be sent, to conveniently view, for example, any parts that the course of repair suggests would repair the damage. This, in turn, reduces the amount of time the engineer spends reviewing the damage and the course of repair.
[0025] In a further aspect of the invention there is provided a method of producing a 2D image such that dimensions of an object within the image can be determined from the image. This method can be used to determine the extent of the damage to the object, since the dimensions of the damage can be determined and used to populate a damage assessment report. When used to determine the extent of damage to an object, this method has the advantage that anyone (e.g. a layperson) can photograph the object and send the image to another user (who can also be a layperson), who can then determine the dimensions of the damage. The damage assessment report can therefore be generated remotely, since it is based on the 2D image. An engineer need only be involved at the end of the method, where they approve the course of repair that has been generated so that the damage is actually repaired; this reduces the amount of skilled time required to assess and repair any damage.
[0026] According to this aspect of the invention there is provided a method of producing a processed image of an object comprising the steps of: capturing a first depth frame around the object and taking an image of the object; transforming the first depth frame into a 3D point cloud, the 3D point cloud including a set of data points; transforming the 3D point cloud so that it is substantially aligned with the image; converting the transformed 3D point cloud to fit a pixel grid; smoothing the converted 3D point cloud using all available depth values and aligning the result with the image to produce a processed image of the object. [0027] In some embodiments, the processed image of the object is displayed, printed, or otherwise outputted to visualize the object and any potential damage.
[0028] The image is preferably a colour image. The image may be a colour photograph. Use of a colour image is advantageous as it means that a colour component can be added to the point cloud values. This makes it possible to visualise the 3D point cloud in colour (the components being the RGB values of the colour photograph pixel corresponding to the transformed point cloud vertex).
[0029] Capturing the first depth frame may comprise defining a first shape around the object, the first depth frame being captured so that the object is inside the first shape. The first shape may be any suitable shape, e.g. rectangular, circular, triangular etc. However, in a preferred embodiment it is a rectangle. A preferred first depth frame is a 640x480 array of values. However, this is exemplary only and the depth frame may be an array of values of any size.
[0030] Transforming the depth frame into a 3D point cloud may comprise defining 3D coordinates of the point cloud in terms of the coordinates of the depth frame, and the intrinsic values of the camera that captured the depth frame and the image of the object. The intrinsic values may be the optical centres and the focal lengths of the camera. However, the intrinsic values that may be used may be any intrinsic value of the camera and are not limited to only a combination of the optical centres and focal lengths. The coordinates of the point cloud may be defined using any relationship of the original coordinates of the depth frame and any intrinsic values of the camera. For example, the relationship may be linear.
[0031] Transforming the 3D point cloud to align it with the image may comprise multiplying each vector in the 3D point cloud with a transformation matrix, wherein the transformation matrix is a function of the position and orientation (POSE) matrices of the camera that captured the depth frame and the image of the object. The transformation matrix may be a function of the inverse of one or more of the POSE matrices of the camera.
[0032] The step of smoothing the pixel grid optionally comprises defining smoothed coordinates as a function of the coordinates of the 3D point cloud aligned with the image.
[0033] The method optionally further comprises the step of defining a second shape around the object, and the method optionally comprises the step of capturing a second depth frame within the second shape. The method optionally further comprises the step of aligning the second depth frame with the first depth frame. In some embodiments, the second shape contains the first shape. In other embodiments, the first shape contains the second shape. In yet other embodiments, portion(s) of the first shape can overlap with portion(s) of the second shape.
[0034] Aligning the second depth frame with the first depth frame optionally comprises multiplying each vector in the second depth frame with a transformation matrix, which is a function of the POSE matrices of the camera that captured the depth frame and the image of the object. The transformation matrix may be a function of the inverse of one or more of the POSE matrices of the camera. Aligning the second depth frame with the first depth frame optionally comprises rounding each point to the nearest coordinate in the first depth frame.
[0035] The method optionally comprises capturing a subsequent depth frame, or subsequent depth frames. In this case, the method can further comprise the step of aligning the subsequent depth frame(s) with the first depth frame. As above, the alignment of the subsequent depth frame(s) with the first depth frame can comprise multiplying each vector in the subsequent depth frame(s) with a transformation matrix which is a function of one or
more of the POSE matrices of the camera. This alignment can optionally comprise rounding each point to the nearest coordinate in the first depth frame. The method can further comprise the step of smoothing the frame resulting from the alignment of the first and second and/or the first and subsequent depth frames. [0036] For example, twenty depth frames can be captured. It will be clear to a skilled person that more, or fewer, depth frames could be used however.
[0037] In some embodiments, subsequent frames are captured at different angles. In some embodiments, different frames are captured having a 3D angle of at least 15 degrees between them. [0038] In some embodiments, capturing additional frames from different angles can reduce the number of frames required to be captured. For example, in some instances 9 depth frames can be captured at different angles. It will be clear to a skilled person that more, or fewer, depth frames could be used however.
[0037] In some embodiments, subsequent frames are captured at different angles. In some embodiments, different frames are captured having a 3D angle of at least 15 degrees between them. [0038] In some embodiments, capturing additional frames from different angles can reduce the number of frames required to be captured. For example, in some instances 9 depth frames can be captured at different angles. It will be clear to a skilled person that more, or fewer, depth frames could be used however.
[0039] In some embodiments, aligning a first frame with any subsequent frame includes any of the steps referred to herein and additionally or alternatively includes: utilizing the camera POSE matrices for gross angle correction, translating the point clouds so that the points nearest the centre of the frames are aligned or are on top of each other, and applying an ICP (iterative closest point) algorithm to ensure that the point clouds overlap as closely as possible. [0040] The second shape can be substantially the first shape or exactly the first shape.
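The ICP step mentioned above can be realised in many ways; the following is a minimal point-to-point ICP sketch in Python using an SVD (Kabsch) fit. The choice of variant, iteration count and use of scipy's k-d tree are assumptions rather than requirements of this disclosure, and the sketch presumes the gross POSE-based correction has already been applied.

```python
# A minimal point-to-point ICP sketch (one standard formulation chosen
# as an assumption; this disclosure does not prescribe an ICP variant).
import numpy as np
from scipy.spatial import cKDTree

def icp(source, target, iterations=20):
    """Iteratively align source (N, 3) onto target (M, 3). Assumes a
    gross alignment from the camera POSE matrices has been applied."""
    src = source.copy()
    tree = cKDTree(target)
    for _ in range(iterations):
        # Pair each source point with its nearest target point.
        _, idx = tree.query(src)
        matched = target[idx]
        # Best-fit rigid transform via the Kabsch (SVD) method.
        mu_s, mu_t = src.mean(axis=0), matched.mean(axis=0)
        H = (src - mu_s).T @ (matched - mu_t)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:   # guard against reflections
            Vt[-1] *= -1
            R = Vt.T @ U.T
        t = mu_t - R @ mu_s
        src = src @ R.T + t
    return src
```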
[0041] The method may further comprise the step of smoothing the frame resulting from the alignment of the first and second depth frames. The smoothing step may comprise defining smoothed coordinates as a function of the coordinates of the frame resulting from the alignment of the first and second depth frames. [0042] The method optionally further comprises aligning the smoothed frame with the image.
[0043] The step of smoothing the frame resulting from the alignment of the first and second depth frames may further comprise, for each pixel in the frame to be smoothed, calculating a mean value of the depth of the pixel and the depth of a set number of surrounding pixels, discarding all depth values that are not within one standard deviation from the mean, and calculating a mean depth for all depth values not discarded, and assigning that mean depth to the pixel as the smoothed value of that pixel.
[0044] Aligning the smoothed frame with the image may comprise multiplying each vector in the smoothed frame with a transformation matrix, which is a function of the POSE matrices of the camera that captured the depth frame and the image of the object. The transformation matrix may be a function of the inverse of one or more of the POSE matrices of the camera. [0045] The above method of producing a processed image of an object can be used in the above method of detecting damage of an object and repairing the damage. Specifically, the captured image of the damage in the method of detecting damage of an object and repairing the damage can be a processed image, processed according to the above method of producing a processed image of an object. Accordingly, there is provided a method according to any one of the appended claims 25-36, wherein the image of the damage is a processed image processed according to any one of appended claims 1-24.
[0046] According to a further aspect of the invention there is provided a system for damage detection and repair of an object comprising: an image capture device for capturing an image of an object; a historical database comprising historical data; generating means for generating data representative of a course of repair of the object based on the image captured by the image capture device and the historical data in the historical database.
[0047] The system optionally further comprises a standard repair database comprising data from a standard repair manual, and wherein the system is configured to access the standard repair database and to communicate with the generating means. [0048] The system optionally further comprises a computing device configured to access images captured by the image capture device and configured to display an image of the type of object captured.
[0049] The system optionally further comprises an anonymiser configured to anonymise the captured image and/or stored information. [0050] Digital information captured during the above-mentioned process (for example the image of the damage) can be used to populate a "configuration management system" to provide a truly digital record of the history of any given object. In this way, this configuration management system is a digital record of any changes made to the object and can be accessed by any user having the required permissions. DESCRIPTION OF THE FIGURES
[0051] The invention will now be described with reference to the accompanying drawings in which:
[0052] Fig. 1 is a flowchart of a method of detecting damage of an object and repairing the damage according to a first aspect of the present invention; [0053] Fig. 2 is a flowchart of a method of generating data representative of a course of repair according to a first aspect of the present invention;
[0054] Fig. 3 is a flowchart of a method of generating data representative of a course of repair according to a second aspect of the present invention;
[0055] Fig. 4 is a flowchart of a method of producing a 2D image of an object from a photograph taken of that object;
[0056] Fig. 5 is a schematic diagram of the method according to the first aspect of the present invention;
[0057] Fig. 6 is a schematic diagram of a system for damage detection and repair of an object according to the first aspect of the present invention; [0058] Fig. 7 is a schematic diagram of how the method according to Fig. 4 can be used to assess damage;
[0059] Fig. 8 is a flowchart of a method of smoothing a combined depth frame according to the first aspect of the present invention.
[0060] Figs. 9A and 9B are schematic diagrams of parts of the method according to the first aspect of the present invention.
DETAILED DESCRIPTION
[0061] Fig. 1 shows a method 100 of detecting damage of an object and repairing the damage.
[0062] In step 102 an image of an object is captured. This could be, for example, in the form of a 3D scan by a 3D scanner attached to a tablet or smartphone. The image of the object depicts damage to the object. For example, the image may be centred on a point of the object that is damaged.
[0063] The image could be captured according to the system described below. Specifically, a model of the object (or the same/similar type of object) could be displayed to the user, and the user can select on the model the location of the damage on the object. Thereafter the user can take the image, and the image is then linked to the model of the object. [0064] For example, when the object is an aircraft, the user knows the make/model of the aircraft. The user can then input the make/model into a system, prompting the system to display a model (e.g. a 3D model); the user can then locate the damage on the model of the aircraft so that the image corresponds to the location of the damage. This has the advantage that, if handed to another user, that other user can see the image of the damage and the location (on the model) of the aircraft where the damage is.
[0065] In a further step 104 the extent of the damage is assessed. This can comprise, for example, determining and capturing the geometrical size of the damage and position of the damage relative to other features (e.g. adjacent features of the object). When the object is an aircraft this step can comprise determining the location of the damage relative to a feature of the aircraft, e.g. the fuselage and/or wing and/or frame and/or stringer etc.
[0066] In step 106 a damage assessment report is generated that represents the extent of the damage. For example the damage assessment report could contain information about the type of the damage or the dimensions of the damage (such as when the damage is a missing part of the object or a dent in the object). Preferably the extent of the damage is considered relative to the underlying support structure.
[0067] Steps 104 and 106 can be done automatically. For example a computer could recursively generate cross-sections within a defined area (the defined area including the damage) and then the values of the cross-sections could be evaluated to find minimum and maximum values (e.g. of damage depths at various points on the cross-sections), and an extent of the damage can be automatically determined from the minimum and maximum values.
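A hypothetical sketch of this automated assessment is given below: cross-sections are generated across a depth map within the defined area and the extreme values collected. The array layout and region encoding are assumptions made for illustration.

```python
# Hypothetical sketch of the automated assessment: scan row-wise
# cross-sections of a depth map inside the defined area and record
# the extreme depth values. Array layout is an assumption.
import numpy as np

def assess_damage(depth, region):
    """depth: 2D array of depth values; region: (row0, row1, col0, col1)."""
    row0, row1, col0, col1 = region
    patch = depth[row0:row1, col0:col1]
    # Each row of the patch is treated as one cross-section.
    per_section = [(row.min(), row.max()) for row in patch]
    mins, maxs = zip(*per_section)
    # Overall extent of the damage within the defined area.
    return {"min_depth": float(min(mins)), "max_depth": float(max(maxs))}
```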
[0068] The image capture can be in response to a prompt generated by a computing device.
[0069] In step 108 data is generated that is representative of a course of repair, and in step 110 the object is repaired based on the generated data. [0070] Referring to Fig. 2, step 108 of generating data comprises step 200, in which a comparison is performed between the damage assessment report and a database 109 comprising historical data of damage done to the object or similar objects and known
approved repair data from sources such as the SRM, which is then used to generate repairs for that instance of damage. For example, when the object is an aircraft, the database can comprise historical data of damage done to that aircraft or to one of a fleet of aircraft of the same type. If it is determined that the damage assessment report is equal to a historical damage entry in the database 109 (or within a tolerance of it), then the method proceeds to step 202, in which the course of repair for the historical damage is retrieved from the database.
[0071] Data representative of a course of repair is then generated, the data being equal (within a tolerance) to the course of repair that was generated for the historical damage. In this way the method utilises past repairs for the same (within the tolerance) type of damage to generate the course of repair. If it is determined that the damage assessment report does not have an equivalent entry in the database 109 (e.g. the object has not suffered this particular type of damage before), then the method proceeds to step 204, in which a database 111 is accessed and the damage assessment report is compared to entries in the database 111 for matches within a tolerance. The database 111 can comprise entries in a standard repair manual or structural repair manual for the object. The comparison step determines whether a known repair exists in the SRM for the type of damage to the object. If there is a match then the method proceeds to step 202, in which a course of repair for the damage is retrieved from the database based on the SRM entry for that type of damage. If there is no match then the method proceeds to step 206, in which the course of repair is generated but it is indicated that there was no match in either the historical database 109 or the SRM database 111.
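Putting the two comparisons together, the decision flow of Fig. 2 might look like the following sketch, reusing the hypothetical find_repair() helper from earlier; the database handles and return conventions are assumptions.

```python
# Sketch of the decision flow of Fig. 2, reusing the hypothetical
# find_repair() helper from earlier; database handles are assumptions.
def generate_course_of_repair(report, historical_db, srm_db):
    repair = find_repair(report, historical_db)   # steps 200/202
    if repair is not None:
        return repair, "matched historical repair"
    repair = find_repair(report, srm_db)          # step 204
    if repair is not None:
        return repair, "matched SRM repair"
    # Step 206: a course of repair is still generated, but the absence
    # of any match in either database is flagged.
    return None, "no match in historical or SRM database"
```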
[0072] Returning to Fig. 1, the method then proceeds to step 110, in which the object is repaired based on the generated data. In this way the method provides an autonomous way of generating a repair for damage to an object. [0073] Referring to Fig. 3, an alternative of step 108 is provided in which the method first (step 300) accesses the database 111 of known repairs, which for example could be the SRM, to compare the damage assessment report to database entries to seek a match. Whether or not there is a match, the method proceeds to step 302, in which a historical database 109 is accessed and a comparison is performed between the damage assessment report and entries in the historical database 109 to check whether the same type of damage has previously been suffered by the object.
[0074] Whether or not there is a match the method then proceeds to step 304 where a course of repair is generated. The generated data for a course of repair is then based on a combination of results from each comparison, if applicable. For example, if each step
produces a match then the course of repair from the SRM database is generated together with the historic course of repair from the database 109. If neither step produces a match, the course of repair is generated but it is indicated that there was no match in either the historical database 109 or the SRM database 111. [0075] Of course, the databases 109 and 111 can be the same database. Alternatively, the order of accessing the databases as described with reference to and depicted in Figure 3 could be reversed.
[0076] The historical data can include data pertaining to all variations of a particular type of object. For example, when the object is an aircraft (or other type of vehicle) the data can comprise data pertaining to each model/type of aircraft. This allows the data to be generated based on a past repair not done to the aircraft that has suffered the damage, but to a past repair done to the same or similar type of aircraft. In this way, information is effectively shared between users.
[0077] Step 110 of repairing the object can comprise generating a further report for sending to an engineer. The report can comprise the damage assessment report and the data representative of a course of repair, for the engineer to consider whether the course of repair is appropriate given the damage.
[0078] Step 110 can comprise generating an augmented-reality (AR) display of the object and the damage, in addition to the components necessary to conduct the repair. In this way, if the damage lies in a substructure of the object and would not otherwise be visible from a view of the exterior of the object alone, a user can explore the extent of the damage visually.
[0079] Fig. 5 shows a schematic of the method 100. An object 150 has suffered damage 151 and an image capture device 152 is used to capture an image of the damage (step 102). In step 104 the damage is assessed and in step 106 a damage assessment report is generated. As part of the generation of the damage assessment report, the captured image may be sent to an operator for processing in order to generate the damage assessment report, or alternatively the user that operated the image capture device 152 to capture the image may generate the damage assessment report. [0080] A database 113 is accessed that comprises historical database 109 and repair database 111. In step 108 data is generated that represents a course of repair for the damage 151 of the object 150. As above, this is based on a comparison of entries in the damage assessment report with entries in the database 113 (comprising databases 109 and 111). This data representative of the course of repair can be sent to a user (e.g. a skilled operator/engineer) for approval before the damage is repaired.
[0081] Figure 6 shows a system 101 for damage detection and repair of an object. An image capture device 152 is configured to capture an image of an object. In use the image capture device 152 will be used in the field by an engineer and also potentially by non-expert users such as pilots and ground staff (in the case of the object being an aircraft). Images taken by the image capture device 152 are synchronised to a cloud database 163 that can be used across multiple sites (i.e. by a different engineer in a different location looking at a different object etc.). [0082] A cloud-based web service 161 and a cloud-based web application 162 provide a mechanism to enable sharing of information (for example captured images) between "base and field operations" (e.g. information may be shared between an operator on the ground, directly taking an image of the damage, and another party who is not on the ground).
[0083] The web service 161 provides the mechanism for sharing data across devices while the web application 162 provides access to the information for desk-based engineers (e.g. at computing device 160). Note that computing device 160 is illustrated as a desktop computer, but this is by way of illustration only. In some embodiments, a computing device can be a personal computer, server, mobile computing device, and/or a combination of one or more computing resources in a cloud or distributed computing environment. [0084] In some embodiments, the computing device 160 includes at least one processor for transforming, converting, and smoothing depth frames and point clouds; generating data representative of a course of repair; and performing any of the steps described herein and/or any other processing activity. In some embodiments, the computing device 160 includes memory(ies) and/or storage device(s) for storing data including image data, depth frame data, point cloud data, pixel grids and/or any other data. In some embodiments, the computing device 160 includes at least one I/O interface, and at least one network interface for interfacing with imaging devices, databases, input devices, displays and/or other devices and/or components.
[0085] Computing device 160 can receive images captured by image capture device 152. Here, a desk-based engineer may view an image of the damage and carry out an assessment of the damage (step 104 and 106 of the method) before the comparison is made between the damage and the database entries.
[0086] The web application 162 interfaces with computing device 160 so that the web application 162 can provide access to information captured by the image capture device 152 to a remote user who is at the computing device 160. Any images captured by device 152 can be viewed via computing device 160. The web application 162 therefore allows information contained in the image to be checked by another user.
[0087] All data is captured and stored in database 164, which is fully backed up.
[0088] Data from the repairs database 111 is accessed for a comparison with any damage noted by an engineer at computing device 160 (from accessing the image captured by device 152). This enables engineering decisions to be made more quickly. Other business data may also be stored either in this database or a separate database and also used to generate data representative of a course of repair.
[0089] All information that is captured is anonymised via anonymiser 165.
[0090] Historical database 109 contains historical information such as:
[0091] All damage that has previously been done to that object; data that was generated as a recommended course of repair to repair that damage; and the course of repair that was actually undertaken to repair the damage; and
[0092] All damage that has previously been done to that type of object; data that was generated as a recommended course of repair to repair that damage; and the course of repair that was actually undertaken to repair the damage. [0093] In this way, the system 101 knows (i) if the same kind of damage has already been done to the object, and if this has already been repaired then the same repair recommendation may be used in the generated data for repair; and (ii) if the same kind of damage has not been done to the object, but the same type of damage has been done to a different object that is nevertheless of the same type, which has been repaired, then the same repair recommendation may also be used.
[0094] Although the anonymiser 165 is depicted as being 'ahead' of historical database 109 it is to be understood that their positions may be reversed. E.g. information captured can be compared with entries in a historical database and thereafter anonymised. It is also to be understood that the anonymiser 165 can be provided anywhere in the system 101 and can, at any stage in the associated method, anonymise data.
[0095] For example, the object in question could be an aircraft and it may have undergone impact damage. Whilst the aircraft in question may have not suffered impact damage before, another aircraft of the same type may have. With the system of the present invention this damage, its assessment, and its repair are known and are used to generate data representative of a course of repair to repair the damage. This saves operator skill and time because the skills of an engineer are only utilised when they have to be.
[0096] All information concerning a wide range of situations and object types (e.g. situations and aircraft types) is placed in data platform 166 for data analytics. Third party databases 170, 171 can also be accessed and used to inform insight into the causes of damage as well as to assist in determining the effect of different scenarios of the object. For example, database 170 is a flight database and database 171 is a weather database. These can be used to help inform the cause of damage to an aircraft and what effects different scenarios will have on the aircraft.
[0097] The data platform 166 is used to gather information and insight from all of the gathered data (particularly but not exclusively concerning the damage and defects). This stored information can be used to inform insights that will assist in devising new and improved parts or repairs.
[0098] All of web service 161, web application 162, databases 164, 109, 111, 170 and 171, anonymiser 165 and data platform 166 may be stored in a private cloud. [0099] As part of the step of generating a damage assessment report, computing device 160 or image capture device 152 may display an image of the specific object type and prompt a user to start capturing damage (e.g. with image capture device 152). For example, the object 150 is an aircraft. The system is configured to generate a prompt requesting a user to enter the type of the aircraft that is damaged. In response to the answer the system will access an aircraft database containing aircraft information and interactive images. The system is configured to display an image of the aircraft type in response to the type of aircraft that has been indicated. Image capture may only begin when a user indicates, via a simple tap gesture, what part of the aircraft has suffered damage or is defective. In some embodiments, only once the user has done so is image capture allowed. [00100] The user may then be prompted to take an image of the damage or defect once the location on the aircraft has been indicated. As the model is interactive, the displayed aircraft may be rotated and/or scaled and/or zoomed until the area of damage is in view. The
damage assessment report may be stored (e.g. in any one of the aforementioned databases) with the tagged location on the aircraft model.
[00101] In a further embodiment there is provided a method of producing a 2D image of an object from a photograph taken of that object. With reference to Figure 4, the method 1000 comprises capturing a depth frame of an object. By "depth frame" it is meant an array of values which can be represented by pixels on an image. The value of each pixel corresponds to the distance from a depth sensor of the camera taking the image to the object in the real world.
[00102] The method begins with step 1001 in which a first shape is defined around an object that has suffered damage. In an exemplary embodiment the first shape is a rectangle defined around the object, but it will be understood that the first shape could be any suitable shape. The rectangle corresponds to where the object will be in the sense that the rectangle contains all of the object.
[00103] The first shape can be defined by a user, and may be scalable in both height and width. The first shape encompasses the object. As will be evident from the below the first shape controls the area for a point cloud, or multiple point clouds to be captured to increase the resolution and accuracy of the final image.
[00104] In step 1002 a depth frame is captured and an image (for example a colour photograph) is taken of the object. With reference to Figure 9A, the depth frame 701 is captured such that it contains the first shape 702, while the object, which comprises damage 700, is maintained in the first shape 702 (e.g. a rectangle). In this way, the depth frame contains the first shape which itself contains the object. In an exemplary embodiment the depth frame is a 640x480 array of depth values. The depth frame can be the entire view of the camera/sensor. Note that the damage 700 could be all or part of the object (e.g. the entire object and not just the point of the object that has suffered damage).
[00105] In step 1003 the 640x480 depth frame is transformed into a 3D point cloud. The 3D point cloud is generated by processing multiple depth frames from a 3D scanner. As is known to the skilled person, a point cloud is a set of data points in a coordinate system. Each point in the 3D point cloud represents the location of each 'pixel' (in the 640x480 depth frame) in 3D space.
[00106] The 3D point cloud is generated using knowledge of the camera's intrinsics. For example, the intrinsic parameters of a camera can include focal length, the optical centre (or principal offset), and a skew coefficient. These can be represented by an intrinsic camera
matrix which is usually responsible for transforming 3D camera coordinates to 2D homogeneous coordinates of the image. To generate the 3D point cloud using the 640x480 depth frame, the coordinates of the 3D point cloud are defined as functions of the original coordinates of the 640x480 depth frame and the x- and y- optical centres and the x- and y- focal lengths of the camera's intrinsics, as follows:
[00107] x3D = f(x, y, z, xc, yc, fx, fy)
[00108] y3D = g(x, y, z, xc, yc, fx, fy)
[00109] z3D = h(x, y, z, xc, yc, fx, fy)
[00110] Equation 1.
[00111] Where x3D, y3D and z3D are the coordinates of the 3D point cloud; x, y, and z are the coordinates (row, column and depth) of the 640x480 array; xc and yc are the optical centres in pixels (camera intrinsics) and fx and fy are the focal lengths in pixels (camera intrinsics); and where f, g, and h are functions whose purpose here is to denote that x3D, y3D and z3D are functions of the variables x, y, z, xc, yc, fx and fy. Note that there may be zero dependence on one of these variables, e.g. z3D may not depend on x, etc.
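The disclosure leaves f, g and h open; one common concrete choice is the pinhole back-projection, shown below as a hedged Python sketch. The linear form and the (480, 640) array layout are assumptions.

```python
# A hedged sketch assuming pinhole back-projection for f, g and h;
# the (480, 640) layout and the linear form are assumptions.
import numpy as np

def depth_frame_to_point_cloud(depth, xc, yc, fx, fy):
    """depth: (480, 640) array of depth values z at pixel (row y, col x).
    Returns an (N, 3) point cloud in camera coordinates."""
    h, w = depth.shape
    x, y = np.meshgrid(np.arange(w), np.arange(h))
    z = depth
    x3d = (x - xc) * z / fx   # x3D = f(x, z, xc, fx)
    y3d = (y - yc) * z / fy   # y3D = g(y, z, yc, fy)
    z3d = z                   # z3D = h(z)
    return np.stack([x3d, y3d, z3d], axis=-1).reshape(-1, 3)
```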
[00112] In step 1004 the 3D point cloud is aligned with the image. This comprises transforming the 3D point cloud to be aligned with the poses (position and orientation) of the camera that took the image.
[00114] (xalign, yalign, zalign)^T = T (x3D, y3D, z3D)^T
[00115] Equation 2
[00116] Where xalign, yalign and zalign are the coordinates of the transformed (aligned) 3D point cloud, and T is a transformation matrix formed from the camera POSE matrix.
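A minimal sketch of applying Equation 2 with homogeneous coordinates follows; taking T as the inverse of a 4x4 camera POSE matrix is one choice noted in the text, and the matrix layout is an assumption.

```python
# A minimal sketch of Equation 2 in homogeneous coordinates; taking T
# as the inverse of a 4x4 camera POSE matrix is one choice noted in
# the text, and the matrix layout is an assumption.
import numpy as np

def align_point_cloud(points, pose):
    """points: (N, 3) cloud; pose: 4x4 camera POSE matrix."""
    T = np.linalg.inv(pose)
    homogeneous = np.hstack([points, np.ones((len(points), 1))])
    return (homogeneous @ T.T)[:, :3]
```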
[00117] At step 1005 the aligned point cloud is smoothed or flattened in order to determine which pixel locations in the original 640x480 coordinate system lie within the defined rectangle on the colour photograph. To achieve the smoothing/flattening, the 3D
point cloud that is aligned with the camera poses is transformed so that it 'fits' the original 640x480 pixel grid (the original depth frame). To smooth the aligned 3D point cloud, the inverses of the functions f, g and h are applied to each vector in the aligned 3D point cloud, i.e.,
[00118] xsmooth = f^-1(xalign)
[00119] ysmooth = g^-1(yalign)
[00120] zsmooth = h^-1(zalign)
[00121] Equation 3.
[00122] Where xsmooth, ysmooth and zsmooth denote the smoothed values of the aligned 3D point cloud. Any values (points in the original 640x480 pixel grid) which lie outside the first shape are discarded (in another embodiment, any values that lie outside the original 640x480 depth range are discarded). In the event that there are multiple values per pixel location in the 640x480 grid; the value farthest away from the camera is discarded.
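Continuing the pinhole assumption from the earlier sketch, the inverse mapping and per-pixel discard rule might look as follows; the grid shape and the use of infinity as a sentinel are illustrative choices.

```python
# Continuing the pinhole assumption: invert the back-projection to
# recover pixel coordinates, then keep only the depth value nearest
# the camera at each pixel. Positive depths are assumed.
import numpy as np

def flatten_to_grid(points, xc, yc, fx, fy, shape=(480, 640)):
    x3d, y3d, z3d = points[:, 0], points[:, 1], points[:, 2]
    cols = np.round(x3d * fx / z3d + xc).astype(int)   # x = f^-1(x3D)
    rows = np.round(y3d * fy / z3d + yc).astype(int)   # y = g^-1(y3D)
    grid = np.full(shape, np.inf)
    inside = ((rows >= 0) & (rows < shape[0])
              & (cols >= 0) & (cols < shape[1]))       # discard out-of-shape
    for r, c, z in zip(rows[inside], cols[inside], z3d[inside]):
        grid[r, c] = min(grid[r, c], z)  # keep value nearest the camera
    return grid
```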
[00123] The method then proceeds to step 1006 in which a second shape in the form of a new rectangle is defined in the original 640x480 depth frame's coordinate system (the second shape will hereafter be referred to as the "capture rectangle"). With reference to Figure 9B, the capture rectangle 703 is defined such that it contains the original defined rectangle (first shape) 704 that was defined in the camera's coordinate system, the first shape 704 containing the damage 700. In an alternative embodiment, the second shape ("capture rectangle") can be of substantially the same dimension, or even the same dimension, as the first shape. The second shape can be substantially the same as the first shape, or exactly the same as the first shape. The second shape, whilst described and depicted as a rectangle, could be any suitable shape. Therefore, the second and first shapes could be rectangles, and they could be the same rectangle. [00124] Once the second shape (exemplified as the 'capture rectangle') has been defined, depth frames may be captured encompassing the damage and the first shape, whose coordinates have undergone the alignment/smoothing/re-alignment etc.
[00125] In step 1007 a finite number of depth frames are captured, but the pixels captured are limited to only those which lie within the capture rectangle.
[00126] In step 1008 the captured depth frames are aligned with the original 640x480 depth frame as per Equation 2 above. This aligns the captured depth frames with the original depth frame using the camera's POSE matrix.
[00127] In step 1009 the realigned captured depth frames are smoothed/flattened as per Equation 3 above. The multiple depth frames are then of a single coordinate system/frame, hereafter referred to as the "combined depth frame".
[00128] In step 1010 each point in the combined depth frame is rounded to the nearest coordinate in the original 640x480 depth frame.
[00129] In step 1011 the rounded, combined depth frame is itself smoothed/flattened using all available depth values. Referring to Figure 8, step 1011 of smoothing the combined depth frame comprises method 1012, in which each pixel in the combined depth frame is assigned a "most accurate depth". Step 1011 comprises method 1012 repeated for each pixel in the combined depth frame. Method 1012 comprises step 1012a in which a single pixel is identified (the "target pixel"). In step 1012b a "pixel set" is defined, the pixel set comprising the target pixel and a set number of surrounding pixels. In step 1012c a "depth set" is defined, the depth set comprising the depths of all pixels in the pixel set. In step 1012d the mean of the depth set is calculated (i.e. the mean of the depths of pixels in the pixel set). In step 1012e the standard deviation of the depth set is calculated (i.e. the standard deviation of all the depths in the pixel set). In step 1012f a "modified depth set" is defined, the modified depth set comprising all depths in the depth set that are within one standard deviation of the mean of the depth set; i.e. the modified depth set is calculated by discarding all depths which fall outside one standard deviation from the mean. In step 1012g the mean of the modified depth set is calculated. In step 1012h the mean of the modified depth set is assigned to the target pixel as the depth value for that pixel. The method then returns to step 1012a for the next target pixel. This method is repeated, forming step 1011, for each pixel in the combined depth frame.
[00130] The result is that noise is reduced but sharper edges are produced for the image than if the mean of all depth values were used.
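Method 1012 transcribes naturally into code. The following Python sketch is a direct rendering of steps 1012a-h; the neighbourhood radius (the "set number of surrounding pixels") is an assumed parameter.

```python
# A direct rendering of steps 1012a-h; the neighbourhood radius (the
# "set number of surrounding pixels") is an assumed parameter.
import numpy as np

def smooth_combined_frame(depth, radius=2):
    h, w = depth.shape
    out = np.empty_like(depth)
    for r in range(h):
        for c in range(w):
            # Steps 1012a-c: pixel set = target pixel plus neighbours;
            # depth set = their depth values.
            window = depth[max(r - radius, 0):r + radius + 1,
                           max(c - radius, 0):c + radius + 1].ravel()
            mu, sigma = window.mean(), window.std()      # steps 1012d-e
            kept = window[np.abs(window - mu) <= sigma]  # step 1012f
            # Steps 1012g-h: assign the mean of the surviving depths.
            out[r, c] = kept.mean() if kept.size else mu
    return out
```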
[00131] In step 1012 the smoothed combined depth frame is then aligned with the colour photograph as per Equation 2.
[00132] This method results in a 2D 640x480 array of depth values that correspond to equivalent locations on the colour photograph. It is a 2D colour image of the captured object from which height, width and depth measurements can be taken interactively by a user through simple tap gestures.
[00133] This method can be used in conjunction with the first aspect of the invention. For example, in the first aspect of the invention an image is captured of an object that has been damaged. Utilising this method in that image capture generates a processed image from which height, width and depth measurements can be taken by any user (i.e. a layperson). It is from these values that the damage assessment report may be generated and compared with the historical and optionally the standard repair database to generate data representative of the course of repair. [00134] For example, when there has been some form of impact damage done to an object, the 'depth' of the damage may need to be determined. The 'depth' of the damage can be defined as the length of a 'surface normal' taken from the surface of the object as it was prior to receiving the impact damage. A user looking at the object and the damage cannot accurately make this determination without physically measuring it; in that case the user must be a trained engineer or otherwise suitably qualified person. Even then, manual measurements would likely differ from engineer to engineer.
[00135] Utilising the method of the second aspect of the invention however, the processed image allows information to be determined about the undamaged surface surrounding the damage which can be used to determine the general curvature of the surface before (and after) the damage, thereby allowing a calculation of the surface normal. This, in turn, will allow for a more accurate damage assessment report and more accurate data generated for the repair.
[00136] An example of how this may be achieved is as follows with reference to Figure 7. Although a 2D example is illustrated (in which the 'damage' is 2-dimensional and therefore the cross section is a line and not a 2D area) the skilled person would readily appreciate how the following teaching may be extended to 3D damage where the "straight" cross section is a plane and not a line.
[00137] In some embodiments, estimating the curvature includes: selecting points from a first frame from just outside the shape (e.g. rectangle) used for making all frames except for the first frame. The selected points are used in conjunction with standard polynomial regression to estimate the curvature of a set of vertical lines and horizontal lines passing through the rectangular or other area. The curvatures of those lines are then used to give an estimate (from closest vertical and horizontal line predictions) of the depth of each
point within the shape (rectangle or other shape) if it continued the curvature found outside of that shape.
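A hedged sketch of this regression step is given below for a single horizontal scan line; the polynomial degree and sampling scheme are assumptions.

```python
# Standard polynomial regression along one horizontal scan line; the
# polynomial degree and sampling scheme are assumptions.
import numpy as np

def predict_undamaged_depth(cols_outside, depths_outside, cols_inside, deg=2):
    """cols_outside/depths_outside: samples taken just outside the
    capture shape; cols_inside: pixel columns whose pre-damage depth
    is to be predicted from the continued curvature."""
    coeffs = np.polyfit(cols_outside, depths_outside, deg)
    return np.polyval(coeffs, cols_inside)
```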
[00138] In some embodiments, points on the edge of the shape used for additional depth frame captures are used to estimate an approximate flat plane equation that passes as closely as possible through the object at the points where the shape is defined.
[00139] In some embodiments, the system displays the depths within the rectangle either as displacements from that flat plane, or from the predicted curvature points.
[00140] Once the method of the second aspect has been performed in relation to an object which has suffered impact damage 700, the resulting processed image will be of that damage. A straight-line cross section A may be defined by the user that defines the extent of the damage being assessed. As will be readily apparent, the limits of the cross section start and end at the points where the object goes from being undamaged to damaged. The start and end points of the cross section are then extended to points A' and A". A1 is the point of greatest depth from cross section A, and A2 is the distance between surface normal B and cross section A. The sum of A1 and A2 is the "damage depth". The dimensions A1 and A2 should be aligned, and the maximum combined deviation A1+A2 is the maximum damage depth. The extended cross section (from A' to A") is then overlaid with the point cloud and interpolated to determine the surface normal B. A depth calculation may then be performed (e.g. using Heron's formula) to determine the depth of the damage. This information is contained in the damage assessment report and then, by means of a lookup table, a standard repair may be recommended in the form of generated data. The data is based on historical data, since the object may have suffered equivalent or similar damage in the past and therefore the past data can be used exactly or substantially again. Additionally, standard repair data may be used to generate the data for a course of repair. [00141] In either case, a comparison may be made between the 'damage depth', being an entry in the damage assessment report, and individual entries of the historical and standard repair database(s). If there is a match within a tolerance then the database entries can be used to generate data representing a course of repair to repair that damage. For impact damage, for example, the determined damage depth may have a historical match whose damage was within allowable limits or was repaired using an approved SRM repair. In this case, the data representative of the repair may utilise this data (and may, in fact, be substantially the same).
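One way to realise the Heron's-formula depth calculation mentioned above is sketched below: the depth of a damage point P is computed as its perpendicular distance from the extended cross section A'-A". Representing the points as 3D numpy arrays is an assumption.

```python
# Depth of a damage point P as its perpendicular distance from the
# extended cross section A'-A'', via Heron's formula. Representing
# the points as 3D numpy arrays is an assumption.
import numpy as np

def damage_depth(a_start, a_end, p):
    """a_start, a_end: the extended cross-section endpoints A' and A'';
    p: the damage point whose depth is wanted."""
    base = np.linalg.norm(a_end - a_start)
    b = np.linalg.norm(p - a_start)
    c = np.linalg.norm(p - a_end)
    s = (base + b + c) / 2.0                       # semi-perimeter
    area = np.sqrt(max(s * (s - base) * (s - b) * (s - c), 0.0))
    return 2.0 * area / base   # triangle height = perpendicular depth
```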
[00142] Although the invention has been exemplified by capturing a 640x480 depth frame this is to be understood as exemplary only. Depth frames of other dimensions are within the scope of the invention and indeed the skilled person will readily recognise that any depth frame can be suitable, depending on the specific application of the invention. For example, a 320x240 array of values could be used as the first depth frame.
[00143] Although the embodiments have been described in detail, it should be understood that various changes, substitutions and alterations can be made herein without departing from the scope. Moreover, the scope of the present application is not intended to be limited to the particular embodiments of the process, machine, manufacture, composition of matter, means, methods and steps described in the specification.
[00144] As one of ordinary skill in the art will readily appreciate from the disclosure, processes, machines, manufacture, compositions of matter, means, methods, or steps, presently existing or later to be developed, that perform substantially the same function or achieve substantially the same result as the corresponding embodiments described herein may be utilized. Accordingly, the appended claims are intended to include within their scope such processes, machines, manufacture, compositions of matter, means, methods, or steps.
[00145] As can be understood, the examples described above and illustrated are intended to be exemplary only.
Claims
1. A method of producing a processed image of an object comprising the steps of: capturing a first depth frame around an object and taking an image of the object; transforming the first depth frame into a 3D point cloud, the 3D point cloud including a set of data points; transforming the 3D point cloud so that it is substantially aligned with the image; converting the transformed 3D point cloud to fit a pixel grid; smoothing the converted 3D point cloud using all available depth values and aligning the result with the image to produce a processed image of the object.
2. The method of claim 1 wherein capturing the first depth frame comprises defining a first shape around the object, the first depth frame being captured so that the object is inside the first shape, and the first shape is inside the captured depth frame.
3. The method of claim 1 or 2 wherein the first depth frame is a 640x480 array of values.
4. The method of any preceding claim wherein transforming the depth frame into a 3D point cloud comprises defining 3D coordinates of the point cloud in terms of the coordinates of the depth frame, and the intrinsic values of the camera that captured the depth frame and the image of the object.
5. The method of claim 4 wherein the intrinsic values are the optical centres and the focal lengths.
6. The method of any preceding claim wherein transforming the 3D point cloud to align it with the image comprises multiplying each vector in the 3D point cloud with a transformation matrix, wherein the transformation matrix is a function of a position and orientation (POSE) matrix of the camera that captured the depth frame and the image of the object.
7. The method of any preceding claim wherein the step of smoothing the pixel grid comprises defining smoothed coordinates as a function of the coordinates of the 3D point cloud aligned with the image.
8. The method of any preceding claim further comprising the step of defining a second shape around the object, at least a portion of the second shape contained within the first shape.
9. The method of claim 8 further comprising the step of capturing a second depth frame, limiting the pixels of the captured second depth frame to those within the second shape.
10. The method of claim 9 further comprising the step of capturing a subsequent depth frame, limiting the pixels of the captured subsequent depth frame to those within the second shape.
11. The method of claim 9 or 10 wherein 20 depth frames are captured.
12. The method of any of claims 8-11 wherein the second shape is substantially the first shape or is exactly the first shape.
13. The method of any of claims 9-12 further comprising the step of aligning the second depth frame with the first depth frame.
14. The method of any of claims 9-12 further comprising aligning the subsequent depth frame with the first depth frame.
15. The method of claim 13 wherein aligning the second depth frame with the first depth frame comprises multiplying each vector in the second depth frame with a transformation matrix, which is a function of a POSE matrix of the camera that captured the depth frame and the images of the object.
16. The method of claim 13 or 15 wherein aligning the second depth frame with the first depth frame comprises rounding each point to the nearest coordinate in the first depth frame.
17. The method of claim 14 wherein aligning the subsequent depth frame with the first depth frame comprises multiplying each vector in the subsequent depth frame with a
transformation matrix, which is a function of a POSE matrix of the camera that captured the depth frame and the images of the object.
18. The method of claim 14 or 17 wherein aligning the subsequent depth frame with the first depth frame comprises rounding each point to the nearest coordinate in the first depth frame.
19. The method of any of claims 13-18 further comprising the step of smoothing the frame resulting from the alignment of the first and second, and/or the first and subsequent, depth frames.
20. The method of claim 19 wherein the smoothing step comprises defining smoothed coordinates as a function of the coordinates of the frame resulting from the alignment of the first and second depth frames.
21. The method of claim 20 further comprising aligning the smoothed frame with the image.
22. The method of any of claims 19-21 wherein the step of smoothing the frame resulting from the alignment of the first and second depth frames further comprises, for each pixel in the frame to be smoothed, calculating a mean value of the depth of the pixel and the depth of a set number of surrounding pixels, discarding all depth values that are not within one standard deviation from the mean, and calculating a mean depth for all depth values not discarded, and assigning that mean depth to the pixel as the smoothed value of that pixel.
23. The method of claim 21 wherein aligning the smoothed frame with the image comprises multiplying each vector in the smoothed frame with a transformation matrix, which is a function of a POSE matrix of the camera that captured the depth frame and the image of the object.
24. The method of any preceding claim wherein the image is a colour image.
25. A method of detecting damage of an object and repairing the damage, the method comprising: capturing an image of the damage; generating data representative of a course of repair for the damage based on the image of the damage and historical repair data of that type of object; and
repairing the object based on the generated data.
26. The method of claim 25, wherein capturing an image of the damage further comprises determining the position of the damage on the object.
27. The method of claim 25 or 26, wherein generating data representative of a course of repair is also based on the position of the damage.
28. The method of any of claims 25-27 wherein the step of generating data representative of a course of repair comprises generating a damage assessment report which contains data representative of at least one of: (i) a type of damage; (ii) an extent of the damage; and (iii) a position of the damage on the object.
29. The method of any of claims 25-28 wherein the step of generating data representative of a course of repair comprises accessing a historical database containing historical repair data and performing a comparison between entries in the damage assessment report and entries in the historical database.
30. The method of claim 29, wherein the historical database comprises data from a structural repair manual (SRM) and/or other existing approved repair data.
31. The method of claim 29 or 30, wherein the comparison comprises defining a tolerance and comparing whether an entry in the damage assessment report matches an entry in the database within the tolerance, a match being reported if the entries are within the tolerance.
32. The method of claim 31, wherein, in the event of a match between an entry in the damage assessment report and an entry in the database, the data representative of a course of repair is equal to the entry in the database.
33. The method of any one of claims 25-32, wherein the historical repair data comprises a full history of all damage suffered by and repairs performed on the object or similar
comparable objects.
34. The method of any one of claims 25-33 wherein generating data representative of a course of repair further comprises accessing a repair database containing repair data for that object or similar comparable objects and performing a comparison between entries in the damage assessment report and entries in the repair database.
35. The method of claim 34 wherein the comparison comprises defining a tolerance and comparing whether an entry in the damage assessment report matches an entry in the repair database within the tolerance, a match being reported if the entries are within the tolerance.
36. The method of any one of claims 25-35 wherein repairing the object comprises generating a repair report comprising an augmented reality view of the object and the data representative of the course of repair.
37. The method of any one of claims 25-36 wherein the capturing step includes capturing multiple frames of depth information of the object and generating the image based on the multiple frames.
38. The method of any one of claims 25-36, wherein the image of the damage is a processed image, processed according to the method of any one of claims 1-24.
39. A system for damage detection and repair of an object comprising: an image capture device for capturing an image of an object; a historical database comprising historical data; generating means for generating data representative of a course of repair of the object based on the image captured by the image capture device and the historical data in the historical database.
40. A system according to claim 39 further comprising a standard repair database comprising data from a standard repair manual, and wherein the system is configured to access the standard repair database and to communicate with the generating means.
41. A system according to claim 39 or 40 further comprising a computing device configured to access images captured by the image capture device; and configured to display an image of the type of object captured.
42. A system according to any one of claims 39-41 further comprising an anonymiser configured to anonymise the captured image and/or stored information.
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
GB1714853.7 | 2017-09-15 | ||
GB1714853.7A GB2566491B (en) | 2017-09-15 | 2017-09-15 | Damage detection and repair system |
Publications (1)
Publication Number | Publication Date |
---|---|
WO2019053468A1 true WO2019053468A1 (en) | 2019-03-21 |
Family
ID=60159273
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
PCT/GB2018/052644 WO2019053468A1 (en) | 2017-09-15 | 2018-09-17 | Damage detection and repair system |
Country Status (2)
Country | Link |
---|---|
GB (1) | GB2566491B (en) |
WO (1) | WO2019053468A1 (en) |
Cited By (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN111932505A (en) * | 2020-07-20 | 2020-11-13 | 湖北美和易思教育科技有限公司 | Book damage automatic detection method and device |
CN111932543A (en) * | 2020-10-16 | 2020-11-13 | 上海一嗨成山汽车租赁南京有限公司 | Method, electronic device and storage medium for determining pixilated car damage data |
US11138718B2 (en) | 2019-08-09 | 2021-10-05 | Caterpillar Inc. | Methods and systems for determining part wear using a bounding model |
Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006083297A2 (en) * | 2004-06-10 | 2006-08-10 | Sarnoff Corporation | Method and apparatus for aligning video to three-dimensional point clouds |
US20160012646A1 (en) * | 2014-07-10 | 2016-01-14 | Perfetch, Llc | Systems and methods for constructing a three dimensional (3d) color representation of an object |
Family Cites Families (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20100194749A1 (en) * | 2009-01-30 | 2010-08-05 | Gerald Bernard Nightingale | Systems and methods for non-destructive examination of an engine |
CN101927391A (en) * | 2010-08-27 | 2010-12-29 | 大连海事大学 | Method for performing automatic surfacing repair on damaged metal part |
CN102538677A (en) * | 2012-01-16 | 2012-07-04 | 苏州临点三维科技有限公司 | Optics-based quick pipeline detection method |
-
2017
- 2017-09-15 GB GB1714853.7A patent/GB2566491B/en active Active
-
2018
- 2018-09-17 WO PCT/GB2018/052644 patent/WO2019053468A1/en active Application Filing
Patent Citations (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
WO2006083297A2 (en) * | 2004-06-10 | 2006-08-10 | Sarnoff Corporation | Method and apparatus for aligning video to three-dimensional point clouds |
US20160012646A1 (en) * | 2014-07-10 | 2016-01-14 | Perfetch, Llc | Systems and methods for constructing a three dimensional (3d) color representation of an object |
Non-Patent Citations (2)
Title |
---|
JONG-SEN LEE: "Digital Image Smoothing and the Sigma Filter", Computer Vision, Graphics, and Image Processing, vol. 24, 1 January 1983 (1983-01-01), pages 255 - 269, XP002511283, ISSN: 0734-189X, DOI: 10.1016/0734-189X(83)90047-6 * |
MURE-DUBOIS JAMES, HÜGLI HEINZ: "Merging of range images for inspection or safety applications", Proc. of SPIE, vol. 7088, 1 January 2008 (2008-01-01), pages 70660K1 - 70660K12, XP040442105 * |
Cited By (4)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US11138718B2 (en) | 2019-08-09 | 2021-10-05 | Caterpillar Inc. | Methods and systems for determining part wear using a bounding model |
US11676262B2 (en) | 2019-08-09 | 2023-06-13 | Caterpillar Inc. | Methods and systems for determining part wear using a bounding model |
CN111932505A (en) * | 2020-07-20 | 2020-11-13 | 湖北美和易思教育科技有限公司 | Book damage automatic detection method and device |
CN111932543A (en) * | 2020-10-16 | 2020-11-13 | 上海一嗨成山汽车租赁南京有限公司 | Method, electronic device and storage medium for determining pixilated car damage data |
Also Published As
Publication number | Publication date |
---|---|
GB2566491B (en) | 2022-04-06 |
GB201714853D0 (en) | 2017-11-01 |
GB2566491A (en) | 2019-03-20 |
Similar Documents
Publication | Title |
---|---|
CN113125458B (en) | Method and system for checking and evaluating coating state of steel structure | |
EP2030139B1 (en) | Method and apparatus for obtaining photogrammetric data to estimate impact severity | |
EP3208774B1 (en) | Methods for localization using geotagged photographs and three-dimensional visualization | |
KR102094341B1 (en) | System for analyzing pot hole data of road pavement using AI and for the same | |
US20170223338A1 (en) | Three dimensional scanning system and framework | |
CN109472829B (en) | Object positioning method, device, equipment and storage medium | |
WO2019053468A1 (en) | Damage detection and repair system | |
WO2021119739A1 (en) | Method and system for detecting physical features of objects | |
KR20160018944A (en) | Method of generating a preliminary estimate list from the mobile device recognizing the accident section of the vehicle | |
US20020094134A1 (en) | Method and system for placing three-dimensional models | |
US11544839B2 (en) | System, apparatus and method for facilitating inspection of a target object | |
CN102202159A (en) | Digital splicing method for unmanned aerial photographic photos | |
EP4120190A1 (en) | Method for inspecting an object | |
EP4123578A1 (en) | Method for inspecting an object | |
US20240257337A1 (en) | Method and system for surface deformation detection | |
WO2018056129A1 (en) | Information processing device, information processing method, and storage medium | |
WO2022251906A1 (en) | Method and system for detecting coating degradation | |
US20180025479A1 (en) | Systems and methods for aligning measurement data to reference data | |
US11922659B2 (en) | Coordinate calculation apparatus, coordinate calculation method, and computer-readable recording medium | |
CN113836337A (en) | BIM display method, device, equipment and storage medium | |
JP7433623B2 (en) | Structure image management system | |
EP4120189A1 (en) | Method for inspecting an object | |
KR20160018945A (en) | Method of generating a preliminary estimate list from the mobile device recognizing the accident section of the vehicle | |
KR20220039450A (en) | System and method for establishing structural exterior map |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
| 121 | Ep: the epo has been informed by wipo that ep was designated in this application | Ref document number: 18782131; Country of ref document: EP; Kind code of ref document: A1 |
| NENP | Non-entry into the national phase | Ref country code: DE |
| 122 | Ep: pct application non-entry in european phase | Ref document number: 18782131; Country of ref document: EP; Kind code of ref document: A1 |