EP4281941A1 - Method, aircraft and system for detecting a feature of an object using a first and a second resolution - Google Patents

Method, aircraft and system for detecting a feature of an object using a first and a second resolution

Info

Publication number
EP4281941A1
Authority
EP
European Patent Office
Prior art keywords
images
feature
image
recording unit
resolution
Prior art date
Legal status
Pending
Application number
EP22701368.7A
Other languages
German (de)
English (en)
Inventor
Ulrich Seng
Current Assignee
Top Seven GmbH and Co KG
Original Assignee
Top Seven GmbH and Co KG
Application filed by Top Seven GmbH and Co KG
Publication of EP4281941A1


Classifications

    • G06V20/176 Terrestrial scenes; urban or other man-made structures
    • G01M5/0075 Investigating the elasticity of structures, e.g. deflection of bridges or aircraft wings, by means of external apparatus, e.g. test benches or portable test systems
    • B64U10/13 Rotorcraft UAVs; flying platforms
    • F03D17/00 Monitoring or testing of wind motors, e.g. diagnostics
    • F03D17/003 Inspection characterised by using optical devices, e.g. lidar or cameras
    • G01M5/0033 Investigating the elasticity of structures by determining damage, crack or wear
    • G01M5/0091 Investigating the elasticity of structures by using electromagnetic excitation or detection
    • G05D1/0088 Control of position, course, altitude or attitude of vehicles, characterized by the autonomous decision making process, e.g. artificial intelligence, predefined behaviours
    • G05D1/0094 Control of position, course, altitude or attitude of vehicles, involving pointing a payload, e.g. camera, weapon, sensor, towards a fixed or moving target
    • G06T7/0002 Image analysis; inspection of images, e.g. flaw detection
    • G06T7/70 Image analysis; determining position or orientation of objects or cameras
    • G06V20/17 Terrestrial scenes taken from planes or by drones
    • B64U2101/26 UAVs specially adapted for manufacturing, inspections or repairs
    • B64U2101/30 UAVs specially adapted for imaging, photography or videography
    • B64U2201/10 UAVs characterised by autonomous flight controls, i.e. navigating independently from ground or air stations, e.g. using inertial navigation systems [INS]
    • F05B2260/80 Diagnostics
    • F05B2270/8041 Devices generating input signals; optical devices; cameras
    • G06T2207/10032 Satellite or aerial image; remote sensing
    • G06T2207/10148 Special mode during image acquisition; varying focus

Definitions

  • Embodiments in accordance with the present invention relate to methods, aircraft and systems for detecting a feature of an object. Further exemplary embodiments relate to AI (artificial intelligence)-supported inspections of wind turbines using drones.
  • Wind turbines form an integral part of a sustainable energy supply.
  • the use of the inexhaustible resource wind enables emission-free and safe production of electricity.
  • Although this type of energy generation by converting mechanical energy into electrical energy does not pose any immediate risks, it is still of great importance to regularly maintain and check the systems themselves. Due to the high performance and corresponding dimensioning of modern wind turbines, these have to withstand very high mechanical forces over a long period of many years. Damage should therefore be detected in good time so that it can be repaired, not only with regard to the safety of the system but also with regard to its efficiency, for example in the case of damage to the rotors of the wind turbine, which can reduce the efficiency of the system.
  • The object on which the present invention is based is to create a concept that enables features of an object to be detected with little expenditure of time, with high accuracy and at low cost.
  • Exemplary embodiments according to a first aspect of the present invention create a method for detecting a feature of an object, the method comprising a step (a) of flying along the object and optically capturing at least part of the object with at least one recording unit at a first resolution in order to generate a plurality of images, each image representing an at least partially different region of the object.
  • the method further includes a step (b) comprising evaluating the plurality of images to classify the generated images into images not containing the feature and into images containing the feature.
  • The method further includes a step (c) of optically capturing again, with a second resolution that is higher than the first resolution, those areas of the object whose assigned images contain the feature.
  • Exemplary embodiments according to a second aspect of the present invention create a method for detecting a feature of an object, the method comprising a step (a) of flying along the object and optically capturing at least part of the object with at least one recording unit in order to generate, for each captured area, an image with a first resolution and a plurality of partial images, each with a second resolution that is higher than the first resolution.
  • The method also includes a step (b) which includes evaluating the plurality of images to classify the generated images into images not containing the feature and into images containing the feature.
  • The method also includes a step (c), which includes providing the partial images of those regions of the object whose assigned images contain the feature.
  • Further exemplary embodiments create an unmanned aircraft, for example a drone, for detecting a feature of an object, with at least one recording unit for generating images by optical capture.
  • the unmanned aircraft can be controlled in order to fly over the object and to optically capture at least a part of the object by the recording unit with a first resolution in order to generate a plurality of images, each image representing an at least partially different area of the object.
  • the unmanned aircraft can be controlled in order to optically capture those areas of the object whose assigned images contain the feature again with a second resolution that is higher than the first resolution.
  • Further exemplary embodiments create an unmanned aircraft, for example a drone, for detecting a feature of an object, with at least one recording unit for generating images by optical capture.
  • the unmanned aircraft can be controlled in order to fly over the object and to optically capture at least part of the object using the recording unit in order to generate a plurality of images, each image representing an at least partially different area of the object.
  • the unmanned aircraft can be controlled in order to generate an image with a first resolution and a plurality of partial images, each with a second resolution that is higher than the first resolution, for each area.
  • the system is designed to evaluate the plurality of images in order to classify the generated images into images that do not contain the feature and into images that contain the feature.
  • the unmanned aircraft can be controlled in order to optically record those areas of the object whose assigned images contain the feature again with a second resolution that is higher than the first resolution.
  • Further exemplary embodiments according to the second aspect of the present invention create a system for detecting a feature of an object with an unmanned aerial vehicle, for example a drone, wherein the unmanned aerial vehicle can be controlled in order to fly along the object and to optically capture at least a part of the object with the recording unit in order to produce a plurality of images, each image representing an at least partially different region of the object.
  • the unmanned aircraft can be controlled in order to generate an image with a first resolution and a plurality of partial images, each with a second resolution that is higher than the first resolution, for each area.
  • The system is designed to evaluate the plurality of images in order to classify the generated images into images that do not contain the feature and into images that contain the feature, and to provide the partial images of those areas of the object whose associated images contain the feature, e.g. for classification or cataloging of the detected features.
  • Embodiments according to the first and second aspects of the present invention are based on the core idea of flying along the object in order to detect a feature of the object, capturing at least part of the object with a recording unit at a first resolution, and providing, for those areas of the object that have the feature, recordings with a second resolution that is higher than the first resolution, e.g. by re-capturing such an area at the second resolution (first aspect) or by generating several partial images at the second resolution for each area and providing those partial images that contain the feature (second aspect).
  • the object can be a large and difficult to access object, such as a wind turbine, oil rig, bridge, crane, factory, refinery, or ship.
  • the feature of the object can be damage to the object, for example.
  • a method according to the invention can be used to detect this damage, for example in the form of cracks, holes, deviations from geometric norms, such as bending, rusting or other indicators that allow conclusions to be drawn about the structural integrity of the object .
  • By flying along the object, for example with an unmanned aircraft such as a drone, such an object can be captured with little expenditure of time and resources.
  • The use of people for the inspection itself, such as climbers on wind turbines, can be dispensed with, which not only saves costs but also prevents people from being endangered.
  • parts of the object can be detected that would otherwise be unreachable.
  • one approach according to the invention is to generate new higher resolution images after capturing and classifying images of an object from those areas of the object that contain a flaw or other feature of interest.
  • A rough capture of the object or of features of the object can be carried out on the basis of a plurality of images with the first resolution. These images can be evaluated and classified with regard to the feature, such as damage, in order to re-capture, at the second, higher resolution, those areas of the object whose associated images contain the feature. On the basis of the images with the second resolution, even small damage can thus be detected with high accuracy.
  • The evaluation of the plurality of images can be automated, for example based on machine-learning approaches, or performed manually, for example by a human. Furthermore, the evaluation of the plurality of images can also take place in a partially automated manner, for example with automated methods that support a human in the evaluation.
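  • As a purely illustrative sketch (not part of the patent text), the bookkeeping of such an automated classification could look as follows in Python; the `contains_feature` callable stands in for whatever evaluator is used, e.g. a trained machine-learning model, a partially automated tool or a human reviewer, and all names are hypothetical.

```python
from dataclasses import dataclass
from pathlib import Path
from typing import Callable, List, Tuple

@dataclass
class CapturedImage:
    path: Path             # image file from the first pass (first resolution)
    waypoint_id: int       # waypoint from which the image was taken
    position: tuple        # recorded position of the recording unit (x, y, z)
    orientation: tuple     # recorded orientation of the recording unit

def classify_images(
    images: List[CapturedImage],
    contains_feature: Callable[[Path], bool],
) -> Tuple[List[CapturedImage], List[CapturedImage]]:
    """Split the first-resolution images into the two classes of step (b)."""
    with_feature: List[CapturedImage] = []
    without_feature: List[CapturedImage] = []
    for img in images:
        (with_feature if contains_feature(img.path) else without_feature).append(img)
    return with_feature, without_feature
```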
  • Another flight can be carried out after the plurality of images has been evaluated on the ground, in order to generate new images with the second resolution, or the evaluation can take place during the flight, i.e. while the object is being flown along and at least part of the object is being optically captured.
  • the distance between the recording unit and the object can be reduced or a zoom setting of the recording unit can be changed.
  • a raster image can be generated, e.g., based on a previous image classification, with each partial image of the raster image having the second, higher resolution.
  • a plurality of partial images can also be generated, each with the second resolution.
  • the entire amount of data can be generated for evaluation, for example with regard to damage to the object.
  • A holistic scan of the object can thus be created, consisting, for example, of a large number of images with the first resolution, where a large number of these images, or even each of them, is accompanied by a large number of partial images with the second resolution, so that a data set with multi-level accuracy is available for describing the object.
  • This can have advantages, for example, in applications that are particularly critical to safety, in which complete verification of the status of the object is required.
  • Such a data set can provide an intuitive means of evaluation, for example for a human.
  • The human can, for example, start from an overview of the object in the form of a 3-D model consisting of the images with the first resolution and then, using the partial images with the second resolution stored for a large number of or even all of these images, examine important areas of the object more closely.
  • the plurality of images with the first resolution and the plurality of partial images with the second resolution can also be evaluated automatically, for example, in order to split the generated images into images that do not contain the feature and into images that contain the feature, so that, for example, partial images of those areas of the object whose associated images contain the feature can be made available and, for example, highlighted to the human being.
  • In other words, images with the first resolution can be generated together with high-resolution raster images with the second resolution; the images are then classified and, based on the classification, partial images, for example raster images, of those images that represent defective areas are provided. It should be pointed out here that the evaluation of the plurality of images and/or partial images can take place in an automated or partially automated manner or manually.
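  • As an illustration only, such a multi-level data set could be organized as sketched below (a hypothetical structure, assuming one overview image plus a set of raster tiles per captured area).

```python
from dataclasses import dataclass, field
from pathlib import Path
from typing import Dict, List

@dataclass
class AreaRecord:
    overview: Path                                    # image Bx of the area, first resolution
    tiles: List[Path] = field(default_factory=list)   # partial images Bx1..Bxn, second resolution
    contains_feature: bool = False                    # classification result for the overview image

# One record per captured area, keyed e.g. by waypoint; scan["WP2"] would hold B2 and B21-B24.
scan: Dict[str, AreaRecord] = {}
```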
  • When flying along the object for the first time, in addition to the first images required for the evaluation, e.g. an evaluation on the ground using a laptop, a higher-resolution raster image can, for example, also be generated automatically for each image, i.e. a plurality of partial images each with the second resolution.
  • For example, the object is flown along autonomously with an unmanned aircraft, for example a drone, which includes the recording unit.
  • the drone can stop at a waypoint of its flight trajectory, generate the first image and then, for example, immediately generate the high-resolution raster image from the same position.
  • A zoom lens, for example, can be used to increase the resolution.
  • the recording unit can also have a plurality of cameras, for example two or three cameras.
  • The recording unit can have a plurality of lenses or camera lenses (multi-lens camera) to increase the resolution.
  • the high-resolution images with the features can then be displayed directly from the raster images. This means that there is no need for a second flight, since all the necessary high-resolution images are already available and only the selection has to be made.
  • An automated evaluation of the images, for example the images with the first resolution or the images with the second resolution, can be carried out while or after the object is being flown along.
  • A decision can be made, for example in flight, directly after capturing an image with the first resolution, by evaluating or classifying the image, as to whether further images of the associated area of the object are to be captured with the second, higher resolution.
  • Alternatively, one or more images with the first resolution and, associated with them, a multiplicity of images with the second, higher resolution can be generated for at least part of the object, and the evaluation or classification can be done in flight, i.e. while flying along the object, or afterwards, based on any combination of the images.
  • The evaluation or classification can also be carried out after the object has been flown along to generate the images with the first resolution and before the object is flown along again to generate images with the second resolution, on an external computer, i.e. a computer, e.g. a laptop, that is not part of the recording unit or of an unmanned aircraft such as a drone.
  • methods in accordance with the present invention enable accurate scanning of an object with little expenditure of time and resources.
  • step (b) is carried out after the object has been flown over, and step (c) comprises flying over to those regions of the object whose associated images contain the feature.
  • Pre-processing can take place on the basis of the images with the first resolution from step (a), on the basis of which step (c) can be carried out, so that, for example, only those areas of the object whose associated images contain the feature are recorded again with the second resolution.
  • Significant time and data savings can thus be achieved.
  • By carrying out the automated evaluation of the plurality of images after the object has been flown along, the evaluation can also be performed on an external computer, so that an unmanned aerial vehicle, for example, only has to meet low hardware requirements to fly along the object. Furthermore, particularly in the case of large objects, it may be necessary to restore the aircraft's ability to fly after flying along the object, which can be done at the same time as the evaluation to save time.
  • Step (c) can also be carried out after step (b) and, for example, after the stopover explained above.
  • the images can be evaluated after the flight and then the object can be flown again to generate higher resolution images of the defective areas.
  • The recording unit generates an image with the same focal length in step (a) and in step (c). Furthermore, the object is flown to in step (a) in such a way that the recording unit is at a first distance from the object when an image is generated, and the object is flown to in step (c) in such a way that the recording unit is at a second distance from the object, which is less than the first distance, when an image is generated.
  • the object is flown to in step (a) and in step (c) in such a way that the recording unit is at the same or similar distance from the object when an image is generated. Furthermore, in step (a) the recording unit generates an image with a first focal length and in step (c) an image with a second focal length, the second focal length being greater than the first focal length.
  • the resolution can accordingly be increased by changing the focal length of a lens of the recording unit, for example by changing a zoom setting or by exchanging the lens.
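  • As a rough orientation only (a simple pinhole-camera approximation, not part of the patent text), the object-side resolution R obtained with a sensor pixel pitch p, a focal length f and a capture distance d can be estimated as R ≈ f / (d · p) pixels per millimetre; the second, higher resolution can therefore be reached either by increasing f (zoom setting or lens change) or by reducing d (flying closer), which is exactly the trade-off described above.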
  • The recording unit can, for example, be refitted during a stopover in order to make recordings in step (c) with the second focal length.
  • the recording unit can also be equipped with cameras of different focal lengths, so that steps (a) and (c) can also be carried out during a single flight.
  • the recording unit in step (a) uses at least one of a first camera with the first focal length, a first lens with the first focal length and/or a first camera lens with the first focal length or a zoom lens with a first zoom setting according to the first focal length.
  • Step (c) includes replacing the first camera of the recording unit with a second camera with the second focal length and/or replacing the first lens of the recording unit with a second lens with the second focal length and/or replacing the first camera lens of the recording unit with a second camera lens with the second focal length, or setting the zoom lens of the recording unit to a second zoom setting according to the second focal length.
  • Both the change of the camera, the lens and/or the camera lens, and the change of the zoom setting can be carried out in the course of a stopover or even while the object is being flown along, e.g. automatically.
  • The resolution can thus be increased without changing the capture distance, i.e. the distance between the recording unit and the object, so that, for example, when the object is flown along automatically or autonomously, no changes to the waypoints that define the flight trajectory are necessary.
  • the renewed optical detection of the area in step (c) comprises the generation of a plurality of partial images of the area, each with the second resolution.
  • Step (c) can, in particular, include the above-described flying to those areas of the object whose assigned images contain the feature. In simple terms, a higher-resolution raster recording of the corresponding area can thus be generated. In this way, damage can be detected reliably and accurately.
  • each image generated in step (a) is assigned position and/or orientation information of the recording unit.
  • the areas of the object to be flown over are determined using the position and/or location information of the images that contain the feature.
  • Waypoints for flying along or flying to the object again can be generated based on the position information of the recording unit.
  • A trajectory can be generated which, for example, only contains waypoints that are suitable for capturing areas containing possible damage. Accordingly, flying to or along the object again to generate images with the second resolution can be carried out with little expenditure of time.
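  • A minimal sketch of how such waypoints could be derived from the position information of the feature-containing images is given below; it reuses the hypothetical CapturedImage records from the classification sketch above, and the reduced stand-off distance (e.g. 3 m instead of 8 m, as in the example mission further below) is an assumption made for illustration.

```python
from typing import Dict, List

def waypoints_for_refly(with_feature: List[CapturedImage],
                        standoff_m: float = 3.0) -> List[Dict]:
    """Build a waypoint list that only revisits areas whose images contain the feature."""
    waypoints = []
    for img in sorted(with_feature, key=lambda i: i.waypoint_id):
        waypoints.append({
            "source_waypoint": img.waypoint_id,   # waypoint of the first pass
            "position": img.position,             # stored capture position of the recording unit
            "orientation": img.orientation,       # keep the camera pointing at the same area
            "standoff_m": standoff_m,             # desired (reduced) distance to the object
        })
    return waypoints
```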
  • step (b) includes the transmission of the images generated in step (a) from the unmanned aircraft to a computer, for example a laptop computer, and the evaluation of the images by the computer.
  • the evaluation of the images includes an automated evaluation of the images, for example part of the evaluation can be carried out automatically or the entire evaluation of the images.
  • The renewed optical capture of step (c) can also take place during the flight of step (a), so that the selection of those areas of the object that are to be recorded with the second resolution, based on the images with the first resolution, can take place while the object is being flown along.
  • an aircraft with lower hardware requirements can be used.
  • the images can also be transmitted and evaluated by the computer during a landing, for example in preparation for the object flying again to generate the images with the second resolution.
  • the unmanned aerial vehicle autonomously flies the object in step (a). Furthermore, step (b) comprises generating waypoints using the position and/or location information of the images containing the feature and transmitting the waypoints, for example from the computer described above, to the unmanned aerial vehicle. Also, in step (c), the UAV autonomously flies to the areas of the object using the waypoints. As a result, the object or the feature of the object can be detected fully autonomously. A time-efficient trajectory can be planned using the waypoints, by means of which the corresponding areas of the object can be recorded with the second resolution in step (c). The waypoints can be created, for example, based on a CAD model of the object.
  • the model can be improved by detecting the object, and on the basis of this the waypoints can be adjusted.
  • the object is flown over autonomously with an unmanned aircraft, eg a drone, which includes the recording unit.
  • the unmanned aerial vehicle comprises a computer, wherein step (b) comprises evaluating the images and generating waypoints using the position and/or location information of the images containing the feature by the computer of the unmanned aerial vehicle.
  • the evaluation of the images includes an automated evaluation of the images, the evaluation can therefore, for example, be carried out in a fully automated or partially automated manner.
  • the UAV autonomously flies to the areas of the object using the waypoints.
  • the object or the feature of the object can be detected by a single and thus, for example, time-efficient flight.
  • the generation of the waypoints can also include an adaptation of existing waypoints, so that the unmanned aircraft adapts its own flight trajectory while the images are being recorded on the basis of the evaluation of the images.
  • Steps (a) to (c) are performed while flying along the object such that in step (a) an image of an area is generated and in step (b) the image generated in step (a) is classified before an image is generated for a further area.
  • Steps (a) to (c) are performed while flying along the object such that, if in step (b) the image is classified as containing the feature, in step (c) the area is optically captured again before an image is generated for the further area, and if in step (b) the image is classified as not containing the feature, an image is generated for the further area directly.
  • The feature or the object can thus be detected in a single flight along the object.
  • The classification of the images generated in step (a) makes it possible to avoid taking images with the second, higher resolution of areas that do not contain the feature, i.e. damage to the object, for example, so that only relevant information is recorded.
  • the evaluation of the images can be carried out during the flight and images with the higher resolution, for example only of defective areas, can be generated.
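  • The single-flight variant described in the preceding paragraphs can be summarized by the following schematic control loop; all callables (`fly_to`, `capture`, `classify`, `capture_high_res`) are hypothetical placeholders for the drone and recording-unit integration and are not defined by the patent.

```python
def single_pass_inspection(waypoints, fly_to, capture, classify, capture_high_res):
    """Schematic single-flight realization of steps (a) to (c)."""
    results = []
    for wp in waypoints:
        fly_to(wp)
        image = capture(wp)                    # step (a): image with the first resolution
        if classify(image):                    # step (b): in-flight classification
            detail = capture_high_res(wp)      # step (c): second, higher resolution
            results.append((wp, image, detail))
        else:
            results.append((wp, image, None))  # continue directly with the next area
    return results
```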
  • the recording unit generates an image with the same focal length in step (a) and in step (c).
  • the object is flown to in step (a) in such a way that the recording unit is at a first distance from the object when an image is generated.
  • the distance between the recording unit and the object is reduced to a second distance which is less than the first distance.
  • the higher resolution is achieved by getting closer to the object. The distance can be reduced quickly and easily, for example, by adapting the flight trajectory.
  • the object is flown to in step (a) such that the recording unit is at a first distance from the object when generating an image, and the recording unit generates an image with a first focal length. Furthermore, in step (c), the distance from the capturing unit to the object is equal to or similar to the first distance, and the capturing unit generates an image with a second focal length which is greater than the first focal length.
  • the predetermined waypoint can be retained, so that no additional adaptation of the flight trajectory is necessary. In this case, for example, only the length of time during which the unmanned aircraft is located at the corresponding waypoint can be increased in order to generate images with a second resolution.
  • the distance between the recording unit and the object in step (c) can be similar or even the same as the first distance, so that an improvement in resolution due to a change in focal length outweighs an improvement in resolution due to a change in distance.
  • the two distances can deviate from one another by a few percent, e.g. by less than 5% or less than 10% or less than 20%.
  • the predetermined waypoint or the flight trajectory can also be varied or adapted according to exemplary embodiments.
  • the recording unit comprises at least one of a plurality of objectives, a zoom lens, a plurality of cameras and a plurality of camera lenses. Furthermore, in step (a), the recording unit uses a first camera, a first lens, and/or a first camera lens with the first focal length or sets the zoom lens to a first zoom setting according to the first focal length. In addition, in step (c), the recording unit uses a second camera, a second lens and/or a second camera lens with the second focal length or adjusts the zoom lens to a second zoom setting according to the second focal length. These adjustments can be carried out automatically during the flight, so that the feature of the object can be detected with a single flight of the object, for example.
  • the zoom setting and/or the selection of the camera, the lens and/or the camera lens can be linked to the waypoints and/or a time sequence of the flight, for example.
  • the renewed optical detection of the area in step (c) comprises the generation of a plurality of partial images of the area, each with the second resolution.
  • the plurality of partial images can accordingly form a raster image with a higher resolution, so that, for example, a plurality of detailed images can be generated associated with an image with the first resolution.
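  • One way to realize such a raster of zoomed partial images is to tile the field of view of the first-resolution image with gimbal offsets, as sketched below; the fields of view and the overlap factor are illustrative assumptions, not values from the patent.

```python
import math

def raster_offsets(fov_wide_deg, fov_tele_deg, overlap=0.1):
    """Gimbal (yaw, pitch) offsets that cover one wide frame with zoomed tiles."""
    grids = []
    for axis in (0, 1):                                   # 0: horizontal, 1: vertical
        step = fov_tele_deg[axis] * (1.0 - overlap)       # tile spacing with overlap
        n = max(1, math.ceil(fov_wide_deg[axis] / step))  # tiles needed along this axis
        start = -0.5 * (n - 1) * step
        grids.append([start + i * step for i in range(n)])
    return [(yaw, pitch) for yaw in grids[0] for pitch in grids[1]]

# Example: a 40 x 27 degree wide frame covered by 13 x 9 degree zoomed tiles.
tiles = raster_offsets((40.0, 27.0), (13.0, 9.0))
```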
  • the object is flown over autonomously with an unmanned aircraft, e.g. a drone, which includes the recording unit.
  • the unmanned aircraft includes a computer, step (b) comprising the evaluation of the images by the computer of the unmanned aircraft.
  • the evaluation of the images includes an automated evaluation of the images, the evaluation can therefore, for example, be carried out in a fully automated or partially automated manner.
  • By providing the computing power on the aircraft, the feature of the object can be detected, for example, with a single flight along the object, so that the method can be carried out in a particularly time-efficient manner.
  • the aircraft can use the computing power to plan its own trajectory, for example in the form of waypoints, adaptively, based on the evaluation results. A fully autonomous detection of the feature of the object can thus be carried out.
  • Embodiments according to the first aspect of the present invention have the following step (d), wherein step (d) includes transmitting the images generated in step (c) to an evaluation unit, eg for classifying or cataloging the detected features.
  • the images with the higher resolution, which are generated in step (c) can be transmitted to an evaluation unit, such as an external computer, for evaluating the features of the object, for example damage to the object, such as cracks.
  • This information can then be entered into an existing model of the object, for example.
  • Further instructions can also be sent back to the unmanned aircraft by the evaluation unit, for example, when a specific type of damage is detected, a renewed approach to and capture of the damage, for example using other measurement methods.
  • an electrical measurement of the lightning protection device of the wind turbine could be carried out on the basis of an image analysis.
  • the object is flown to in step (a) in such a way that the recording unit is at a fixed distance from the object when generating an image and the partial images. Furthermore, in step (a), the recording unit generates the image with a first focal length and the partial images with a second focal length, which is greater than the first focal length. In other words, the resolution is increased by changing the focal length.
  • the recording unit comprises a zoom lens which uses a first zoom setting according to the first focal length when generating the image, and which uses a second zoom setting according to the second focal length when generating the partial images.
  • the zoom setting can be changed during flight, so that both the first and second resolution images can be taken from the same waypoints of the flight trajectory. Accordingly, this type of generation of the partial images can be carried out in a particularly time-efficient manner.
  • step (b) includes the transmission of the images and partial images generated in step (a) from the unmanned aircraft to a computer, eg a laptop computer, and the computer evaluating the images.
  • the evaluation of the images includes an automated evaluation of the images; the evaluation of the images can, for example, take place in a fully automated or partially automated manner.
  • Step (c) includes the provision, by the computer, of the partial images of the image assigned to the region. The evaluation can be carried out using machine learning methods, for example. In simple terms, classified images and detail images in the form of partial images are thus generated by the method according to the invention and made available by the computer. In this way, an intuitive evaluation or assessment of the feature, i.e. damage, for example, can be carried out.
  • the object is flown over autonomously with an unmanned aircraft, e.g. a drone, which includes the recording unit.
  • the unmanned aircraft includes a computer, step (b) comprising the evaluation of the images and the partial images by the computer of the unmanned aircraft.
  • The evaluation of the images (B1-B4) and the partial images (B11-B44) includes an automated evaluation of the images (B1-B4) and the partial images (B11-B44).
  • the images and partial images can therefore be evaluated, for example, in a partially automated or fully automated manner.
  • The unmanned aircraft transmits the partial images to an evaluation unit, e.g. for classifying or cataloging the detected features.
  • The feature of the object can thus already be detected with a reduced expenditure of time while the object is being flown along.
  • The classified or cataloged features can then be made available by the evaluation unit, e.g. directly to a person for evaluation, who, for example, can in turn initiate further measurements of relevant areas, e.g. with non-optical measurement methods (e.g. lightning protection measurement and/or moisture measurement).
  • the feature to be detected comprises a defect of the object or a predetermined element of the object.
  • the error can be, for example, a crack, a hole, rust, damage to the paintwork or other optically recognizable surface changes.
  • the feature can also be a predetermined element of the object, such as a characteristic geometry of the object, for example the wing tips or the rotor flanges in the case of a wind turbine, or a specific component such as a rivet or screw.
  • step (b) comprises AI or machine learning.
  • Such methods can evaluate, analyze or categorize images at high speed with little use of resources, for example with regard to the presence of a feature of an object.
  • a method according to the invention with a known reference object or a known reference feature can be used to create training data for corresponding methods.
  • The AI or machine learning can be used alone or to support a human. Methods of this type make it possible to save a great deal of time, particularly with regard to evaluating a large number of images for large objects, such as an oil rig or a wind turbine.
  • The object comprises a power generation plant, e.g. a wind turbine or a solar plant, or an industrial plant, e.g. an oil rig, a factory, a refinery, or a building, e.g. a skyscraper, or an infrastructure facility, e.g. a bridge.
  • the object can also be a crane, for example.
  • a method according to the invention can be carried out, for example with regard to an industrial plant during ongoing operation, without endangering a person or without the operation having to be stopped.
  • According to exemplary embodiments, the unmanned aircraft, e.g. a drone, is designed to transmit the plurality of images to an external computer, e.g. a laptop computer, which classifies the generated images into images that do not contain the feature and into images that do contain the feature.
  • the unmanned aircraft is designed to receive information from the external computer that indicates the areas of the object that are to be optically recorded with the second resolution. Based on the information from the external computer, the unmanned aircraft can generate images of the respective areas with the second resolution accordingly.
  • The communication and the use of the information can take place during the flight, for example while flying along the object and generating the images with the first resolution, or during a stopover, for example between flying along the object to generate the images with the first resolution and flying along the object again to generate images of selected areas of the object with the second resolution.
  • the stopover can also be used to restore the ability of the unmanned aircraft to fly, e.g. to change the batteries of a drone.
  • The computer can also provide the trajectory planning of the unmanned aircraft, for example for an autonomous flight, and form the previously explained evaluation unit.
  • According to exemplary embodiments, the system comprises the unmanned aircraft, e.g. a drone, and a computer which is designed to evaluate the plurality of images in order to classify the generated images into images that do not contain the feature and into images that contain the feature.
  • FIG. 1 shows a schematic representation of an object and a recording unit with a flight trajectory according to an embodiment of the present invention
  • FIG. 2 shows a schematic representation of a detail from FIG. 1 with a modified flight trajectory according to an exemplary embodiment of the present invention
  • FIG. 3 shows a schematic representation of an image classification according to an embodiment of the present invention
  • FIG. 4 shows a schematic side view of a wind turbine with damage that is detected with the aid of exemplary embodiments according to the present invention
  • FIG. 5 shows a flowchart of a method for detecting a feature of an object according to an embodiment of the present invention.
  • FIG. 6 shows a flowchart of a further method for detecting a feature of an object according to an embodiment of the present invention.
  • FIG. 1 shows an object 110, an unmanned aircraft 120 with a recording unit 130 and a computer 140.
  • the aircraft 120 is only to be regarded as optional, as a possible implementation of a mobile recording unit, for example in the form of a drone with a camera.
  • the aircraft 120 flies over the object in order to detect one or more features 110a.
  • the flight trajectory is marked with waypoints WP1 to WP4.
  • the flight trajectory can come from a previous trajectory plan, for example.
  • waypoints can be generated, for example by a computer 140, which is made available to the aircraft 120 via a connection 140a.
  • the computer 140 can also be part of the aircraft 120 so that the aircraft plans its own trajectory autonomously, for example.
  • the trajectory can also be specified manually by a human pilot, for example in the absence of a 3D model of object 110.
  • the recording unit 130 captures the front side 110b of the object 110 or, to put it more generally, a part of the object.
  • The recording unit 130 generates a plurality of images B1-B4 with a first resolution, each image representing an at least partially different area of the object 110 or of the front side 110b of the object.
  • the images B1-B4 can partially overlap, or to put it another way, they can have information about a partially identical image detail or about a partially identical area of the object. However, the images may also not overlap.
  • each of the waypoints WP1-WP4 is associated with one of the images B1-B4. To put it simply, a large number of waypoints can be defined from which the object is recorded, for example photographed.
  • Images B1-B4 are classified 140c into images 210 not containing feature 110a, comprising images B1, B3, B4, and into images 220 containing feature 110a, comprising image B2.
  • the classification is implemented in FIG. 2 as an example with computer 140 and can be carried out by any classification unit.
  • the classification can be implemented in particular with methods of machine learning.
  • The classification can be carried out, for example, with artificial intelligence (AI).
  • the computer 140 or also a corresponding classification unit can be part of the aircraft 120, so that a corresponding communication 140a, 140b is to be regarded as optional.
  • the bidirectionality of the communication 140a, 140b shown in FIG. 1 is also optional; any combination of information can be transmitted from the aircraft to the computer or vice versa.
  • A large number of possible task distributions, e.g. with regard to image classification, calculation of waypoints or classification of the feature, is conceivable between the aircraft 120 and the computer 140.
  • The recording unit 130 can optically capture again, with a second resolution that is higher than the first resolution, the area of the object 110 containing the feature, i.e. the area whose assigned image B2 shows the detected feature 110a.
  • the aircraft can remain on the waypoint WP2 after generating the image B2 and after recognizing the feature on the image B2 and can record the corresponding area of the object again.
  • the aircraft 120 does not have to stay on the waypoint WP2, but can only maintain an equal or similar distance to the object 110, for example.
  • the acquisition unit 130 may increase the focal length, for example by adjusting a zoom setting or changing a lens. As a result, an image B21 with a higher resolution, which is a partial image of the image B2, for example, can be generated.
  • The position of the feature can also be evaluated using position and attitude data of the aircraft 120 in order to align the recording unit 130 with the feature 110a for the generation of the image B21.
  • the image B21 does not necessarily have to include a sub-area of the image B2.
  • the area of the object captured with image B21 can, for example, only be selected as a function of the position of feature 110a, so that the image section of image B21 can be selected independently of areas in images B1-B4.
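  • How the recording unit could be aligned with an estimated feature position from the stored position and attitude data is sketched below; coordinate frames and conventions are assumptions made only for illustration.

```python
import math

def aim_at_feature(camera_pos, feature_pos):
    """Yaw and pitch (degrees) pointing from the camera position to the feature position.

    Both positions are (x, y, z) tuples in a local east-north-up frame.
    """
    dx = feature_pos[0] - camera_pos[0]
    dy = feature_pos[1] - camera_pos[1]
    dz = feature_pos[2] - camera_pos[2]
    yaw = math.degrees(math.atan2(dx, dy))                    # 0 degrees = north
    pitch = math.degrees(math.atan2(dz, math.hypot(dx, dy)))  # positive = tilt upwards
    return yaw, pitch

# Example: drone at (0, 0, 60) m, suspected damage at (5, 30, 75) m.
yaw_deg, pitch_deg = aim_at_feature((0.0, 0.0, 60.0), (5.0, 30.0, 75.0))
```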
  • A set of partial images B21-B24 can also be generated with the second resolution. Furthermore, independently of a classification and recognition of the feature 110a in the images B1-B4, a large number of areas of the object, or for example each area of the object, can be recorded by a number of partial images of the area, each with the second resolution (e.g. for image B1 the partial images B11-B14, for image B2 the partial images B21-B24, etc.). Corresponding partial images can be transmitted, for example, via communication 140b to an evaluation unit, for example in the form of computer 140, e.g. for classifying or cataloging the detected feature 110a.
  • the images B1 -B4 can also be classified after flying off the waypoints WP1 -WP4 and then landing at the starting point S.
  • a second flight trajectory 150 can then be specified for the aircraft 120 via the communication 140a on the basis of the classification and associated position and/or attitude information via the feature 110a.
  • FIG. 1 shows trajectory 150 as a flight from starting point S to waypoint WP2 and back to starting point S.
  • At waypoint WP2, the previously explained re-capture of the area of the object takes place.
  • a region of the object containing feature 110a can be recorded with a single image B21 or with a grid of partial images of image B2, for example including partial images B21-B24, with the second resolution.
  • the increase in resolution can in turn be achieved by changing a zoom setting or by changing the lens of the recording unit 130 .
  • These adjustments can, for example, be carried out manually during an intermediate landing, after the object 110 has been flown along for the first time and before the object 110 is approached again via the flight trajectory 150.
  • the flight itself can in turn be carried out autonomously or manually.
  • FIG. 3 shows a section of FIG. 1 with the object 110, the aircraft 120, the recording unit 130 and the computer 140.
  • Waypoint WP2 is at a first distance d1 from the object 110.
  • The aircraft 120 can approach the object at waypoint WP2a, so that the distance is reduced to a distance d2.
  • the object 110 or the region of the object which has the feature 110a can be recorded with a higher resolution, for example without changing a zoom setting or replacing a lens.
  • the adaptation of the trajectory of the aircraft 120 can take place during the flight, as optionally shown in FIG. 3 .
  • the trajectory can be changed during the flight by communication 140b with the computer 140 .
  • The recording unit or the aircraft can, for example, communicate the image B2 to the computer 140 and then receive back new waypoints WP2a.
  • the aircraft itself may include the computer 140 and perform the classification and trajectory adjustment itself. Any combination is also possible, so that the aircraft or the recording unit can, for example, carry out the classification independently, communicate the classification result to an external computer and receive back waypoints.
  • a corresponding trajectory adjustment with a reduction in distance can also take place during an intermediate landing via communication 140a from FIG. 1 .
  • The flight trajectory can also consist a priori of waypoints at different distances from the object 110, so that, for example, a grid of partial images (B11-B14, B21-B24, etc.) with the second resolution is created for a plurality of areas of the object by reducing the distance.
  • FIG. 4 shows a wind turbine 400 with a tower 410, a nacelle 420, a rotor hub 430, rotors or blades 440, rotor blade or blade tips 450 and rotor blade flanges 460.
  • One of the rotors 440 has damage 110a.
  • This damage 110a or the defect can be a crack, for example.
  • the damage 110a can be the feature of the wind turbine that is to be detected.
  • When creating a model of the wind turbine 400, for example, the wing tips 450 and/or the rotor blade flanges 460 can be the features of the wind turbine 400 which are to be captured.
  • a 3-D model of the wind power plant can be created from their recording, for example via a known position of the aircraft at the time of recording and the imaging geometry. Waypoints for autonomous inspection flights can in turn be generated from this 3D model. Also shown are captured images 210 not containing the feature and captured images 220 containing the feature in the form of image 220a in the event that the feature is damage 110a and alternatively or additionally images 220b in the event that the feature is the wing tips 450.
  • partial image 470 is entered as an example for clarification.
  • Areas whose images 220a contain the feature can thus be captured again with a higher resolution, whereby the entire previously scanned area of the object does not have to be scanned again; instead, only a portion of the original image section or of the area of the object can be captured.
  • an image Bx does not necessarily have to be divided completely into partial images Bxx by means of renewed acquisition with increased resolution.
  • a corresponding partial image 470 does not have to lie completely within the image with the first resolution, from the evaluation of which the position of the feature was identified.
  • a "raster capture" in simple terms is indicated in comparison for a detection of the wing tips 450 for the images 220b.
  • Error detection takes place, for example, in that damage 110a is identified by identifying error-free areas (images 210), which are referred to as patterns in this context; this pattern recognition can then be used to deduce the error 110a.
  • The areas of the wind power plant 400, or of the pattern, that the classification does not consider error-free, that is to say considers defective and therefore not excluded, are flown to again in order to generate high-resolution error images. Automatic generation of waypoints can be used for the renewed approach.
  • the high-resolution defect images can be, for example, the image with the second resolution B21 or also the multiplicity of partial images B21-B24.
  • high-resolution defect images can be generated by using zoom lenses and/or by flying closer to the areas recognized as not being defect-free. Furthermore, an exchange of the lens of the recording unit is also possible.
  • AI-supported image/error recognition or pattern recognition can be used, for example, in the following tasks or missions.
  • the AI support can be used in several stages. Three stages are to be explained below as an example, with features of the individual stages being interchangeable and combinable as desired, unless stated otherwise. They are only intended to explain the idea regarding the use of KI with regard to feature detection and are therefore not to be regarded as restrictive.
  • Stage 1
  • The AI generates waypoints for an inspection flight, for example after a calibration flight, or waypoints for an error flight, i.e. the approach to the wind turbine 400 in order to capture the error 110a or another feature (e.g. the rotor tips 450) with increased resolution, after an inspection flight.
  • the drone carries out the inspection flight autonomously and transmits the images to the remote computer after landing, for example when changing the battery.
  • the image/error recognition or pattern recognition then takes place on the remote computer.
  • The results, for example waypoints for a subsequent inspection flight, e.g. based on a generically generated CAD model of the wind turbine after the calibration flight, or waypoints for the subsequent error flight, e.g. for approaching the detected error 110a at a short distance and photographing the error 110a with high resolution, are transmitted back to the drone after calculation.
  • The AI runs, i.e. performs its calculations, on an additional computing unit on the drone in real time and controls the inspection flight after calibration or the approach to the error during the inspection flight.
  • The additional computing unit can be, for example, an add-on GPU (graphics processing unit) board, a CPU (central processing unit), or special AI boards.
  • The computer described above can also be part of the drone and thus provide the hardware for operating the AI.
  • the drone is equipped with its own local intelligence, e.g. through the additional computing unit, and carries out the calculations onboard, e.g. locally on its own drone hardware, in real time.
  • the drone can carry out actions, for example the calculation of the waypoints for the subsequent inspection flight, directly during the calibration flight and then, for example, carry out the inspection flight directly afterwards. Errors are detected in real time and there is an immediate, direct approach or zoom in on the error detected and the error is photographed with a correspondingly high resolution. The inspection flight is then continued until the next error.
  • the inspection flight can be carried out at a distance of 8 m from the wing 440 and a renewed approach to the detected error at a distance of 3 m.
  • the drone transmits the data, for example all image data from the inspection flight which, for example, have a low or the first resolution (e.g. corresponding to images B1 -B4 from FIG. 1 ), and all error or partial images created , which, for example, have a high or the second resolution (e.g. one or more of the sub-images B11, . . . , B44 from FIG. 1), to a cloud or an external computer.
  • the errors 110a can then be categorized by an operator from the error images and the error logs can be created interactively.
  • Detected, high-resolution error images are categorized by the AI.
  • The error images stored in the cloud can be automatically categorized with the help of the AI and the error logs can be created automatically.
  • the AI can be used for the three aforementioned tasks, as explained below.
  • the basic principle of the AI support is the optical detection and recognition of the wing tips 450 by AI and the calculation of the positions of the wing tips 450, as well as the optical detection and recognition of the wing flanges 460 by AI and the calculation of the positions, distances and angles of the wing flanges 460.
  • the pitch angles (inclination angles) of the wings can also be detected and/or calculated.
  • a final calculation or modification of a generic model, for example a CAD model, of the wind power plant 400 can then be carried out with these values, including the positioning and alignment of the plant and the deflection (bending) of the blades 440.
  • the waypoints for the inspection flights can then be calculated from this data. This can be done with a stopover (remote - level 1) or in real time, without a stopover (local intelligence - level 2).
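  • A minimal sketch of such a waypoint calculation is given below. It assumes that the AI has already delivered 3D positions of the hub and of a wing tip 450 (in metres), and it simply places evenly spaced points along the wing axis at a fixed stand-off distance; the helper name and the horizontal offset rule are illustrative simplifications, not the actual flight planning of the invention.

      import numpy as np

      def blade_waypoints(hub: np.ndarray, tip: np.ndarray,
                          standoff_m: float = 7.0, n_points: int = 25) -> np.ndarray:
          # Evenly spaced waypoints along the wing axis, offset horizontally
          # from the wing by the stand-off distance (z is taken as "up").
          axis = tip - hub
          offset_dir = np.cross(axis, np.array([0.0, 0.0, 1.0]))
          norm = np.linalg.norm(offset_dir)
          if norm < 1e-9:                       # wing pointing straight up or down
              offset_dir = np.array([1.0, 0.0, 0.0])
          else:
              offset_dir = offset_dir / norm
          fractions = np.linspace(0.0, 1.0, n_points).reshape(-1, 1)
          return hub + fractions * axis + standoff_m * offset_dir

      # Illustrative usage with assumed coordinates:
      # print(blade_waypoints(np.array([0.0, 0.0, 120.0]), np.array([0.0, 70.0, 120.0])))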
  • Wing inspection methods of the present invention are similar to tower inspection methods, but with a significantly reduced number of images, for example.
  • for a necessary or advantageous resolution of approximately 1.6 pixels/mm for an initial assessment or for the use of AI, for example with a wing length of approximately 70 m and a distance of 7 m between the drone or the recording unit and the wing, about 25 images per side of the wing are generated or, for example, may be necessary.
  • for an improved resolution, for example the second resolution explained above, for example a resolution of more than 3.5 pixels/mm as required by an expert, detected defects can or should be approached again, or immediately, at a distance of approx. 3 m from the wing.
  • a change in a zoom setting or a change in the lens used can also be carried out, as described above.
  • the numerical values are based on the DJI 300s drone described above with a P1 full-frame camera and a 50 mm lens.
  • the image sensor has a width of 35.9 mm with 8197 pixels and a height of 24 mm with 5460 pixels.
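  • The footprint and resolution figures used in the following can be reproduced from these sensor parameters with the simple pinhole relation footprint = distance x sensor size / focal length. The sketch below is only an illustration; for 25 m, 7 m and 3 m it yields roughly 0.46, 1.63 and 3.8 pixels/mm, the last value differing slightly from the 3.87 pixels/mm quoted below, presumably due to rounding.

      SENSOR_W_MM, SENSOR_H_MM = 35.9, 24.0      # sensor dimensions quoted above
      PIX_W, PIX_H = 8197, 5460                  # pixel counts quoted above
      FOCAL_MM = 50.0                            # 50 mm lens

      def footprint_and_resolution(distance_m: float):
          # Return (width in m, height in m, pixels per mm) of one image
          # taken at the given object distance.
          width_m = distance_m * SENSOR_W_MM / FOCAL_MM
          height_m = distance_m * SENSOR_H_MM / FOCAL_MM
          px_per_mm = PIX_W / (width_m * 1000.0)
          return width_m, height_m, px_per_mm

      for d in (25.0, 7.0, 3.0):                 # calibration, inspection, error flight
          w, h, r = footprint_and_resolution(d)
          print(f"{d:>4.0f} m: {w:.1f} m x {h:.1f} m, {r:.2f} px/mm")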
  • the possible tasks or missions, described above, of the method according to the invention with regard to a wind turbine are entered in the table in the form of a tower inspection, a blade inspection and a calibration (calibration flight). Furthermore, an example of a previously described error flight is also entered.
  • the fourth column shows the distance between the aircraft or the drone and the wind turbine.
  • the fifth column shows the image width covered by a respective image section in relation to the surface of the wind turbine in mm.
  • the sixth column shows the image height covered by a respective image section in relation to the surface of the wind turbine in mm, and the seventh column shows the respective resolution in pixels/mm.
  • the inspection of a wind turbine can start, for example, with the calibration or the calibration flight.
  • a pilot flies the aircraft, for example the drone, over the wind turbine at a distance of 25 m.
  • the wind turbine is recorded optically, whereby one image corresponds to an image area in reality with a width of 18 m and a height of 12 m. Accordingly, images from this optical acquisition have a resolution of 0.46 pixels/mm.
  • a CAD model of the wind turbine can be created, or a generic model can be modified, by recognizing characteristic features of the wind turbine, such as the wing tips. Due to the large distance and the low resolution, this step can be carried out in a short time. In this case, the recognition of the features can in particular be carried out using methods of machine learning.
  • the distance can also be, for example, in a range or interval such that the distance is, for example, at most 25 m or at most 20 m, or is in a range from 20 m to 25 m. Furthermore, the distance can also be 20 m, for example.
  • Waypoints for the inspection flights can then be generated on the basis of the CAD model. These waypoints can in turn be generated using an AI. Both the evaluation and the waypoint generation can be carried out by the aircraft itself after landing following the calibration flight, or even during the calibration flight. In order to be able to detect errors with sufficient accuracy, the distance between the aircraft and the wind turbine is reduced during the inspection flights, which are then carried out autonomously, for example. For example, a distance of 7 m can be set for the wing inspection, so that a generated image corresponds to a width of approx. 5 m and a height of approx. 3.3 m in reality, which leads to a resolution of 1.63 pixels/mm. With 25 images per wing side, 3 sides and an overlap of 20 cm, 75 images result for an exemplary wing length of 70 m. Similarly, the tower inspection results in 304 images with a resolution of 1.28 pixels/mm.
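  • The image count for the wing inspection can be estimated with a simple calculation, sketched below under the assumptions stated above (70 m wing length, approx. 3.3 m image height at 7 m distance, 20 cm overlap, 3 sides). This plain count lands slightly below the 25 images per side quoted above, for example because it does not include any margin at the tip and the root; it is only meant to make the order of magnitude plausible.

      import math

      WING_LENGTH_M = 70.0      # exemplary wing length
      IMAGE_HEIGHT_M = 3.3      # image footprint along the wing at 7 m distance
      OVERLAP_M = 0.2           # 20 cm overlap between consecutive images
      SIDES = 3                 # sides of the wing to be covered

      step = IMAGE_HEIGHT_M - OVERLAP_M             # advance per image
      per_side = math.ceil(WING_LENGTH_M / step)    # images along one side (about 23)
      print(per_side, per_side * SIDES)             # roughly 23 per side, about 69 in total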
  • AI can in turn be used to evaluate the images of the inspection flights. As already mentioned, it can categorize the images during the flight itself and divide them into images that show damage and images that show no damage. Alternatively, this can also be done after landing on an external computer. The great advantage of machine learning methods becomes clear from the large number of images, which would require a lot of time if evaluated by a human.
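  • A minimal sketch of such a damage/no-damage split is shown below; shows_damage is a hypothetical stand-in for the trained classifier, and the folder name is only an assumption.

      from pathlib import Path
      from typing import Iterable, List, Tuple

      def shows_damage(image_path: Path) -> bool:
          # Hypothetical stand-in for an AI damage classifier.
          return False

      def triage(images: Iterable[Path]) -> Tuple[List[Path], List[Path]]:
          # Split inspection images into those showing damage and those that do not.
          damaged, clean = [], []
          for image in images:
              (damaged if shows_damage(image) else clean).append(image)
          return damaged, clean

      # Illustrative usage with an assumed folder:
      # damaged, clean = triage(sorted(Path("inspection_flight").glob("*.jpg")))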
  • in the case of an error flight, corresponding locations can be flown to again.
  • the detection with the second or higher resolution can also take place in the course of the tower or wing inspection.
  • the distance between the aircraft and the object can be reduced to 3 m, or a corresponding lens can be attached (e.g. during a stopover) or a zoom setting (e.g. in flight) can be adjusted accordingly. This enables a resolution of 3.87 pixels/mm to be achieved.
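  • The stand-off distance needed for a given target resolution can be estimated by inverting the footprint relation used above. The sketch below, based on the same assumed sensor parameters, yields approx. 3.3 m for a target of 3.5 pixels/mm, which is consistent with the approx. 3 m distance and 3.87 pixels/mm mentioned here.

      SENSOR_W_MM, PIX_W, FOCAL_MM = 35.9, 8197, 50.0

      def max_distance_m(target_px_per_mm: float) -> float:
          # Largest object distance that still yields the requested resolution.
          footprint_mm = PIX_W / target_px_per_mm        # allowed image width on the object
          return footprint_mm / 1000.0 * FOCAL_MM / SENSOR_W_MM

      print(round(max_distance_m(3.5), 2))               # ~3.26 m, i.e. approx. 3 m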
  • FIG. 5 shows a flow chart of a method for detecting a feature of an object according to an embodiment of the present invention.
  • FIG. 5 shows the method steps 510-530 in an order which is only intended to serve as an example. Accordingly, the method steps can also be carried out in a different order.
  • Step 510 includes flying along the object and optically capturing at least a part of the object by at least one recording unit 130 at a first resolution in order to generate a plurality of images, each image representing an at least partially different area of the object.
  • Step 520 includes, for example, an automated evaluation of the plurality of images in order to classify the generated images into images that do not contain the feature and into images that contain the feature.
  • Step 530 includes optically capturing again, at a second resolution that is higher than the first resolution, those areas of the object whose associated images contain the feature.
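  • The sequence of steps 510-530 can be summarized in the following minimal sketch; capture_area and contains_feature are hypothetical helpers standing in for the recording unit and the automated evaluation, and the default resolutions merely echo the example values used above.

      from dataclasses import dataclass
      from typing import Callable, List

      @dataclass
      class Capture:
          area_id: int
          resolution_px_per_mm: float

      def inspect(areas: List[int],
                  capture_area: Callable[[int, float], Capture],
                  contains_feature: Callable[[Capture], bool],
                  first_res: float = 1.6,
                  second_res: float = 3.5) -> List[Capture]:
          # Step 510: fly along the object and capture every area at the first resolution.
          overview = [capture_area(a, first_res) for a in areas]
          # Step 520: automated evaluation, keeping only areas whose images contain the feature.
          flagged = [c.area_id for c in overview if contains_feature(c)]
          # Step 530: capture the flagged areas again at the higher second resolution.
          return [capture_area(a, second_res) for a in flagged]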
  • FIG. 6 shows a flowchart of a further method for detecting a feature of an object according to an embodiment of the present invention.
  • FIG. 6 shows the method steps 610-630 in an order which is only intended to serve as an example. Accordingly, the method steps can also be carried out in a different order.
  • Step 610 comprises flying along the object and optically capturing at least a part of the object by at least one recording unit in order to generate a plurality of images, each image representing an at least partially different area of the object, wherein for one area an image with a first resolution and a plurality of sub-images, each with a second resolution that is higher than the first resolution, are generated.
  • Step 620 includes, for example, an automated evaluation of the plurality of images in order to classify the generated images into images that do not contain the feature and into images that contain the feature.
  • Step 630 includes providing the partial images of those areas of the object whose associated images contain the feature.
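  • The variant of steps 610-630 can be sketched as follows: each area is captured once as an overview image plus a set of higher-resolution sub-images, and only the sub-images of the areas containing the feature are provided. The helper names capture_overview, capture_subimages and contains_feature are hypothetical and only illustrate the data flow.

      from typing import Callable, Dict, List

      def inspect_with_subimages(areas: List[int],
                                 capture_overview: Callable[[int], bytes],
                                 capture_subimages: Callable[[int], List[bytes]],
                                 contains_feature: Callable[[bytes], bool]) -> Dict[int, List[bytes]]:
          subimages: Dict[int, List[bytes]] = {}
          flagged: List[int] = []
          for area in areas:
              # Step 610: overview image at the first resolution plus sub-images
              # at the higher second resolution for each area.
              overview = capture_overview(area)
              subimages[area] = capture_subimages(area)
              # Step 620: automated evaluation of the overview images.
              if contains_feature(overview):
                  flagged.append(area)
          # Step 630: provide only the sub-images of areas whose images contain the feature.
          return {area: subimages[area] for area in flagged}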
  • optical detection can also include detection in the infrared range, i.e. for example by means of infrared cameras.
  • embodiments of the invention may be implemented in hardware or in software. Implementation can be performed using a digital storage medium such as a floppy disk, DVD, Blu-ray Disc, CD, ROM, PROM, EPROM, EEPROM or FLASH memory, hard disk or other magnetic or optical memory, on which electronically readable control signals are stored, which can or do interact with a programmable computer system in such a way that the respective method is carried out. Therefore, the digital storage medium can be computer-readable.
  • some embodiments according to the invention comprise a data carrier having electronically readable control signals capable of interacting with a programmable computer system in such a way that one of the methods described herein is carried out.
  • embodiments of the present invention can be implemented as a computer program product with a program code, wherein the program code is effective to perform one of the methods when the computer program product runs on a computer.
  • the program code can also be stored on a machine-readable carrier, for example.
  • exemplary embodiments include the computer program for performing one of the methods described herein, the computer program being stored on a machine-readable carrier.
  • an embodiment of the method according to the invention is thus a computer program that has a program code for performing one of the methods described herein when the computer program runs on a computer.
  • a further exemplary embodiment of the method according to the invention is therefore a data carrier (or a digital storage medium or a computer-readable medium) on which the computer program for carrying out one of the methods described herein is recorded.
  • the data carrier, digital storage medium, or computer-readable medium is typically tangible and/or non-transitory.
  • a further exemplary embodiment of the method according to the invention is therefore a data stream or a sequence of signals which represents the computer program for carrying out one of the methods described herein.
  • the data stream or sequence of signals may be configured to be transferred over a data communication link, such as the Internet.
  • Another embodiment includes a processing device, such as a computer or programmable logic device, configured or adapted to perform any of the methods described herein.
  • Another embodiment includes a computer on which the computer program for performing one of the methods described herein is installed.
  • a further exemplary embodiment according to the invention comprises a device or a system which is designed to transmit a computer program for carrying out at least one of the methods described herein to a recipient.
  • the transmission can take place electronically or optically, for example.
  • the recipient may be a computer, mobile device, storage device, or similar device.
  • the device or the system can, for example, comprise a file server for transmission of the computer program to the recipient.
  • a programmable logic device (e.g. a field programmable gate array, an FPGA) and/or dedicated or purpose-built AI chips can be used to perform some or all of the functionalities of the methods described herein.
  • a field programmable gate array can cooperate with a microprocessor to perform any of the methods described herein.
  • the methods are generally performed by some hardware device. This can be universally usable hardware, such as a computer processor (CPU), or hardware specific to the method, such as an ASIC.
  • the devices described herein may be implemented, for example, using hardware apparatus, or using a computer, or using a combination of hardware apparatus and a computer.
  • the devices described herein, or any components of the devices described herein may be implemented at least partially in hardware and/or in software (computer program).
  • the methods described herein may be implemented, for example, using hardware apparatus, or using a computer, or using a combination of hardware apparatus and a computer.

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Remote Sensing (AREA)
  • Theoretical Computer Science (AREA)
  • Mechanical Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Automation & Control Theory (AREA)
  • Multimedia (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Chemical & Material Sciences (AREA)
  • Sustainable Energy (AREA)
  • Combustion & Propulsion (AREA)
  • Sustainable Development (AREA)
  • General Engineering & Computer Science (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Game Theory and Decision Science (AREA)
  • Artificial Intelligence (AREA)
  • Evolutionary Computation (AREA)
  • Health & Medical Sciences (AREA)
  • Medical Informatics (AREA)
  • Business, Economics & Management (AREA)
  • Electromagnetism (AREA)
  • Quality & Reliability (AREA)
  • Image Processing (AREA)
  • Image Analysis (AREA)
  • Wind Motors (AREA)

Abstract

Embodiments according to a first aspect and a second aspect of the present invention are based on the core concept of detecting a feature (110a, 450, 460) of an object (110, 400), which comprises: flying over the object and capturing at least a part (110b) of the object by means of a recording unit (130) at a first resolution and, for those regions of the object which have the feature, providing recordings at a second resolution which is higher than the first resolution.
EP22701368.7A 2021-01-22 2022-01-20 Procédé, aéronef et système de détection d'une caractéristique d'un objet à l'aide d'une première et d'une deuxième résolution Pending EP4281941A1 (fr)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
DE102021200583.7A DE102021200583A1 (de) 2021-01-22 2021-01-22 Verfahren, Luftfahrzeug und System zur Erfassung eines Merkmals eines Objekts mit einer ersten und zweiten Auflösung
PCT/EP2022/051278 WO2022157268A1 (fr) 2021-01-22 2022-01-20 Procédé, aéronef et système de détection d'une caractéristique d'un objet à l'aide d'une première et d'une deuxième résolution

Publications (1)

Publication Number Publication Date
EP4281941A1 true EP4281941A1 (fr) 2023-11-29

Family

ID=80122463

Family Applications (1)

Application Number Title Priority Date Filing Date
EP22701368.7A Pending EP4281941A1 (fr) 2021-01-22 2022-01-20 Procédé, aéronef et système de détection d'une caractéristique d'un objet à l'aide d'une première et d'une deuxième résolution

Country Status (5)

Country Link
US (1) US20230366775A1 (fr)
EP (1) EP4281941A1 (fr)
JP (1) JP2024512186A (fr)
DE (1) DE102021200583A1 (fr)
WO (1) WO2022157268A1 (fr)

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
DE102011017564B4 (de) 2011-04-26 2017-02-16 Airbus Defence and Space GmbH Verfahren und System zum Prüfen einer Oberfläche auf Materialfehler
WO2017050893A1 (fr) * 2015-09-22 2017-03-30 Pro-Drone Lda. Inspection autonome de structures allongées à l'aide de véhicules aériens sans pilote

Also Published As

Publication number Publication date
JP2024512186A (ja) 2024-03-19
WO2022157268A1 (fr) 2022-07-28
US20230366775A1 (en) 2023-11-16
DE102021200583A1 (de) 2022-07-28

Similar Documents

Publication Publication Date Title
EP3430368B1 (fr) Aéronef pour balayer un objet et système d'analyse d'endommagement de l'objet
EP2702382B1 (fr) Procédé et système pour examiner une surface sous le rapport des défauts de matière
WO2018167006A1 (fr) Procédé et arrangement pour une surveillance d'état d'une installation avec des moyens fonctionnels
EP3596570B2 (fr) Procédé de contrôle de la continuité électrique d'un paratonnerre d'une éolienne
EP3271231A1 (fr) Procédé et dispositif pour surveiller une trajectoire de consigne à parcourir par un véhicule au sujet de l'absence de collision
DE102008053928A1 (de) Verfahren zur Inspektion von Rotorblättern an Windkraftanlagen
EP4200587B1 (fr) Procédé et appareil volant pour surveiller des états opérationnels et déterminer des probabilités de défaillance de systèmes de lignes électriques
DE202015102791U1 (de) System zur Erfassung von Bilddaten einer Oberfläche eines Objekts und Kamerasystem zur Verwendung in einem solchen System
DE102017210787A1 (de) Verfahren und Vorrichtung zum Ermitteln von Anomalien in einem Kommunikationsnetzwerk
WO2022038228A1 (fr) Procédé et système de capture d'objets
EP3635680A1 (fr) Procédé et dispositif mis en oeuvre par un ordinateur pour générer automatiquement des données d'image caractérisées et dispositif d'analyse pour contrôler un composant
EP4281941A1 (fr) Procédé, aéronef et système de détection d'une caractéristique d'un objet à l'aide d'une première et d'une deuxième résolution
EP2492701B1 (fr) Procédé et dispositif destinés au test d'une éolienne
DE102022214330A1 (de) Verfahren zur Erzeugung mindestens einer Ground Truth aus der Vogelperspektive
EP4145238A1 (fr) Procédé de commande d'un véhicule aérien sans pilote pour un vol d'inspection servant à inspecter un objet et véhicule aérien d'inspection sans pilote
EP4102179A1 (fr) Procédé et dispositif pour localiser une image d'un objet prise à distance
EP2410319A1 (fr) Procédé et système de détection de modules solaires défectueux
EP3173618B1 (fr) Procédé de contrôle d'éléments d'éoliennes, en particulier pales de rotor
DE202024102354U1 (de) System einer Drohne für brückenverlegte Wärmerohre und der Inspektion
DE102021101717A1 (de) Verfahren zum Bereitstellen von fusionierten Daten, Assistenzsystem und Kraftfahrzeug
DE102020131607A1 (de) Verfahren zum Bereitstellen eines Trainingsdatensatzes, Verfahren zum Betrieb einer selbstlernenden Bauteildefekt-Erkennungseinrichtung und Selbstlernende Bauteildefekt-Erkennungseinrichtung
DE202021002490U1 (de) Autonomes , adaptives und selbstlernendes System zur Analyse des Windverhaltens
WO2022167022A1 (fr) Procédé pour contrôler un composant d'une turbomachine
DE102019104800A1 (de) Vorrichtung zum Ermitteln eines Bewegungskorridors für Leichtbauluftfahrzeuge
WO2014124621A1 (fr) Unité de contrôle pour objet volant

Legal Events

Date Code Title Description
STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: UNKNOWN

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE

PUAI Public reference made under article 153(3) epc to a published international application that has entered the european phase

Free format text: ORIGINAL CODE: 0009012

STAA Information on the status of an ep patent application or granted ep patent

Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE

17P Request for examination filed

Effective date: 20230817

AK Designated contracting states

Kind code of ref document: A1

Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR

DAV Request for validation of the european patent (deleted)
DAX Request for extension of the european patent (deleted)