EP3811286A1 - Method and assembly for detecting objects on systems - Google Patents
Method and assembly for detecting objects on systems
- Publication number
- EP3811286A1 (application EP19766186.1A)
- Authority
- EP
- European Patent Office
- Prior art keywords
- image
- images
- dimensional representation
- pcd
- point cloud
- Prior art date
- Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
- Pending
Classifications
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T7/00—Image analysis
- G06T7/0002—Inspection of images, e.g. flaw detection
- G06T7/0004—Industrial image inspection
-
- G—PHYSICS
- G01—MEASURING; TESTING
- G01S—RADIO DIRECTION-FINDING; RADIO NAVIGATION; DETERMINING DISTANCE OR VELOCITY BY USE OF RADIO WAVES; LOCATING OR PRESENCE-DETECTING BY USE OF THE REFLECTION OR RERADIATION OF RADIO WAVES; ANALOGOUS ARRANGEMENTS USING OTHER WAVES
- G01S17/00—Systems using the reflection or reradiation of electromagnetic waves other than radio waves, e.g. lidar systems
- G01S17/88—Lidar systems specially adapted for specific applications
- G01S17/89—Lidar systems specially adapted for specific applications for mapping or imaging
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T17/00—Three dimensional [3D] modelling, e.g. data description of 3D objects
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/17—Terrestrial scenes taken from planes or by drones
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/10—Terrestrial scenes
- G06V20/176—Urban or other man-made structures
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06V—IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
- G06V20/00—Scenes; Scene-specific elements
- G06V20/60—Type of objects
- G06V20/64—Three-dimensional objects
-
- H—ELECTRICITY
- H02—GENERATION; CONVERSION OR DISTRIBUTION OF ELECTRIC POWER
- H02G—INSTALLATION OF ELECTRIC CABLES OR LINES, OR OF COMBINED OPTICAL AND ELECTRIC CABLES OR LINES
- H02G1/00—Methods or apparatus specially adapted for installing, maintaining, repairing or dismantling electric cables or lines
- H02G1/02—Methods or apparatus specially adapted for installing, maintaining, repairing or dismantling electric cables or lines for overhead lines or cables
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10004—Still image; Photographic image
- G06T2207/10012—Stereo images
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10028—Range image; Depth image; 3D point clouds
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/10—Image acquisition modality
- G06T2207/10032—Satellite or aerial image; Remote sensing
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/20—Special algorithmic details
- G06T2207/20004—Adaptive image processing
- G06T2207/20012—Locally adaptive
-
- G—PHYSICS
- G06—COMPUTING; CALCULATING OR COUNTING
- G06T—IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
- G06T2207/00—Indexing scheme for image analysis or image enhancement
- G06T2207/30—Subject of image; Context of image processing
- G06T2207/30181—Earth observation
- G06T2207/30184—Infrastructure
Definitions
- the invention relates to a method according to the preamble of claim 1 and an arrangement according to the preamble of claim 10.
- the equipment items are recognized and the positions of the equipment items are determined taking into account the position of the first vehicle, whereby detailed images of the equipment are generated by means of a second vehicle with a detail camera that is aimed at the respective positions of the equipment.
- a single aircraft such as a drone or a helicopter is used to detect masts and insulators when flying over an overhead line using the overview camera, to determine the positions of the insulators and then to use the detail camera to obtain high-resolution images of the insulators. In this way, defective insulators can be identified easily and reliably.
- For overhead lines, overflights with helicopters and image recordings have been used to detect damage or objects on the overhead line.
- the decision whether an object such as a bird's nest, a balloon or a kite (a child's toy) is lying on the conductor cables or below on the ground is difficult to make from a purely aerial view and prone to errors. So far, this has usually been done by manually evaluating the image recordings. If objects are mistakenly recognized as being on the line, this results in unnecessary costs and effort for the maintenance that is triggered.
- Detections in one or more image recordings cannot always be clearly assigned to a specific 3D object. Since the individual 2D images do not contain any depth information, the distance to an object along the line of sight cannot be determined. Detections therefore cannot be restricted to the relevant areas of 3D space, which can lead to irrelevant false detections.
- the object of the invention is to provide a method with which objects in systems can be recognized automatically and reliably.
- the invention solves this problem with a method according to claim 1.
- the problem of object recognition is solved by using the 3D information associated with the respective 2D points. Due to the parallax effect, objects below the system, such as an overhead line, appear in the images at different points relative to the line. A parallax effect arises when an observer shifts position and the apparent position of an object changes as a result. The effect of parallax is described in detail on Wikipedia, for example.
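- To make the parallax effect concrete, here is a minimal numerical sketch with a hypothetical pinhole camera; the focal length, baseline and depths are made-up values that only illustrate how the apparent image shift depends on depth:

```python
def image_shift(baseline_m: float, focal_px: float, depth_m: float) -> float:
    """Apparent horizontal shift (in pixels) of a point at `depth_m`
    when the camera moves sideways by `baseline_m`."""
    return baseline_m * focal_px / depth_m

focal_px = 2000.0   # assumed focal length in pixels
baseline = 5.0      # camera moved 5 m between the first and second image

shift_line = image_shift(baseline, focal_px, depth_m=30.0)    # conductor at 30 m
shift_ground = image_shift(baseline, focal_px, depth_m=80.0)  # ground at 80 m

print(f"shift of a point on the line:   {shift_line:.0f} px")   # ~333 px
print(f"shift of a point on the ground: {shift_ground:.0f} px") # ~125 px
# The depth-dependent difference in shift is what separates objects on the
# line from objects below it.
```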
- a system can be, for example, an electrical installation such as an overhead power line or an overhead contact line. However, it can also be a pipeline.
- an object can be a bird's nest, a car, or a kite.
- the object of the invention is to provide 3D information in conjunction with 2D detections in 3D space in order to reduce the false alarm rate compared to simple 2D detections.
- with the invention it is possible to differentiate reliably, quickly and automatically between objects on a system - i.e. above ground level - and objects below the system - i.e. close to ground level. This is an advantage because dangers or damage to a system such as an overhead line must be removed immediately by maintenance technicians. If an object such as a bird's nest or a kite is incorrectly recognized as being on the line, the line is switched off and/or maintenance is unnecessarily triggered, which causes costs and reduces the availability of the system.
- the three-dimensional representation of the system provided is used to restrict a search space for the system or to assign the recognized object to a component of the system contained in the three-dimensional representation.
- the use of the three-dimensional representation already available makes it possible, in particular, to reduce the computing power required for image evaluation by restricting the search space.
- a recognized object can be assigned to a component contained in the three-dimensional representation, so that relevant and irrelevant objects can be separated easily and reliably for further evaluation.
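- As an illustration of this component assignment, a minimal sketch with assumed coordinates and labels; in practice the object position would come from the two-view evaluation and the component points from the segmented 3D representation:

```python
import numpy as np
from scipy.spatial import cKDTree

# Labelled 3D points of the system's components (illustrative values).
component_points = np.array([
    [0.0, 0.0, 25.0],   # conductor cable
    [5.0, 0.0, 25.1],   # conductor cable
    [0.0, 2.0, 30.0],   # mast
])
component_labels = ["conductor", "conductor", "mast"]

tree = cKDTree(component_points)
object_position = np.array([4.8, 0.1, 24.9])  # e.g. triangulated bird's nest

# Nearest-neighbour lookup assigns the object to a component of the system.
distance, index = tree.query(object_position)
if distance < 1.0:  # tolerance in metres, chosen for illustration
    print(f"object assigned to component: {component_labels[index]}")
else:
    print("object is not on the system (e.g. lies on the ground)")
```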
- Both options have proven to be very beneficial. For example, the first option makes it possible to avoid an increased false-negative rate in the case in question, at the expense of slightly higher computing power.
- the second option makes it possible, for example, to identify potential problem areas by means of a less computation-intensive pre-classification while the images are being recorded and, if necessary, to carry out a more precise inspection of these objects automatically.
- a combination of these two options has also proven to be advantageous.
- the three-dimensional representation is recorded as a three-dimensional point cloud (PCD), the three-dimensional point cloud (PCD) being semantically segmented in order to restrict a search space for the system in the three-dimensional point cloud (PCD).
- PCD three-dimensional point cloud
- a restriction of the search space in this way has proven to be particularly advantageous for typical applications, such as those that occur in the inspection of overhead lines.
- the three-dimensional representation is obtained by means of a “light detection and ranging (LIDAR)” sensor and recorded as a three-dimensional point cloud (PCD).
- LIDAR light detection and ranging
- PCD three-dimensional point cloud
- the three-dimensional point cloud is semantically segmented in order to restrict a search space for the system in the three-dimensional point cloud (PCD).
- This is an advantage because the object recognition is restricted to the relevant area, which considerably reduces the calculation requirements and/or increases the speed of the calculations. The complexity is reduced because the search space is restricted to relevant scene content. If the evaluation is carried out on board the aircraft, weight can be saved in this embodiment because a less powerful computing device is required.
- a typical example is LIDAR data of a high-voltage line, in which those points are determined (automatically) that belong to the overhead line cables or are well approximated by a parametric catenary model.
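- As an illustration, a minimal sketch of fitting such a parametric catenary model to candidate conductor points; it assumes the LIDAR returns of one span have already been projected into the vertical plane of the span, and all parameter values are made up:

```python
import numpy as np
from scipy.optimize import curve_fit

def catenary(x, a, x0, y0):
    """Catenary model y = y0 + a * (cosh((x - x0) / a) - 1)."""
    return y0 + a * (np.cosh((x - x0) / a) - 1.0)

# Hypothetical LIDAR returns in the vertical plane of one span:
# x = position along the span [m], y = height [m], with measurement noise.
rng = np.random.default_rng(0)
x = np.linspace(0.0, 120.0, 200)
y = catenary(x, a=400.0, x0=60.0, y0=25.0) + rng.normal(0.0, 0.05, x.size)

# Fit the three catenary parameters to the candidate points.
(a, x0, y0), _ = curve_fit(catenary, x, y, p0=(300.0, x.mean(), y.min()))

# Points close to the fitted curve are kept as conductor points; the rest
# (vegetation, foreign objects) stays in the search space for detection.
residual = np.abs(y - catenary(x, a, x0, y0))
is_conductor = residual < 0.25  # threshold in metres, chosen for illustration
print(f"a={a:.1f} m, x0={x0:.1f} m, y0={y0:.2f} m, "
      f"conductor points: {int(is_conductor.sum())} of {x.size}")
```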
- An example of a method for segmenting image data is known from the publication "Mask R-CNN" by Kaiming He et al.
- a classic 2D detector that is pre-trained for certain fault classes is used in this restricted search area.
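- A minimal sketch of this step, using a generic pretrained detector from torchvision as a stand-in for a model fine-tuned on such fault classes; the file name and crop coordinates are hypothetical:

```python
import torch
import torchvision
from torchvision.transforms.functional import to_tensor
from PIL import Image

# Generic pretrained detector as a stand-in for one fine-tuned on fault
# classes (bird's nests, balloons, kites, ...).
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(weights="DEFAULT")
model.eval()

image = Image.open("overflight_frame.jpg").convert("RGB")  # hypothetical frame
left, top, right, bottom = 400, 100, 1400, 600  # crop = restricted search area
crop = to_tensor(image.crop((left, top, right, bottom)))

with torch.no_grad():
    (pred,) = model([crop])  # dict with "boxes", "labels", "scores"

# Map detections back to full-image coordinates for the later 3D assignment.
for box, score in zip(pred["boxes"], pred["scores"]):
    if score < 0.5:
        continue
    x1, y1, x2, y2 = box.tolist()
    print(f"detection ({x1 + left:.0f}, {y1 + top:.0f}) - "
          f"({x2 + left:.0f}, {y2 + top:.0f}), score {score:.2f}")
```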
- in an anomaly detection, a model of the norm of the conductor region is determined automatically (e.g. using autoencoders) and outliers are detected. Both approaches perform their detection in image space.
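- A minimal sketch of the autoencoder variant: train on normal conductor-region patches only and flag patches with high reconstruction error as outliers. Patch size, architecture, training data and threshold are illustrative assumptions:

```python
import torch
from torch import nn

class PatchAutoencoder(nn.Module):
    """Small autoencoder over flattened 32x32 image patches."""
    def __init__(self, patch_dim: int = 32 * 32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(patch_dim, 128), nn.ReLU(), nn.Linear(128, 16))
        self.decoder = nn.Sequential(
            nn.Linear(16, 128), nn.ReLU(), nn.Linear(128, patch_dim))

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = PatchAutoencoder()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

normal_patches = torch.rand(1024, 32 * 32)  # placeholder for real patches
for _ in range(10):  # learn the model of the norm from normal patches only
    optimizer.zero_grad()
    loss = loss_fn(model(normal_patches), normal_patches)
    loss.backward()
    optimizer.step()

# At inspection time, patches that deviate from the norm reconstruct poorly.
with torch.no_grad():
    test_patches = torch.rand(8, 32 * 32)
    errors = ((model(test_patches) - test_patches) ** 2).mean(dim=1)
outliers = errors > errors.mean() + 2 * errors.std()  # illustrative threshold
print(outliers)
```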
- the image space need not be limited to visible light, but can also extend to adjacent spectral ranges such as (thermal) infrared and ultraviolet light.
- likewise, the detection responses or pixel color values in the individual images for each 3D point of the system are not limited to visible light, but can also extend to adjacent spectral ranges such as (thermal) infrared and ultraviolet light.
- the 3D point cloud is optionally semantically segmented (optional, since the entire PCD can also be semantically relevant for the inspection task);
- detection results are either generated selectively or the existing detection results are read out (the latter if the detection was carried out over the entire image space);
- the remaining 3D points can again be projected back into the image space and thus yield the final detection result in the image space (see the projection sketch below).
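- A minimal sketch of this projection step with an assumed pinhole camera (intrinsics K, rotation R, camera centre c); a real evaluation would use the calibrated camera and the pose from the position-determining device:

```python
import numpy as np

def project_points(points_w: np.ndarray, K: np.ndarray,
                   R: np.ndarray, c: np.ndarray) -> np.ndarray:
    """Project Nx3 world points to Nx2 pixel coordinates (pinhole model)."""
    points_cam = (R @ (points_w - c).T).T  # world -> camera frame
    uvw = (K @ points_cam.T).T             # camera frame -> image plane
    return uvw[:, :2] / uvw[:, 2:3]        # perspective division

K = np.array([[2000.0, 0.0, 960.0],  # assumed intrinsics (f = 2000 px,
              [0.0, 2000.0, 540.0],  #  principal point at 960/540)
              [0.0, 0.0, 1.0]])
R = np.eye(3)                        # camera looking along +Z
c = np.zeros(3)                      # camera at the world origin

# Segmented 3D points of the system (illustrative coordinates in metres).
system_points = np.array([[1.0, -2.0, 30.0],
                          [2.0, -2.1, 31.0]])
pixels = project_points(system_points, K, R, c)
print(pixels)

# A 2D detection mask can now be sampled at these pixels: 3D points whose
# projections carry a detection are kept, the rest are discarded, and the
# survivors are projected back for the final image-space result.
```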
- an overhead line is used as the system, and the semantic segmentation is carried out by using a catenary model for detecting the conductor cables of the overhead line.
- the position and orientation of the representation are determined by means of a position-determining device. This can be done, for example, by means of a receiver for "Global Positioning System (GPS)" signals, the orientation corresponding to the viewing direction of the sensor arrangement (LIDAR or camera). The viewing direction can be determined, for example, by means of a tilt sensor in conjunction with a compass, both of which are provided in the aircraft.
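- A minimal sketch of deriving the viewing direction from a tilt sensor (roll, pitch) and a compass (yaw); the Z-Y-X Euler convention, the NED frame and the optical-axis choice are assumptions for illustration:

```python
import numpy as np

def rotation_from_attitude(roll: float, pitch: float, yaw: float) -> np.ndarray:
    """Rotation matrix (body -> local NED frame) from Z-Y-X Euler angles;
    NED: x = north, y = east, z = down."""
    cr, sr = np.cos(roll), np.sin(roll)
    cp, sp = np.cos(pitch), np.sin(pitch)
    cy, sy = np.cos(yaw), np.sin(yaw)
    Rz = np.array([[cy, -sy, 0.0], [sy, cy, 0.0], [0.0, 0.0, 1.0]])
    Ry = np.array([[cp, 0.0, sp], [0.0, 1.0, 0.0], [-sp, 0.0, cp]])
    Rx = np.array([[1.0, 0.0, 0.0], [0.0, cr, -sr], [0.0, sr, cr]])
    return Rz @ Ry @ Rx

# Compass heading 90 deg (east), sensor tilted 30 deg downwards, no roll.
R = rotation_from_attitude(roll=0.0,
                           pitch=np.radians(-30.0),
                           yaw=np.radians(90.0))

# Assume the optical axis of the camera/LIDAR is the body x-axis.
view_dir = R @ np.array([1.0, 0.0, 0.0])
print(view_dir)  # ~[0, 0.87, 0.5]: looking east and 30 deg downwards
```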
- GPS Global Positioning System
- the images are recorded by means of a camera for visible light.
- the light visible to humans is usually specified as having wavelengths between 380 nm and 780 nm.
- the camera is guided along the system with an aircraft in order to record the first and the second image at the two different positions.
- the three-dimensional representation of the system is projected into the two images in order to determine the relevant section in each.
- the evaluation device is provided on board the aircraft. This is an advantage because an evaluation and object detection can take place directly during an overflight.
- the images and coordinates of the recognized objects can be saved and transmitted to the operator of the system after the flight.
- the detected objects can be transmitted by radio data communication during the flight.
- the evaluation device is provided as a central server. This is an advantage because it saves weight and space in the aircraft. For example, all the data recorded by the camera and the LIDAR can be temporarily stored on a data memory and, after the end of the flight, transmitted to the evaluation device for evaluation. Alternatively, the data can be transmitted to the evaluation device by means of radio data communication even during the flight.
- Figure 1 shows an example of a semantic segmentation of LIDAR data
- Figure 2 shows an example of images of an overhead line in different frequency ranges
- Figure 3 shows an example of anomaly detection of objects on an overhead line
- Figure 4 shows an example of a detection of the position of an object by means of images recorded from two different positions
- Figure 1 shows an example of a semantic segmentation of LIDAR image data.
- the viewing angle φ of the LIDAR is plotted against the location coordinate x.
- a color scale 3 shows how strongly the LIDAR signals were received. It can be seen that after a successful segmentation of the overhead line cable, line 1 is highlighted using a model of a catenary function. The other lines 2 remain in the background.
- Figure 2 shows an example of images of an overhead line in different frequency ranges.
- An image in the visible frequency range (VIS), in the infrared frequency range (IF) and in the ultraviolet frequency range (UV) is shown from left to right.
- VIS visible frequency range
- IF infrared frequency range
- UV ultraviolet frequency range
- Figure 3 shows an example of an anomaly detection of artificially inserted objects on an overhead line.
- the picture is taken from above during an overflight.
- conductor cables 1 run over wooded areas and a road 4, which forks in the upper part of the picture.
- a car 5 can be seen on the road, and a kite 6 is arranged on one of the conductor cables.
- the evaluation algorithm correctly marks the objects as deviating from the expected course of the cables.
- however, the algorithm cannot easily obtain depth information, i.e. it cannot decide whether the car, and in particular the kite, is on the line or below it on the ground.
- Figure 4 shows two scenes side by side.
- Two masts 9 each carry an overhead line.
- Trees 10 can be seen below the overhead line.
- a first and a second image are recorded at two different positions 7, 8 during an overflight of the line.
- both images target a section 11 of the line along the line of sight. If an object is arranged directly on or at the line, the object appears at the same location on the line from both perspectives. It is different in the right-hand picture for tree 10.
- tree 10 does not appear at the same location on the line: because of the parallax effect it appears on section 11 from viewing direction 7 and on section 12 from viewing direction 8. This means that the tree 10 is not arranged at the same height as the line, but below it. This principle enables a simple automated distinction between objects arranged on or at a system and objects located on the ground.
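- A minimal sketch of this decision principle: the object seen in both images is triangulated from the two viewing directions and its height is compared with the known height of the line from the 3D representation. Camera positions, ray directions and the tolerance are illustrative assumptions:

```python
import numpy as np

def triangulate(c1, d1, c2, d2):
    """Closest point (least squares) to two 3D rays c1 + t1*d1 and c2 + t2*d2."""
    d1, d2 = d1 / np.linalg.norm(d1), d2 / np.linalg.norm(d2)
    A = np.stack([d1, -d2], axis=1)          # solve t1*d1 - t2*d2 = c2 - c1
    t1, t2 = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    return 0.5 * ((c1 + t1 * d1) + (c2 + t2 * d2))

c1 = np.array([0.0, 0.0, 50.0])   # first camera position (position 7)
c2 = np.array([10.0, 0.0, 50.0])  # second camera position (position 8)

# In practice the ray directions come from the detections in the two images;
# here they are constructed from a made-up true object position.
obj = np.array([5.0, 3.0, 8.0])   # e.g. a tree top well below the line
p = triangulate(c1, obj - c1, c2, obj - c2)

line_height = 25.0                # conductor height here, from the 3D data
on_line = abs(p[2] - line_height) < 1.0  # 1 m tolerance, illustrative
print(f"triangulated height {p[2]:.1f} m -> "
      f"{'object on the line' if on_line else 'object below the line'}")
```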
Landscapes
- Engineering & Computer Science (AREA)
- Physics & Mathematics (AREA)
- General Physics & Mathematics (AREA)
- Theoretical Computer Science (AREA)
- Multimedia (AREA)
- Remote Sensing (AREA)
- Quality & Reliability (AREA)
- Computer Vision & Pattern Recognition (AREA)
- Radar, Positioning & Navigation (AREA)
- Electromagnetism (AREA)
- Computer Graphics (AREA)
- Geometry (AREA)
- Software Systems (AREA)
- Computer Networks & Wireless Communication (AREA)
- Length Measuring Devices By Optical Means (AREA)
- Image Analysis (AREA)
- Image Processing (AREA)
Applications Claiming Priority (2)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
EP18190055.6A EP3614299A1 (en) | 2018-08-21 | 2018-08-21 | Method and assembly for identifying objects on installations |
PCT/EP2019/072269 WO2020038944A1 (en) | 2018-08-21 | 2019-08-20 | Method and assembly for detecting objects on systems |
Publications (1)
Publication Number | Publication Date |
---|---|
EP3811286A1 true EP3811286A1 (en) | 2021-04-28 |
Family
ID=63350461
Family Applications (2)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18190055.6A Withdrawn EP3614299A1 (en) | 2018-08-21 | 2018-08-21 | Method and assembly for identifying objects on installations |
EP19766186.1A Pending EP3811286A1 (en) | 2018-08-21 | 2019-08-20 | Method and assembly for detecting objects on systems |
Family Applications Before (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
EP18190055.6A Withdrawn EP3614299A1 (en) | 2018-08-21 | 2018-08-21 | Method and assembly for identifying objects on installations |
Country Status (5)
Country | Link |
---|---|
US (1) | US11989870B2 (en) |
EP (2) | EP3614299A1 (en) |
CN (1) | CN112639803B (en) |
BR (1) | BR112021002143A2 (en) |
WO (1) | WO2020038944A1 (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US10748038B1 (en) * | 2019-03-31 | 2020-08-18 | Cortica Ltd. | Efficient calculation of a robust signature of a media unit |
US11703457B2 (en) * | 2020-12-29 | 2023-07-18 | Industrial Technology Research Institute | Structure diagnosis system and structure diagnosis method |
NO347027B1 (en) * | 2021-06-02 | 2023-04-24 | Kleon Solutions As | Method and system for detecting a line above ground from a helicopter |
Family Cites Families (14)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
US20020014533A1 (en) | 1995-12-18 | 2002-02-07 | Xiaxun Zhu | Automated object dimensioning system employing contour tracing, vertice detection, and forner point detection and reduction methods on 2-d range data maps |
US9525862B2 (en) | 2011-08-31 | 2016-12-20 | Metaio Gmbh | Method for estimating a camera motion and for determining a three-dimensional model of a real environment |
CN102930246B (en) | 2012-10-16 | 2015-04-08 | 同济大学 | Indoor scene identifying method based on point cloud fragment division |
JP2014089548A (en) * | 2012-10-30 | 2014-05-15 | Sharp Corp | Road surface level difference detection method, road surface level difference detection device and vehicle equipped with the road surface level difference detection device |
US9449227B2 (en) * | 2014-01-08 | 2016-09-20 | Here Global B.V. | Systems and methods for creating an aerial image |
CN107727076B (en) | 2014-05-05 | 2020-10-23 | 赫克斯冈技术中心 | Measuring system |
JP6397801B2 (en) | 2015-06-30 | 2018-09-26 | 日立オートモティブシステムズ株式会社 | Object detection device |
US10970877B2 (en) | 2015-09-30 | 2021-04-06 | Sony Corporation | Image processing apparatus, image processing method, and program |
JP6299720B2 (en) | 2015-10-02 | 2018-03-28 | トヨタ自動車株式会社 | Object recognition device and smoke determination method |
US11156573B2 (en) * | 2016-06-30 | 2021-10-26 | Skydio, Inc. | Solar panel inspection using unmanned aerial vehicles |
JP6794243B2 (en) | 2016-12-19 | 2020-12-02 | 日立オートモティブシステムズ株式会社 | Object detector |
CN107729878A (en) | 2017-11-14 | 2018-02-23 | 智车优行科技(北京)有限公司 | Obstacle detection method and device, equipment, vehicle, program and storage medium |
CN108229548A (en) * | 2017-12-27 | 2018-06-29 | 华为技术有限公司 | A kind of object detecting method and device |
CN108364304A (en) * | 2018-04-11 | 2018-08-03 | 湖南城市学院 | A kind of system and method for the detection of monocular airborne target |
-
2018
- 2018-08-21 EP EP18190055.6A patent/EP3614299A1/en not_active Withdrawn
-
2019
- 2019-08-20 US US17/268,458 patent/US11989870B2/en active Active
- 2019-08-20 WO PCT/EP2019/072269 patent/WO2020038944A1/en unknown
- 2019-08-20 CN CN201980054890.5A patent/CN112639803B/en active Active
- 2019-08-20 BR BR112021002143-4A patent/BR112021002143A2/en unknown
- 2019-08-20 EP EP19766186.1A patent/EP3811286A1/en active Pending
Also Published As
Publication number | Publication date |
---|---|
US20210174061A1 (en) | 2021-06-10 |
CN112639803B (en) | 2024-06-11 |
CN112639803A (en) | 2021-04-09 |
US11989870B2 (en) | 2024-05-21 |
WO2020038944A1 (en) | 2020-02-27 |
EP3614299A1 (en) | 2020-02-26 |
BR112021002143A2 (en) | 2021-05-04 |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
DE102009015142B4 (en) | Vehicle surroundings recognition device and control system for tracking a preceding vehicle | |
DE102014209137B4 (en) | Method and device for calibrating a camera system of a motor vehicle | |
EP3376213A1 (en) | Method and assembly for monitoring the state of a system with operating elements | |
DE112011103690T5 (en) | Detection and tracking of moving objects | |
WO2020038944A1 (en) | Method and assembly for detecting objects on systems | |
DE102011111440A1 (en) | Method for representation of environment of vehicle, involves forming segments of same width from image points of equal distance in one of image planes, and modeling objects present outside free space in environment | |
EP3782117A1 (en) | Method, device and computer-readable storage medium having instructions for processing sensor data | |
DE102018215055A1 (en) | Method for determining a lane change indication of a vehicle, a computer-readable storage medium and a vehicle | |
DE102013012930A1 (en) | Method for determining a current distance and / or a current speed of a target object from a reference point in a camera image, camera system and motor vehicle | |
DE102016201741A1 (en) | Method for height detection | |
EP3663881B1 (en) | Method for controlling an autonomous vehicle on the basis of estimated movement vectors | |
EP3545506A1 (en) | Method and system for detecting a raised object located within a parking area | |
WO2020038984A1 (en) | Method and assembly for detecting corona discharges of a system comprising equipment | |
DE102016223094A1 (en) | Method and system for detecting a raised object located within a parking lot | |
DE102020127315B4 (en) | System and method for annotating automotive radar data | |
DE112021002598T5 (en) | IMAGE PROCESSING DEVICE | |
DE102016110691A1 (en) | Apparatus and method for determining a respective position of a plurality of aircraft located on an airport surface | |
DE102018121158A1 (en) | Ground sensing point method and driver support system configured to perform such a method | |
DE102018202753A1 (en) | Method for determining a distance between a motor vehicle and an object | |
DE102019220616B4 (en) | PROCEDURE FOR SIMULTANEOUS LOCATION AND IMAGE | |
DE102021206475A1 (en) | Obstacle detection in the track area based on depth data | |
DE102022201639B3 (en) | Method for avoiding collisions of watercraft and device for carrying out the method | |
DE102019102423A1 (en) | Method for live annotation of sensor data | |
DE102023203319A1 (en) | lane-based obstacle detection | |
DE102022209401A1 (en) | Method for generating training data for an adaptive method |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: UNKNOWN |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: THE INTERNATIONAL PUBLICATION HAS BEEN MADE |
|
PUAI | Public reference made under article 153(3) epc to a published international application that has entered the european phase |
Free format text: ORIGINAL CODE: 0009012 |
|
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: REQUEST FOR EXAMINATION WAS MADE |
|
17P | Request for examination filed |
Effective date: 20210121 |
|
AK | Designated contracting states |
Kind code of ref document: A1 Designated state(s): AL AT BE BG CH CY CZ DE DK EE ES FI FR GB GR HR HU IE IS IT LI LT LU LV MC MK MT NL NO PL PT RO RS SE SI SK SM TR |
|
AX | Request for extension of the european patent |
Extension state: BA ME |
|
DAV | Request for validation of the european patent (deleted) | ||
DAX | Request for extension of the european patent (deleted) | ||
STAA | Information on the status of an ep patent application or granted ep patent |
Free format text: STATUS: EXAMINATION IS IN PROGRESS |
|
17Q | First examination report despatched |
Effective date: 20230201 |