CN108389256B - Two-three-dimensional interactive unmanned aerial vehicle electric power tower inspection auxiliary method


Info

Publication number
CN108389256B
CN108389256B (application CN201711186377.0A)
Authority
CN
China
Prior art keywords
dimensional
point cloud
image
power tower
unmanned aerial
Prior art date
Legal status (assumed, not a legal conclusion)
Active
Application number
CN201711186377.0A
Other languages
Chinese (zh)
Other versions
CN108389256A (en)
Inventor
葛嵩 (Ge Song)
Current Assignee (may be inaccurate)
Qianxun Spatial Intelligence Inc
Original Assignee
Qianxun Spatial Intelligence Inc
Priority date (assumed, not a legal conclusion)
Filing date
Publication date
Application filed by Qianxun Spatial Intelligence Inc filed Critical Qianxun Spatial Intelligence Inc
Priority to CN201711186377.0A priority Critical patent/CN108389256B/en
Publication of CN108389256A publication Critical patent/CN108389256A/en
Application granted granted Critical
Publication of CN108389256B publication Critical patent/CN108389256B/en

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T17/00 Three dimensional [3D] modelling, e.g. data description of 3D objects
    • G06T17/10 Constructive solid geometry [CSG] using solid primitives, e.g. cylinders, cubes
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06Q INFORMATION AND COMMUNICATION TECHNOLOGY [ICT] SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES; SYSTEMS OR METHODS SPECIALLY ADAPTED FOR ADMINISTRATIVE, COMMERCIAL, FINANCIAL, MANAGERIAL OR SUPERVISORY PURPOSES, NOT OTHERWISE PROVIDED FOR
    • G06Q50/00 Systems or methods specially adapted for specific business sectors, e.g. utilities or tourism
    • G06Q50/06 Electricity, gas or water supply
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • G06V20/13 Satellite images
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/40 Scenes; Scene-specific elements in video content
    • G06V20/46 Extracting features or characteristics from the video content, e.g. video fingerprints, representative shots or key frames

Abstract

The invention provides a two-three-dimensional interactive auxiliary method for unmanned aerial vehicle (UAV) power tower inspection, comprising the following steps: step 1, taking images or video shot by the UAV as input and extracting key frames; step 2, extracting power tower information from the extracted key frames based on saliency and connectivity; step 3, reconstructing a semi-dense three-dimensional point cloud based on SfM and MVS and obtaining the correspondence between the two-dimensional images and the three-dimensional point cloud; step 4, constructing a two-three-dimensional interactive visualization system; step 5, automatically segmenting the complete point cloud of the target to be inspected, starting from the target selected by a user under one viewing angle of the point cloud; and step 6, automatically finding the images containing the target based on the 2D-3D correspondence, marking the target's position in each image, and intelligently ranking the candidate images. The invention reduces the burden of manual detection and further improves its efficiency.

Description

Two-three-dimensional interactive unmanned aerial vehicle electric power tower inspection auxiliary method
Technical Field
The invention relates to the technical field of power grid inspection, in particular to a two-three-dimensional interactive unmanned aerial vehicle power tower inspection auxiliary method.
Background
In recent years, national power grids have been built out rapidly: the total length of the transmission lines of the six trans-provincial power grids already exceeds 1.15 million kilometers, and power towers are mostly distributed in areas with complex terrain and harsh natural environments. Traditional periodic manual patrol, which is labor-intensive, inefficient, time-consuming and hazardous for inspection workers, can no longer guarantee the safe and stable operation of the power system. With the rapid development of aviation, remote sensing and information processing technologies, power system inspection has been actively explored in the direction of mechanization and automation, and UAV inspection has gradually become one of the most efficient modes thanks to its flexibility (high speed, no terrain limitation), low cost and high safety. Carrying different sensors, a UAV can collect omnidirectional information about the tower under inspection at close range and from multiple angles, generating images or video for inspection workers to detect defects and hidden dangers online or offline.
The power tower inspection task is very complex: 1) the inspection targets are diverse, with hundreds of tower types, and the power facilities mounted on them, such as insulators, are also varied; 2) the captured images are diverse, with varying shooting distances and angles; 3) current problem-area detection methods based on computer vision and machine learning (deep learning) suffer from high missed-detection and false-detection rates and poor robustness, and can hardly meet practical operational requirements. Automatic target extraction, recognition and problem detection therefore remain an open problem.
Consequently, analyzing the collected images or videos manually remains the most effective means of completing the detection work. Although manual inspection guarantees the accuracy of the results to a certain extent and is more robust, it introduces errors caused by human factors and, more importantly, is labor-intensive, time-consuming and inefficient. With the wide adoption of inspection UAVs, inspection frequency will rise sharply and so will the volume of images or video to be analyzed. Most of the manual detection time is spent finding the effective images in the image or video stream and locating the target to be inspected within them; moreover, to obtain comprehensive information about one target, images from every angle must also be retrieved. An effective auxiliary system that preprocesses the images or video to be inspected and quickly locates the images to inspect (from multiple angles) and the target's position within them is therefore one of the effective means of improving manual inspection efficiency.
Most existing technologies acquire candidate images based on a tower model, the tower position, and high-precision positioning of the UAV. Such methods can complete the inspection flight automatically, obtain a minimal set of images to inspect, and keep the position and size of the object relatively fixed within the images. However, they require knowing the high-precision position and type of the tower in advance, together with the UAV's high-precision position in the same reference frame and preset camera shooting angles and attitudes. Collecting high-precision positions for towers nationwide is a heavy task, repeated collection driven by current demand must be considered, and establishing a uniform position reference is one of the current pain points of UAV inspection applications, all of which greatly reduce the usability of these methods. In addition, since there are hundreds of tower types, automatic tower matching is only possible after a model is created (including parameters such as feature selection, data types and attribute types), and the accuracy of tower matching directly affects the parameter settings of the UAV inspection: selecting the wrong tower model leads, at best, to collecting the wrong images and, at worst, to flight accidents. In recent years, three-dimensional model reconstruction has gradually been applied to power facility inspection, mainly in two ways. The first applies lidar to generate a point cloud which is then analyzed; however, lidar equipment is expensive and must be paired with a calibrated digital camera, making this scheme costly.
The second generates three-dimensional models (DSM, DEM and DOM) of the region to be inspected by aerial survey; this yields a finer model, but the process is heavier and more time-consuming. Moreover, both approaches lack the interaction and integration of two- and three-dimensional information that would further improve inspection efficiency and lighten the manual burden.
Disclosure of Invention
In practice, manual detection is currently the most effective mode of UAV-based power tower inspection, and its most burdensome task is searching the many images or videos for effective ones and locating the object to be inspected. Lacking effective inputs (tower model, UAV position, shooting angle, etc.), the target's size, position and viewpoint vary across images because of shooting angle and distance, making it hard to locate the target quickly and accurately in the two-dimensional images. In a reconstructed three-dimensional point cloud, the candidate target can be viewed from 360 degrees and located quickly, but the point cloud carries too little information to be used for detecting target problems and problem areas. The technical problem the invention addresses is how to reduce the burden of manual detection and improve its efficiency while machines cannot yet replace it.
Based on this, the technical scheme adopted by the invention is as follows:
The required input is a sequence of continuous images or a video obtained by scan-shooting the power tower to be inspected; a spiral-ascent or crossed-arch shooting trajectory is recommended (it can be flown manually, or collected automatically when the easily obtained height and diameter of the tower are known). The method comprises the following steps:
Step 1: select a minimal set of images meeting the reconstruction requirements based on key frame extraction.
Step 2: extract power tower information from the extracted images based on saliency and connectivity, reducing the number of feature points participating in reconstruction.
Step 3: complete semi-dense three-dimensional point cloud reconstruction and acquire the 2D-3D correspondence based on SfM (Structure from Motion) and MVS (Multi-View Stereo).
Step 4: construct a two-three-dimensional interactive visualization system based on OpenGL.
Step 5: automatically segment the complete point cloud of the target to be inspected, starting from the target selected by the user under one viewing angle of the three-dimensional point cloud.
Step 6: automatically find the images containing the target based on the 2D-3D correspondence, mark the target's position in each image, and intelligently rank all candidate images so as to minimize the manual detection workload.
The method introduces lightweight three-dimensional point cloud reconstruction for the power tower, generates a semi-dense point cloud model of it, and, based on the correspondence between the two-dimensional images and the three-dimensional point cloud obtained during reconstruction, builds a two-three-dimensional interactive auxiliary inspection system that improves the efficiency of manual power tower inspection.
Drawings
Fig. 1 is a flow chart of the two-three-dimensional interactive UAV power tower inspection auxiliary method.
Detailed Description
The method generates a three-dimensional point cloud of the tower under inspection by three-dimensional reconstruction and obtains the correspondence between the two-dimensional images and the point cloud. A two-three-dimensional interactive visual auxiliary inspection system is built on this correspondence so that the complementary strengths of the 2D and 3D representations improve manual detection efficiency. The invention is further illustrated below with reference to the figures and examples.
Fig. 1 is the flow chart of the method, which specifically comprises the following steps:
1. image pre-processing
The input of the method is high-definition imagery of the tower shot by the UAV; to capture the tower's three-dimensional information comprehensively, a spiral-ascent or crossed-arch flight trajectory is recommended during collection. To simplify the shooting requirements, the method accepts video as input (images with sufficient overlap can of course also be used) and requires neither the camera parameters nor the UAV position. Point cloud reconstruction from high-definition images is a time-consuming process, so two operations are introduced to improve efficiency: key frame extraction and extraction of the power tower region in each image, both of which reduce the computation in the reconstruction stage.
Using video as input greatly reduces the complexity of UAV image collection, but it silently inflates the number of input frames: current cameras shoot at high frame rates, so the video contains many redundant frames. Many keyframe selection methods exist, such as frame comparison, clustering, and target/time extraction in the temporal or spatial domain. Since inspection video is temporally continuous, key frames can be extracted by comparing adjacent frames in the time domain. Image similarity serves as the comparison measure, and it can be evaluated objectively by a fused decision over PSNR (peak signal-to-noise ratio) and SSIM (structural similarity).
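A minimal sketch of such a temporal keyframe filter follows, assuming 8-bit grayscale frames as NumPy arrays. The SSIM here is the simplified single-window form rather than the usual windowed average, and the thresholds are illustrative, not values from the patent:

```python
import numpy as np

def psnr(a, b, peak=255.0):
    # peak signal-to-noise ratio between two frames (higher = more similar)
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(peak ** 2 / mse)

def ssim_global(a, b, peak=255.0):
    # simplified single-window SSIM (standard SSIM averages over local windows)
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    c1, c2 = (0.01 * peak) ** 2, (0.03 * peak) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / \
           ((mu_a ** 2 + mu_b ** 2 + c1) * (va + vb + c2))

def is_keyframe(prev_key, frame, psnr_thresh=30.0, ssim_thresh=0.9):
    # fused decision: keep the frame only when it differs enough from the
    # previous keyframe under BOTH measures (thresholds are illustrative)
    return psnr(prev_key, frame) < psnr_thresh and \
           ssim_global(prev_key, frame) < ssim_thresh
```

Scanning the video then amounts to keeping the first frame and every later frame for which `is_keyframe` returns true against the last kept frame.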
Images and video collected for inspection have high resolution, so the number of extracted feature points is huge, inflating the computation during reconstruction. Moreover, the power tower is a hollow structure, so most extracted feature points do not lie on the tower to be reconstructed. To solve this, a power tower extraction step is applied to the extracted key frames. Because power towers mostly stand in remote mountainous areas and thus differ strongly from their surroundings, a salient-region extraction method can separate the tower from the background; the FT algorithm (frequency-tuned salient region detection) is preferred. Since the tower is hollow, saliency extraction alone performs poorly on the filled interior of the structure, and other salient regions may exist in the image, so a secondary extraction based on connectivity is added. The tower has strong linear connectivity, so this second pass filters out the background areas inside the hollow parts and the other small salient regions. These two steps together separate the power tower from the background.
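The two-step separation can be sketched as below. This is a simplified stand-in: the real FT algorithm works in Lab colour space with a Gaussian blur, whereas this sketch uses grayscale with a box blur, and a largest-4-connected-component filter stands in for the linear-connectivity pass:

```python
import numpy as np
from collections import deque

def ft_saliency(gray):
    # FT-style saliency: |blurred image - global mean|
    # (grayscale + box blur is a simplification of the Lab + Gaussian original)
    g = np.asarray(gray, dtype=np.float64)
    pad = np.pad(g, 1, mode="edge")
    blur = sum(pad[i:i + g.shape[0], j:j + g.shape[1]]
               for i in range(3) for j in range(3)) / 9.0
    return np.abs(blur - g.mean())

def largest_component(mask):
    # connectivity filter: keep only the largest 4-connected True region,
    # discarding small spurious salient areas
    h, w = mask.shape
    seen = np.zeros((h, w), dtype=bool)
    best = np.zeros((h, w), dtype=bool)
    for sy in range(h):
        for sx in range(w):
            if mask[sy, sx] and not seen[sy, sx]:
                comp, q = [], deque([(sy, sx)])
                seen[sy, sx] = True
                while q:
                    y, x = q.popleft()
                    comp.append((y, x))
                    for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                        if 0 <= ny < h and 0 <= nx < w and \
                                mask[ny, nx] and not seen[ny, nx]:
                            seen[ny, nx] = True
                            q.append((ny, nx))
                if len(comp) > best.sum():
                    best = np.zeros((h, w), dtype=bool)
                    for y, x in comp:
                        best[y, x] = True
    return best
```

Thresholding `ft_saliency` and passing the binary mask through `largest_component` leaves a single tower-like region; the patent's curvature/linearity criterion would replace the naive "largest component" rule.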
2. Three-dimensional reconstruction
Monocular three-dimensional reconstruction is by now a mature technology, and reconstruction pipelines based on SfM (Structure from Motion) and MVS (Multi-View Stereo) have achieved very good results in industry. Commercial software such as Pix4D, PhotoScan and Smart 3D is widely used, and open-source tools such as Colmap, OpenMVG, PMVS and MVE are also well regarded. The invention preferably adopts two widely used open-source tools, OpenMVG (SfM) and CMVS/PMVS (MVS), as the main reconstruction tools. After SfM-based sparse point cloud reconstruction and estimation of the camera's intrinsic and extrinsic parameters, only the feature points on the power tower separated in the previous step are matched and reconstructed when generating the dense point cloud, greatly reducing the number of points participating in reconstruction and thus improving efficiency. These steps yield the reconstructed semi-dense point cloud of the tower, the camera intrinsics and extrinsics, and the correspondence between the two-dimensional images and the three-dimensional point cloud (which pixel of which image corresponds to which point of the cloud).
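The 2D-3D correspondence recorded during reconstruction amounts to knowing, for each reconstructed point, which cameras observe it and at which pixel. A minimal pinhole-projection sketch under simplifying assumptions (the intrinsics `K`, rotation `R` and translation `t` would come from SfM; occlusion and depth checks are omitted):

```python
import numpy as np

def project(point3d, K, R, t):
    # pinhole projection x = K (R X + t): maps a reconstructed 3D point to
    # pixel coordinates in one camera
    p = K @ (R @ point3d + t)
    return p[:2] / p[2]

def images_seeing(point3d, cameras, width, height):
    # which estimated cameras place this point inside their frame
    # (a real pipeline would also test visibility/occlusion)
    hits = []
    for name, (K, R, t) in cameras.items():
        u, v = project(point3d, K, R, t)
        if 0 <= u < width and 0 <= v < height:
            hits.append(name)
    return hits
```

Inverting this lookup per point is exactly what lets the system jump from a clicked 3D point to the images that contain it.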
3. Two-three-dimensional interactive auxiliary system construction
The invention preferably uses OpenGL to render the three-dimensional scene and point cloud and to support interaction. Without the UAV's real flight positions as input, the reconstructed point cloud sits in a relative coordinate system with no true orientation. To solve this, principal component analysis (PCA) is first performed on the tower point cloud, exploiting its distribution, to obtain three principal directions: the direction of the largest eigenvalue, i.e. the tower's longitudinal direction, is taken as the Z axis, and the plane spanned by the other two directions as the XY plane. Of course, if real UAV flight positions are provided, the reconstructed coordinate system is the real-world one. The interactive system displays the tower point cloud, the estimated camera positions and their viewing directions in three dimensions. Thanks to the 2D-3D interaction, clicking any camera shows the image it shot and the point cloud visible to it; conversely, selecting any point on the tower model shows the images containing that point and the corresponding camera positions.
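The PCA alignment step can be sketched as follows, assuming an N×3 NumPy array of reconstructed points; note that without external information only the axis directions, not their signs, are determined:

```python
import numpy as np

def tower_axes(points):
    # PCA on the tower point cloud: the direction of largest variance is the
    # tower's longitudinal (Z) axis; the other two eigenvectors span XY
    centered = points - points.mean(axis=0)
    cov = np.cov(centered.T)                 # 3x3 covariance matrix
    vals, vecs = np.linalg.eigh(cov)         # eigenvalues in ascending order
    order = np.argsort(vals)[::-1]           # largest eigenvalue first
    z_axis = vecs[:, order[0]]               # longitudinal direction
    xy_axes = vecs[:, order[1:]]             # the two transverse directions
    return z_axis, xy_axes
```

Rotating the cloud by the matrix `[xy_axes, z_axis]` then puts the tower upright regardless of the arbitrary reconstruction frame.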
To further reduce the manual detection workload and improve efficiency, the following three techniques are adopted:
First, the targets to be inspected on the tower are three-dimensional objects, so they are hard to select in the point cloud in one pass: inspectors would otherwise have to rotate to different angles and pick points repeatedly to cover a complete target. Since target components on the tower, such as insulators, are linearly distributed, a point cloud segmentation algorithm initialized by a selection is preferably used: the inspector only needs to select part of the point cloud linearly along the direction of the target's point set at any single angle, and an expansion body built from that direction together with geometric features (parts with large curvature change) separates the target's complete point cloud.
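A sketch of this initialization-based segmentation under simplifying assumptions: the user's linear selection fixes an axis via SVD, and a fixed-radius cylinder around that axis stands in for the patent's expansion body and curvature-based boundary:

```python
import numpy as np

def segment_along_selection(cloud, selected, radius=0.5):
    # direction of the user's linear selection = dominant SVD direction
    sel = np.asarray(selected, dtype=float)
    origin = sel.mean(axis=0)
    _, _, vt = np.linalg.svd(sel - origin)
    axis = vt[0]
    # keep every cloud point whose perpendicular distance to that axis is
    # within the expansion radius; the patent instead grows an expansion
    # body bounded by large curvature changes
    rel = np.asarray(cloud, dtype=float) - origin
    along = rel @ axis                       # coordinate along the axis
    perp = rel - np.outer(along, axis)       # offset perpendicular to it
    return np.asarray(cloud)[np.linalg.norm(perp, axis=1) <= radius]
```

A single linear stroke over a few points of an insulator string thus recovers all points lying on the same line, without re-selecting from other angles.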
Second, a three-dimensional point relates to the two-dimensional images one-to-many: one point may appear in several images simultaneously. To reduce the inspector's workload, an intelligent ranking algorithm for the candidate images is adopted. The candidate images are first divided into 4 classes by shooting angle (on the XY plane, each 90-degree interval forms one class), and within each class the images are ranked by the area the target object occupies and by its position in the frame (closer to the center is better). The inspector thus selects the target's point set under one viewing angle, and the auxiliary system displays the top-ranked image of each of the 4 angle classes (where they exist).
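A sketch of the four-class ranking. The field names `azimuth_deg` and `bbox`, and the area-minus-offset score, are illustrative assumptions; the patent only specifies 90-degree classes on the XY plane ranked by target area and closeness to the image center:

```python
import math

def rank_candidates(candidates, img_w, img_h):
    # candidates: dicts with the shooting azimuth on the XY plane (degrees)
    # and the target's bounding box (x0, y0, x1, y1) in that image
    classes = {0: [], 1: [], 2: [], 3: []}
    for c in candidates:
        classes[int(c["azimuth_deg"] % 360) // 90].append(c)

    def score(c):
        x0, y0, x1, y1 = c["bbox"]
        area = (x1 - x0) * (y1 - y0)                       # bigger target wins
        cx, cy = (x0 + x1) / 2, (y0 + y1) / 2
        off = math.hypot(cx - img_w / 2, cy - img_h / 2)   # centered target wins
        return area - off                                  # illustrative fusion

    return {k: sorted(v, key=score, reverse=True) for k, v in classes.items()}
```

Showing `ranked[k][0]` for each non-empty class `k` reproduces the "best image per 90-degree sector" behaviour described above.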
Third, the 2D-3D correspondence allows the pixels corresponding to the selected target point cloud to be highlighted in the displayed image. Of course, because of the point cloud's distribution and density, only some of the object's pixels may be selected this way. In that case the selected pixels initialize the GrabCut algorithm (one of the image segmentation algorithms in OpenCV), which segments the whole object, and its minimum bounding box is highlighted for the inspector's analysis and judgment.
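The projected pixels usually cover only part of the object; the patent grows them into a full segmentation with GrabCut (`cv2.grabCut` in OpenCV) and then draws the minimum bounding box. The sketch below substitutes a simple intensity region-growing for GrabCut so it stays dependency-free; it is a stand-in, not the patent's algorithm:

```python
import numpy as np
from collections import deque

def grow_and_box(img, seeds, tol=10):
    # grow the sparse seed pixels (projections of the selected point cloud)
    # into the full object region by intensity similarity, then return the
    # minimum axis-aligned bounding box (x0, y0, x1, y1)
    h, w = img.shape
    mask = np.zeros((h, w), dtype=bool)
    q = deque(seeds)
    for y, x in seeds:
        mask[y, x] = True
    while q:
        y, x = q.popleft()
        for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
            if (0 <= ny < h and 0 <= nx < w and not mask[ny, nx]
                    and abs(int(img[ny, nx]) - int(img[y, x])) <= tol):
                mask[ny, nx] = True
                q.append((ny, nx))
    ys, xs = np.nonzero(mask)
    return xs.min(), ys.min(), xs.max(), ys.max()
```

With OpenCV available, the same seeds would be written into a GrabCut mask as `cv2.GC_FGD` before calling `cv2.grabCut` in mask-initialization mode.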
The algorithms and tools used in each stage (image preprocessing, three-dimensional reconstruction, construction of the two-three-dimensional interactive system) may be freely substituted; the invention provides the framework and workflow.
Although the present invention has been described with reference to the preferred embodiments, they are not intended to limit it; those skilled in the art may make variations and modifications using the methods and technical content disclosed above without departing from the spirit and scope of the invention.

Claims (8)

1. A two-three-dimensional interactive unmanned aerial vehicle power tower inspection auxiliary method, characterized by comprising the following steps:
step 1, taking images or video shot by an unmanned aerial vehicle as input and extracting key frames;
step 2, extracting power tower information from the extracted key frames based on saliency and connectivity;
step 3, reconstructing a semi-dense three-dimensional point cloud based on SfM and MVS and obtaining the correspondence between the two-dimensional images and the three-dimensional point cloud;
step 4, constructing a two-three-dimensional interactive visualization system;
step 5, automatically segmenting the complete point cloud of the target to be inspected, starting from the target selected by a user under one viewing angle of the three-dimensional point cloud;
step 6, automatically finding the images containing the target based on the correspondence between the two-dimensional images and the three-dimensional point cloud, marking the target's position in those images, and intelligently ranking all candidate images;
wherein, after the SfM-based sparse point cloud reconstruction and camera intrinsic and extrinsic parameter estimation are completed, only the power tower information extracted in step 2 is matched and reconstructed when generating the dense point cloud, yielding the reconstructed semi-dense tower point cloud, the camera intrinsics and extrinsics, and the correspondence between the two-dimensional images and the three-dimensional point cloud;
and wherein step 6 specifically comprises:
dividing the candidate images into 4 classes by shooting angle, each 90-degree interval on the XY plane forming one class, and intelligently ranking within each class by the area the target occupies and its distribution position;
and highlighting, based on the correspondence between the two-dimensional images and the three-dimensional point cloud, the pixels corresponding to the target's point cloud in the displayed two-dimensional image.
2. The method according to claim 1, wherein in step 1 the unmanned aerial vehicle shoots the images or video along a spiral-ascent or crossed-arch trajectory.
3. The method according to claim 1, wherein in step 1 the key frames of the images or video are extracted by comparing adjacent frames in the time domain.
4. The method according to claim 3, wherein image similarity is used as the measure for the adjacent-frame comparison and is evaluated by a fused decision over peak signal-to-noise ratio and structural similarity.
5. The method according to claim 1, wherein in step 2 the FT algorithm is adopted for the saliency-based extraction of power tower information.
6. The method according to claim 1, wherein in step 4 the two-three-dimensional interactive visualization system is constructed based on OpenGL.
7. The method according to claim 6, wherein in step 4 PCA principal component analysis is performed on the three-dimensional point cloud to obtain three principal directions, the direction of the largest eigenvalue, i.e. the tower's longitudinal direction, being taken as the Z axis and the plane spanned by the other two directions as the XY plane.
8. The method according to claim 1, wherein in step 5, using the initialization-based point cloud segmentation algorithm, the user linearly selects part of the three-dimensional point cloud along the direction of the target's point set, and an expansion body and geometric features built from that direction automatically separate the target's complete point cloud.
CN201711186377.0A 2017-11-23 2017-11-23 Two-three-dimensional interactive unmanned aerial vehicle electric power tower inspection auxiliary method Active CN108389256B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201711186377.0A CN108389256B (en) 2017-11-23 2017-11-23 Two-three-dimensional interactive unmanned aerial vehicle electric power tower inspection auxiliary method


Publications (2)

Publication Number Publication Date
CN108389256A CN108389256A (en) 2018-08-10
CN108389256B true CN108389256B (en) 2022-03-01

Family

ID=63075978

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201711186377.0A Active CN108389256B (en) 2017-11-23 2017-11-23 Two-three-dimensional interactive unmanned aerial vehicle electric power tower inspection auxiliary method

Country Status (1)

Country Link
CN (1) CN108389256B (en)

Families Citing this family (11)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2020056405A1 (en) * 2018-09-14 2020-03-19 Northwestern University Data-driven representation and clustering discretization method and system for design optimization and/or performance prediction of material systems and applications of same
CN109615197A (en) * 2018-11-30 2019-04-12 中北大学 Tailing dam security level appraisal procedure based on two-dimension cloud model
CN109887013B (en) * 2019-01-14 2021-06-25 苏州数设科技有限公司 PCA-based point cloud registration final determination method and system
CN109862390B (en) * 2019-02-26 2021-06-01 北京融链科技有限公司 Method and device for optimizing media stream, storage medium and processor
CN110223297A (en) * 2019-04-16 2019-09-10 广东康云科技有限公司 Segmentation and recognition methods, system and storage medium based on scanning point cloud data
CN111311967A (en) * 2020-03-31 2020-06-19 普宙飞行器科技(深圳)有限公司 Unmanned aerial vehicle-based power line inspection system and method
CN112163055B (en) * 2020-09-09 2023-11-21 成都深瑞同华科技有限公司 Tower labeling method
CN112767391B (en) * 2021-02-25 2022-09-06 国网福建省电力有限公司 Power grid line part defect positioning method integrating three-dimensional point cloud and two-dimensional image
CN112802083B (en) * 2021-04-15 2021-06-25 成都云天创达科技有限公司 Method for acquiring corresponding two-dimensional image through three-dimensional model mark points
CN113238578B (en) * 2021-05-11 2022-12-16 上海电力大学 Route planning method and system for power tower unmanned aerial vehicle inspection routes
CN116934756B (en) * 2023-09-18 2023-12-05 中国建筑第五工程局有限公司 Material detection method based on image processing


Family Cites Families (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
EP3086283B1 (en) * 2015-04-21 2019-01-16 Hexagon Technology Center GmbH Providing a point cloud using a surveying instrument and a camera device

Patent Citations (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN101271591A (en) * 2008-04-28 2008-09-24 清华大学 Interactive multi-viewpoint three-dimensional model reconstruction method
CN103093191A (en) * 2012-12-28 2013-05-08 中电科信息产业有限公司 Object recognition method combining three-dimensional point cloud data and digital image data

Non-Patent Citations (5)

* Cited by examiner, † Cited by third party
Title
An image quality assessment model based on combined PSNR and SSIM; Tong Yubing et al.; Journal of Image and Graphics; 2006-12-31; Vol. 11, No. 12; pp. 1760-1761 *
Research and application of human body contour extraction based on saliency detection; Wang Shanshan; China Master's Theses Full-text Database, Information Science and Technology; 2017-02-15; No. 02; pp. 1-4, 28-32, 44 *
A surface reconstruction algorithm for point cloud data based on principal component analysis; Zhang He et al.; Journal of Heilongjiang Institute of Technology (Natural Science Edition); 2010-03-31; Vol. 24, No. 1; full text *
Feature-based segmentation of point cloud data of typical complex surfaces of car bodies; Zhong Tingting; China Master's Theses Full-text Database, Information Science and Technology; 2015-09-15; No. 09; pp. 44-46 *
Dense three-dimensional reconstruction using a de-jitter deblurring algorithm; Zheng En et al.; Computer Engineering and Applications; 2017-01-12; pp. 1-8 *

Also Published As

Publication number Publication date
CN108389256A (en) 2018-08-10

Similar Documents

Publication Publication Date Title
CN108389256B (en) Two-three-dimensional interactive unmanned aerial vehicle electric power tower inspection auxiliary method
CN110415342B (en) Three-dimensional point cloud reconstruction device and method based on multi-sensor fusion
CN106356757B (en) A power line unmanned aerial vehicle inspection method based on human visual characteristics
CN111612059B (en) Construction method of a multi-plane coding point cloud feature deep learning model based on PointPillars
CN106826833B (en) Autonomous navigation robot system based on 3D (three-dimensional) stereoscopic perception technology
Sun et al. Aerial 3D building detection and modeling from airborne LiDAR point clouds
Cheng et al. 3D building model reconstruction from multi-view aerial imagery and lidar data
Hoppe et al. Online Feedback for Structure-from-Motion Image Acquisition.
CN107392247B (en) Real-time detection method for ground object safety distance below power line
Lam et al. Urban scene extraction from mobile ground based lidar data
CN109598794B (en) Construction method of three-dimensional GIS dynamic model
Haala Detection of buildings by fusion of range and image data
CN109685886A (en) A power distribution network three-dimensional scene modeling method based on mixed reality technology
CN109523528A (en) A power transmission line extraction method based on the unmanned aerial vehicle binocular vision SGC algorithm
CN114782626A (en) Transformer substation scene mapping and positioning optimization method based on laser and vision fusion
Borsu et al. Automated surface deformations detection and marking on automotive body panels
CN103035006A (en) High-resolution aerial image segmentation method based on LEGION with LiDAR assistance
CN116844068B (en) Building mapping method, system, computer equipment and storage medium
CN116030208A (en) Method and system for building a virtual simulation scene of a power transmission line for a real unmanned aerial vehicle
Alidoost et al. Y-shaped convolutional neural network for 3d roof elements extraction to reconstruct building models from a single aerial image
CN114639115A (en) 3D pedestrian detection method based on fusion of human body key points and laser radar
Maurer et al. Automated inspection of power line corridors to measure vegetation undercut using UAV-based images
CN113920254B (en) Monocular RGB-based indoor three-dimensional reconstruction method and system
CN113781639B (en) Quick construction method for digital model of large-scene road infrastructure
CN115731545A (en) Cable tunnel inspection method and device based on fusion perception

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant