WO2021036587A1 - Positioning method and system for power inspection scenarios - Google Patents

Positioning method and system for power inspection scenarios (Download PDF)

Info

Publication number
WO2021036587A1
WO2021036587A1 · PCT/CN2020/103175 · CN2020103175W
Authority
WO
WIPO (PCT)
Prior art keywords
point cloud
scene
closed
inspection
loop
Prior art date
Application number
PCT/CN2020/103175
Other languages
English (en)
French (fr)
Inventor
徐敏
李尼格
彭林
侯战胜
于海
王刚
何志敏
朱亮
鲍兴川
王鹤
韩海韵
刘伟
Original Assignee
全球能源互联网研究院有限公司
国家电网有限公司
国网江苏省电力有限公司
国网江苏省电力有限公司电力科学研究院
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by 全球能源互联网研究院有限公司, 国家电网有限公司, 国网江苏省电力有限公司, 国网江苏省电力有限公司电力科学研究院
Publication of WO2021036587A1 publication Critical patent/WO2021036587A1/zh

Classifications

    • G: PHYSICS
    • G01: MEASURING; TESTING
    • G01C: MEASURING DISTANCES, LEVELS OR BEARINGS; SURVEYING; NAVIGATION; GYROSCOPIC INSTRUMENTS; PHOTOGRAMMETRY OR VIDEOGRAMMETRY
    • G01C21/00: Navigation; Navigational instruments not provided for in groups G01C1/00-G01C19/00
    • G01C21/10: Navigation by using measurements of speed or acceleration
    • G01C21/12: Navigation by using measurements of speed or acceleration executed aboard the object being navigated; Dead reckoning
    • G01C21/16: Dead reckoning by integrating acceleration or speed, i.e. inertial navigation
    • G01C21/165: Inertial navigation combined with non-inertial navigation instruments
    • G01C21/20: Instruments for performing navigational calculations

Definitions

  • This application relates to the field of visual positioning, for example, to a positioning method and system for power inspection scenarios.
  • Directly scanning the environment can produce a three-dimensional model, which allows multiple types of on-site information to be reorganized with the spatial dimension as the primary axis and the time dimension and logical relationships as supplements.
  • Intelligent environment recognition is the active identification of on-site operation information: by recognizing operation targets and analyzing the work environment, work and equipment information is acquired automatically. Finally, on the basis of high-precision stereo positioning, the work environment and the work task context are integrated and combined with experience and knowledge to push information visually and proactively, giving operators accurate and timely auxiliary guidance.
  • The three-dimensional model of the job site is a real-time collection of on-site environmental information, and cognition is the extraction of its core key components.
  • Precise positioning and adaptive navigation technology can fuse job content and experience to give operators intuitive and convenient job guidance, improving the quality, efficiency, and safety of power grid field work.
  • Dynamic-scanning 3D model construction based on depth vision can build a 3D model of the job site quickly and at low cost, and is suitable for large-scale deployment.
  • The present application provides a positioning method and system for power inspection scenarios, to solve the problem in related technologies that non-standard acquisition poses prevent accurate positioning within a scene obtained by modeling.
  • This application provides a positioning method for power inspection scenarios, including:
  • acquiring a positioning point cloud of an inspector based on a binocular camera, and performing closed-loop screening of the positioning point cloud against the scene point clouds in a pre-built inspection scene; if the closed-loop screening yields a nearest scene point cloud, performing pose compensation on the positioning point cloud based on the nearest scene point cloud and determining the position of the inspector; if the closed-loop screening cannot yield the nearest scene point cloud, the position of the inspector is outside the scope of the inspection scene.
  • A positioning system for power inspection scenarios is also provided, including:
  • a closed-loop screening module, configured to acquire the positioning point cloud of the inspector based on the binocular camera and perform closed-loop screening of the positioning point cloud against the scene point clouds in the pre-built inspection scene;
  • a judging module, configured to: when the closed-loop screening yields a nearest scene point cloud, perform pose compensation on the positioning point cloud based on the nearest scene point cloud and determine the position of the inspector; when the closed-loop screening cannot yield the nearest scene point cloud, determine that the position of the inspector is outside the scope of the inspection scene.
  • Fig. 1 is a flowchart of the positioning method for power inspection scenarios of this application;
  • Fig. 2 is a flowchart of the construction of the relative pose estimation of this application;
  • Fig. 3 is a flowchart of the inspection and positioning operation according to an embodiment of this application;
  • Fig. 4 is a flowchart of the inspection scene construction in this application.
  • This embodiment provides a positioning method for power inspection scenarios; a flowchart of the method is shown in Fig. 1.
  • This application provides a positioning method for power inspection scenarios, including:
  • performing pose compensation on the positioning point cloud based on the nearest scene point cloud, and determining the position of the inspector; or,
  • determining that the position of the inspector is outside the scope of the inspection scene.
  • The scene point cloud is the point cloud information recorded when the map is created.
  • The construction of the inspection scene includes:
  • Step 1: obtain the initial pose of the binocular camera through the IMU's geomagnetometer;
  • Step 2: based on the initial pose, create a key frame through the binocular camera, and construct the scene point cloud of the inspection scene to be built according to the key frame;
  • Step 3: store the scene point cloud of the inspection scene to be built into the set topological map and perform closed-loop detection; if the closed-loop detection is passed, repeat steps 2 and 3 a first set number of times to obtain the inspection scene; if the scene point cloud of the inspection scene to be built fails the closed-loop detection, recreate the key frame and repeat steps 2 and 3 a second set number of times until the closed-loop detection is passed.
  • Creating a key frame through the binocular camera and constructing the scene point cloud of the inspection scene to be built according to the key frame includes:
  • correcting the three-dimensional depth information through pose estimation, and constructing the scene point cloud of the inspection scene to be built through the visual odometer.
  • Acquiring the positioning point cloud of the inspector based on the binocular camera and performing closed-loop screening of the positioning point cloud against the scene point clouds in the pre-built inspection scene includes: acquiring all three-dimensional coordinate data in the positioning point cloud of the inspector through the binocular camera and performing closed-loop detection against the three-dimensional coordinate data of all scene point clouds in the inspection scene.
  • Performing pose compensation on the positioning point cloud based on the nearest scene point cloud to determine the position of the inspector includes: constructing a distance function and applying pose compensation based on it.
  • The construction of the distance function includes:
  • constructing a rotation matrix and a translation matrix, and obtaining the distance function according to the rotation matrix and the translation matrix.
  • Constructing the rotation matrix and the translation matrix based on the eigenvector includes: converting the eigenvector into the rotation matrix according to the Rodrigues rotation formula, and obtaining the translation matrix as T = μ′ − Rμ, where T is the translation matrix, μ′ is the centroid of the first point cloud, μ is the centroid of the current positioning point cloud, and R is the rotation matrix.
  • The centroids are computed as μ = (1/N)∑_{i=1}^{N} A_i and μ′ = (1/N′)∑_{i=1}^{N′} d_i, where N is the number of points in the positioning point cloud, N′ is the number of points in the first point cloud, N′ = N, A_i is a point in the positioning point cloud, d_i is a point in the first point cloud, i = 1, 2, 3, …, N, x is the value on the abscissa, y is the value on the ordinate, and z is the value on the depth coordinate.
  • The construction of the symmetric matrix is as shown in the following formula:
  • Q(Σ_{A,d}) = [[tr(Σ_{A,d}), Δ^T], [Δ, Σ_{A,d} + Σ_{A,d}^T − tr(Σ_{A,d})·I₃]]
  • where Q is the 4×4 symmetric matrix, tr(Σ_{A,d}) is the trace of the matrix Σ_{A,d}, Δ is the sub-element (column vector) of the symmetric matrix, and Δ^T is the transpose of Δ.
  • The distance function is E = (1/N)∑_{i=1}^{N} ‖d_i − (R·A_i + T)‖², where E is the distance function, R is the rotation matrix, A_i is a point in the positioning point cloud, i = 1, 2, 3, …, N, d_i is a point in the first point cloud, T is the translation matrix, N is the number of points in the positioning point cloud, x is the value on the abscissa, y is the value on the ordinate, and z is the value on the depth coordinate.
  • Before taking the scene point cloud that passes detection as the closest scene point cloud, the method further includes:
  • determining whether the scene point clouds have closed-loop links, and choosing the set of point clouds to screen against accordingly. The closest scene point cloud includes a scene point cloud and/or location information having a link relationship with the scene point cloud.
  • The scene point cloud is the point cloud created when the map is built and is equivalent to a standard library point cloud; the point cloud collected on site by the person being positioned is then compared against the scene point cloud to obtain positioning information.
  • After determining the position of the inspector, the method further includes:
  • constructing a link between the position of the inspector and the corresponding scene point cloud in the inspection scene.
  • This application acquires the positioning point cloud of the inspector based on the binocular camera and performs closed-loop screening of the positioning point cloud against the scene point clouds in the pre-built inspection scene; if the closed-loop screening yields the nearest scene point cloud, pose compensation is performed on the current positioning point cloud based on the nearest scene point cloud and the position of the inspector is determined; otherwise, the position of the current inspector is outside the scope of the inspection scene.
  • This application performs closed-loop screening between the inspector's positioning point cloud and the scene point clouds and accurately finds the scene point cloud closest to the positioning point cloud, which improves the accuracy of matching between the positioning point cloud and the scene point cloud; in addition, performing pose compensation on the positioning point cloud reduces the inspector's initial pose error and improves the inspector's positioning accuracy.
  • Step 1: for indoor and outdoor substation scenes, a depth vision sensor is used to build a point cloud array, achieving rapid modeling of the on-site operating environment, i.e., modeling of the inspection scene.
  • Image depth information and the pose information of scene points are obtained through the ZED SDK that accompanies the ZED Camera.
  • The ZED SDK computes binocular disparity from the stereo images captured by the ZED Camera to recover the depth information of the three-dimensional scene, then outputs a visual odometer through pose estimation, and determines the position of the current scene point from the output of the visual odometer. Since the visual odometer cannot provide an initial pose, the value of the Inertial Measurement Unit (IMU) geomagnetometer must additionally be read to determine the current orientation of the system.
  • Step 2: perform closed-loop detection. When a closed-loop hypothesis is considered established (the currently scanned scene image is checked for loop closure against the scenes in the map library), a closed-loop link is established between the new positioning point L_t and the old positioning point L_i; the positioning points forming the closed loop are not merged, and only a link is added between them.
  • This both preserves the different signatures of the same location and gives a better estimate for future closed-loop hypotheses, which is very important in highly dynamic environments or when the environment changes cyclically over time (such as day to night, or weather changes).
  • Step 3: after a closed-loop hypothesis is selected, the current scene point is determined to have been visited in the already-explored map.
  • Suppose closed-loop detection finds that the current location (the current location is the positioning point scanned this time, and the scene point is the positioning point coordinates of the previous scan;
  • the positioning point in this paragraph is the new positioning point L_t of step 2, and the scene point is the old positioning point L_i of step 2) forms a closed loop with a scene point in the map (i.e., it is revisited); the global pose of that scene point stored in the node is retrieved, and the current pose is considered approximately close to the global pose of the scene point saved in the map.
  • Step 4: in a real environment, although the scene point detected by the closed loop and the scene point in the map are roughly at the same position, the visual sensor cannot maintain exactly the same pose across the two acquisitions. To obtain a more accurate positioning result, the relative pose between the current binocular camera's scene point location and the scene point location stored in the map library at map-creation time must be estimated; that is, image feature matching and registration are used to recover the rotation matrix R and translation matrix T between the two frames of scene images, finally yielding corrected, accurate positioning information.
  • The actually acquired scene point has a positioning-pose deviation from the scene point saved in the map library.
  • Let X_W be the global coordinate system and X_K be the camera coordinate system.
  • The actual objects in the world environment are marked by green dots, and the imaging points of the objects under different camera poses are marked by red dots.
  • Corresponding features are found in the overlapping region of the two loop-closure images, and the camera's pose transformation, i.e., the rotation and translation matrices, is solved from the imaging positions of the feature-point pairs and their corresponding positions in the world.
  • Relative pose estimation precisely registers the two images: the point clouds of the two frames are iterated continuously to achieve precise stitching, and the corresponding rotation and translation matrices R and T are obtained.
  • The construction flow of the relative pose estimation is shown in Fig. 2.
  • The core is to find the rotation and translation matrices R and T between the 3D point cloud to be registered and the reference 3D point cloud such that the two point clouds overlap as much as possible under the optimal registration. It converts the problem of registering three-dimensional point cloud data into an extremum problem on the sum of spatial distance errors between three-dimensional points. It is essentially a least-squares matching method: the algorithm repeatedly selects corresponding point-cloud pairs and computes the optimal rotation and translation matrices R and T until the distance-error threshold between the point clouds is met (an illustrative correspondence-screening sketch follows the variable definitions below).
  • a_i(x, y, z) and d_i(x, y, z) respectively denote the individual points of point cloud A(x, y, z) and point cloud d(x, y, z), and N is the number of points in the point cloud.
  • tr(Σ_{A,d}) is the trace of the matrix Σ_{A,d}, and Δ represents the sub-element in the symmetric matrix Q.
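As an illustrative sketch only (not part of the original disclosure), the correspondence-selection and threshold-screening step described above can be prototyped with a KD-tree; the function name and the default threshold are assumptions:

```python
import numpy as np
from scipy.spatial import cKDTree

def first_point_cloud(positioning_pts, scene_pts, max_dist=0.05):
    """For each point A_i of the positioning cloud, find its nearest
    neighbour d_i in the scene cloud; keep only pairs closer than
    max_dist. The matched pairs form the 'first point cloud'."""
    tree = cKDTree(scene_pts)                  # index the scene cloud once
    dists, idx = tree.query(positioning_pts)   # 1-NN for every A_i
    mask = dists < max_dist                    # threshold screening
    return positioning_pts[mask], scene_pts[idx[mask]]
```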
  • Step 5: divide dangerous areas. The electric power field-operation auxiliary system is connected to the setting system to obtain live-equipment and area data; based on the current field workers' tasks, the system automatically delineates the coordinates of the work area and raises alarm reminders for field workers outside the work area, ensuring the personal safety of on-site operators.
  • This embodiment provides a positioning system for power inspection scenarios, including:
  • a closed-loop screening module, configured to acquire the positioning point cloud of the inspector based on the binocular camera and perform closed-loop screening of the positioning point cloud against the scene point clouds in the pre-built inspection scene;
  • a judging module, configured to: when the closed-loop screening yields the nearest scene point cloud, perform pose compensation on the positioning point cloud based on the nearest scene point cloud and determine the position of the inspector; when the closed-loop screening cannot yield the nearest scene point cloud, determine that the position of the inspector is outside the scope of the inspection scene.
  • The closed-loop screening module includes a modeling module.
  • The modeling module includes:
  • an initial-pose acquisition sub-module, configured to obtain the initial pose of the binocular camera through the IMU's geomagnetometer;
  • a scene point cloud construction sub-module, configured to: based on the initial pose, create a key frame through the binocular camera, and construct the scene point cloud of the inspection scene to be built according to the key frame;
  • a closed-loop detection sub-module, configured to: store the scene point cloud of the inspection scene to be built into the set topological map and perform closed-loop detection; if the closed-loop detection is passed, repeat steps 2 and 3 a first set number of times to obtain the inspection scene; if the scene point cloud of the inspection scene to be built fails the closed-loop detection, recreate the key frame and repeat steps 2 and 3 a second set number of times until the closed-loop detection is passed.
  • The scene point cloud construction sub-module includes:
  • a creation judging unit, configured to: create a key frame through the binocular camera; if creation fails, continue creating key frames; if the key frame is created successfully, collect an image of the scene point through the key frame and obtain the three-dimensional depth information of the scene point;
  • a scene point cloud construction unit, configured to: perform pose correction on the three-dimensional depth information through pose estimation, and construct the scene point cloud of the inspection scene to be built through the visual odometer.
  • The closed-loop screening module includes:
  • a three-dimensional data acquisition sub-module, configured to acquire all three-dimensional coordinate data in the positioning point cloud of the inspector through the binocular camera;
  • a screening sub-module, configured to perform closed-loop detection based on all the three-dimensional coordinate data in the positioning point cloud of the inspector and the three-dimensional coordinate data of all scene point clouds in the inspection scene, and to take the scene point cloud that passes detection as the closest scene point cloud.
  • The judging module includes:
  • a distance function construction sub-module, configured to construct a distance function based on the screened-out nearest scene point cloud and the positioning point cloud;
  • a pose compensation sub-module, configured to perform pose compensation on all three-dimensional coordinates in the positioning point cloud based on the distance function and determine the position of the inspector.
  • The distance function construction sub-module includes:
  • a first point cloud construction unit, configured to screen the points in the positioning point cloud whose distance from the nearest scene point cloud is less than a set threshold, forming a first point cloud;
  • a covariance matrix construction unit, configured to centralize the positioning point cloud and the first point cloud to compute the positioning point cloud centroid and the first point cloud centroid, and to construct the covariance matrix based on the positioning point cloud centroid and the first point cloud centroid;
  • a symmetric matrix construction unit, configured to construct a symmetric matrix based on the covariance matrix and obtain the eigenvalues of the symmetric matrix;
  • an eigenvector solving unit, configured to select the largest of the eigenvalues of the symmetric matrix to solve for the eigenvector;
  • a distance function construction unit, configured to construct a rotation matrix and a translation matrix based on the eigenvector and obtain the distance function according to the rotation matrix and the translation matrix.
  • The distance function construction unit includes:
  • a conversion subunit, configured to convert the eigenvector into the rotation matrix according to the Rodrigues rotation formula;
  • a translation matrix construction unit, configured to obtain the translation matrix according to the formula T = μ′ − Rμ,
  • where T is the translation matrix, μ′ is the centroid of the first point cloud, μ is the centroid of the positioning point cloud, and R is the rotation matrix.
  • The covariance matrix construction unit calculates the centroid μ of the positioning point cloud as μ = (1/N)∑_{i=1}^{N} A_i and the centroid μ′ of the first point cloud as μ′ = (1/N′)∑_{i=1}^{N′} d_i, where N is the number of points in the positioning point cloud, N′ is the number of points in the first point cloud, N′ = N, A_i is a point in the positioning point cloud, d_i is a point in the first point cloud, i = 1, 2, 3, …, N, x is the value on the abscissa, y is the value on the ordinate, and z is the value on the depth coordinate.
  • The covariance matrix construction unit constructs the covariance matrix as Σ_{A,d} = (1/N)∑_{i=1}^{N} (A_i − μ)(d_i − μ′)^T.
  • The symmetric matrix construction unit constructs the symmetric matrix as Q(Σ_{A,d}) = [[tr(Σ_{A,d}), Δ^T], [Δ, Σ_{A,d} + Σ_{A,d}^T − tr(Σ_{A,d})·I₃]], where Q is the symmetric matrix, tr(Σ_{A,d}) is the trace of the matrix Σ_{A,d}, Δ is the sub-element of the symmetric matrix, and Δ^T is the transpose of Δ.
  • The distance function construction unit constructs the distance function as E = (1/N)∑_{i=1}^{N} ‖d_i − (R·A_i + T)‖², where E is the distance function, R is the rotation matrix, A_i is a point in the positioning point cloud, i = 1, 2, 3, …, N, d_i is a point in the first point cloud, T is the translation matrix, N is the number of points in the positioning point cloud, x is the value on the abscissa, y is the value on the ordinate, and z is the value on the depth coordinate.
  • The closed-loop screening module further includes a closed-loop link detection sub-module.
  • The closed-loop link detection sub-module is configured to: determine whether the scene point clouds have closed-loop links; if so, perform closed-loop detection between all the three-dimensional coordinate data in the positioning point cloud of the inspector and the closed-loop-linked point clouds among the scene point clouds in the inspection scene; if no closed-loop link exists, perform closed-loop detection between all the three-dimensional coordinate data and the three-dimensional coordinate data of all scene point clouds in the inspection scene.
  • The closest scene point cloud includes a scene point cloud and/or position information having a link relationship with the scene point cloud.
  • The system further includes a link construction module.
  • The link construction module is configured to construct a link between the position of the inspector and the corresponding scene point cloud in the inspection scene.
  • This embodiment provides an on-site operation method using positioning for power inspection scenarios.
  • The operation flowchart is shown in Fig. 3 and includes:
  • the user wears a wearable smart helmet, carries out on-site operations, and logs in to the on-site operation assistance system.
  • a vibration/sound alarm on the smart helmet reminds endangered personnel that they are in a dangerous area, and background monitoring personnel are notified in time, protecting the personal safety of on-site workers to the greatest extent.
  • This application relates to a positioning method fusing binocular pose and visual information for power inspection scenarios, which uses the closed-loop detection technique of deep visual scene recognition to overcome the visual odometer's inability to establish an initial position and its shortcoming of open-loop-only pose estimation that easily accumulates positioning errors.
  • The currently collected scene image is matched in real time against the scenes in the map library, assisting the binocular visual pose adjustment algorithm and yielding high-precision topological positioning results. This greatly improves the navigation and positioning accuracy of the depth vision sensor in the indoor substation environment, enables precise positioning of substation personnel, and thus allows dangerous areas to be delimited and personnel's on-site working positions to be safely managed, greatly improving the personal safety of on-site operators.
  • The embodiments of the present application may be provided as methods, systems, or computer program products. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media containing computer-usable program code (including disk storage, portable Compact Disc Read-Only Memory (CD-ROM), optical storage, etc.).
  • These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.
  • These computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Automation & Control Theory (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

A positioning method and system for power inspection scenarios. The method includes: acquiring a positioning point cloud of an inspector based on a binocular camera, and performing closed-loop screening of the positioning point cloud against the scene point clouds in a pre-built inspection scene (S1); if the closed-loop screening yields a nearest scene point cloud, performing pose compensation on the positioning point cloud based on the nearest scene point cloud and determining the position of the inspector (S2); if the closed-loop screening cannot yield a nearest scene point cloud, the position of the inspector is outside the scope of the inspection scene.

Description

Positioning method and system for power inspection scenarios

This application claims priority to Chinese patent application No. 201910805641.7, filed with the China Patent Office on August 29, 2019, the entire contents of which are incorporated herein by reference.

Technical Field

This application relates to the field of visual positioning, for example, to a positioning method and system for power inspection scenarios.

Background

Under the traditional mode of power inspection work, the operation of power equipment has become increasingly complex and the component logic is not visible, especially with the highly integrated modules and more complicated logical relationships of smart substations. Operation and maintenance personnel cannot complete tasks independently and depend heavily on manufacturers. During operations, dangerous situations such as mis-operation and entering the wrong bay occur easily, causing casualties; meanwhile, actual hands-on time accounts for less than half of an inspection job, with a large share, even 70%, of the time spent on preparation and confirmation communication.

With the development of IT technology, environment scanning, modeling, positioning, and object recognition technologies have already seen innovative applications in consumer electronics, industry, medical care, robotics, and other fields. Developing these basic common key technologies will provide new automated and intelligent solutions for power grid field work, improving work modes, raising work efficiency, and achieving proactive safety protection. First, environment modeling and intelligent reconstruction are the foundation of panoramic information collection and organization for field work: using multiple sensing technologies and efficient, intelligent image processing algorithms, and without deploying extra base stations or sensors at the job site, a portable terminal can directly scan the surrounding environment at the job site to form a three-dimensional model, and the multiple types of on-site information can then be reorganized with the spatial dimension as the primary axis and the time dimension and logical relationships as supplements. Intelligent environment recognition is the active identification of on-site operation information: by recognizing operation targets and analyzing the work environment, work and equipment information is acquired automatically. Finally, based on high-precision stereo positioning, the work environment and work task context are integrated and combined with experience and knowledge to realize proactive visual pushing of information, giving operators accurate and timely auxiliary guidance.

The three-dimensional model of the job site is a real-time collection of on-site environmental information, and cognition is the extraction of core key components. Precise positioning and adaptive navigation technology can fuse job content and experience to give operators intuitive and convenient job guidance, improving the quality, efficiency, and safety of power grid field work. First, dynamic-scanning 3D model construction based on depth vision can build a 3D model of the job site quickly and at low cost, and is suitable for large-scale deployment. Second, workers' portable terminal devices enable rapid stereo positioning of personnel and target equipment, helping workers reach the work location quickly and guiding them through work tasks in the set operating sequence, improving work efficiency. Since the three-dimensional model of the job site is built in a standardized modeling environment, the site's three-dimensional information is recorded with standard acquisition poses; in subsequent work, when workers carry portable terminal devices to scan and position the work site, uncertain factors make the acquisition poses non-standard, so accurate positioning in the modeled scene becomes impossible, errors become likely, and the accuracy of the work cannot be guaranteed.
Summary

This application provides a positioning method and system for power inspection scenarios, to solve the problem in the related art that non-standard acquisition poses make accurate positioning impossible in a scene obtained by modeling.

This application provides a positioning method for power inspection scenarios, including:

acquiring a positioning point cloud of an inspector based on a binocular camera, and performing closed-loop screening of the positioning point cloud against the scene point clouds in a pre-built inspection scene;

if the closed-loop screening yields a nearest scene point cloud, performing pose compensation on the positioning point cloud based on the nearest scene point cloud and determining the position of the inspector; if the closed-loop screening cannot yield the nearest scene point cloud, the position of the inspector is outside the scope of the inspection scene.

A positioning system for power inspection scenarios is also provided, including:

a closed-loop screening module, configured to acquire the positioning point cloud of the inspector based on the binocular camera, and to perform closed-loop screening of the positioning point cloud against the scene point clouds in the pre-built inspection scene;

a judging module, configured to: when the closed-loop screening yields the nearest scene point cloud, perform pose compensation on the positioning point cloud based on the nearest scene point cloud and determine the position of the inspector; when the closed-loop screening cannot yield the nearest scene point cloud, determine that the position of the inspector is outside the scope of the inspection scene.
Brief Description of the Drawings

Fig. 1 is a flowchart of the positioning method for power inspection scenarios of this application;

Fig. 2 is a flowchart of the construction of the relative pose estimation of this application;

Fig. 3 is a flowchart of the inspection and positioning operation of an embodiment of this application;

Fig. 4 is a flowchart of the inspection scene construction in this application.

Detailed Description

The content of this application is described below with reference to the drawings and examples.

Embodiment 1:

This embodiment provides a positioning method for power inspection scenarios; a flowchart of the method is shown in Fig. 1.
This application provides a positioning method for power inspection scenarios, including:

acquiring a positioning point cloud of an inspector based on a binocular camera, and performing closed-loop screening of the positioning point cloud against the scene point clouds in a pre-built inspection scene;

if the closed-loop screening yields a nearest scene point cloud, performing pose compensation on the positioning point cloud based on the nearest scene point cloud and determining the position of the inspector; if the closed-loop screening cannot yield the nearest scene point cloud, the position of the inspector is outside the scope of the inspection scene. The scene point cloud is the point cloud information recorded when the map is created.
Optionally, the construction of the inspection scene includes the following steps (a minimal code sketch follows step 3):

Step 1: obtain the initial pose of the binocular camera through the IMU's geomagnetometer;

Step 2: based on the initial pose, create a key frame through the binocular camera, and construct the scene point cloud of the inspection scene to be built according to the key frame;

Step 3: store the scene point cloud of the inspection scene to be built into the set topological map and perform closed-loop detection; if the closed-loop detection is passed, repeat steps 2 and 3 a first set number of times to obtain the inspection scene; if the scene point cloud of the inspection scene to be built fails the closed-loop detection, recreate the key frame and repeat steps 2 and 3 a second set number of times until the closed-loop detection is passed.
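A minimal Python sketch of the retry logic of steps 1-3 is given below; `create_keyframe`, `build_scene_cloud`, `loop_closure_detect`, and the topological-map methods are hypothetical placeholders, not APIs from the original disclosure:

```python
def build_inspection_scene(camera, imu, topo_map,
                           first_set_times, second_set_times):
    """Hypothetical sketch of steps 1-3 of the scene construction."""
    pose = imu.read_geomagnetometer_pose()          # step 1: initial pose from IMU
    for _ in range(first_set_times):                # repeat steps 2 and 3
        keyframe = create_keyframe(camera, pose)    # step 2: create a key frame
        while keyframe is None:                     # creation failed: keep trying
            keyframe = create_keyframe(camera, pose)
        cloud = build_scene_cloud(keyframe)         # scene cloud via visual odometer
        topo_map.add(cloud)                         # step 3: store in topological map
        for _ in range(second_set_times):           # failed loop closure: redo
            if loop_closure_detect(topo_map, cloud):
                break
            keyframe = create_keyframe(camera, pose)
            cloud = build_scene_cloud(keyframe)
            topo_map.add(cloud)
    return topo_map
```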
Optionally, creating a key frame through the binocular camera and constructing the scene point cloud of the inspection scene to be built according to the key frame includes:

creating a key frame through the binocular camera; if creation fails, continuing to create key frames; if the key frame is created successfully, collecting an image of the scene point through the key frame and obtaining the three-dimensional depth information of the scene point;

correcting the pose of the three-dimensional depth information through pose estimation, and constructing the scene point cloud of the inspection scene to be built through the visual odometer.
Optionally, acquiring the positioning point cloud of the inspector based on the binocular camera and performing closed-loop screening of the positioning point cloud against the scene point clouds in the pre-built inspection scene includes:

acquiring all three-dimensional coordinate data in the positioning point cloud of the inspector through the binocular camera;

performing closed-loop detection based on all the three-dimensional coordinate data in the positioning point cloud of the inspector and the three-dimensional coordinate data of all scene point clouds in the inspection scene, and taking the scene point cloud that passes detection as the closest scene point cloud.
Optionally, performing pose compensation on the positioning point cloud based on the nearest scene point cloud to determine the position of the inspector includes:

constructing a distance function based on the screened-out nearest scene point cloud and the positioning point cloud;

performing pose compensation on all three-dimensional coordinates in the positioning point cloud based on the distance function, and determining the position of the inspector.
Optionally, the construction of the distance function includes:

screening the points in the current positioning point cloud whose distance from the nearest scene point cloud is less than a set threshold, forming a first point cloud;

centralizing the positioning point cloud and the first point cloud to compute the positioning point cloud centroid and the first point cloud centroid, and constructing a covariance matrix based on the positioning point cloud centroid and the first point cloud centroid;

constructing a symmetric matrix based on the covariance matrix, and obtaining the eigenvalues of the symmetric matrix;

selecting the largest of the eigenvalues of the symmetric matrix to solve for the eigenvector;

constructing a rotation matrix and a translation matrix based on the eigenvector, and obtaining the distance function according to the rotation matrix and the translation matrix.
Optionally, constructing the rotation matrix and the translation matrix based on the eigenvector includes:

converting the eigenvector into the rotation matrix according to the Rodrigues rotation formula;

obtaining the translation matrix according to the following formula:

T = μ′ − Rμ

where T is the translation matrix, μ′ is the centroid of the first point cloud, μ is the centroid of the current positioning point cloud, and R is the rotation matrix.
Optionally, the centroid μ of the positioning point cloud is calculated as follows:

μ = (1/N) ∑_{i=1}^{N} A_i

where μ is the centroid of the current positioning point cloud, N is the number of points in the positioning point cloud, A_i is a point in the positioning point cloud, and i = 1, 2, 3, …, N.

The centroid μ′ of the first point cloud is calculated as follows:

μ′ = (1/N′) ∑_{i=1}^{N′} d_i

where N′ is the number of points in the first point cloud, N′ = N, d_i is a point in the first point cloud, i = 1, 2, 3, …, N, x is the value on the abscissa, y is the value on the ordinate, and z is the value on the depth coordinate.
Optionally, the covariance matrix is constructed as follows:

Σ_{A,d} = (1/N) ∑_{i=1}^{N} (A_i − μ)(d_i − μ′)^T

where Σ_{A,d} is the covariance matrix, N is the number of points in the positioning point cloud, A_i is a point in the positioning point cloud, d_i is a point in the first point cloud, μ′ is the centroid of the first point cloud, μ is the centroid of the positioning point cloud, T denotes transposition, and i = 1, 2, 3, …, N.
Optionally, the symmetric matrix is constructed as follows:

Q(Σ_{A,d}) = [[tr(Σ_{A,d}), Δ^T], [Δ, Σ_{A,d} + Σ_{A,d}^T − tr(Σ_{A,d})·I₃]]

where Q is the symmetric matrix, tr(Σ_{A,d}) is the trace of the matrix Σ_{A,d}, Δ is the sub-element of the symmetric matrix Q, and Δ^T is the transpose of Δ.
Optionally, the distance function is constructed as follows:

E = (1/N) ∑_{i=1}^{N} ‖d_i − (R·A_i + T)‖²

where E is the distance function, R is the rotation matrix, A_i is a point in the positioning point cloud, i = 1, 2, 3, …, N, d_i is a point in the first point cloud, T is the translation matrix, N is the number of points in the positioning point cloud, x is the value on the abscissa, y is the value on the ordinate, and z is the value on the depth coordinate.
Optionally, before performing closed-loop detection based on all the three-dimensional coordinate data in the positioning point cloud of the inspector and the three-dimensional coordinate data of all scene point clouds in the inspection scene and taking the scene point cloud that passes detection as the closest scene point cloud, the method further includes:

determining whether the scene point clouds in the inspection scene have closed-loop links; if so, performing closed-loop detection between all the three-dimensional coordinate data in the positioning point cloud of the inspector and the closed-loop-linked point clouds among the scene point clouds in the inspection scene; if no closed-loop link exists, performing closed-loop detection between all the three-dimensional coordinate data in the positioning point cloud of the inspector and the three-dimensional coordinate data of all scene point clouds in the inspection scene.
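A hedged sketch of this screening decision, reusing the hypothetical topological-map structure introduced later in step 2 (the method names are assumptions, not part of the original disclosure):

```python
def candidate_scene_clouds(topo_map, scene_cloud_id):
    """Choose which scene clouds to run closed-loop detection against:
    only the closed-loop-linked clouds if links exist, otherwise all."""
    links = topo_map.links.get(scene_cloud_id, set())
    if links:                                        # closed-loop links exist
        return [topo_map.cloud(i) for i in links]    # compare against linked clouds
    return topo_map.all_clouds()                     # otherwise compare against all
```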
Optionally, the closest scene point cloud includes a scene point cloud and/or position information having a link relationship with the scene point cloud. The scene point cloud is the point cloud created when the map is built and is equivalent to a standard library point cloud; the point cloud collected on site by the person being positioned is then compared against the scene point cloud to obtain positioning information.

Optionally, after determining the position of the inspector, the method further includes:

constructing a link between the position of the inspector and the corresponding scene point cloud in the inspection scene.
This application acquires the positioning point cloud of the inspector based on the binocular camera and performs closed-loop screening of the positioning point cloud against the scene point clouds in the pre-built inspection scene; if the closed-loop screening yields the nearest scene point cloud, pose compensation is performed on the current positioning point cloud based on the nearest scene point cloud and the position of the inspector is determined; otherwise, the position of the current inspector is outside the scope of the inspection scene. This application performs closed-loop screening between the inspector's positioning point cloud and the scene point clouds and accurately finds the scene point cloud closest to the positioning point cloud, improving the matching accuracy between the positioning point cloud and the scene point cloud; in addition, performing pose compensation on the positioning point cloud reduces the inspector's initial pose error and improves the inspector's positioning accuracy.
Step 1: for indoor and outdoor substation scenes, a depth vision sensor is used to build a point cloud array, achieving rapid modeling of the on-site operating environment, i.e., modeling of the inspection scene. Image depth information and the pose information of scene points are obtained through the ZED SDK that accompanies the ZED Camera. The ZED SDK computes binocular disparity from the stereo images captured by the ZED Camera to recover the depth information of the three-dimensional scene, then outputs a visual odometer through pose estimation, and determines the position of the current scene point from the visual odometer's output. Since the visual odometer cannot provide an initial pose, the value of the Inertial Measurement Unit (IMU) geomagnetometer must additionally be read to determine the current orientation of the system. A flowchart of the inspection scene construction is shown in Fig. 4.
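As a rough, hedged illustration only, retrieving depth and odometry from the ZED SDK's Python bindings might look like the sketch below; it assumes ZED SDK 3.x (`pyzed`), and enum and parameter names vary between SDK versions and should be checked against the SDK documentation:

```python
import pyzed.sl as sl

zed = sl.Camera()
init = sl.InitParameters()
init.depth_mode = sl.DEPTH_MODE.ULTRA      # dense stereo disparity -> depth
init.coordinate_units = sl.UNIT.METER
if zed.open(init) != sl.ERROR_CODE.SUCCESS:
    raise RuntimeError("cannot open ZED camera")

# Positional tracking is the SDK's visual odometer; the initial
# orientation would still come from the IMU geomagnetometer reading.
zed.enable_positional_tracking(sl.PositionalTrackingParameters())

runtime, depth, pose = sl.RuntimeParameters(), sl.Mat(), sl.Pose()
if zed.grab(runtime) == sl.ERROR_CODE.SUCCESS:
    zed.retrieve_measure(depth, sl.MEASURE.DEPTH)      # per-pixel depth map
    zed.get_position(pose, sl.REFERENCE_FRAME.WORLD)   # odometry pose
    t = pose.get_translation(sl.Translation()).get()   # current position
    print("scene point position:", t)
```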
Step 2: perform closed-loop detection. When a closed-loop hypothesis is considered established (the currently scanned scene image is checked for loop closure against the scenes in the map library), a closed-loop link is established between the new positioning point L_t and the old positioning point L_i; the positioning points forming the closed loop are not merged, and only a link is added between them. This both preserves the different signatures of the same positioning point and gives a better estimate for future closed-loop hypotheses, which is very important in highly dynamic environments or when the environment changes cyclically over time (such as day to night, or weather changes).
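A minimal sketch of this link-without-merge policy on a topological map, with hypothetical node and signature types (none of these names come from the original disclosure):

```python
from collections import defaultdict

class TopologicalMap:
    """Nodes are positioning points; loop closures add links, never merge."""
    def __init__(self):
        self.signatures = {}              # node id -> list of scene signatures
        self.links = defaultdict(set)     # undirected closed-loop links

    def add_node(self, node_id, signature):
        self.signatures.setdefault(node_id, []).append(signature)

    def add_loop_closure(self, new_id, old_id):
        # L_t and L_i each keep their own signature; only a link is added,
        # so several appearances of the same place can coexist.
        self.links[new_id].add(old_id)
        self.links[old_id].add(new_id)

topo = TopologicalMap()
topo.add_node("L_i", "signature@day")
topo.add_node("L_t", "signature@night")
topo.add_loop_closure("L_t", "L_i")
```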
Step 3: once a closed-loop hypothesis has been selected, the current scene point is determined to have been visited in the already-explored map. Suppose closed-loop detection finds that the current location (the current location is the positioning point scanned this time; the scene point is the positioning point coordinates of the previous scan; the positioning point in this paragraph is the new positioning point L_t of step 2, and the scene point is the old positioning point L_i of step 2) forms a closed loop with a scene point in the map (i.e., it is revisited). The global pose of that scene point stored in the node is retrieved, and the current pose is considered approximately close to the global pose of the scene point saved in the map.
Step 4: in a real environment, although the scene point detected by the closed loop and the scene point in the map are roughly at the same position, the visual sensor cannot maintain exactly the same pose across the two acquisitions. To obtain a more accurate positioning result, the relative pose between the current binocular camera's scene point location and the scene point location stored in the map library at map-creation time must be estimated; that is, image feature matching and registration are used to recover the rotation matrix R and translation matrix T between the two frames of scene images, finally yielding corrected, accurate positioning information.

The actually acquired scene point has a certain pose deviation from the scene point saved in the map library. Let X_W be the global coordinate system and X_K be the camera coordinate system. The actual objects in the world environment are marked by green dots, and the imaging points of the objects under different camera poses are marked by red dots. Through the registration process, corresponding features are found in the overlapping region of the two loop-closure images, and the camera's pose transformation, i.e., the rotation and translation matrices, is solved from the imaging positions of the feature-point pairs and their corresponding positions in the world.
Because scene images obtained from different angles only partially overlap and the overlapping region generally accounts for a low proportion of the overall image features, the relative position between the matched images detected by topological-positioning loop closure must not differ too much from the true relative position. Therefore, pose compensation is needed, and it is achieved through relative pose estimation. Relative pose estimation precisely registers the two images: the point clouds of the two frames are iterated continuously to achieve precise stitching, and the corresponding rotation and translation matrices R and T are obtained. The construction flowchart of the relative pose estimation is shown in Fig. 2.
The core is to find the rotation and translation matrices R and T between the 3D point cloud to be registered and the reference 3D point cloud such that the two point clouds overlap as much as possible under the optimal registration. This turns the problem of registering three-dimensional point cloud data into an extremum problem on the sum of spatial distance errors between three-dimensional points. It is essentially a least-squares matching method: the algorithm repeatedly selects corresponding point-cloud pairs and computes the optimal rotation and translation matrices R and T until the distance-error threshold between the point clouds is satisfied.
1. For the given three-dimensional point clouds A(x, y, z) and B(x, y, z), select for every point of A(x, y, z) its nearest point in point cloud B(x, y, z), thereby obtaining the point cloud set d(x, y, z);

2. Centralization: compute the centroids of point cloud A(x, y, z) and point cloud d(x, y, z) respectively:

μ = (1/N) ∑_{i=1}^{N} A_i(x, y, z)

μ′ = (1/N) ∑_{i=1}^{N} d_i(x, y, z)

where A_i(x, y, z) and d_i(x, y, z) respectively denote the individual points of point cloud A(x, y, z) and point cloud d(x, y, z), and N is the number of points in the point cloud.

3. Compute the covariance matrix of point cloud A(x, y, z) and point cloud d(x, y, z):

Σ_{A,d} = (1/N) ∑_{i=1}^{N} (A_i − μ)(d_i − μ′)^T

4. Construct the 4×4 matrix Q from Σ_{A,d}:

Q(Σ_{A,d}) = [[tr(Σ_{A,d}), Δ^T], [Δ, Σ_{A,d} + Σ_{A,d}^T − tr(Σ_{A,d})·I₃]]

where tr(Σ_{A,d}) is the trace of the matrix Σ_{A,d} and Δ denotes the sub-element of the symmetric matrix Q, the column vector (Σ₂₃ − Σ₃₂, Σ₃₁ − Σ₁₃, Σ₁₂ − Σ₂₁)^T formed from the anti-symmetric part of Σ_{A,d}.

5. Compute the eigenvalues of the symmetric matrix Q, select the largest eigenvalue to solve for its eigenvector, convert the eigenvector into the rotation matrix R according to the Rodrigues rotation formula, and obtain the translation matrix from T = μ′ − Rμ.

6. From the obtained rotation matrix R and translation matrix T, define the distance function as:

E = (1/N) ∑_{i=1}^{N} ‖d_i − (R·A_i + T)‖²

When the distance function corresponding to the current rotation matrix R and translation matrix T is below a certain threshold, the obtained R and T are considered to meet the accuracy requirement; otherwise, steps 1-6 are repeated until the threshold requirement is met or the upper limit on the number of iterations is reached. (A compact end-to-end sketch of this loop follows.)
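For illustration only, the six steps above condense into the following numpy/scipy sketch of a quaternion-based ICP iteration; it assumes one nearest-neighbour correspondence per point and is not the exact implementation of this application:

```python
import numpy as np
from scipy.spatial import cKDTree

def quat_to_rot(q):
    """Unit quaternion (w, x, y, z) -> 3x3 rotation matrix."""
    w, x, y, z = q
    return np.array([
        [1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
        [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
        [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]])

def register(A, B, threshold=1e-6, max_iters=50):
    """Find R, T aligning point cloud A (to register) onto reference B."""
    R, T = np.eye(3), np.zeros(3)
    tree = cKDTree(B)
    for _ in range(max_iters):
        A_t = A @ R.T + T                           # current guess
        _, idx = tree.query(A_t)                    # step 1: nearest points d_i
        d = B[idx]
        mu, mu_p = A.mean(axis=0), d.mean(axis=0)   # step 2: centroids
        S = (A - mu).T @ (d - mu_p) / len(A)        # step 3: covariance Σ_{A,d}
        delta = np.array([S[1, 2] - S[2, 1],        # anti-symmetric part of S
                          S[2, 0] - S[0, 2],
                          S[0, 1] - S[1, 0]])
        Q = np.zeros((4, 4))                        # step 4: 4x4 symmetric Q
        Q[0, 0] = np.trace(S)
        Q[0, 1:], Q[1:, 0] = delta, delta
        Q[1:, 1:] = S + S.T - np.trace(S) * np.eye(3)
        w, v = np.linalg.eigh(Q)                    # step 5: largest eigenvalue
        R = quat_to_rot(v[:, np.argmax(w)])         # eigenvector -> rotation
        T = mu_p - R @ mu                           # T = μ' − Rμ
        E = np.mean(np.sum((d - (A @ R.T + T))**2, axis=1))  # step 6: E
        if E < threshold:
            break
    return R, T
```

Note that `np.linalg.eigh` returns eigenvalues in ascending order, so the last eigenvector corresponds to the largest eigenvalue; the optimal unit quaternion is taken from that eigenvector, matching step 5.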
Compensate the deviation between the current pose and the poses on the path already traveled. The rotation-translation matrix [R, T] obtained in the previous step is used to correct the current global pose, and the poses on the previously traversed path points are corrected at the same time, giving the correct global path.
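A minimal sketch of applying the recovered [R, T] to the current pose and the stored path (the data layout, each pose as a rotation matrix and world position, is an assumption; real systems would typically distribute the correction via pose-graph optimization rather than rigidly as here):

```python
import numpy as np

def correct_path(path, R, T):
    """Apply the loop-closure correction [R, T] to every stored pose.
    `path` is a list of (R_w, t_w) world poses, newest last."""
    corrected = []
    for R_w, t_w in path:
        corrected.append((R @ R_w, R @ t_w + T))   # rigid correction
    return corrected
```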
Step 5: dangerous area division. The electric power field-operation auxiliary system is connected to the setting system to obtain data on live equipment and areas; based on the current field workers' tasks, the system automatically delineates the coordinates of the work area and raises alarm reminders for field workers who move outside the work area, ensuring the personal safety of on-site operators.
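As an assumed illustration of the alarm rule (a polygonal work area on ground coordinates, positions from the method above; the ray-casting test and all names are not from the original disclosure):

```python
def inside_work_area(p, polygon):
    """Ray-casting point-in-polygon test on (x, y) ground coordinates."""
    x, y, inside = p[0], p[1], False
    n = len(polygon)
    for i in range(n):
        (x1, y1), (x2, y2) = polygon[i], polygon[(i + 1) % n]
        if (y1 > y) != (y2 > y) and x < (x2 - x1) * (y - y1) / (y2 - y1) + x1:
            inside = not inside
    return inside

def check_worker(position, work_area, alarm):
    if not inside_work_area(position, work_area):
        alarm("worker outside delineated work area")   # helmet vibration/sound
```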
Embodiment 2:

This embodiment provides a positioning system for power inspection scenarios, including:

a closed-loop screening module, configured to acquire the positioning point cloud of the inspector based on the binocular camera, and to perform closed-loop screening of the positioning point cloud against the scene point clouds in the pre-built inspection scene;

a judging module, configured to: when the closed-loop screening yields the nearest scene point cloud, perform pose compensation on the positioning point cloud based on the nearest scene point cloud and determine the position of the inspector; when the closed-loop screening cannot yield the nearest scene point cloud, determine that the position of the inspector is outside the scope of the inspection scene.
The closed-loop screening module includes a modeling module.

The modeling module includes:

an initial-pose acquisition sub-module, configured to obtain the initial pose of the binocular camera through the IMU's geomagnetometer;

a scene point cloud construction sub-module, configured to: based on the initial pose, create a key frame through the binocular camera, and construct the scene point cloud of the inspection scene to be built according to the key frame;

a closed-loop detection sub-module, configured to: store the scene point cloud of the inspection scene to be built into the set topological map and perform closed-loop detection; if the closed-loop detection is passed, repeat steps 2 and 3 a first set number of times to obtain the inspection scene; if the scene point cloud of the inspection scene to be built fails the closed-loop detection, recreate the key frame and repeat steps 2 and 3 a second set number of times until the closed-loop detection is passed.
The scene point cloud construction sub-module includes:

a creation judging unit, configured to: create a key frame through the binocular camera; if creation fails, continue creating key frames; if the key frame is created successfully, collect an image of the scene point through the key frame and obtain the three-dimensional depth information of the scene point;

a scene point cloud construction unit, configured to: perform pose correction on the three-dimensional depth information through pose estimation, and construct the scene point cloud of the inspection scene to be built through the visual odometer.
The closed-loop screening module includes:

a three-dimensional data acquisition sub-module, configured to acquire all three-dimensional coordinate data in the positioning point cloud of the inspector through the binocular camera;

a screening sub-module, configured to perform closed-loop detection based on all the three-dimensional coordinate data in the positioning point cloud of the inspector and the three-dimensional coordinate data of all scene point clouds in the inspection scene, and to take the scene point cloud that passes detection as the closest scene point cloud.

The judging module includes:

a distance function construction sub-module, configured to construct a distance function based on the screened-out nearest scene point cloud and the positioning point cloud;

a pose compensation sub-module, configured to perform pose compensation on all three-dimensional coordinates in the positioning point cloud based on the distance function and determine the position of the inspector.
The distance function construction sub-module includes:

a first point cloud construction unit, configured to screen the points in the positioning point cloud whose distance from the nearest scene point cloud is less than a set threshold, forming a first point cloud;

a covariance matrix construction unit, configured to centralize the positioning point cloud and the first point cloud to compute the positioning point cloud centroid and the first point cloud centroid, and to construct the covariance matrix based on the positioning point cloud centroid and the first point cloud centroid;

a symmetric matrix construction unit, configured to construct a symmetric matrix based on the covariance matrix and obtain the eigenvalues of the symmetric matrix;

an eigenvector solving unit, configured to select the largest of the eigenvalues of the symmetric matrix to solve for the eigenvector;

a distance function construction unit, configured to construct a rotation matrix and a translation matrix based on the eigenvector, and to obtain the distance function according to the rotation matrix and the translation matrix.
The distance function construction unit includes:

a conversion subunit, configured to convert the eigenvector into the rotation matrix according to the Rodrigues rotation formula;

a translation matrix construction unit, configured to obtain the translation matrix according to the following formula:

T = μ′ − Rμ

where T is the translation matrix, μ′ is the centroid of the first point cloud, μ is the centroid of the positioning point cloud, and R is the rotation matrix.
The covariance matrix construction unit calculates the centroid μ of the positioning point cloud by the following formula:

μ = (1/N) ∑_{i=1}^{N} A_i

where μ is the centroid of the positioning point cloud, N is the number of points in the positioning point cloud, A_i is a point in the positioning point cloud, and i = 1, 2, 3, …, N.

The centroid μ′ of the first point cloud is calculated as follows:

μ′ = (1/N′) ∑_{i=1}^{N′} d_i

where N′ is the number of points in the first point cloud, N′ = N, d_i is a point in the first point cloud, i = 1, 2, 3, …, N, x is the value on the abscissa, y is the value on the ordinate, and z is the value on the depth coordinate.
The covariance matrix construction unit constructs the covariance matrix by the following formula:

Σ_{A,d} = (1/N) ∑_{i=1}^{N} (A_i − μ)(d_i − μ′)^T

where Σ_{A,d} is the covariance matrix, N is the number of points in the positioning point cloud, A_i is a point in the positioning point cloud, d_i is a point in the first point cloud, μ′ is the centroid of the first point cloud, μ is the centroid of the positioning point cloud, T denotes transposition, and i = 1, 2, 3, …, N.
The symmetric matrix construction unit constructs the symmetric matrix by the following formula:

Q(Σ_{A,d}) = [[tr(Σ_{A,d}), Δ^T], [Δ, Σ_{A,d} + Σ_{A,d}^T − tr(Σ_{A,d})·I₃]]

where Q is the symmetric matrix, tr(Σ_{A,d}) is the trace of the matrix Σ_{A,d}, Δ is the sub-element of the symmetric matrix Q, and Δ^T is the transpose of Δ.
The distance function construction unit constructs the distance function by the following formula:

E = (1/N) ∑_{i=1}^{N} ‖d_i − (R·A_i + T)‖²

where E is the distance function, R is the rotation matrix, A_i is a point in the positioning point cloud, i = 1, 2, 3, …, N, d_i is a point in the first point cloud, T is the translation matrix, N is the number of points in the positioning point cloud, x is the value on the abscissa, y is the value on the ordinate, and z is the value on the depth coordinate.
The closed-loop screening module further includes a closed-loop link detection sub-module.

The closed-loop link detection sub-module is configured to: determine whether the scene point clouds have closed-loop links; if so, perform closed-loop detection between all the three-dimensional coordinate data in the positioning point cloud of the inspector and the closed-loop-linked point clouds among the scene point clouds in the inspection scene; if no closed-loop link exists, perform closed-loop detection between all the three-dimensional coordinate data and the three-dimensional coordinate data of all scene point clouds in the inspection scene.

The closest scene point cloud includes a scene point cloud and/or position information having a link relationship with the scene point cloud.

The system further includes a link construction module.

The link construction module is configured to construct a link between the position of the inspector and the corresponding scene point cloud in the inspection scene.
Embodiment 3:

This embodiment provides an on-site operation method using positioning for power inspection scenarios; the operation flowchart is shown in Fig. 3 and includes:

1. The user wears a wearable smart helmet, carries out on-site operations, and logs in to the on-site operation assistance system.

2. The binocular vision sensor on the smart helmet is turned on to obtain depth-of-field point cloud data.

3. The point cloud array is used to compute personnel positioning coordinates in real time.

4. Closed-loop detection and pose correction algorithms optimize the accuracy of the personnel positioning coordinates, yielding real-time positioning information for on-site workers.

5. The system connects to the D6000 production management system to obtain the coordinate information of live equipment and areas within the substation.

6. The system determines in real time whether any worker has moved beyond the safe working range; endangered personnel beyond the working range are promptly reminded that they are in a dangerous area through the smart helmet's vibration/sound alarm, and background monitoring personnel are notified in time, protecting the personal safety of on-site workers to the greatest extent.
This application relates to a positioning method fusing binocular pose and visual information for power inspection scenarios. It uses the closed-loop detection technique of deep visual scene recognition to overcome the visual odometer's inability to establish an initial position and its shortcoming of open-loop-only pose estimation that easily accumulates positioning errors. Through closed-loop detection, the currently collected scene image is matched in real time against the scenes in the map library, assisting the binocular visual pose adjustment algorithm and yielding high-precision topological positioning results. This greatly improves the navigation and positioning accuracy of the depth vision sensor in the indoor substation environment and enables precise positioning of substation personnel, so that dangerous areas can be delimited and personnel's on-site working positions safely managed, greatly improving the personal safety of on-site operators.
The embodiments of the present application may be provided as methods, systems, or computer program products. Therefore, this application may take the form of an entirely hardware embodiment, an entirely software embodiment, or an embodiment combining software and hardware. Moreover, this application may take the form of a computer program product implemented on one or more computer-usable storage media containing computer-usable program code (including disk storage, portable Compact Disc Read-Only Memory (CD-ROM), optical storage, etc.).

This application is described with reference to flowcharts and/or block diagrams of the methods, devices (systems), and computer program products according to the embodiments of this application. Each process and/or block in the flowcharts and/or block diagrams, and combinations of processes and/or blocks in the flowcharts and/or block diagrams, may be implemented by computer program instructions. These computer program instructions may be provided to a processor of a general-purpose computer, a special-purpose computer, an embedded processor, or other programmable data processing equipment to produce a machine, so that the instructions executed by the processor of the computer or other programmable data processing equipment produce a device for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.

These computer program instructions may also be stored in a computer-readable memory capable of directing a computer or other programmable data processing equipment to work in a specific manner, so that the instructions stored in the computer-readable memory produce an article of manufacture including an instruction device that implements the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.

These computer program instructions may also be loaded onto a computer or other programmable data processing equipment, so that a series of operational steps are executed on the computer or other programmable equipment to produce computer-implemented processing, whereby the instructions executed on the computer or other programmable equipment provide steps for implementing the functions specified in one or more processes of the flowchart and/or one or more blocks of the block diagram.

Claims (16)

  1. A positioning method for power inspection scenarios, comprising:
    acquiring a positioning point cloud of an inspector based on a binocular camera, and performing closed-loop screening of the positioning point cloud against scene point clouds in a pre-built inspection scene;
    in a case where the closed-loop screening yields a nearest scene point cloud, performing pose compensation on the positioning point cloud based on the nearest scene point cloud, and determining the position of the inspector; in a case where the closed-loop screening cannot yield the nearest scene point cloud, the position of the inspector is outside the scope of the inspection scene.
  2. The method according to claim 1, wherein the construction of the inspection scene comprises:
    obtaining an initial pose of the binocular camera through an Inertial Measurement Unit (IMU) geomagnetometer;
    based on the initial pose, creating a key frame through the binocular camera, and constructing a scene point cloud of the inspection scene to be built according to the key frame;
    storing the scene point cloud of the inspection scene to be built into a set topological map and performing closed-loop detection; in a case where the scene point cloud of the inspection scene to be built passes the closed-loop detection, repeating, a first set number of times, the steps of creating a key frame through the binocular camera based on the initial pose and constructing the scene point cloud of the inspection scene to be built according to the key frame, and of storing the scene point cloud of the inspection scene to be built into the set topological map and performing closed-loop detection, to obtain the inspection scene; in a case where the scene point cloud of the inspection scene to be built fails the closed-loop detection, recreating a key frame and repeating, a second set number of times, the steps of creating a key frame through the binocular camera based on the initial pose and constructing the scene point cloud of the inspection scene to be built according to the key frame, and of storing the scene point cloud of the inspection scene to be built into the set topological map and performing closed-loop detection, until the closed-loop detection is passed.
  3. The method according to claim 2, wherein creating a key frame through the binocular camera and constructing the scene point cloud of the inspection scene to be built according to the key frame comprises:
    creating the key frame through the binocular camera; in a case where the key frame is created successfully, collecting an image of a scene point through the key frame and obtaining three-dimensional depth information of the scene point;
    performing pose correction on the three-dimensional depth information through pose estimation, and constructing the scene point cloud of the inspection scene to be built through a visual odometer.
  4. The method according to claim 1, wherein acquiring the positioning point cloud of the inspector based on the binocular camera and performing closed-loop screening of the positioning point cloud against the scene point clouds in the pre-built inspection scene comprises:
    acquiring all three-dimensional coordinate data in the positioning point cloud of the inspector through the binocular camera;
    performing closed-loop detection based on all the three-dimensional coordinate data in the positioning point cloud of the inspector and the three-dimensional coordinate data of all scene point clouds in the inspection scene, and taking the scene point cloud that passes detection as the closest scene point cloud.
  5. The method according to claim 4, wherein performing pose compensation on the positioning point cloud based on the nearest scene point cloud to determine the position of the inspector comprises:
    constructing a distance function based on the screened-out nearest scene point cloud and the positioning point cloud;
    performing the pose compensation on all three-dimensional coordinates in the positioning point cloud based on the distance function, and determining the position of the inspector.
  6. The method according to claim 5, wherein the construction of the distance function comprises:
    screening points in the positioning point cloud whose distance from the nearest scene point cloud is less than a set threshold, to form a first point cloud;
    centralizing the positioning point cloud and the first point cloud to compute a positioning point cloud centroid and a first point cloud centroid, and constructing a covariance matrix based on the positioning point cloud centroid and the first point cloud centroid;
    constructing a symmetric matrix based on the covariance matrix, and obtaining eigenvalues of the symmetric matrix;
    selecting the largest of the eigenvalues of the symmetric matrix to solve for an eigenvector;
    constructing a rotation matrix and a translation matrix based on the eigenvector, and obtaining the distance function according to the rotation matrix and the translation matrix.
  7. The method according to claim 6, wherein constructing the rotation matrix and the translation matrix based on the eigenvector comprises:
    converting the eigenvector into the rotation matrix according to the Rodrigues rotation formula;
    obtaining the translation matrix according to the following formula:
    T = μ′ − Rμ
    where T is the translation matrix, μ′ is the first point cloud centroid, μ is the positioning point cloud centroid, and R is the rotation matrix.
  8. The method according to claim 7, wherein the positioning point cloud centroid μ is calculated as follows:
    μ = (1/N) ∑_{i=1}^{N} A_i
    where μ is the positioning point cloud centroid, N is the number of points in the positioning point cloud, A_i is a point in the positioning point cloud, and i = 1, 2, 3, …, N;
    the first point cloud centroid μ′ is calculated as follows:
    μ′ = (1/N′) ∑_{i=1}^{N′} d_i
    where N′ is the number of points in the first point cloud, N′ = N, d_i is a point in the first point cloud, i = 1, 2, 3, …, N, x is the value on the abscissa, y is the value on the ordinate, and z is the value on the depth coordinate.
  9. The method according to claim 6, wherein the covariance matrix is constructed as follows:
    Σ_{A,d} = (1/N) ∑_{i=1}^{N} (A_i − μ)(d_i − μ′)^T
    where Σ_{A,d} is the covariance matrix, N is the number of points in the positioning point cloud, A_i is a point in the positioning point cloud, d_i is a point in the first point cloud, μ′ is the first point cloud centroid, μ is the positioning point cloud centroid, T denotes transposition, and i = 1, 2, 3, …, N.
  10. The method according to claim 9, wherein the symmetric matrix is constructed as follows:
    Q(Σ_{A,d}) = [[tr(Σ_{A,d}), Δ^T], [Δ, Σ_{A,d} + Σ_{A,d}^T − tr(Σ_{A,d})·I₃]]
    where Q is the symmetric matrix, tr(Σ_{A,d}) is the trace of the matrix Σ_{A,d}, Δ is the sub-element of the symmetric matrix Q, and Δ^T is the transpose of Δ.
  11. The method according to claim 6, wherein the distance function is constructed as follows:
    E = (1/N) ∑_{i=1}^{N} ‖d_i − (R·A_i + T)‖²
    where E is the distance function, R is the rotation matrix, A_i is a point in the positioning point cloud, i = 1, 2, 3, …, N, d_i is a point in the first point cloud, T is the translation matrix, N is the number of points in the positioning point cloud, x is the value on the abscissa, y is the value on the ordinate, and z is the value on the depth coordinate.
  12. The method according to claim 4, wherein before performing closed-loop detection based on all the three-dimensional coordinate data in the positioning point cloud of the inspector and the three-dimensional coordinate data of all scene point clouds in the inspection scene and taking the scene point cloud that passes detection as the closest scene point cloud, the method further comprises:
    determining whether the scene point clouds in the inspection scene have closed-loop links; in a case where closed-loop links exist, performing closed-loop detection between all the three-dimensional coordinate data in the positioning point cloud of the inspector and the closed-loop-linked point clouds among the scene point clouds in the inspection scene; in a case where no closed-loop link exists, performing closed-loop detection between all the three-dimensional coordinate data in the positioning point cloud of the inspector and the three-dimensional coordinate data of all scene point clouds in the inspection scene.
  13. The method according to claim 12, wherein
    the closest scene point cloud comprises at least one of the following: a scene point cloud, and position information having a link relationship with the scene point cloud.
  14. The method according to claim 1, wherein after determining the position of the inspector, the method further comprises:
    constructing a link between the position of the inspector and the corresponding scene point cloud in the inspection scene.
  15. A positioning system for power inspection scenarios, comprising:
    a closed-loop screening module, configured to acquire a positioning point cloud of an inspector based on a binocular camera, and to perform closed-loop screening of the positioning point cloud against scene point clouds in a pre-built inspection scene;
    a judging module, configured to: in a case where the closed-loop screening yields a nearest scene point cloud, perform pose compensation on the positioning point cloud based on the nearest scene point cloud and determine the position of the inspector; in a case where the closed-loop screening cannot yield the nearest scene point cloud, determine that the position of the inspector is outside the scope of the inspection scene.
  16. The system according to claim 15, wherein the closed-loop screening module comprises a modeling module;
    the modeling module comprises an initial-pose acquisition sub-module, a scene point cloud construction sub-module, and a closed-loop detection sub-module;
    the initial-pose acquisition sub-module is configured to obtain an initial pose of the binocular camera through an Inertial Measurement Unit (IMU) geomagnetometer;
    the scene point cloud construction sub-module is configured to: based on the initial pose, create a key frame through the binocular camera, and construct a scene point cloud of the inspection scene to be built according to the key frame;
    the closed-loop detection sub-module is configured to: store the scene point cloud of the inspection scene to be built into a set topological map and perform closed-loop detection; in a case where the scene point cloud of the inspection scene to be built passes the closed-loop detection, repeat, a first set number of times, the steps of creating a key frame through the binocular camera based on the initial pose and constructing the scene point cloud of the inspection scene to be built according to the key frame, and of storing the scene point cloud of the inspection scene to be built into the set topological map and performing closed-loop detection, to obtain the inspection scene; in a case where the scene point cloud of the inspection scene to be built fails the closed-loop detection, recreate a key frame and repeat, a second set number of times, the steps of creating a key frame through the binocular camera based on the initial pose and constructing the scene point cloud of the inspection scene to be built according to the key frame, and of storing the scene point cloud of the inspection scene to be built into the set topological map and performing closed-loop detection, until the closed-loop detection is passed.
PCT/CN2020/103175 2019-08-29 2020-07-21 Positioning method and system for power inspection scenarios WO2021036587A1 (zh)

Applications Claiming Priority (2)

Application Number Priority Date Filing Date Title
CN201910805641.7A CN110702101A (zh) 2019-08-29 2019-08-29 A positioning method and system for power inspection scenarios
CN201910805641.7 2019-08-29

Publications (1)

Publication Number Publication Date
WO2021036587A1 (zh)

Family

ID=69194114

Family Applications (1)

Application Number Title Priority Date Filing Date
PCT/CN2020/103175 WO2021036587A1 (zh) 2019-08-29 2020-07-21 Positioning method and system for power inspection scenarios

Country Status (2)

Country Link
CN (1) CN110702101A (zh)
WO (1) WO2021036587A1 (zh)

Families Citing this family (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111443091B * 2020-04-08 2023-07-25 中国电力科学研究院有限公司 Method for judging defects in cable line tunnel engineering
CN112612002A * 2020-12-01 2021-04-06 北京天地玛珂电液控制系统有限公司 Digital construction system and method for the full-working-face scene space in underground coal mines
CN113421356B * 2021-07-01 2023-05-12 北京华信傲天网络技术有限公司 Inspection system and method for equipment in complex environments
CN114267146A * 2021-12-22 2022-04-01 国网浙江省电力有限公司宁波供电公司 Tunnel monitoring method and device, storage medium, and electronic equipment
CN116233219B * 2022-11-04 2024-04-30 国电湖北电力有限公司鄂坪水电厂 Inspection method and device based on a personnel positioning algorithm

Patent Citations (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US20180113469A1 * 2016-10-26 2018-04-26 The Charles Stark Draper Laboratory, Inc. Vision-Aided Inertial Navigation with Loop Closure
CN106840148A * 2017-01-24 2017-06-13 东南大学 Wearable positioning and path guidance method based on a binocular camera for outdoor working environments
CN107796397A * 2017-09-14 2018-03-13 杭州迦智科技有限公司 Robot binocular vision positioning method, device, and storage medium
CN109558879A * 2017-09-22 2019-04-02 华为技术有限公司 Visual SLAM method and device based on point and line features
CN107657640A * 2017-09-30 2018-02-02 南京大典科技有限公司 Intelligent patrol and inspection management method based on ORB-SLAM
CN109658373A * 2017-10-10 2019-04-19 中兴通讯股份有限公司 Inspection method, equipment, and computer-readable storage medium
CN109682385A * 2018-11-05 2019-04-26 天津大学 Method for simultaneous localization and mapping based on ORB features

Also Published As

Publication number Publication date
CN110702101A (zh) 2020-01-17

Similar Documents

Publication Publication Date Title
WO2021036587A1 (zh) Positioning method and system for power inspection scenarios
CN107179086B (zh) Laser-radar-based mapping method, device, and system
CN106840148B (zh) Wearable positioning and path guidance method based on a binocular camera for outdoor working environments
Momeni-k et al. Height estimation from a single camera view
CN110751123B (zh) Monocular visual-inertial odometry system and method
CN112419501A (zh) Ground-air heterogeneous collaborative map construction method
Hertzberg et al. Experiences in building a visual SLAM system from open source components
CN114897988B (zh) Multi-camera positioning method, device, and equipment for articulated vehicles
CN115371673A (zh) Binocular-camera target positioning method based on Bundle Adjustment in unknown environments
Muehlfellner et al. Evaluation of fisheye-camera based visual multi-session localization in a real-world scenario
CN114972668A (zh) Laser SLAM method and system based on height information
Kostavelis et al. SPARTAN system: Towards a low-cost and high-performance vision architecture for space exploratory rovers
KR20230082497A (ko) Method and apparatus for automated real-time inspection of structures using three-dimensional point clouds
CN117470259A (zh) Parent-child air-ground collaborative multi-sensor-fusion three-dimensional mapping system
CN116957360A (zh) UAV-based spatial observation and reconstruction method and system
Ma et al. Semantic geometric fusion multi-object tracking and lidar odometry in dynamic environment
Mazzei et al. A lasers and cameras calibration procedure for VIAC multi-sensorized vehicles
CN113421332A (zh) Three-dimensional reconstruction method, device, electronic equipment, and storage medium
Xue et al. Rough registration of BIM element projection for construction progress tracking
Choi et al. Autonomous homing based on laser-camera fusion system
Chessa et al. Bio-inspired active vision for obstacle avoidance
Zhang et al. Point cloud registration with 2D and 3D fusion information on mobile robot integrated vision system
CN116704138B (zh) Method and device for building three-dimensional models from oblique photography
Doherty et al. Ensuring Accuracy in Auto-Bounding Box Generation for the Autonomous Aerial Refueling Mission
CN115700507B (zh) Map updating method and device

Legal Events

Date Code Title Description
121 Ep: the epo has been informed by wipo that ep was designated in this application

Ref document number: 20859042

Country of ref document: EP

Kind code of ref document: A1

NENP Non-entry into the national phase

Ref country code: DE

122 Ep: pct application non-entry in european phase

Ref document number: 20859042

Country of ref document: EP

Kind code of ref document: A1