CN114638909A - Substation semantic map construction method based on laser SLAM and visual fusion - Google Patents

Substation semantic map construction method based on laser SLAM and visual fusion

Info

Publication number
CN114638909A
Authority
CN
China
Prior art keywords
information
map
laser
transformer substation
laser radar
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202210295464.4A
Other languages
Chinese (zh)
Inventor
吴秋轩
周忠容
曾平良
田杨阳
毛万登
孟秦源
张波涛
袁少光
耿俊成
赵健
吕强
仲朝亮
罗艳斌
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
State Grid Corp of China SGCC
Hangzhou Dianzi University
Electric Power Research Institute of State Grid Henan Electric Power Co Ltd
Original Assignee
State Grid Corp of China SGCC
Hangzhou Dianzi University
Electric Power Research Institute of State Grid Henan Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by State Grid Corp of China SGCC, Hangzhou Dianzi University, Electric Power Research Institute of State Grid Henan Electric Power Co Ltd filed Critical State Grid Corp of China SGCC
Priority to CN202210295464.4A
Publication of CN114638909A
Legal status: Pending

Classifications

    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 11/00 2D [Two Dimensional] image generation
    • G06T 11/003 Reconstruction from projections, e.g. tomography
    • G06T 11/006 Inverse problem, transformation from projection-space into object-space, e.g. transform methods, back-projection, algebraic methods
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06F ELECTRIC DIGITAL DATA PROCESSING
    • G06F 18/00 Pattern recognition
    • G06F 18/20 Analysing
    • G06F 18/25 Fusion techniques
    • G06F 18/251 Fusion techniques of input or preprocessed data
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 7/00 Image analysis
    • G06T 7/80 Analysis of captured images to determine intrinsic or extrinsic camera parameters, i.e. camera calibration
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10028 Range image; Depth image; 3D point clouds
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/10 Image acquisition modality
    • G06T 2207/10032 Satellite or aerial image; Remote sensing
    • G06T 2207/10044 Radar image
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20081 Training; Learning
    • G PHYSICS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06T IMAGE DATA PROCESSING OR GENERATION, IN GENERAL
    • G06T 2207/00 Indexing scheme for image analysis or image enhancement
    • G06T 2207/20 Special algorithmic details
    • G06T 2207/20084 Artificial neural networks [ANN]

Landscapes

  • Engineering & Computer Science (AREA)
  • Theoretical Computer Science (AREA)
  • General Physics & Mathematics (AREA)
  • Physics & Mathematics (AREA)
  • Computer Vision & Pattern Recognition (AREA)
  • Data Mining & Analysis (AREA)
  • General Engineering & Computer Science (AREA)
  • Algebra (AREA)
  • Evolutionary Computation (AREA)
  • Bioinformatics & Computational Biology (AREA)
  • Bioinformatics & Cheminformatics (AREA)
  • Artificial Intelligence (AREA)
  • Life Sciences & Earth Sciences (AREA)
  • Evolutionary Biology (AREA)
  • Mathematical Analysis (AREA)
  • Mathematical Optimization (AREA)
  • Mathematical Physics (AREA)
  • Pure & Applied Mathematics (AREA)
  • Image Analysis (AREA)
  • Length Measuring Devices By Optical Means (AREA)

Abstract

The invention discloses a transformer substation semantic map construction method based on laser SLAM and vision fusion, comprising the following steps: S1-1, performing intrinsic calibration of the depth camera and joint extrinsic calibration of the laser radar and the camera; S1-2, synchronously preprocessing the data acquired by the depth camera and the laser radar; S2-1, building a map model of the operation and maintenance environment from the point cloud data acquired by the laser radar and the odometer information; S2-2, obtaining an RGBD image from the depth camera, performing target recognition through deep learning, understanding the scene, and obtaining semantic information of the RGBD image; S2-3, performing coordinate conversion, projecting the targets identified in step S2-2 onto the grid map, and providing environment cognition information for the transformer substation; and S3, repeating step S2 to complete the construction of the semantic map. With this technical scheme, the mapping process adapts well to different weather and illumination conditions, and the algorithm effectively removes laser motion distortion, improves mapping accuracy and reduces accumulated error.

Description

Substation semantic map construction method based on laser SLAM and visual fusion
Technical Field
The invention relates to the technical field of substation SLAM map construction, in particular to a substation semantic map construction method based on laser SLAM and vision fusion.
Background
With the proposal of the smart grid concept, countries around the world are competing to develop intelligent power system equipment based on advanced sensors, measurement technologies, equipment technologies, control methods and decision support systems, so as to ensure intelligent, informatized, economical and environmentally friendly operation of the power grid; as a typical representative of such intelligent equipment, the power inspection robot has been widely applied in transformer substations. A substation contains numerous high-voltage devices, its working environment is complex, and the required safety level is high, so the inspection robot must travel strictly along designated safe routes during inspection. Meanwhile, the drawbacks of the traditional manual inspection mode have become increasingly prominent: high labor intensity, low inspection efficiency, high labor cost and, in particular, high safety risk, which can no longer meet the development needs of modern industrial systems. Any action beyond the safe travel area, such as touching high-voltage equipment or instruments and meters, may cause great damage to the robot itself or to the substation, and may even paralyze the power supply system of the whole substation. Therefore, the robot's cognition of its surroundings must be improved so that it can understand high-level semantic information in the environment, and constructing a semantic map containing such semantic information is an effective solution.
Disclosure of Invention
In view of the defects of the prior art, the invention provides a transformer substation semantic map construction method based on laser SLAM and vision fusion, which improves target detection accuracy and the semantic richness of the map in the environment perception system and meets the target detection requirements of the complex inspection environment of a transformer substation.
In order to solve the technical problems, the technical scheme of the invention is as follows:
a transformer substation semantic map construction method based on laser SLAM and visual fusion comprises the following steps:
S1, acquisition and preprocessing of sensor data
S1-1, performing intrinsic calibration of the depth camera and joint extrinsic calibration of the laser radar and the camera;
S1-2, synchronously preprocessing the data acquired by the depth camera and the laser radar;
S2, construction of the transformer substation semantic map model
S2-1, building a map model of the operation and maintenance environment from the point cloud data acquired by the laser radar and the odometer information;
S2-2, obtaining an RGBD image from the depth camera, performing target recognition through deep learning, understanding the scene, and obtaining semantic information of the RGBD image;
S2-3, performing coordinate conversion, projecting the targets identified in step S2-2 onto the grid map, and providing environment cognition information for the transformer substation;
and S3, repeating step S2 to complete the construction of the semantic map.
Preferably, in step S1-1, the intrinsic calibration of the depth camera is performed as follows: the correspondence between the feature points detected on the calibration board image and those on the actual calibration board is determined, and the intrinsic parameters and distortion coefficients of the depth camera are obtained.
Preferably, in step S1-1, the joint extrinsic calibration of the laser radar and the camera is performed as follows: three-dimensional points detected by the laser radar are matched with the corresponding two-dimensional points detected by the depth camera, from which the laser radar-depth camera extrinsic parameters are jointly calibrated.
Preferably, in step S1-2, the clock sources of the laser radar and the depth camera are first unified in hardware, and the data are then processed with a time synchronizer.
Preferably, in step S2-1, the laser radar continuously emits laser beams and receives the reflections while rotating at high speed; the distance information of obstacles within the working range of the substation is collected and combined into spatial point cloud information; the substation map information is obtained through filtering, map splicing and loop closure detection; and finally the operation and maintenance environment is modeled from the obtained substation map information.
Preferably, the map modeling of the operation and maintenance environment is implemented with the cartographer algorithm. In its data flow, laser radar and odometer data are first collected from the sensors; each frame of laser radar data is down-sampled by a filter and then enters the local SLAM for scan matching to obtain a pose estimate, which is then optimized by fusing the odometer data.
Preferably, each frame of laser radar data is passed through a motion filter that removes point clouds whose position change is too small or whose time interval is too short; a scan that passes the filter is inserted into the current submap. When a submap is completed and no longer receives new scans, it is added to the global SLAM to form global constraints that participate in back-end loop closure detection.
Preferably, in step S2-2, after the RGBD image is acquired from the depth camera, the deep learning module identifies the semantic information of each object and its position in the image using the YOLO v3 algorithm; the angle and position of the object relative to the depth camera are then obtained from the image depth information provided by the depth camera, and the scale information of the target is understood with a scene understanding method.
Preferably, the scene understanding method comprises the following steps: exploiting the long and straight character of the inspection environment, the characteristic lines of the detection target are obtained by analyzing the relation between the vanishing point and the three line groups in the horizontal, vertical and depth directions obtained during vanishing point detection; the geometric parameters of the specific target are then obtained using cross-ratio invariance and its relation to the target cuboid, thereby enabling coordinate conversion and map annotation.
Preferably, in step S2-3, the pixel plane coordinates are related to the camera coordinate system as follows, where $(u, v, 1)$ are the homogeneous coordinates of the target in the pixel coordinate system, $u_0$ and $v_0$ describe the translation between the origin of the pixel coordinate system and the optical axis, $Z_c$ is the object depth measured by the depth camera, and $f_x$ and $f_y$ are the camera focal lengths in the x and y directions, obtained by calibration:

$$ Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} $$
A translation and rotation relation exists between the camera coordinate system and the robot coordinate system; the rotation matrix $R$ and the translation vector $T$ are camera extrinsic parameters that can be set manually, and the coordinates of the target object in the robot coordinate system are obtained as

$$ \begin{bmatrix} X_r \\ Y_r \\ Z_r \end{bmatrix} = R \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} + T $$
After coordinate conversion, different types of detection targets are represented by cubes of different colors and marked on the map using a MarkerArray, completing the construction of the semantic map.
The invention has the following characteristics and beneficial effects:
By adopting the above technical scheme:
1) The mature, high-precision cartographer mapping algorithm is applied with a graph-optimization framework; the mapping process adapts well to different weather and illumination conditions, and the algorithm effectively removes laser motion distortion, improves mapping accuracy and reduces accumulated error.
2) The target detection algorithm based on deep learning realizes target tracking of the detection frame, establishes a tracking sequence of a detection result, and adds the condition of whether the detection target is tracked into the fusion algorithm, thereby further improving the target detection precision and reducing the false detection rate.
3) The mapping method based on laser and vision fusion runs under the ROS robot operating system; point cloud and image data are obtained in real time, the fusion algorithm obtains target detection information in real time and performs tracking and optimal matching, and its running speed matches the data acquisition rate of the laser radar, meeting the real-time requirement of the substation scene.
4) The method addresses the requirements of reconstructing the complex operation and maintenance environment of the smart grid and of highly intelligent visual scene perception: it intelligently identifies target images in complex environments such as complex terrain and large spatial scales, refines a machine-vision-based method for analyzing and understanding power grid operation and maintenance scenes in natural environments, and realizes a knowledge representation mechanism for cognitive information in the visual environment model.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings used in the description of the embodiments or the prior art will be briefly described below, and it is obvious that the drawings in the following description are only some embodiments of the present invention, and for those skilled in the art, other drawings can be obtained according to these drawings without creative efforts.
Fig. 1 is an architecture diagram of a substation semantic map construction method based on laser SLAM and visual fusion in an embodiment of the present invention.
Fig. 2 is a flowchart for understanding a substation scene based on vanishing points according to an embodiment of the present invention.
FIG. 3 is a fused image according to an embodiment of the present invention.
Detailed Description
It should be noted that the embodiments and features of the embodiments may be combined with each other without conflict.
In the description of the present invention, it is to be understood that the terms "center", "longitudinal", "lateral", "up", "down", "front", "back", "left", "right", "vertical", "horizontal", "top", "bottom", "inner", "outer", and the like, indicate orientations or positional relationships based on those shown in the drawings, and are used only for convenience in describing the present invention and for simplicity in description, and do not indicate or imply that the referenced devices or elements must have a particular orientation, be constructed and operated in a particular orientation, and thus, are not to be construed as limiting the present invention. Furthermore, the terms "first", "second", etc. are used for descriptive purposes only and are not to be construed as indicating or implying relative importance or implicitly indicating the number of technical features indicated. Thus, a feature defined as "first," "second," etc. may explicitly or implicitly include one or more of that feature. In the description of the present invention, "a plurality" means two or more unless otherwise specified.
In the description of the present invention, it should be noted that, unless otherwise explicitly specified or limited, the terms "mounted," "connected," and "connected" are to be construed broadly, e.g., as meaning either a fixed connection, a removable connection, or an integral connection; can be mechanically or electrically connected; they may be connected directly or indirectly through intervening media, or they may be interconnected between two elements. The specific meaning of the above terms in the present invention can be understood by those of ordinary skill in the art through specific situations.
The invention provides a transformer substation semantic map construction method based on laser SLAM and visual fusion, which comprises the following steps as shown in figure 1:
S1, acquisition and preprocessing of sensor data
S1-1, performing intrinsic calibration of the depth camera and joint extrinsic calibration of the laser radar and the camera;
S1-2, synchronously preprocessing the data acquired by the depth camera and the laser radar;
S2, construction of the transformer substation semantic map model
S2-1, building a map model of the operation and maintenance environment from the point cloud data acquired by the laser radar and the odometer information;
S2-2, obtaining an RGBD image from the depth camera, performing target recognition through deep learning, understanding the scene, and obtaining semantic information of the RGBD image;
S2-3, performing coordinate conversion, projecting the targets identified in step S2-2 onto the grid map, and providing environment cognition information for the transformer substation;
and S3, repeating step S2 to complete the construction of the semantic map.
In the above technical scheme:
1) The cartographer-based mapping method is a mature, high-precision algorithm in current laser SLAM and adopts a graph-optimization framework. During mapping it adapts well to different weather and illumination conditions, effectively removes laser motion distortion, improves mapping accuracy and reduces accumulated error.
2) The target detection algorithm based on deep learning realizes target tracking of the detection frame, establishes a tracking sequence of a detection result, and adds the condition of whether the detection target is tracked into the fusion algorithm, thereby further improving the target detection precision and reducing the false detection rate.
3) The mapping method based on laser and vision fusion runs under the ROS robot operating system; point cloud and image data are obtained in real time, and the fusion algorithm obtains target detection information in real time and performs tracking and optimal matching. Its running speed matches the data acquisition rate of the laser radar, meeting the real-time requirement of the substation scene.
In step S1-1, the intrinsic calibration of the depth camera is performed as follows: the correspondence between the feature points detected on the calibration board image and those on the actual calibration board is determined, and the intrinsic parameters and distortion coefficients of the depth camera are obtained.
It will be appreciated that the actual calibration plate described above may be obtained by manual measurement.
Specifically, several images are acquired from different angles by moving the camera; the feature points are then detected by an algorithm, and their correspondence with the feature points in the pixel coordinate system is solved to obtain the intrinsic parameters and distortion coefficients of the camera.
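As an illustration of this checkerboard-style intrinsic calibration, the following sketch uses OpenCV; the board layout (9×6 inner corners), square size and image path are assumptions for illustration, not values from the patent.

```python
import glob
import cv2
import numpy as np

pattern_size = (9, 6)      # inner corners of the assumed checkerboard
square_size = 0.025        # side length of one square in metres (assumption)

# Ideal board points on the Z = 0 plane of the board coordinate system.
objp = np.zeros((pattern_size[0] * pattern_size[1], 3), np.float32)
objp[:, :2] = np.mgrid[0:pattern_size[0], 0:pattern_size[1]].T.reshape(-1, 2) * square_size

obj_points, img_points = [], []
for path in glob.glob("calib/*.png"):              # images taken from different camera poses
    gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
    found, corners = cv2.findChessboardCorners(gray, pattern_size)
    if found:
        corners = cv2.cornerSubPix(
            gray, corners, (11, 11), (-1, -1),
            (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 30, 1e-3))
        obj_points.append(objp)
        img_points.append(corners)

# K contains fx, fy, u0, v0; dist contains the distortion coefficients.
rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(
    obj_points, img_points, gray.shape[::-1], None, None)
print("reprojection RMS:", rms)
print("intrinsic matrix:\n", K)
print("distortion coefficients:", dist.ravel())
```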
Specifically, the joint extrinsic calibration of the laser radar and the camera is performed as follows: three-dimensional points detected by the laser radar are matched with the corresponding two-dimensional points detected by the depth camera, from which the laser radar-depth camera extrinsic parameters are jointly calibrated.
It can be understood that a calibration object is needed in this process; the corresponding points can be selected manually or computed, and once the positions of several three-dimensional space points and their projection points are known, the transformation of the depth camera relative to the laser radar coordinate system is solved by constructing a PnP problem.
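A minimal sketch of solving such a PnP problem with OpenCV is shown below; the 3-D points, the ground-truth pose used to synthesize their projections, and the intrinsic values are placeholders used only to make the example self-consistent.

```python
import cv2
import numpy as np

# Assumed intrinsics from the previous calibration step (placeholder values).
K = np.array([[615.0, 0.0, 320.0],
              [0.0, 615.0, 240.0],
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)

# Hypothetical 3-D points of a calibration target measured in the lidar frame.
lidar_pts = np.array([[1.0, 0.2, 0.1], [1.2, -0.3, 0.0], [0.9, 0.0, 0.4],
                      [1.5, 0.4, -0.2], [1.1, -0.1, 0.3], [1.3, 0.25, 0.15]])

# Synthesize the matching pixel points with a known lidar->camera pose so the
# example is self-consistent; in practice these are picked in the camera image.
rvec_true = np.array([0.05, -0.02, 0.01])
tvec_true = np.array([0.1, -0.05, 0.2])
pixel_pts, _ = cv2.projectPoints(lidar_pts, rvec_true, tvec_true, K, dist)

# Recover the extrinsics: rotation R and translation t from the lidar frame to the camera frame.
ok, rvec, tvec = cv2.solvePnP(lidar_pts, pixel_pts, K, dist)
R, _ = cv2.Rodrigues(rvec)
print("solved:", ok)
print("R:\n", R)
print("t:", tvec.ravel())
```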
In step S1-2, it is further provided that the clock sources of the laser radar and the depth camera are first unified in hardware.
Specifically, under the same clock the laser radar and the depth camera can subsequently and independently acquire data carrying their timestamps.
Further, a time synchronizer is used for processing.
Specifically, the Time Synchronizer provided by the ROS system is used for processing: an acceptance policy is established by simultaneously subscribing to the laser radar point cloud topic "/sensor_msgs/PointCloud2" and the image topics "/camera/color/image_raw" and "/camera/aligned_depth_to_color/image_raw". When the difference between the point cloud timestamp and the image timestamps is smaller than a threshold, the messages from the several topics are passed together into the same callback function by the synchronizer sync.
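A minimal rospy sketch of this synchronization is given below, assuming the topic names quoted above and an ApproximateTimeSynchronizer whose slop plays the role of the timestamp threshold (the threshold value itself is an assumption):

```python
import rospy
import message_filters
from sensor_msgs.msg import Image, PointCloud2

def synced_callback(cloud, color, depth):
    # The three messages arrive with timestamps that differ by less than `slop`,
    # so they can be treated as one synchronized lidar-camera measurement.
    rospy.loginfo("cloud %.3f  color %.3f  depth %.3f",
                  cloud.header.stamp.to_sec(),
                  color.header.stamp.to_sec(),
                  depth.header.stamp.to_sec())

rospy.init_node("lidar_camera_sync")

cloud_sub = message_filters.Subscriber("/sensor_msgs/PointCloud2", PointCloud2)
color_sub = message_filters.Subscriber("/camera/color/image_raw", Image)
depth_sub = message_filters.Subscriber("/camera/aligned_depth_to_color/image_raw", Image)

# Messages whose stamps differ by less than `slop` seconds are grouped together.
sync = message_filters.ApproximateTimeSynchronizer(
    [cloud_sub, color_sub, depth_sub], queue_size=10, slop=0.05)
sync.registerCallback(synced_callback)

rospy.spin()
```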
According to a further arrangement of the invention, in step S2-1 the laser radar continuously emits laser beams and receives the reflections while rotating at high speed; the distance information of obstacles within the working range of the transformer substation is collected and combined into spatial point cloud information; the substation map information is obtained through filtering, map splicing and loop closure detection; and finally the operation and maintenance environment is modeled from the obtained substation map information.
Furthermore, the map modeling of the operation and maintenance environment is implemented with the cartographer algorithm. In its data flow, laser radar and odometer data are first collected from the sensors; each frame of laser radar data is down-sampled by a filter and then enters the local SLAM for scan matching to obtain a pose estimate, which is then optimized by fusing the odometer data.
Each frame of laser radar data is passed through a motion filter that removes point clouds whose position change is too small or whose time interval is too short; a scan that passes the filter is inserted into the current submap. When a submap is completed and no longer receives new scans, it is added to the global SLAM to form global constraints that participate in back-end loop closure detection.
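The following sketch illustrates such a motion filter; the thresholds are assumptions chosen for illustration and mirror the kind of distance/angle/time limits cartographer exposes, not values taken from the patent.

```python
import math

# Illustrative thresholds (assumptions): a scan is kept only if the robot moved,
# turned, or waited long enough since the last scan that was inserted.
MIN_DISTANCE_M = 0.10
MIN_ANGLE_RAD = 0.02
MIN_INTERVAL_S = 0.30

def keep_scan(prev_pose, prev_stamp, pose, stamp):
    """pose = (x, y, yaw). Return True if this scan should be inserted into the submap."""
    dt = stamp - prev_stamp
    dxy = math.hypot(pose[0] - prev_pose[0], pose[1] - prev_pose[1])
    dyaw = abs(pose[2] - prev_pose[2])
    return dt > MIN_INTERVAL_S or dxy > MIN_DISTANCE_M or dyaw > MIN_ANGLE_RAD

# Example: a scan taken 0.1 s later with only 2 cm of motion is dropped as redundant.
print(keep_scan((0.0, 0.0, 0.0), 0.0, (0.02, 0.0, 0.0), 0.1))   # False
```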
In step S2-2, after the RGBD image is acquired from the depth camera, the deep learning module identifies the semantic information of each object and its position in the image using the YOLO v3 algorithm; the angle and position of the object relative to the depth camera are then obtained from the image depth information provided by the depth camera, and the scale information of the target is understood with a scene understanding method.
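A minimal sketch of running a YOLO v3 detector with OpenCV's DNN module is given below; the configuration, weight and class-name files are assumed to come from a model trained on the substation object classes and are placeholders here.

```python
import cv2
import numpy as np

net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")   # placeholder files
classes = open("classes.names").read().splitlines()

def detect(bgr, conf_thresh=0.5, nms_thresh=0.4):
    """Return (class name, confidence, [x, y, w, h]) for each detected object."""
    h, w = bgr.shape[:2]
    blob = cv2.dnn.blobFromImage(bgr, 1 / 255.0, (416, 416), swapRB=True, crop=False)
    net.setInput(blob)
    outs = net.forward(net.getUnconnectedOutLayersNames())

    boxes, confs, ids = [], [], []
    for out in outs:
        for det in out:                 # det = [cx, cy, bw, bh, objectness, class scores...]
            scores = det[5:]
            cid = int(np.argmax(scores))
            conf = float(scores[cid])
            if conf > conf_thresh:
                cx, cy, bw, bh = det[0] * w, det[1] * h, det[2] * w, det[3] * h
                boxes.append([int(cx - bw / 2), int(cy - bh / 2), int(bw), int(bh)])
                confs.append(conf)
                ids.append(cid)

    keep = cv2.dnn.NMSBoxes(boxes, confs, conf_thresh, nms_thresh)
    return [(classes[ids[i]], confs[i], boxes[i]) for i in np.array(keep).flatten()]
```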
Specifically, as shown in fig. 2, the scene understanding method is vanishing-point-based substation scene understanding: exploiting the long and straight character of the inspection environment, the characteristic lines of the detection target are obtained by analyzing the relation between the vanishing point and the three line groups in the horizontal, vertical and depth directions obtained during vanishing point detection; the geometric parameters of the specific target are then obtained using cross-ratio invariance and its relation to the target cuboid, thereby enabling coordinate conversion and map annotation.
Since the outdoor cables of a transformer substation are long and parallel, scene understanding of the substation can be realized using vanishing points. First, edge detection is performed on the image with the Canny operator and straight lines are extracted with the Hough transform. The lines are then grouped by direction with a weighted regression algorithm, redundant lines are eliminated and the vanishing points are solved.
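The sketch below illustrates the Canny-plus-Hough line extraction and a simple least-squares estimate of a vanishing point from one already-grouped line set; the weighted-regression grouping step described above is assumed to have been done, and all parameter values are assumptions.

```python
import cv2
import numpy as np

def extract_line_segments(gray):
    """Canny edges followed by the probabilistic Hough transform."""
    edges = cv2.Canny(gray, 50, 150)
    segs = cv2.HoughLinesP(edges, rho=1, theta=np.pi / 180,
                           threshold=80, minLineLength=60, maxLineGap=10)
    return [] if segs is None else segs[:, 0, :]          # rows of (x1, y1, x2, y2)

def vanishing_point(segments):
    """Least-squares intersection of one direction group of line segments.
    Each segment defines a line a*x + b*y = c with normal (a, b); stacking the
    lines gives an overdetermined system whose solution is the vanishing point."""
    A, c = [], []
    for x1, y1, x2, y2 in segments:
        a, b = y2 - y1, x1 - x2
        A.append([a, b])
        c.append(a * x1 + b * y1)
    vp, *_ = np.linalg.lstsq(np.asarray(A, float), np.asarray(c, float), rcond=None)
    return vp                                              # (x, y) in pixels
```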
On this basis, the corresponding regions of interest are delimited using the characteristic lines as references, and the target objects are selected within the regions of interest according to the spatial constraint relations. Geometric information of the two-dimensional plane is then extracted through the cross-ratio invariance of projective transformation, with a static target chosen as the reference target, as follows:
$\mathrm{Cross}(X_1, X_2, X_3, V) = \mathrm{Cross}(x_1, x_2, x_3, v)$

namely:

$$ \frac{d(X_1,X_3)\,d(X_2,V)}{d(X_2,X_3)\,d(X_1,V)} = \frac{d(x_1,x_3)\,d(x_2,v)}{d(x_2,x_3)\,d(x_1,v)} $$

In the formula, $X_i$ represents a spatial point in the real world and $x_i$ represents the projection of $X_i$ on the imaging plane. Since the vanishing point $V$ lies at infinity in the real world, the distances from the $X_i$ to $V$ are infinite and $d(X_2,V)/d(X_1,V)$ tends to 1, so:

$$ \frac{d(X_1,X_3)}{d(X_2,X_3)} = \frac{d(x_1,x_3)\,d(x_2,v)}{d(x_2,x_3)\,d(x_1,v)} $$

By equating the two expressions above, the geometric parameters of the three-dimensional target are calculated in turn.
According to a further configuration of the invention, as shown in fig. 3, in step S2-3 the pixel plane coordinates are related to the camera coordinate system as follows, where $(u, v, 1)$ are the homogeneous coordinates of the target in the pixel coordinate system, $u_0$ and $v_0$ describe the translation between the origin of the pixel coordinate system and the optical axis, $Z_c$ is the object depth measured by the depth camera, and $f_x$ and $f_y$ are the camera focal lengths in the x and y directions, obtained by calibration:

$$ Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} $$
A translation and rotation relation exists between the camera coordinate system and the robot coordinate system; the rotation matrix $R$ and the translation vector $T$ are camera extrinsic parameters that can be set manually, and the coordinates of the target object in the robot coordinate system are obtained as

$$ \begin{bmatrix} X_r \\ Y_r \\ Z_r \end{bmatrix} = R \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} + T $$
After coordinate conversion, different types of detection targets are represented by cubes of different colors and marked on the map using a MarkerArray, completing the construction of the semantic map.
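The following rospy sketch pulls the last two steps together: a pixel with measured depth is back-projected with the intrinsics, transformed with the extrinsics R and T, and published as a coloured cube in a MarkerArray. The numeric intrinsics, extrinsics, detections and frame name are placeholders, not values from the patent.

```python
import numpy as np
import rospy
from visualization_msgs.msg import Marker, MarkerArray

# Placeholder intrinsics (from step S1-1) and camera->robot extrinsics.
fx, fy, u0, v0 = 615.0, 615.0, 320.0, 240.0
R = np.eye(3)
T = np.array([0.10, 0.0, 0.30])

def pixel_to_robot(u, v, Zc):
    """Back-project a pixel with depth Zc to the camera frame, then into the robot frame."""
    Xc = (u - u0) * Zc / fx
    Yc = (v - v0) * Zc / fy
    return R @ np.array([Xc, Yc, Zc]) + T

def cube_marker(idx, position, rgb, frame="map"):
    m = Marker()
    m.header.frame_id = frame    # assumes the SLAM pose has already been applied to place targets in the map frame
    m.header.stamp = rospy.Time.now()
    m.id = idx
    m.type = Marker.CUBE
    m.action = Marker.ADD
    m.pose.position.x, m.pose.position.y, m.pose.position.z = position
    m.pose.orientation.w = 1.0
    m.scale.x = m.scale.y = m.scale.z = 0.4
    m.color.r, m.color.g, m.color.b, m.color.a = rgb[0], rgb[1], rgb[2], 0.9
    return m

rospy.init_node("semantic_markers")
pub = rospy.Publisher("semantic_map_markers", MarkerArray, queue_size=1, latch=True)

# Hypothetical detections: (u, v, depth in metres) with one colour per object class.
detections = [((350, 260, 4.2), (1.0, 0.0, 0.0)),
              ((120, 300, 2.8), (0.0, 1.0, 0.0))]
arr = MarkerArray()
for i, ((u, v, z), rgb) in enumerate(detections):
    arr.markers.append(cube_marker(i, pixel_to_robot(u, v, z), rgb))
pub.publish(arr)
rospy.sleep(1.0)    # give the latched message time to reach subscribers
```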
The embodiments of the present invention have been described in detail with reference to the accompanying drawings, but the present invention is not limited to the described embodiments. It will be apparent to those skilled in the art that various changes, modifications, substitutions and alterations can be made in these embodiments, including components thereof, without departing from the principles and spirit of the invention, and still fall within the scope of the invention.

Claims (10)

1. A transformer substation semantic map construction method based on laser SLAM and visual fusion is characterized by comprising the following steps:
S1, acquisition and preprocessing of sensor data
S1-1, performing intrinsic calibration of the depth camera and joint extrinsic calibration of the laser radar and the camera;
S1-2, synchronously preprocessing the data acquired by the depth camera and the laser radar;
S2, construction of the transformer substation semantic map model
S2-1, building a map model of the operation and maintenance environment from the point cloud data acquired by the laser radar and the odometer information;
S2-2, obtaining an RGBD image from the depth camera, performing target recognition through deep learning, understanding the scene, and obtaining semantic information of the RGBD image;
S2-3, performing coordinate conversion, projecting the targets identified in step S2-2 onto the grid map, and providing environment cognition information for the transformer substation;
and S3, repeating step S2 to complete the construction of the semantic map.
2. The transformer substation semantic map construction method based on laser SLAM and visual fusion as claimed in claim 1, wherein in step S1-1 the intrinsic calibration of the depth camera is performed as follows: the correspondence between the feature points detected on the calibration board image and those on the actual calibration board is determined, and the intrinsic parameters and distortion coefficients of the depth camera are obtained.
3. The substation semantic map construction method based on laser SLAM and visual fusion as claimed in claim 1, wherein in step S1-1 the joint extrinsic calibration of the laser radar and the camera is performed as follows: three-dimensional points detected by the laser radar are matched with the corresponding two-dimensional points detected by the depth camera, from which the laser radar-depth camera extrinsic parameters are jointly calibrated.
4. The substation semantic map construction method based on laser SLAM and visual fusion as claimed in claim 1, wherein in step S1-2 the clock sources of the laser radar and the depth camera are first unified in hardware, and the data are then processed with a time synchronizer.
5. The transformer substation semantic map construction method based on laser SLAM and visual fusion as claimed in claim 1, wherein in step S2-1 the laser radar continuously emits laser beams and receives the reflections while rotating at high speed; the distance information of obstacles within the working range of the transformer substation is collected and combined into spatial point cloud information; the transformer substation map information is obtained through filtering, map splicing and loop closure detection; and finally the operation and maintenance environment is modeled from the obtained transformer substation map information.
6. The transformer substation semantic map construction method based on laser SLAM and vision fusion as claimed in claim 5, wherein the map modeling of the operation and maintenance environment is performed with the cartographer algorithm; in its data flow, laser radar and odometer data are first collected from the sensors, each frame of laser radar data is down-sampled by a filter and then enters the local SLAM for scan matching to obtain a pose estimate, which is then optimized by fusing the odometer data.
7. The transformer substation semantic map construction method based on laser SLAM and visual fusion, wherein each frame of laser radar data is passed through a motion filter that removes point clouds whose position change is too small or whose time interval is too short; a scan that passes the filter is inserted into the current submap; when a submap is completed and no longer receives new scans, it is added to the global SLAM to form global constraints that participate in back-end loop closure detection.
8. The transformer substation semantic map construction method based on laser SLAM and visual fusion as claimed in claim 1, wherein in step S2-2, after the RGBD image is acquired from the depth camera, the deep learning module identifies the semantic information of each object and its position in the image using the YOLO v3 algorithm; the angle and position of the object relative to the depth camera are then obtained from the image depth information provided by the depth camera, and the scale information of the target is understood with a scene understanding method.
9. The transformer substation semantic map construction method based on laser SLAM and visual fusion as claimed in claim 8, wherein the scene understanding method is as follows: exploiting the long and straight character of the inspection environment, the characteristic lines of the detection target are obtained by analyzing the relation between the vanishing point and the three line groups in the horizontal, vertical and depth directions obtained during vanishing point detection; the geometric parameters of the specific target are then obtained using cross-ratio invariance and its relation to the target cuboid, thereby enabling coordinate conversion and map annotation.
10. The transformer substation semantic map construction method based on laser SLAM and visual fusion as claimed in claim 1, wherein in step S2-3 the pixel plane coordinates are related to the camera coordinate system as follows, where $(u, v, 1)$ are the homogeneous coordinates of the target in the pixel coordinate system, $u_0$ and $v_0$ describe the translation between the origin of the pixel coordinate system and the optical axis, $Z_c$ is the object depth measured by the depth camera, and $f_x$ and $f_y$ are the camera focal lengths in the x and y directions, obtained by calibration:

$$ Z_c \begin{bmatrix} u \\ v \\ 1 \end{bmatrix} = \begin{bmatrix} f_x & 0 & u_0 \\ 0 & f_y & v_0 \\ 0 & 0 & 1 \end{bmatrix} \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} $$
A translation and rotation relation exists between the camera coordinate system and the robot coordinate system; the rotation matrix $R$ and the translation vector $T$ are camera extrinsic parameters that can be set manually, and the coordinates of the target object in the robot coordinate system are obtained as

$$ \begin{bmatrix} X_r \\ Y_r \\ Z_r \end{bmatrix} = R \begin{bmatrix} X_c \\ Y_c \\ Z_c \end{bmatrix} + T $$
After coordinate conversion, different types of detection targets are represented by cubes of different colors and marked on the map using a MarkerArray, completing the construction of the semantic map.
CN202210295464.4A 2022-03-24 2022-03-24 Substation semantic map construction method based on laser SLAM and visual fusion Pending CN114638909A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202210295464.4A CN114638909A (en) 2022-03-24 2022-03-24 Substation semantic map construction method based on laser SLAM and visual fusion

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202210295464.4A CN114638909A (en) 2022-03-24 2022-03-24 Substation semantic map construction method based on laser SLAM and visual fusion

Publications (1)

Publication Number Publication Date
CN114638909A true CN114638909A (en) 2022-06-17

Family

ID=81950347

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202210295464.4A Pending CN114638909A (en) 2022-03-24 2022-03-24 Substation semantic map construction method based on laser SLAM and visual fusion

Country Status (1)

Country Link
CN (1) CN114638909A (en)

Cited By (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN115310187A (en) * 2022-10-09 2022-11-08 中国电建集团山东电力建设第一工程有限公司 Generator set installation method and system based on BIM technology and semantic fusion
CN115903797A (en) * 2022-11-09 2023-04-04 硕能(上海)自动化科技有限公司 Autonomous routing inspection method for multi-floor modeling of transformer substation
CN116030200A (en) * 2023-03-27 2023-04-28 武汉零点视觉数字科技有限公司 Scene reconstruction method and device based on visual fusion
CN116774195A (en) * 2023-08-22 2023-09-19 国网天津市电力公司滨海供电分公司 Excitation judgment and parameter self-adjustment method and system for multi-sensor combined calibration
CN116774195B (en) * 2023-08-22 2023-12-08 国网天津市电力公司滨海供电分公司 Excitation judgment and parameter self-adjustment method and system for multi-sensor combined calibration
CN117406185A (en) * 2023-12-14 2024-01-16 深圳市其域创新科技有限公司 External parameter calibration method, device and equipment between radar and camera and storage medium
CN117406185B (en) * 2023-12-14 2024-02-23 深圳市其域创新科技有限公司 External parameter calibration method, device and equipment between radar and camera and storage medium
CN117949968A (en) * 2024-03-26 2024-04-30 深圳市其域创新科技有限公司 Laser radar SLAM positioning method, device, computer equipment and storage medium
CN117968666A (en) * 2024-04-02 2024-05-03 国网江苏省电力有限公司常州供电分公司 Substation inspection robot positioning and navigation method based on integrated SLAM

Similar Documents

Publication Publication Date Title
CN114638909A (en) Substation semantic map construction method based on laser SLAM and visual fusion
CN109544679B (en) Three-dimensional reconstruction method for inner wall of pipeline
CN111583337B (en) Omnibearing obstacle detection method based on multi-sensor fusion
CN111897332B (en) Semantic intelligent substation robot humanoid inspection operation method and system
CN106584451B (en) automatic transformer substation composition robot and method based on visual navigation
CN103279949B (en) Based on the multi-camera parameter automatic calibration system operation method of self-align robot
CN111958592B (en) Image semantic analysis system and method for transformer substation inspection robot
CN110458897B (en) Multi-camera automatic calibration method and system and monitoring method and system
CN111815717B (en) Multi-sensor fusion external parameter combination semi-autonomous calibration method
CN105654732A (en) Road monitoring system and method based on depth image
CN111060924A (en) SLAM and target tracking method
US20230236280A1 (en) Method and system for positioning indoor autonomous mobile robot
CN104809754A (en) Space synchronous positioning and information recording system based on three-dimensional real scene model
CN114841944B (en) Tailing dam surface deformation inspection method based on rail-mounted robot
CN115376109B (en) Obstacle detection method, obstacle detection device, and storage medium
CN113359154A (en) Indoor and outdoor universal high-precision real-time measurement method
CN113012292A (en) AR remote construction monitoring method and system based on unmanned aerial vehicle aerial photography
CN114923477A (en) Multi-dimensional space-ground collaborative map building system and method based on vision and laser SLAM technology
CN116630267A (en) Roadbed settlement monitoring method based on unmanned aerial vehicle and laser radar data fusion
CN113160292B (en) Laser radar point cloud data three-dimensional modeling device and method based on intelligent mobile terminal
CN111612833A (en) Real-time detection method for height of running vehicle
CN112348941A (en) Real-time fusion method and device based on point cloud and image data
CN117310627A (en) Combined calibration method applied to vehicle-road collaborative road side sensing system
CN116562590A (en) Bridge construction and operation maintenance method, system, equipment and medium
CN113947141B (en) Roadside beacon sensing system of urban intersection scene

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination