CN111673748A - Human-computer interaction sensing system and method for gas insulated switch visual inspection robot - Google Patents


Info

Publication number
CN111673748A
Authority
CN
China
Prior art keywords
robot
information
human
computer interaction
gas insulated
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Granted
Application number
CN202010508069.0A
Other languages
Chinese (zh)
Other versions
CN111673748B (en)
Inventor
佃松宜
李振阳
鉴庆之
李勇
钟羽中
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Sichuan University
State Grid Shandong Electric Power Co Ltd
Original Assignee
Sichuan University
State Grid Shandong Electric Power Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Sichuan University, State Grid Shandong Electric Power Co Ltd filed Critical Sichuan University
Priority to CN202010508069.0A priority Critical patent/CN111673748B/en
Publication of CN111673748A publication Critical patent/CN111673748A/en
Application granted granted Critical
Publication of CN111673748B publication Critical patent/CN111673748B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1602 Programme controls characterised by the control system, structure, architecture
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J11/00 Manipulators not otherwise provided for
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J5/00 Manipulators mounted on wheels or on carriages
    • B25J5/007 Manipulators mounted on wheels or on carriages mounted on wheels
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B25 HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00 Programme-controlled manipulators
    • B25J9/16 Programme controls
    • B25J9/1694 Programme controls characterised by use of sensors other than normal servo-feedback from position, speed or acceleration sensors, perception control, multi-sensor controlled systems, sensor fusion
    • B25J9/1697 Vision controlled systems

Landscapes

  • Engineering & Computer Science (AREA)
  • Robotics (AREA)
  • Mechanical Engineering (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

The invention discloses a human-computer interaction sensing system and method for a gas insulated switch visual inspection robot. When an operator controls the inspection robot to perform fault inspection on gas insulated switchgear, the operator can grasp, through the human-computer interaction interface of the upper computer, the specific position of the inspection robot inside the closed space of the gas insulated switch, which facilitates the operator's subsequent inspection planning. A 3D virtual reality interaction interface improves the operator's human-computer interaction telepresence, so that the position of a fault point is determined faster and the inspection efficiency of the gas insulated switch is improved.

Description

Human-computer interaction sensing system and method for gas insulated switch visual inspection robot
Technical Field
The application belongs to the technical field of robots, relates to a vision inspection robot for an electric power system, and more particularly relates to a human-computer interaction perception system and method for a vision inspection robot for the gas insulated switch of a transformer substation, aimed at improving the telepresence interaction experience of operators of the vision inspection robot.
Background
In power systems, gas-insulated switches are one of the key power transformation devices. Because their structure is complex, manual inspection is time-consuming and inefficient, and a robotized operation mode can effectively improve the fault inspection efficiency of the gas insulated switch. At present, robot inspection mainly relies on the interaction information provided by a human-computer interaction perception method to help the operator judge the fault condition of the gas insulated switch.
The human-computer interaction perception method of existing power system inspection robots mainly adopts video information interaction: the operator controls the inspection robot moving inside the equipment to be inspected and judges whether a fault has occurred from the video fed back on the human-computer interaction interface. However, for a closed, externally invisible inspection scene such as a gas insulated switch, the operator cannot clearly grasp the specific positioning of the inspection robot inside the gas insulated switch from the video alone; even if a fault is present in the current working scene, the operator cannot determine where in the gas insulated switch it is located, which limits the operator's maintenance planning and results in low inspection efficiency. The reason is that traditional video interaction only feeds back the current working scene captured by the camera and cannot feed back the inspection robot's positioning information to the human-computer interface in real time, so the operator can hardly judge the robot's specific position within the closed space, which in turn hampers the operator's next step of maintenance planning (such as locating a fault point).
In view of this analysis, developing a human-computer interaction perception technology that feeds back the robot's position information in real time is of great significance for improving the operator's human-computer interaction experience and refining the operator's maintenance planning.
Disclosure of Invention
The invention aims to provide a human-computer interaction perception method and system for a gas insulated switch vision inspection robot, addressing the problem that traditional vision inspection robots for power equipment cannot feed back robot positioning information in real time during the inspection of the gas insulated switch of a transformer substation, thereby further improving the human-computer interaction perception capability of the inspection robot system so that the operator can clearly grasp the robot's positioning inside the gas insulated switchgear.
The invention provides a human-computer interaction perception system for a gas insulated switch vision inspection robot, which comprises a vision inspection robot, an upper computer and a visual human-computer interaction interface, wherein the vision inspection robot is intended to be placed in gas insulated switchgear to inspect the gas insulated switch;
the vision inspection robot comprises a robot body, a binocular vision inertial sensing module and embedded equipment, wherein the binocular vision inertial sensing module is installed on the robot body;
the binocular vision inertial sensing module is used for acquiring video and robot attitude information in real time and sending the acquired video and attitude information to the embedded equipment;
the embedded equipment is used for controlling the action of the robot body, generating robot position information according to the received video and attitude information, and sending the generated robot position information to a visual human-computer interaction interface of the upper computer in a communication transmission mode;
the visual human-computer interaction interface is provided with a gas insulated switch model and a visual inspection robot model positioned in the gas insulated switch model; and the visual human-computer interaction interface correlates the received robot position information with the visual inspection robot model, and displays the position of the robot model in the gas insulated switch model according to the robot position information.
In the above human-computer interaction perception system of the gas insulated switch vision inspection robot, the binocular vision inertial sensing module comprises a binocular vision sensor, an inertial measurement unit and an arc light source; the binocular vision sensor acquires video information in real time, the inertial measurement unit acquires the attitude information of the robot in real time, and the arc light source raises the brightness inside the gas insulated switchgear to enhance the quality of the images captured by the binocular vision sensor.
The embedded equipment comprises a main control board, a robot operating system, a data transmission assembly and a data processing module;
the robot operating system and the data processing module are arranged on a main control system of the main control board;
the robot operating system is used for controlling the robot body to act;
the data transmission assembly is respectively connected with the binocular vision inertial sensing module on the robot body and the master control system and is used for transmitting the video acquired by the binocular vision inertial sensing module in real time and the robot posture information to the master control system;
the data processing module processes the video and the robot posture information acquired from the master control system to obtain robot position information and sends the robot position information to a visual human-computer interaction interface in communication connection with the master control system.
Furthermore, the data transmission assembly transmits the video and robot posture information acquired in real time by the binocular vision inertial sensing module to the robot operating system of the main control system; the robot operating system publishes the received video and robot posture information as topic information, which is read by the data processing module.
Further, the data transmission component comprises a USB PHY module for transmitting the video information and an RS232 transceiver for transmitting the robot posture information; the data processing module is based on a visual-inertial SLAM algorithm.
In the human-computer interaction sensing system of the gas insulated switch vision inspection robot, the upper computer adopts a Windows 8, Windows 10 or Windows 11 system.
In the human-computer interaction sensing system of the gas insulated switch visual inspection robot, the visual human-computer interaction interface is a visual 3D project and comprises a three-dimensional model folder, a connecting module and a display module;
the three-dimensional model folder is used for placing a gas insulated switch model and a vision inspection robot model;
the connecting module is used for associating the received robot position information with the visual inspection robot model and setting the visual inspection robot model as an instantiation object;
the display module is used for displaying the gas insulated switch model and the vision inspection robot model which are placed in the three-dimensional model folder, and displaying the position of the robot model in the gas insulated switch model according to the robot position information.
The human-computer interaction sensing system of the gas insulated switch vision inspection robot further comprises a video interface arranged in the upper computer; the embedded equipment sends the video information acquired by the binocular vision inertial sensing module to the video interface and associates the robot position information in the visual human-computer interaction interface with the video information in the video interface, realizing synchronous display of the video interface and the human-computer interaction interface.
The human-computer interaction perception system of the gas insulated switch visual inspection robot is characterized in that the upper computer is provided with a first storage module used for storing robot position information and a second storage module used for storing video information, and the first storage module and the second storage module are associated through timestamps contained in the stored robot position information and the stored video information. The human-computer interaction interface and the video interface can further realize synchronous loading and review of the position information and the video information of the robot by utilizing the information stored in the first storage module and the second storage module.
The invention further provides a human-computer interaction perception method of the gas insulated switch visual inspection robot, which uses the human-computer interaction perception system to execute the operation according to the following steps:
S1, the visual human-computer interaction interface is run, and the vision inspection robot model is placed at the access hole position in the gas insulated switch model; meanwhile, the vision inspection robot is placed at the access hole position in the gas insulated switchgear;
S2, the embedded equipment operates to control the robot body to act; meanwhile, the robot position information is generated according to the received video and attitude information, and the generated robot position information is sent to the visual human-computer interaction interface of the upper computer in a communication transmission mode;
S3, the received robot position information is associated with the vision inspection robot model through the visual human-computer interaction interface, and the position of the robot model in the gas insulated switch model is displayed according to the robot position information, completing the human-computer visual interaction.
In step S2, the embedded device sends the robot location information and the video information to the video interface.
In the step S3, the position information of the robot in the visual human-computer interaction interface is related to the video information in the video interface, so as to achieve the synchronous display of the video interface and the human-computer interaction interface.
The human-computer interaction perception method of the gas insulated switch vision inspection robot further comprises the following steps:
in the step S4, the visual human-computer interaction interface loads the robot position information from the first storage module, and simultaneously, the video information associated with the robot position information is synchronized to the video interface for display.
With the human-computer interaction sensing system and method for the gas insulated switch visual inspection robot described above, a 3D project with a 3D visualization function is first built on the upper computer as the human-computer interaction interface, and a data processing module that generates robot position information is added to the embedded equipment of the visual inspection robot. As a result, when an operator controls the robot to perform fault inspection on gas insulated switchgear, the system feeds back the attitude information of the visual inspection robot in real time and presents it on the display end of the human-computer interaction interface.
The human-computer interaction perception system and the method for the gas insulated switch visual inspection robot provided by the invention at least have the following advantages or beneficial effects:
1) With the human-computer interaction sensing system provided by the invention, when an operator controls the inspection robot to perform fault inspection on the gas insulated switchgear, the system feeds back the attitude information of the vision inspection robot in real time and presents it in the visual human-computer interaction interface (for example, a 3D project), so that the operator can grasp the specific position of the vision inspection robot within the gas insulated switchgear, master the position information of the fault point, and plan the subsequent inspection work.
2) According to the invention, through the human-computer visual interaction interface, the human-computer interaction telepresence of the gas insulated switch inspection robot operator can be improved, so that the determination speed of the fault point position is effectively improved, and the visual inspection efficiency of the gas insulated switch is improved.
3) According to the invention, the visual human-computer interaction interface and the video interface can realize synchronous display of the position of the robot and the video information of the position of the robot, so that the accuracy of judging the position of the fault point of the gas insulated switch is further improved, and the visual inspection efficiency of the gas insulated switch is improved.
Drawings
In order to more clearly illustrate the technical solutions of the embodiments of the present invention, the drawings needed to be used in the embodiments will be briefly described below, it should be understood that the following drawings only illustrate some embodiments of the present invention and therefore should not be considered as limiting the scope, and for those skilled in the art, other related drawings can be obtained according to the drawings without inventive efforts.
Fig. 1 is a block diagram of a human-computer interaction sensing system of the gas insulated switch vision inspection robot of the present invention.
Fig. 2 is a schematic structural view of the vision inspection robot.
Fig. 3 is a flowchart of an implementation of a human-computer interaction perception system of a first vision inspection robot according to the present invention.
Fig. 4 is a block diagram of a human-computer interaction perception system of a second vision inspection robot according to the present invention.
Fig. 5 is a block diagram of a human-computer interaction perception system of a third vision inspection robot according to the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present invention clearer, the technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are some, but not all, embodiments of the present invention. The components of embodiments of the present invention generally described and illustrated in the figures herein may be arranged and designed in a wide variety of different configurations.
Thus, the following detailed description of the embodiments of the present invention, presented in the figures, is not intended to limit the scope of the invention, as claimed, but is merely representative of selected embodiments of the invention. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The human-computer interaction sensing system and method of the invention can be applied to inspection in the closed, dark environment inside gas insulated switchgear. The technical scheme is developed on the basis of an existing visual inspection robot: a 3D visual human-computer interaction interface is built at the upper computer end and the actual attitude information of the robot is imported into it, thereby realizing human-computer visual interaction. The human-computer interaction perception system and method provided by the invention are explained in detail below in combination with the embodiments.
Example 1
The vision inspection robot addressed in this embodiment is mainly intended for inspecting the gas insulated switchgear of a 220 kV transformer substation. The human-computer interaction perception system of the gas insulated switch vision inspection robot comprises a vision inspection robot placed in the gas insulated switchgear to inspect the gas insulated switch, an upper computer in communication connection with the inspection robot, and a visual human-computer interaction interface installed in the upper computer, as shown in fig. 1.
First, vision inspection robot
As shown in fig. 1, the vision inspection robot includes a robot body, a binocular vision inertial sensing module, an embedded device, a rechargeable lithium battery power supply module, and a motor driving module.
The robot body in this embodiment can be a conventional vision inspection robot currently used for power system visual inspection. As shown in fig. 2, the robot body adopted in this embodiment comprises a machine body 1 and four evenly distributed Mecanum wheels 2 installed below the machine body for controlling the movement of the robot.
The binocular vision inertial sensing module in this embodiment comprises a binocular vision sensor 3, an inertial measurement unit and an arc light source 4. The binocular vision sensor is a conventional binocular vision sensor 3 known in the field; it is arranged on the upper part of the robot body, consists of symmetrically arranged left and right cameras, and acquires video information in real time. The inertial measurement unit is a conventional unit known in the field that measures the attitude angles and accelerations of the carrier on three axes; it is installed in the front end of the robot body and acquires the attitude information of the robot in real time. The arc light source 4 is installed below the front end of the robot body to raise the brightness inside the gas insulated switchgear and enhance the quality of the images captured by the binocular vision sensor.
The embedded device in this embodiment is installed inside the robot body and includes a main control board, a robot operating system, a data transmission assembly and a data processing module. In this embodiment, the main control board is a high-performance Raspberry Pi 3 (Raspberry Pi Module 3), and the main control system installed on it is the Ubuntu 16.04 Linux operating system. The main control system carries the robot operating system and the data processing module. In this embodiment, the robot operating system, which controls the motion of the robot body, is the conventional Robot Operating System (ROS); the data processing module is the VINS Fusion framework, a visual-inertial SLAM algorithm that fuses the image information of the binocular vision sensor with the attitude data of the inertial measurement unit. The ROS publishes the relevant robot information (including the video and robot attitude information collected by the binocular vision inertial sensing module) as ROS topics, and the VINS Fusion framework reads the ROS topics carrying the collected video and attitude data and produces a topic carrying the robot position information according to the SLAM algorithm. The data transmission assembly comprises a USB PHY module and an RS232 transceiver; the USB PHY module is connected to the master control system and to the left and right cameras of the binocular vision sensor, and transmits the video information collected by the binocular vision sensor to the master control system; the RS232 transceiver is connected to the main control system and the inertial measurement unit, and transmits the robot attitude information acquired by the inertial measurement unit to the main control system. In this embodiment, the third-party ROS communication software package ROS# (an open-source software package available at https://github.com/siemens/ros-sharp) is used to run the ros_communication.launch file, which opens a communication port at the Ubuntu end of the main control system.
The Mecanum wheels are connected with the corresponding motor driving modules; the motor driving module is connected with a main control system of the embedded equipment, and the main control system controls the motor driving module to drive the Mecanum wheel to move. The rechargeable lithium battery power supply module supplies power to the motor driving module, the binocular vision inertial sensing module and the embedded equipment respectively. The motor driving module and the rechargeable lithium battery power supply module are both arranged inside the robot body.
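As an aside on the wheel arrangement just described, the mapping from a chassis velocity command to the four wheel speeds follows standard Mecanum inverse kinematics. The C# sketch below is illustrative only and is not the patent's motor-control code; the wheel radius and the chassis half-length and half-width are assumed placeholder values.

// Illustrative only: standard Mecanum inverse kinematics, not the patent's
// motor driving code. Geometry constants are assumed placeholder values.
public static class MecanumKinematics
{
    const double WheelRadius = 0.05; // m (assumed)
    const double HalfLength  = 0.10; // m, half the wheelbase (assumed)
    const double HalfWidth   = 0.12; // m, half the track width (assumed)

    // Maps a chassis velocity command (vx forward, vy leftward, wz yaw rate)
    // to the angular speeds of the four Mecanum wheels (rad/s),
    // ordered front-left, front-right, rear-left, rear-right.
    public static double[] WheelSpeeds(double vx, double vy, double wz)
    {
        double k = HalfLength + HalfWidth;
        return new[]
        {
            (vx - vy - k * wz) / WheelRadius, // front-left
            (vx + vy + k * wz) / WheelRadius, // front-right
            (vx + vy - k * wz) / WheelRadius, // rear-left
            (vx - vy + k * wz) / WheelRadius  // rear-right
        };
    }
}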
Second, upper computer
In this embodiment, the upper computer runs a Windows 10 system and is provided with a remote control module and a hardware control module for remote control of the robot, as well as the visual human-computer interaction interface.
The remote control module mainly transmits operation instructions to the embedded device; a conventional remote control module known in the art can be used. In this embodiment it is implemented with PuTTY: through PuTTY, the operator inputs instructions such as starting the ROS# communication software package, launching the SLAM-related configuration files of the VINS Fusion framework, and moving the robot.
The hardware control module controls the movement of the vision inspection robot; a hardware control module known in the field can be adopted. In this embodiment, an operating handle integrated on the upper computer is used: the handle sends a movement instruction for the vision inspection robot to the main control system of the embedded device through PuTTY, and the main control system then sends low-level control instructions to the motor driving modules of the corresponding Mecanum wheels to make them rotate, advance or retreat, completing the robot's movement instruction.
Third, human-computer interaction interface
As shown in fig. 3, the visual human-computer interaction interface is a virtual reality interface built on the upper computer using Unity3D software; in this embodiment it is a visual Unity3D project built on the upper computer.
The human-computer interaction interface comprises a three-dimensional model folder, a connecting module and a display module.
The three-dimensional model folder is used for placing the gas insulated switch model and the vision inspection robot model. The gas insulated switch model and the vision inspection robot model are created with the SolidWorks three-dimensional mechanical design software and converted into the Unified Robot Description Format (URDF) through the SolidWorks plug-in sw2urdf.
And the connecting module is used for associating the robot position information received by the upper computer with the visual inspection robot model and setting the visual inspection robot model as an instantiation object. The connection module comprises a port which is in communication connection with the embedded equipment and can receive the position information of the robot from the main control system in real time.
The display module is used for displaying a gas insulated switch model and a vision inspection robot model which are placed in the three-dimensional model folder, and displaying the position of the robot model in the gas insulated switch model according to the robot position information.
As shown in FIG. 3, the embodiment obtains the executable Unity3D project (i.e. human-computer interface) according to the following steps:
a1 in the upper computer, creates a Unity3D project with the Assets folder of the Unity3D project as the three-dimensional model folder, and places the extension package provided by ROS # in the Assets folder of the Unity3D project so that Unity3D supports the URDF files of gas insulated switches and robots. The URDF of the gas insulated switchgear model and the vision inspection robot model established in the foregoing is placed in the URDF folder created in the Unity3D engineering Assets folder, the created URDF file is selected and imported in the 3D Object in the GameObject toolbar of Unity3D, and the vision inspection robot model is placed at the service opening position of the gas insulated switchgear model.
A2, add an empty object (Empty) to the Unity3D project as the connection module and name it Connector; add the ROS# communication connection script RosConnector to it, and then set the IP address and port number of the Ubuntu end of the embedded main control system to establish the communication connection between the 3D project and the main control system. Further add an odometry subscriber script for receiving the robot position information to the Connector and fill in the robot positioning information name in the Connector's receiving-information column; meanwhile, select the published object of the odometry subscriber, namely the instantiated object, as the vision inspection robot model, thereby binding the robot position information to the vision inspection robot model so that the model can move according to the position information of the actual robot.
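As an illustration of the binding described in step A2, the following C# sketch shows how an incoming odometry pose can be applied to the vision inspection robot model's transform inside Unity3D. The OnOdometry callback signature is a hypothetical stand-in for the ROS# subscription plumbing, not the exact ROS# API, and the frame conversion shown is one common ROS-to-Unity mapping.

using UnityEngine;

// Minimal sketch of the step-A2 binding, assuming a ROS# connection delivers
// nav_msgs/Odometry poses; OnOdometry is a hypothetical callback.
public class RobotModelDriver : MonoBehaviour
{
    // Drag the instantiated vision inspection robot model onto this field.
    public Transform robotModel;

    // Called whenever a new pose arrives from the embedded master control system.
    public void OnOdometry(double px, double py, double pz,
                           double qx, double qy, double qz, double qw)
    {
        // One common ROS-to-Unity frame conversion
        // (right-handed z-up to left-handed y-up).
        robotModel.localPosition = new Vector3((float)-py, (float)pz, (float)px);
        robotModel.localRotation = new Quaternion(
            (float)qy, (float)-qz, (float)-qx, (float)qw);
    }
}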
A3, compile the entire Unity3D project, selecting Windows as the target platform and x86_64 as the system architecture, and finally package it into an executable program (EXE file), completing the creation of the Unity3D project. Running this executable establishes the communication connection with the Ubuntu master control system of the embedded device, receives the actual robot's position information, and interacts with that position information visually through the virtual vision inspection robot model.
Example 2
This embodiment is a further improvement on the human-computer interaction sensing system of the gas insulated switch vision inspection robot provided in embodiment 1.
As shown in fig. 4, in the human-computer interaction sensing system of the gas insulated switch vision inspection robot provided in this embodiment, in addition to the vision inspection robot, upper computer and visual human-computer interaction interface of embodiment 1, the upper computer is further provided with a video interface for displaying video information, which makes it convenient for the operator to judge whether the gas insulated switch has a fault.
The embedded equipment sends the video information acquired by the binocular vision inertial sensing module to a video interface. In the embodiment, the mjpg-streamer library of the Ubuntu end of the master control system is adopted to send the video information. The mjpg-streamer library can convert video information acquired by the binocular vision sensor into network video information and push the network video information to an upper computer in a network video mode.
The video interface may be implemented using the C # language to call EmguCV.
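A minimal sketch of such a video interface follows, assuming mjpg-streamer's usual "?action=stream" URL form and a placeholder host address; the EmguCV calls shown (VideoCapture, QueryFrame, CvInvoke.Imshow) are used in their common form.

using Emgu.CV;

// Sketch of a C#/EmguCV video interface reading the mjpg-streamer network
// video pushed by the master control system. The URL is an assumption.
class VideoInterface
{
    static void Main()
    {
        var capture = new VideoCapture("http://192.168.1.10:8080/?action=stream");
        while (true)
        {
            using (Mat frame = capture.QueryFrame())
            {
                if (frame == null || frame.IsEmpty) break; // stream ended
                CvInvoke.Imshow("GIS inspection video", frame);
                if (CvInvoke.WaitKey(1) == 27) break;      // ESC to quit
            }
        }
    }
}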
In this embodiment, both the video information and the robot position information carry timestamp information. A timestamp in the video information corresponds to each video frame, and the robot position information likewise corresponds to its timestamp; the position of the robot may be stored, for example, in the form timestamp-position (x, y, z, α, β, γ), where x, y, z are the spatial position coordinates and α, β, γ are the spatial angles of the robot. When a fault of the gas insulated switchgear is detected in the video information, the robot position information corresponding to the current video frame can be found by searching for the corresponding time information.
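The following C# sketch illustrates this timestamp association: a record type mirroring the timestamp-position (x, y, z, α, β, γ) form, and a nearest-timestamp lookup that recovers the robot pose for a given video frame. The type and method names are illustrative, not taken from the patent.

using System;
using System.Collections.Generic;

// Record layout mirroring the timestamp-position (x, y, z, α, β, γ) form.
public struct PoseRecord
{
    public double Stamp;               // seconds since epoch
    public double X, Y, Z;             // spatial position coordinates
    public double Alpha, Beta, Gamma;  // spatial angles of the robot
}

public static class PoseLookup
{
    // records must be sorted by Stamp (they are written in arrival order).
    public static PoseRecord FindNearest(List<PoseRecord> records, double frameStamp)
    {
        if (records.Count == 0) throw new ArgumentException("no poses loaded");
        int lo = 0, hi = records.Count - 1;
        while (lo < hi)                    // binary search for the first record
        {                                  // at or after the frame timestamp
            int mid = (lo + hi) / 2;
            if (records[mid].Stamp < frameStamp) lo = mid + 1; else hi = mid;
        }
        // The nearest pose is either records[lo] or its predecessor.
        if (lo > 0 && frameStamp - records[lo - 1].Stamp <
                      records[lo].Stamp - frameStamp)
            lo--;
        return records[lo];
    }
}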
A starting module is further arranged at the Ubuntu end of the master control system to start the robot position information transmission and the video information transmission at the same time, while the visual human-computer interaction interface and the video interface of the upper computer receive them simultaneously; the robot position information can thus be associated with the video information in the video interface through the timestamps, realizing synchronous display of the video interface and the human-computer interaction interface.
Example 3
This embodiment is a further improvement on the human-computer interaction sensing system of the gas insulated switch vision inspection robot provided in embodiment 2.
As shown in fig. 5, this embodiment further provides, at the upper computer end, a first storage module for storing the robot position information and a second storage module for storing the video information, and associates the first and second storage modules through the timestamps contained in the stored robot position information and video information.
In this embodiment, the first storage module contains a CSV (Comma-Separated Values) file. The C# program calls the input/output stream API and writes the received actual robot position information into the designated CSV file in the timestamp-position (x, y, z, α, β, γ) format, while the odometry subscriber script associates the received actual robot position information with the vision inspection robot model in Unity3D. When reviewing the movement of the vision inspection robot in the gas insulated switch, Unity3D can replay the actual robot's movement in the gas insulated switch by associating the robot position information in the CSV file with the vision inspection robot model.
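A minimal sketch of this CSV writing step follows; the file path is an assumed placeholder, and the column order mirrors the timestamp-position (x, y, z, α, β, γ) format described above.

using System.Globalization;
using System.IO;

// Minimal sketch of the first storage module: each received pose is appended
// to the designated CSV file. The path is an assumption; the directory is
// assumed to exist.
public class PoseCsvLogger
{
    readonly string path = @"C:\gis_inspection\robot_positions.csv";

    public void Append(double stamp, double x, double y, double z,
                       double alpha, double beta, double gamma)
    {
        string line = string.Format(CultureInfo.InvariantCulture,
            "{0:F6},{1:F4},{2:F4},{3:F4},{4:F4},{5:F4},{6:F4}",
            stamp, x, y, z, alpha, beta, gamma);
        // AppendAllText opens, writes and closes, so no handle is held between poses.
        File.AppendAllText(path, line + "\n");
    }
}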
In this embodiment, the video information may be stored at a specified location of the upper computer, namely the second storage module, using an object of the Capture class in C#. The video interface can read and display the video information by loading the corresponding URL (Uniform Resource Locator).
As explained above, both the robot position information and the video information carry corresponding timestamps, so the first and second storage modules can also be associated by means of the timestamps.
The upper computer end is further provided with a synchronous loading module (which loads the stored information of the first and second storage modules at the same time) or an associated loading module (which loads the stored information of one of the two storage modules and automatically starts the associated information of the other). In this way, the visual human-computer interaction interface and the video interface can further realize synchronous loading and review of the robot position information and the video information, as sketched below.
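The associated-loading idea can be sketched as follows, reusing the PoseRecord and PoseLookup types from the earlier sketch; the video start timestamp and frame rate are assumed to be available as stored metadata, and the class and member names are illustrative.

using System;
using System.Collections.Generic;
using System.Globalization;
using System.IO;

// Sketch of associated loading: reload the first storage module and map any
// review timestamp to (pose, video frame index) for synchronized review.
public class ReviewLoader
{
    public double VideoStartStamp;   // timestamp of the first stored frame (assumed metadata)
    public double VideoFps = 30.0;   // frame rate of the stored video (assumed)

    // Reads the timestamp-position CSV back into memory, in stored order.
    public List<PoseRecord> LoadPoses(string csvPath)
    {
        var records = new List<PoseRecord>();
        foreach (string line in File.ReadLines(csvPath))
        {
            string[] f = line.Split(',');
            records.Add(new PoseRecord
            {
                Stamp = double.Parse(f[0], CultureInfo.InvariantCulture),
                X     = double.Parse(f[1], CultureInfo.InvariantCulture),
                Y     = double.Parse(f[2], CultureInfo.InvariantCulture),
                Z     = double.Parse(f[3], CultureInfo.InvariantCulture),
                Alpha = double.Parse(f[4], CultureInfo.InvariantCulture),
                Beta  = double.Parse(f[5], CultureInfo.InvariantCulture),
                Gamma = double.Parse(f[6], CultureInfo.InvariantCulture)
            });
        }
        return records;
    }

    // For a chosen review instant, return the pose to show in the 3D interface
    // and the frame to seek to in the video interface.
    public (PoseRecord pose, int frameIndex) Locate(List<PoseRecord> poses, double stamp)
    {
        var pose = PoseLookup.FindNearest(poses, stamp);
        int frame = (int)Math.Round((stamp - VideoStartStamp) * VideoFps);
        return (pose, Math.Max(frame, 0));
    }
}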
Example 4
With reference to fig. 1 to 3, in this embodiment, the human-computer interaction sensing system provided in embodiment 1 is used to implement human-computer interaction sensing according to the following steps:
S1, the visual human-computer interaction interface is run, and the vision inspection robot model is placed at the access hole position in the gas insulated switch model; meanwhile, the vision inspection robot is placed at the access hole position in the gas insulated switchgear.
Start the upper computer and run the human-computer interaction interface (namely the Unity3D project). The vision inspection robot model was placed at the access hole position of the gas insulated switch when the Unity3D project was built, so its position can be confirmed here; if it deviates from the position of the actual vision inspection robot, a preliminary adjustment can be made. Thereafter the vision inspection robot model adjusts itself automatically according to the received robot position information.
Meanwhile, the vision inspection robot is placed at the access hole position in the gas insulated switchgear to be inspected, and the master control system and the ROS driver of the embedded device are started through the robot body controls or the remote control.
S2, the embedded equipment operates to control the robot body to act; meanwhile, the robot position information is generated according to the received video and attitude information, and the generated robot position information is sent to the visual human-computer interaction interface of the upper computer in a communication transmission mode.
The present embodiment generates robot position information according to the following steps:
and S21, collecting information, wherein the binocular vision sensor and the inertia measurement unit respectively collect relevant video information and robot posture information in the gas insulated switchgear in real time.
And S22, information is transmitted, and the data transmission assembly USB PHY module and the RS232 respectively transmit the video acquired in real time and the robot posture information to the main control system through the USB PHY module and the RS232 transceiver by the binocular vision sensor and the inertial measurement unit.
S23 information issuing, the main control system sends the received video and the robot pose information to the ROS, the ROS issues the received video and the robot pose information to topic information of the ROS, and the topic information is three ROS topic information of/camera _ left/image _ raw (left camera original image),/camera _ right/image _ raw (right camera original image) and/imu/imu _ data (inertial measurement unit information).
And S24, processing information, wherein the visual inertia SLAM algorithm VINS Fusion reads ROS topic information through (launch profile) and saves the ROS topic information in a profile stereo _ imu _ config. In addition, VINSFusion further creates two profiles, left.yaml, right.yaml, where left.yaml stores the calibration parameters of the left camera of the binocular vision sensor, and right.yaml stores the calibration parameters of the right camera of the binocular vision sensor. In the embodiment, the robot position information can be obtained by a conventional SLAM algorithm according to the video and inertial measurement unit information and by combining calibration parameters of a binocular sensor left camera and a binocular sensor right camera (reference documents Qin T, Li P, where S.Viss-mono: Aroust and versatile visual-initial state estimator [ J ]. IEEEtransactions on Robotics,2018,34(4): 1004-. Here, a vision _ rviz.launch file of the VINS Fusion is run, a vision _ node file is run at the same time, and a configuration file stereo _ imu _ config.yaml is designated, thereby obtaining pose information of the binocular vision inertial sensor, that is, positioning information of the entire robot (robot position information for short).
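For reference, the sliding-window state estimated by the cited VINS family of algorithms has, in outline, the following form (after Qin et al., 2018, cited above), where each IMU state x_k stacks position, velocity, orientation and the accelerometer and gyroscope biases, x_c^b is the camera-IMU extrinsic transform, and the λ are feature inverse depths:

% Sliding-window state of the visual-inertial estimator (after Qin et al. 2018)
\mathcal{X} = \left[\, x_0,\; x_1,\; \ldots,\; x_n,\; x_c^b,\; \lambda_0,\; \lambda_1,\; \ldots,\; \lambda_m \,\right],
\qquad
x_k = \left[\, p^w_{b_k},\; v^w_{b_k},\; q^w_{b_k},\; b_a,\; b_g \,\right]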
Then the ros_communication.launch file is run using the third-party ROS communication software package ROS#, opening a communication port at the Ubuntu end of the embedded device's main control system.
S3, the received robot position information is associated with the vision inspection robot model through the visual human-computer interaction interface, and the position of the robot model in the gas insulated switch model is displayed according to the robot position information, completing the human-computer visual interaction.
In the Unity3D project, the ROS# communication connection script RosConnector is added to the connection module Connector, and the IP address and port number of the Ubuntu end of the embedded main control system are set; this establishes the communication connection between the 3D project and the main control system, and the robot position information transmitted by the main control system is received in real time. Binding the robot position information name and the vision inspection robot model in the connection module associates the robot position information received by the upper computer with the vision inspection robot model.
In this way, through the robot position information received in real time by the connection module, the vision inspection robot model moves within the gas insulated switch model following the associated robot position information and is presented in the display module; the operator can see the position of the actual vision inspection robot in the gas insulated switch model, so that the actual robot's position information is interacted with visually through the virtual vision inspection robot model.
Example 5
With reference to fig. 4, in this embodiment, the human-computer interaction sensing system provided in embodiment 2 is used to implement human-computer interaction sensing according to the following steps:
S1, the visual human-computer interaction interface is run, and the vision inspection robot model is placed at the access hole position in the gas insulated switch model; meanwhile, the vision inspection robot is placed at the access hole position in the gas insulated switchgear.
Step S1 in the present embodiment is implemented in the same manner as in embodiment 4, and will not be explained in detail here.
S2, the embedded equipment operates to control the robot body to act; meanwhile, the robot position information is generated according to the received video and posture information, and the received video information and the generated robot position information are sent to the visual human-computer interaction interface of the upper computer in a communication transmission mode.
In this embodiment, the robot position information generation and transmission implementation is the same as that in embodiment 4, and will not be explained in detail here.
In this embodiment, the video information is pushed to the upper computer in a network video manner through an mjpg-streamer library at the Ubuntu end of the main control system, and the upper computer stores the received video information to a specified location (URL address).
Meanwhile, the master control system starts the robot position information transmission and the video information transmission simultaneously through the set starting module and sends the information to the upper computer.
S3, the received robot position information is associated with the vision inspection robot model through the visual human-computer interaction interface, the position of the robot model in the gas insulated switch model is displayed according to the robot position information, and the related video information of the gas insulated switch is displayed through the video interface, completing the human-computer visual interaction.
In this embodiment, the implementation of the visualized human-computer interaction interface is the same as that in embodiment 4, and is not explained in detail here.
In this embodiment, the video network interface reads and displays the video information by loading the URL address.
Because the robot position information and the video information are transmitted synchronously and both contain timestamp information, if a fault appears in the video at a certain moment, the corresponding robot position information can be found through the time information, and the position of the fault can thus be grasped. Real-time synchronous display of the robot's position and of the video at that position through the visual human-computer interaction interface and the video interface can therefore further improve the accuracy of judging the fault point position of the gas insulated switch and improve its visual inspection efficiency.
Example 6
With reference to fig. 5, in this embodiment, the human-computer interaction sensing system provided in embodiment 3 is used to implement human-computer interaction sensing according to the following steps:
S1, the visual human-computer interaction interface is run, and the vision inspection robot model is placed at the access hole position in the gas insulated switch model; meanwhile, the vision inspection robot is placed at the access hole position in the gas insulated switchgear.
S2, the embedded equipment operates to control the robot body to act; meanwhile, the robot position information is generated according to the received video and posture information, and the received video information and the generated robot position information are sent to the visual human-computer interaction interface of the upper computer in a communication transmission mode.
S3, the received robot position information is associated with the vision inspection robot model through the visual human-computer interaction interface, the position of the robot model in the gas insulated switch model is displayed according to the robot position information, and the related video information of the gas insulated switch is displayed through the video interface, completing the human-computer visual interaction.
The present embodiment is a further improvement on the human-computer interaction sensing operation of the gas insulated switch vision inspection robot provided in embodiment 5.
In step S3, the robot position information and the video information transmitted to the upper computer are stored in the first storage module and the second storage module, respectively, and the first storage module and the second storage module are associated with each other using the time stamps included in the robot position information and the video information.
When the video information and the robot position information need to be reviewed, the visual human-computer interaction interface associates the robot position information in the CSV file of the first storage module with the vision inspection robot model through the connection module. The upper computer then synchronously loads and displays the robot position information and the video information in the visual human-computer interaction interface and the video interface through the synchronous loading module or the associated loading module.
Therefore, the man-machine interaction interface and the video interface further realize synchronous loading and review of the position information and the video information of the robot, and the accuracy of fault point inspection is improved.
The main feature of the invention is a human-computer interaction perception method that feeds back the positioning information of the gas insulated switch vision inspection robot in real time. Compared with the traditional video-information interaction mode, it effectively guides the operator in mastering the robot's position information, thereby achieving control over the position information of fault points and improving the visual inspection efficiency of the gas insulated switch.
The above is only a preferred embodiment of the present invention, and is not intended to limit the present invention, and various modifications and changes will occur to those skilled in the art. Any modification, equivalent replacement, or improvement made within the spirit and principle of the present invention should be included in the protection scope of the present invention.

Claims (10)

1. A human-computer interaction perception system of a gas insulated switch vision inspection robot is characterized by comprising a vision inspection robot, an upper computer and a visual human-computer interaction interface, wherein the vision inspection robot is placed in a gas insulated switchgear to inspect a gas insulated switch;
the vision inspection robot comprises a robot body, a binocular vision inertial sensing module and embedded equipment, wherein the binocular vision inertial sensing module is installed on the robot body;
the binocular vision inertial sensing module is used for acquiring video and robot attitude information in real time and sending the acquired video and attitude information to the embedded equipment;
the embedded equipment is used for controlling the action of the robot body, generating robot position information according to the received video and attitude information, and sending the generated robot position information to a visual human-computer interaction interface of the upper computer in a communication transmission mode;
the visual human-computer interaction interface is provided with a gas insulated switch model and a visual inspection robot model positioned in the gas insulated switch model; and the visual human-computer interaction interface correlates the received robot position information with the visual inspection robot model and displays the position of the robot model in the gas insulated switch model according to the positioning information.
2. The human-computer interaction sensing system of the gas insulated switchgear vision inspection robot of claim 1, wherein the binocular vision inertial sensing module comprises a binocular vision sensor, an inertial measurement unit and an arc light source, the binocular vision sensor is used for acquiring video information in real time, the inertial measurement unit is used for acquiring attitude information of the robot in real time, and the arc light source is used for improving brightness in a gas insulated switchgear and enhancing quality of images shot by the binocular vision sensor.
3. The human-computer interaction perception system of the gas insulated switch vision inspection robot of claim 1, wherein the embedded device includes a main control board, a robot operating system, a data transmission assembly and a data processing system;
the robot operating system and the data processing system are arranged on a main control system of the main control board;
the robot operating system is used for controlling the robot body to act;
the data transmission assembly is respectively connected with the binocular vision inertial sensing module on the robot body and the master control system and is used for transmitting the video acquired by the binocular vision inertial sensing module in real time and the robot posture information to the master control system;
and the data processing system processes the video and the robot posture information acquired from the main control system to obtain the robot position information and sends the robot position information to a visual human-computer interaction interface in communication connection with the main control system.
4. The human-computer interaction sensing system of the gas insulated switchgear vision inspection robot as claimed in claim 3, wherein the data transmission assembly transmits the video and robot pose information acquired by the binocular vision inertial sensing module in real time to the robot operating system of the main control system, and the robot operating system issues the received video and robot pose information as topic information, which is read by the data processing system.
5. The human-computer interaction perception system of the gas insulated switch vision inspection robot of claim 1 or 4, wherein the data transmission component includes a USB PHY module for transmitting video information and an RS232 transceiver for transmitting robot pose information; the data processing system is based on a visual inertia SLAM algorithm.
6. The human-computer interaction perception system of the gas insulated switchgear vision inspection robot as claimed in claim 1, wherein the visual human-computer interaction interface includes a three-dimensional model folder, a connection module and a display module;
the three-dimensional model folder is used for placing a gas insulated switch model and a vision inspection robot model;
the connecting module is used for associating the received robot position information with the visual inspection robot model and setting the visual inspection robot model as an instantiation object;
the display module is used for displaying a gas insulated switch model and a vision inspection robot model which are placed in the three-dimensional model folder, and displaying the position of the robot model in the gas insulated switch model according to the positioning information.
7. The system according to claim 1, further comprising a video interface installed in the upper computer, wherein the embedded device transmits video information collected by the binocular vision inertial sensing module to the video interface, associates the position information of the robot in the visual human-computer interaction interface with the video information in the video interface, and implements synchronous display of the video interface and the human-computer interaction interface.
8. The system according to claim 7, wherein the upper computer is provided with a first storage module for storing the position information of the robot and a second storage module for storing the video information, and the first storage module and the second storage module are associated by storing a time stamp included in the position information of the robot and the video information.
9. A human-computer interaction perception method of a gas insulated switch vision inspection robot, characterized in that the human-computer interaction perception system of any one of claims 1 to 8 is used to perform the following steps:
S1, the visual human-computer interaction interface is run, and the vision inspection robot model is placed at the access hole position in the gas insulated switch model; meanwhile, the vision inspection robot is placed at the access hole position in the gas insulated switchgear;
S2, the embedded equipment operates to control the robot body to act; meanwhile, the robot position information is generated according to the received video and attitude information, and the generated robot position information is sent to the visual human-computer interaction interface of the upper computer in a communication transmission mode;
S3, the received robot position information is associated with the vision inspection robot model through the visual human-computer interaction interface, and the position of the robot model in the gas insulated switch model is displayed according to the positioning information to complete the human-computer visual interaction.
10. The human-computer interaction perception method of the gas insulated switch vision inspection robot as claimed in claim 9, characterized in that:
In step S2, the embedded device sends the robot position information and the video information to the video interface;
in step S3, the position information of the robot in the visual human-computer interaction interface is related to the video information in the video interface, so as to achieve the synchronous display of the video interface and the human-computer interaction interface.
CN202010508069.0A 2020-06-06 2020-06-06 Human-computer interaction sensing system and method for gas insulated switch visual inspection robot Active CN111673748B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202010508069.0A CN111673748B (en) 2020-06-06 2020-06-06 Human-computer interaction sensing system and method for gas insulated switch visual inspection robot

Publications (2)

Publication Number Publication Date
CN111673748A true CN111673748A (en) 2020-09-18
CN111673748B CN111673748B (en) 2022-05-13

Family

ID=72435232

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202010508069.0A Active CN111673748B (en) 2020-06-06 2020-06-06 Human-computer interaction sensing system and method for gas insulated switch visual inspection robot

Country Status (1)

Country Link
CN (1) CN111673748B (en)

Patent Citations (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN105222760A (en) * 2015-10-22 2016-01-06 一飞智控(天津)科技有限公司 The autonomous obstacle detection system of a kind of unmanned plane based on binocular vision and method
CN105743004A (en) * 2016-03-31 2016-07-06 广东电网有限责任公司中山供电局 Cluster management and control system for substation inspection robot
CN109623839A (en) * 2018-12-24 2019-04-16 西南交通大学 Power distribution station indoor equipment air-ground coordination inspection device and its method for inspecting
CN110246235A (en) * 2019-06-18 2019-09-17 广州供电局有限公司 A kind of power distribution room scene method for inspecting and system based on Hololens mixed reality technology
CN110103196A (en) * 2019-06-19 2019-08-09 广东电网有限责任公司 The robot for overhauling of GIS a kind of and the examination and repair system of GIS
CN111185937A (en) * 2020-01-02 2020-05-22 武汉瑞莱保能源技术有限公司 Nuclear power plant power distribution debugging robot system and operation method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
罗佳, 李伯方, 黎立 (Luo Jia, Li Bofang, Li Li): "Research on the application of auxiliary operation robots for power distribution switch cabinets" (配电开关柜辅助作业机器人应用研究), 《电气自动化》 (Electrical Automation) *

Cited By (5)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113146601A (en) * 2021-03-05 2021-07-23 南京信息工程大学 Modular robot capable of climbing pole
CN113146601B (en) * 2021-03-05 2023-05-12 南京信息工程大学 Modularized robot capable of climbing pole
CN114281235A (en) * 2021-12-22 2022-04-05 徐工汉云技术股份有限公司 Pump truck posture display and control method based on 3D model
CN114281235B (en) * 2021-12-22 2024-05-17 徐工汉云技术股份有限公司 Pump truck posture display and control method based on 3D model
CN117718974A (en) * 2024-02-08 2024-03-19 成都建工第三建筑工程有限公司 Remote operation control system of light partition board mounting robot

Also Published As

Publication number Publication date
CN111673748B (en) 2022-05-13

Similar Documents

Publication Publication Date Title
CN111673748B (en) Human-computer interaction sensing system and method for gas insulated switch visual inspection robot
CA1324647C (en) Instruction system of remote-control robot
JP6551184B2 (en) Simulation apparatus, simulation method, and simulation program
CN102120325B (en) Novel remote operation far-end robot control platform and method
CN110047150B (en) Complex equipment operation on-site simulation system based on augmented reality
CN111185937B (en) Nuclear power plant power distribution debugging robot system and operation method
CN109434870A (en) A kind of virtual reality operation system for robot livewire work
Aschenbrenner et al. Artab-using virtual and augmented reality methods for an improved situation awareness for telemaintenance
CN110154029B (en) Online control and simulation test system and method for robot based on LABVIEW
CN110977981A (en) Robot virtual reality synchronization system and synchronization method
CN110751734B (en) Mixed reality assistant system suitable for job site
CN110421559B (en) Teleoperation method and motion track library construction method of distribution network live working robot
CN111897239A (en) Bidirectional digital analog real-time simulation system and simulation method
CN116737483B (en) Assembly test interaction method, device, equipment and storage medium
CN113421470A (en) Teleoperation simulation training system and teleoperation simulation training method for space manipulator
Liu et al. Research on real-time monitoring technology of equipment based on augmented reality
CN117060977A (en) Satellite integrated hardware-in-the-loop test system and control method thereof
JP2004151976A (en) Simulation device
CN115284247A (en) Live working robot master-slave teleoperation method and system based on heterogeneous master hand
CN115616987A (en) Mine equipment digital twin system construction method based on mixed reality
CN112233208B (en) Robot state processing method, apparatus, computing device and storage medium
CN202079595U (en) Novel control platform for tele-operation of remote robot
CN114218702B (en) Virtual visual simulation system for space on-orbit control
CN114633260B (en) Method and system for remotely controlling operation of industrial robot
CN214265594U (en) Robot movement track planning device

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant