CN112634318B - Teleoperation system and method for underwater maintenance robot - Google Patents


Info

Publication number: CN112634318B (application CN202011642497.9A)
Authority: CN (China)
Prior art keywords: rov, model, scene, virtual, reconstruction
Legal status: Active (granted)
Other languages: Chinese (zh)
Other versions: CN112634318A
Inventors: 解翠, 李健, 董军宇, 张述, 时正午, 孙竟豪, 徐佳昊, 吕清轩, 亓琳, 范浩
Original and current assignee: Ocean University of China
Application filed by Ocean University of China; priority to CN202011642497.9A; published as application CN112634318A and granted as CN112634318B

Classifications

    • G06T7/207 — Image analysis; analysis of motion for motion estimation over a hierarchy of resolutions
    • G06T17/20 — Three-dimensional [3D] modelling; finite element generation, e.g. wire-frame surface description, tessellation
    • G06T7/73 — Image analysis; determining position or orientation of objects or cameras using feature-based methods
    • G06T2207/10028 — Image acquisition modality: range image; depth image; 3D point clouds
    • G06T2207/30244 — Subject or context of image processing: camera pose

Abstract

The invention relates to the technical field of teleoperation of underwater maintenance robots and provides a teleoperation system and method for an underwater maintenance robot. The teleoperation system includes a working end and an operating end. The working end comprises an ROV body and, mounted on it, a mechanical claw, a panoramic image acquisition device, a scene reconstruction device, and an environment sensing and measuring device. The operating end comprises a VR helmet and a remote control device; the two ends are connected for communication by a communication device. The VR helmet presents the remote underwater real-scene image, a virtual ROV model, a gripper model, a three-dimensional model obtained through three-dimensional reconstruction, and enhanced control prompts, providing the operator with an intuitive, immersive interactive remote-control interface. This helps the operator judge the position of the target object in the three-dimensional scene more accurately, and together with the proposed ROV motion prediction method effectively improves control accuracy, efficiency, safety, and intuitive continuity, so that applications with higher control-accuracy requirements can be met.

Description

Teleoperation system and method for underwater maintenance robot
Technical Field
The invention belongs to the technical field of teleoperation of underwater maintenance robots, and particularly relates to a teleoperation system and method of an underwater maintenance robot.
Background
With the continuous development of robot technology, more and more robots are deployed in extreme environments in place of humans, and remotely controlling robots to complete tasks effectively safeguards operators' lives. In the marine field in particular, there is high demand for Remotely Operated Vehicles (ROVs), which are needed for tasks such as seabed resource exploration, subsea equipment maintenance, and deep-sea archaeology. The quality of ROV work depends heavily on the quality of the remote control system. Teleoperation is one of the hot topics in robotics, and the human-computer interaction interface and operational latency are its key difficulties. In recent years, the rapidly developing Mixed Reality (MR) technology has offered a better solution to these teleoperation problems. Mixed reality builds on virtual reality by introducing information from the real scene into the virtual environment and establishing an interactive feedback loop for the user in a world that fuses the virtual and the real, effectively increasing the realism and telepresence of the experience and helping to improve the convenience of operation.
At present, the most common scheme in underwater applications is to mount a conventional or action camera on the ROV and transmit the captured underwater footage directly back for the operator to view while driving the ROV. A more capable scheme is the teleoperated unmanned submersible for underwater target detection and disposal proposed in Chinese patent publication No. CN 102975823A, which uses a low-light camera and a detection sonar to sense the underwater environment in clear and turbid waters respectively, and can observe a wide surrounding range with the aid of a pan-tilt head.
In implementing the above technical solutions, the applicant found that they have at least the following disadvantage:
although simple and inexpensive, they can hardly reflect accurately the relative position of the ROV and the work target in three-dimensional space, and therefore can hardly satisfy applications such as underwater maintenance that demand high manipulation accuracy.
Disclosure of Invention
An embodiment of the present invention provides a teleoperation system for an underwater maintenance robot, which aims to solve the problems mentioned in the background art.
The embodiment of the invention is realized in such a way that the teleoperation system of the underwater maintenance robot comprises:
a working end; the working end comprises an ROV body; the ROV body is provided with a mechanical claw, a panoramic image acquisition device and a scene reconstruction device; the mechanical claw is used for performing maintenance operation on a target object; the panoramic image acquisition device is used for acquiring an underwater real scene image which takes the panoramic image acquisition device as a center and contains a surrounding 360-degree scene; the scene reconstruction device is used for performing three-dimensional reconstruction on the underwater main operation area;
an operation end; the operation end comprises a VR helmet and a remote control device; the VR helmet is used for displaying an underwater real scene image acquired by the panoramic image acquisition device, a three-dimensional reconstruction model obtained by three-dimensional reconstruction performed by the scene reconstruction device, and a virtual ROV model and a virtual gripper model generated by equal-proportion modeling according to an ROV body and a gripper; the remote control device is used for remotely controlling the running states of the ROV body and the mechanical claw;
and the communication device is used for communication connection between the working end and the operation end.
Preferably, the communication device is a server; the server is in communication connection with the optical transceiver arranged on the ROV body.
Preferably, the gripper carries a monocular camera for assisting in judging the alignment of the gripper with the work target, and a pressure sensor for measuring the pressure generated when the gripper grips.
Preferably, the remote control device is selected from a control handle with a vibration function, a data glove, a remote control joystick, a gesture recognition device or a somatosensory device.
Preferably, the panoramic image acquisition device comprises two fisheye cameras.
Preferably, the scene reconstruction apparatus includes:
a binocular camera and a laser sensor; the binocular camera and the laser sensor are used for performing three-dimensional reconstruction on a working area and generating a three-dimensional reconstruction model through the server; and the generated three-dimensional reconstruction model and the underwater real scene image acquired by the panoramic image acquisition device are superposed and displayed by the VR helmet.
Preferably, the binocular camera and the laser sensor are used for three-dimensional reconstruction of the working area, and the generation of the three-dimensional reconstruction model by the server specifically includes the following steps:
calibrating the binocular camera and the laser plane of the laser sensor;
mapping and localizing the working area with the binocular camera, generating a three-dimensional point cloud and the binocular camera's pose data in real time during mapping;
performing three-dimensional reconstruction of the working area with the laser sensor, and matching, on the server, the resulting laser-scan point cloud against the three-dimensional point cloud generated by the binocular camera to obtain a three-dimensional point cloud model;
and meshing the three-dimensional point cloud model to generate the three-dimensional reconstruction model shown in the VR helmet.
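The matching step above hinges on bringing the laser-scan cloud and the binocular cloud to a common scale before merging. A minimal sketch in Python/NumPy (the patent does not prescribe a matching algorithm; the centroid-spread scale estimate used here is an illustrative stand-in):

```python
import numpy as np

def estimate_mapping_factor(binocular_pts, laser_pts):
    """Estimate the scale ('mapping factor') that brings the laser-scanned
    cloud into the metric scale of the binocular cloud, from the ratio of
    RMS spreads about each cloud's centroid."""
    def rms_spread(pts):
        centered = pts - pts.mean(axis=0)
        return np.sqrt((centered ** 2).sum(axis=1).mean())
    return rms_spread(binocular_pts) / rms_spread(laser_pts)

def match_point_clouds(binocular_pts, laser_pts):
    """Scale the laser cloud, align centroids, and merge both clouds
    into a single three-dimensional point cloud model."""
    s = estimate_mapping_factor(binocular_pts, laser_pts)
    laser_scaled = (laser_pts - laser_pts.mean(axis=0)) * s + binocular_pts.mean(axis=0)
    return np.vstack([binocular_pts, laser_scaled])

# Synthetic check: the laser cloud is the binocular cloud at half scale.
rng = np.random.default_rng(0)
binocular = rng.normal(size=(200, 3))
laser = binocular * 0.5
merged = match_point_clouds(binocular, laser)
```

In a real pipeline the merged cloud would then be meshed (the patent's final step); a rigid-registration method such as ICP would normally refine the alignment as well.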
Preferably, the teleoperation system further comprises an environment sensing and measuring device for acquiring the pose of the ROV body in the underwater scene and the distance information between the ROV body and surrounding objects, and the environment sensing and measuring device comprises:
the ultrasonic ranging sensor is used for detecting whether an obstacle exists in front of the ROV body and detecting the distance between the ROV body and the obstacle when the obstacle exists;
and the inertial measurement unit is used for acquiring the attitude information of the ROV body.
Another object of an embodiment of the present invention is to provide a teleoperation method for an underwater maintenance robot, including:
acquiring an underwater real scene image which takes a panoramic image acquisition device as a center and contains a surrounding 360-degree scene through the panoramic image acquisition device;
performing, through a scene reconstruction device, three-dimensional reconstruction of the region where the mechanical claw carries out maintenance operations;
displaying, through a VR helmet, the underwater real-scene image acquired by the panoramic image acquisition device, the virtual ROV model and virtual gripper model generated by equal-proportion modeling of the ROV body and gripper, and the three-dimensional scene model obtained by the scene reconstruction device;
and controlling, through a remote control device, the running states of the ROV body and the gripper according to the underwater real-scene image, virtual ROV model, virtual gripper model, and three-dimensional scene model displayed in the VR helmet, so that the gripper can maintain the target object.
Preferably, the teleoperation method further comprises the steps of:
acquiring pose information of an ROV body and distance information between the pose information and surrounding objects through an environment sensing and measuring device;
converting the information acquired by the environment sensing and measuring device into enhanced operation prompt information to be displayed in a VR helmet, and designing a motion prediction method of an ROV body according to the information acquired by the environment sensing and measuring device;
the method for predicting the motion of the ROV body comprises the following steps:
placing two virtual ROV models in the three-dimensional space of the VR helmet: one, called the calibration ROV, maps the pose of the ROV body in the real environment; the other, called the prediction ROV, predicts the pose change of the ROV body in the real scene;
sending each operation instruction to both the ROV body and the prediction ROV; after the instruction is issued, the environment sensing and measuring device acquires the pose of the ROV body, while the prediction ROV simulates the motion of the ROV body in the virtual scene and leaves a motion trail there;
and moving the calibration ROV according to the pose information acquired by the environment sensing and measuring device, comparing it during the motion with the trail left by the prediction ROV, and calculating from that comparison whether the user may be allowed to continue operating.
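The trail comparison in the last step can be sketched as follows. The deviation threshold and the nearest-point metric are illustrative assumptions; the patent does not specify how the comparison is computed:

```python
import numpy as np

# Hypothetical deviation threshold (metres); not specified in the patent.
MAX_DEVIATION_M = 0.5

def may_continue(predicted_track, measured_poses, max_dev=MAX_DEVIATION_M):
    """Compare the calibration ROV's measured positions against the nearest
    points on the prediction ROV's trail; operation is allowed to continue
    only while the worst deviation stays under max_dev."""
    predicted = np.asarray(predicted_track, dtype=float)
    measured = np.asarray(measured_poses, dtype=float)
    # distance from each measured pose to its closest predicted trail point
    d = np.linalg.norm(measured[:, None, :] - predicted[None, :, :], axis=2)
    worst = d.min(axis=1).max()
    return bool(worst <= max_dev)

track = [[0, 0, 0], [1, 0, 0], [2, 0, 0]]   # prediction ROV's trail
good = [[0.1, 0.0, 0.0], [1.9, 0.1, 0.0]]   # measured poses near the trail
bad = [[0.0, 2.0, 0.0]]                      # measured pose that drifted away
```

A finer implementation would interpolate between trail points and compare orientations as well as positions.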
The embodiment of the invention provides a teleoperation system of an underwater maintenance robot, which comprises: a working end; the working end comprises an ROV body; the ROV body is provided with a mechanical claw, a panoramic image acquisition device and a scene reconstruction device; the mechanical claw is used for performing maintenance operation on a target object; the panoramic image acquisition device is used for acquiring an underwater real scene image which takes the panoramic image acquisition device as a center and contains a surrounding 360-degree scene; the scene reconstruction device is used for performing three-dimensional reconstruction on the underwater main operation area; an operation end; the operation end comprises a VR helmet and a remote control device; the VR helmet is used for displaying an underwater real scene image acquired by the panoramic image acquisition device, a three-dimensional reconstruction model obtained by three-dimensional reconstruction performed by the scene reconstruction device, and a virtual ROV model and a virtual gripper model generated by equal-proportion modeling according to an ROV body and a gripper; the remote control device is used for remotely controlling the running states of the ROV body and the mechanical claw; and the communication device is used for communication connection between the working end and the operation end.
Compared with the prior art, the invention presents in the VR helmet the remote underwater real-scene image, the virtual ROV model, the gripper model, the work-scene model obtained through three-dimensional reconstruction, and enhanced control prompts, providing the operator with an intuitive, immersive interactive remote-control interface. This helps the operator judge the position of the target object in the three-dimensional scene more accurately, and together with the proposed ROV motion prediction method effectively improves control precision, efficiency, safety, and intuitive continuity, so that applications with higher control-precision requirements can be met.
Drawings
Fig. 1 is a schematic structural diagram of a teleoperation system of an underwater maintenance robot according to an embodiment of the present invention;
fig. 2 is a schematic diagram of a teleoperation system including a specific composition of robotic arm portions according to an embodiment of the present invention;
fig. 3 is a schematic diagram of a teleoperation system including a specific component of a panoramic image acquisition apparatus according to an embodiment of the present invention;
fig. 4 is a schematic diagram of a teleoperation system including a specific component of a scene reconstruction device according to an embodiment of the present invention;
fig. 5 is a schematic diagram of a teleoperation system including a specific component of an environment sensing and measuring device according to an embodiment of the present invention;
fig. 6 is a functional block diagram of a teleoperation system of an underwater maintenance robot according to an embodiment of the present invention;
FIG. 7 is a flowchart illustrating the steps of a method for teleoperation of an underwater maintenance robot according to an embodiment of the present invention;
fig. 8 is a schematic diagram illustrating a method for predicting motion of an ROV body according to an embodiment of the present invention;
fig. 9 is a flowchart of an algorithm of a method for predicting the motion of an ROV body according to an embodiment of the present invention.
Detailed Description
In order to make the objects, technical solutions and advantages of the present invention more apparent, the present invention is described in further detail below with reference to the accompanying drawings and embodiments. It should be understood that the specific embodiments described herein are merely illustrative of the invention and are not intended to limit the invention.
Specific implementations of the present invention are described in detail below with reference to specific embodiments.
Example 1
As shown in fig. 1, a teleoperation system of an underwater maintenance robot according to an embodiment of the present invention includes:
a working end; the working end comprises an ROV body; the ROV body is provided with a mechanical claw, a panoramic image acquisition device and a scene reconstruction device; the mechanical claw is used for performing maintenance operation on a target object; the panoramic image acquisition device is used for acquiring an underwater real scene image which takes the panoramic image acquisition device as a center and contains a surrounding 360-degree scene; the scene reconstruction device is used for performing three-dimensional reconstruction on the underwater main operation area;
an operation end; the operation end comprises a VR helmet and a remote control device; the VR helmet is used for displaying an underwater real scene image acquired by the panoramic image acquisition device, a three-dimensional reconstruction model obtained by three-dimensional reconstruction performed by the scene reconstruction device, and a virtual ROV model and a virtual gripper model generated by equal-proportion modeling according to an ROV body and a gripper; the remote control device is used for remotely controlling the running states of the ROV body and the mechanical claw;
and the communication device is used for communication connection between the working end and the operation end.
The working principle of this embodiment is as follows. When the teleoperation system operates, the panoramic image acquisition device captures an image centred on itself that contains the surrounding 360-degree scene, and the scene reconstruction device three-dimensionally reconstructs the working area where the mechanical claw performs maintenance. The VR helmet then displays the underwater real-scene image acquired by the panoramic image acquisition device (i.e. the remote underwater real-scene picture), the three-dimensional reconstruction model produced by the scene reconstruction device, and the virtual ROV model and virtual gripper model generated by equal-proportion modeling of the ROV body and mechanical claw. Based on the positional relationships among these elements shown in the VR helmet, the operator can intuitively and accurately remote-control the movement of the ROV body and the actions of the mechanical claw through the remote control device, so as to carry out maintenance on the target object.
Compared with the prior art, displaying the remote underwater real-scene picture, the three-dimensional reconstruction model, the virtual ROV model, and the virtual gripper model in the VR helmet provides the operator with an intuitive, immersive interactive remote-control interface, accurately reflects the relative position of the target object in three-dimensional space, effectively reduces the operator's cognitive burden, and improves control efficiency and precision.
As shown in fig. 2, as a preferred embodiment of the present invention, the communication device is a server; the server is in communication connection with the optical transceiver arranged on the ROV body.
Specifically, communication between the server and the optical transceiver on the ROV body can use wired or wireless modes such as optical fiber, Wi-Fi, or Bluetooth as required. In this embodiment, optical-fiber communication is preferred.
As shown in fig. 2, in a preferred embodiment of the present invention, the gripper carries a monocular camera for assisting in judging the alignment of the gripper with the work target, and a pressure sensor for measuring the pressure generated when the gripper grips.
Specifically, the pressure sensor is installed inside the jaw of the mechanical claw to measure the pressure generated when it grips. The monocular camera is mounted at the end of the mechanical claw; when the claw begins a maintenance operation, the image the camera captures helps judge whether the claw is aligned with the target object, improving the accuracy and speed of the maintenance work.
As a preferred embodiment of the present invention, the remote control device is selected from a control handle with a vibration function, a data glove, a remote control joystick, a gesture recognition device, or a motion sensing device.
The remote control device can be chosen from various devices. In this embodiment a remote control handle is preferred, ideally one with vibration feedback (a VR helmet is usually supplied with such handles, which can be used directly). The grip pressure measured by the pressure sensor on the gripper is transmitted back to the server and converted into a vibration amplitude of the handle, so that the operator can intuitively sense whether the gripper has grasped the target object and adopt a corresponding operation strategy.
Alternatively, a data glove can be used: it converts the pressure-sensor data into the force felt when bending the fingers, letting the operator sense the gripper's grasp more intuitively and improving the operating experience.
Of course, a remote control joystick, data glove, gesture recognition device, or motion sensing device may also be used, but a remote control handle or joystick is generally chosen for reasons of economy.
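The pressure-to-vibration conversion described above amounts to a clamped linear mapping. A hedged sketch (the `p_min`/`p_max` limits are invented for illustration, not taken from the patent):

```python
def pressure_to_vibration(pressure_n, p_min=0.5, p_max=20.0):
    """Map gripper pressure (newtons) to a handle vibration amplitude in
    [0, 1]. Below p_min there is no feedback; at or above p_max the
    amplitude saturates. p_min and p_max are illustrative values only."""
    if pressure_n <= p_min:
        return 0.0
    if pressure_n >= p_max:
        return 1.0
    return (pressure_n - p_min) / (p_max - p_min)
```

A data glove would use the same mapping, with the output driving finger resistance instead of handle vibration.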
As shown in fig. 3, the panoramic image capturing apparatus includes two fisheye cameras as a preferred embodiment of the present invention.
Specifically, a panoramic camera can be formed from two fisheye cameras. By stitching the pictures shot by the two fisheye cameras, a video picture centred on the panoramic camera and containing the surrounding 360-degree scene is generated, transmitted directly back to the server, and displayed in the VR helmet as the main interactive interface. Wearing the VR helmet, the operator can observe the 360-degree underwater real environment around the ROV body.
By showing the ROV body's surroundings through a panoramic camera built from two fisheye cameras, this embodiment lets the operator directly observe, from the local site, the state and changes of the remote underwater work scene in all 360 degrees. This strengthens the immersion of the teleoperation system, reduces the operator's cognitive burden, and avoids the extra latency of, for example, controlling a pan-tilt head to rotate.
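The splicing of the two fisheye pictures into one panorama can be illustrated, in highly simplified form, by cross-fading the overlapping columns of two already-unwarped 180-degree strips; real fisheye calibration and unwarping are omitted from this sketch:

```python
import numpy as np

def blend_halves(front, back, overlap):
    """Join two 180-degree images (already unwarped to equirectangular
    strips of equal height) into one 360-degree panorama, linearly
    cross-fading over `overlap` columns at the seam."""
    alpha = np.linspace(1.0, 0.0, overlap)[None, :, None]  # fade front -> back
    seam = front[:, -overlap:] * alpha + back[:, :overlap] * (1 - alpha)
    return np.hstack([front[:, :-overlap], seam, back[:, overlap:]])

# Tiny synthetic halves: a uniformly grey front and a brighter back image.
front = np.full((4, 8, 3), 100.0)
back = np.full((4, 8, 3), 200.0)
pano = blend_halves(front, back, overlap=4)
```

A production stitcher would also blend the second seam (where the panorama wraps around) and correct exposure differences between the two lenses.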
As shown in fig. 4, as a preferred embodiment of the present invention, the scene reconstructing apparatus includes:
a binocular camera and a laser sensor; the binocular camera and the laser sensor are used for performing three-dimensional reconstruction on a working area and generating a three-dimensional reconstruction model through the server; and the generated three-dimensional reconstruction model and the underwater real scene image acquired by the panoramic image acquisition device are superposed and displayed by the VR helmet.
This embodiment performs three-dimensional reconstruction of the main underwater work scene with a binocular camera and a laser sensor. Because depth information is difficult to extract from pictures of underwater scenes, reconstruction cannot rely on a binocular or depth camera alone, so a laser sensor assists the scanning. Considering that blue-green light propagates farther underwater than red light, a green laser sensor is chosen.
As a preferred embodiment of the present invention, the binocular camera and the laser sensor are used to perform three-dimensional reconstruction on the working area, and the generating of the three-dimensional reconstruction model by the server specifically includes the following steps:
firstly, the binocular camera and the laser plane are calibrated offline. Binocular calibration mainly obtains the camera's intrinsic and extrinsic parameters, from which images with relatively little distortion can be produced; laser-plane calibration is mainly used to solve correctly for the three-dimensional coordinates of the target object during laser reconstruction;
secondly, the working area is mapped and localized online with the binocular camera. A three-dimensional point cloud is generated by extracting feature information from the captured images, the pose of the binocular camera is estimated in real time during reconstruction, and this information is transmitted to the server for storage and for the next stage of computation;
then the working area is three-dimensionally reconstructed with the laser sensor while matching against the binocular camera's previously recorded pose information. Laser-scan point cloud information is computed from the laser lines extracted in each frame of the video sequence by laser triangulation combined with the binocular camera's intrinsic parameters, and each laser line is transformed to a common viewpoint using the previously obtained camera poses, recovering an accurate semi-dense three-dimensional point cloud of the target object. By introducing a mapping factor, the three-dimensional point cloud extracted from the binocular camera and the semi-dense point cloud obtained by laser scanning are scaled against each other for deviation correction, finally producing a high-precision, true-scale three-dimensional point cloud model;
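Laser triangulation as used above recovers each laser-line pixel's 3D position by intersecting the camera ray through that pixel with the calibrated laser plane. A minimal sketch with illustrative intrinsics (not the patent's calibration values):

```python
import numpy as np

def triangulate_laser_pixel(u, v, K, plane_n, plane_d):
    """Recover the 3D point for one laser-line pixel (u, v) by intersecting
    the camera ray through that pixel with the calibrated laser plane
    n . X = d (camera coordinates, pinhole model with intrinsics K)."""
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])  # ray direction (z = 1)
    t = plane_d / (plane_n @ ray)                   # ray-plane intersection
    return t * ray

# Illustrative intrinsics and a laser plane one metre ahead of the camera.
K = np.array([[800.0, 0.0, 320.0],
              [0.0, 800.0, 240.0],
              [0.0, 0.0, 1.0]])
n, d = np.array([0.0, 0.0, 1.0]), 1.0               # the plane z = 1
P = triangulate_laser_pixel(400.0, 240.0, K, n, d)
```

Repeating this for every laser pixel in a frame, then transforming each frame's points with the binocular camera's pose, yields the semi-dense cloud described above.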
in addition, to ease subsequent point cloud processing and manipulation, real-time semantic segmentation is introduced during reconstruction. Based on its results, pixels with the same gradient in each frame can be clustered, so that the objects in the whole scene are divided into separate per-object point clouds during reconstruction.
The pose of the ROV body in the working area is determined from the binocular camera's pose matching, assigned to the pre-created virtual ROV model, and the model is placed in the three-dimensional space of the VR helmet. Meanwhile, the three-dimensional point cloud model obtained above undergoes smoothing, down-sampling, and similar operations to reduce the influence of outliers and noise on the reconstruction. In addition, when the laser lines are extracted, the colour of objects in the image is also extracted to give the point cloud colour information; surface normals are then computed and triangulated reconstruction performed to obtain a complete three-dimensional reconstruction model. Finally, the model can be drawn in the three-dimensional space of the VR helmet and superimposed on the image acquired by the panoramic image acquisition device.
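The down-sampling mentioned above is commonly done with a voxel grid that replaces each cell's points by their centroid; the patent does not prescribe a method, so the following is an illustrative sketch:

```python
import numpy as np

def voxel_downsample(points, voxel=0.05):
    """Reduce outlier and noise influence before meshing by replacing all
    points inside each voxel of side `voxel` with their centroid (a common
    simple down-sampling scheme, not one specified by the patent)."""
    keys = np.floor(points / voxel).astype(np.int64)
    _, inv = np.unique(keys, axis=0, return_inverse=True)
    inv = inv.reshape(-1)
    counts = np.bincount(inv)
    out = np.zeros((counts.size, 3))
    for dim in range(3):
        out[:, dim] = np.bincount(inv, weights=points[:, dim]) / counts
    return out

rng = np.random.default_rng(1)
cloud = rng.uniform(0, 1, size=(1000, 3))       # synthetic dense cloud
small = voxel_downsample(cloud, voxel=0.25)     # at most 4x4x4 = 64 cells
```

Libraries such as Open3D provide equivalent routines together with the normal estimation and triangulation steps.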
As shown in fig. 5, as a preferred embodiment of the present invention, the teleoperation system further includes an environment sensing and measuring device for acquiring the pose of the ROV body in the underwater scene and the distance information between the ROV body and the surrounding objects, the environment sensing and measuring device includes:
the ultrasonic ranging sensor is used for detecting whether an obstacle exists in front of the ROV body and detecting the distance between the ROV body and the obstacle when the obstacle exists;
and the inertial measurement unit is used for acquiring the attitude information of the ROV body.
Specifically, because the underwater environment is complex and changeable, an ultrasonic ranging sensor is installed at the front of the ROV body to detect whether there is an obstacle ahead and how far away it is. The data generated by the ultrasonic sensor are returned to the server, converted into specific auxiliary prompts, and displayed through the VR helmet to assist the operator's judgment; when the detected object is too close, a warning is issued to alert the operator.
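The conversion of ultrasonic readings into auxiliary prompts and warnings can be sketched with illustrative thresholds (the patent gives no specific distances):

```python
# Illustrative thresholds (metres); the patent does not specify values.
WARN_M, CAUTION_M = 0.5, 2.0

def obstacle_prompt(distance_m):
    """Convert an ultrasonic range reading into the enhanced prompt shown
    in the VR helmet: None when nothing needs showing, a distance readout
    when an obstacle is in range, a warning when it is too close."""
    if distance_m is None:          # sensor reports no echo / no obstacle
        return None
    if distance_m < WARN_M:
        return f"WARNING: obstacle {distance_m:.1f} m ahead"
    if distance_m < CAUTION_M:
        return f"Obstacle ahead: {distance_m:.1f} m"
    return None
```

The returned string would be rendered by the helmet's UI layer; the warning branch could additionally trigger handle vibration.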
The inertial measurement unit is mounted at the position of the binocular camera and can be used to correct the pose of the virtual ROV model.
In summary, as shown in fig. 6, the ROV body mainly carries three functional modules, namely panoramic image acquisition, scene reconstruction, and environment sensing and measurement, corresponding respectively to the panoramic image acquisition device, scene reconstruction device, and environment sensing and measuring device of fig. 1. These modules acquire the panoramic image of the underwater scene, the three-dimensional reconstruction model of the main working area, the pose of the ROV body, and its distance to surrounding objects. The ROV body is also provided with a motion prediction module, which performs equal-proportion modeling of the real ROV body and mechanical claw to generate the virtual ROV and gripper models and, combined with the ROV pose acquired by the environment sensing and measuring module, realizes motion prediction of the ROV body. The information acquired by the environment sensing and measuring module is fed to the auxiliary enhanced-control module, where a suitable UI design presents enhanced control prompts to the operator. The panoramic image, the three-dimensionally reconstructed work-scene model, the virtual ROV body and gripper models, and the enhanced control prompts thus form a mixed-reality interactive scene presented in the VR helmet; the operator observes it by wearing the helmet and controls the ROV through the remote control module.
Compared with the prior art, the present method and system display the remote underwater real scene image, the virtual ROV model, the mechanical claw model, the three-dimensional reconstruction model, and the enhanced-control prompt information in the VR helmet, providing an intuitive and immersive interactive remote control interface. This helps the operator judge the position of the target object in the three-dimensional scene more accurately; together with the designed motion prediction module, it effectively improves control precision, efficiency, safety, and intuitive continuity, and can satisfy applications with higher control-precision requirements.
Example 2
As shown in fig. 7, an embodiment of the present invention further provides a teleoperation method of an underwater maintenance robot, the teleoperation method including:
S100, after the system is started, the panoramic image acquisition device is activated and shoots the real scene around the ROV body in real time; the video stream is returned to the server and transmitted to the VR helmet for display. The viewpoint representing the user is placed at the center of the panoramic video, so that real-time changes in the real scene can be observed comprehensively in all directions. At the same time, the teleoperation system enters the mode-one state.
The mode-one state: the operator's viewing direction is fixed, and rotating the head does not change it, so the view always remains aligned with the forward direction of the ROV body and control can proceed in an orderly way without confusion. The viewing direction can be rotated left, right, up, and down with the remote control device to observe from several directions, but it automatically switches back to the initial forward direction after the operation ends.
Because the underwater environment is complex and changeable, to help the operator avoid obstacles at any time, the ultrasonic ranging sensor in front of the ROV body works continuously: it detects in real time whether there is an obstacle ahead of the ROV body and returns the distance information to the server, and this information is always shown in the VR helmet in the form of an auxiliary prompt. When an object ahead is very close, warning information is issued to prompt the operator to slow down or stop.
The auxiliary prompt information means that enhanced-control prompts, such as the distance information returned by the ultrasonic sensor, are displayed as UI text in front of the view inside the VR helmet; the UI text rotates with the view, so the operator can see the enhanced-control prompt information at all times.
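As a minimal illustration of the auxiliary prompt described above, the following sketch maps a forward ultrasonic range reading to the kind of UI text and warning shown in the VR helmet. The threshold values, message strings, and function name are assumptions for illustration, not values from the patent:

```python
def range_to_prompt(distance_m, warn_at=1.0, stop_at=0.3):
    """Map a forward range reading (metres) to a HUD prompt string.
    `warn_at` and `stop_at` are assumed thresholds; `None` means no echo."""
    if distance_m is None:          # no echo: nothing detected in range
        return "Forward: clear"
    if distance_m <= stop_at:       # dangerously close: hard warning
        return f"WARNING: obstacle {distance_m:.2f} m ahead - stop"
    if distance_m <= warn_at:       # approaching: advise slowing down
        return f"Caution: obstacle {distance_m:.2f} m ahead - slow down"
    return f"Forward: obstacle at {distance_m:.2f} m"
```

On each sensor update the returned string would simply replace the UI text drawn in front of the operator's view.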
S200, after the ROV body is controlled to move to the vicinity of a working area using the remote control device, three-dimensional reconstruction is started to meet the requirement of more precise operation: the scene reconstruction device performs three-dimensional reconstruction of the working area, and the reconstruction result is drawn in the virtual environment in grid form. This specifically comprises the following steps:
s201, starting a picture establishing mode of a binocular camera, controlling an ROV body to perform mobile scanning around a working area, generating three-dimensional point cloud and pose data of the binocular camera in real time in the scanning process, and storing the generated three-dimensional point cloud data and the pose data into a server;
s202, starting a repositioning mode of the binocular camera, starting the laser sensor and starting a reconstruction function, similarly controlling the ROV body to perform mobile scanning around a working area, storing generated laser scanning point cloud information into a server, and performing matching calculation with three-dimensional point cloud data extracted by the binocular camera to obtain an accurate and real three-dimensional point cloud model. Gridding the three-dimensional point cloud model through a corresponding algorithm, and finally drawing a scene grid in a three-dimensional space of the VR helmet according to real three-dimensional coordinates;
S203, repeating step S202 until the grid reconstruction is complete;
S204, closing the reconstruction function after reconstruction is finished.
The binocular camera repositioning mode computes, from the camera's current image, whether the pose can be matched against a previously recorded camera pose; if the match succeeds, the matched camera pose is transmitted to the virtual ROV model.
Gridding the three-dimensional point cloud model means that, after denoising and other processing of the laser-scanned point cloud, grid computation is applied to obtain the grid information; the reconstructed model is then colored according to the collected color information, and finally the scene grid is drawn in the VR helmet.
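The gridding and coloring step can be illustrated with a simplified voxel-binning sketch: points are quantized into grid cells and each occupied cell is assigned the average of the collected colors. This is an assumption-laden stand-in for the unspecified "corresponding algorithm" in the patent, not its actual implementation:

```python
import numpy as np

def voxelize(points, colors, voxel=0.05):
    """Quantize an (N, 3) point cloud into voxel cells of edge `voxel` metres;
    return cell center coordinates and the per-cell average color.
    Illustrative sketch; the cell size of 0.05 m is an assumption."""
    keys = np.floor(points / voxel).astype(int)       # integer cell index per point
    cells = {}
    for k, c in zip(map(tuple, keys), colors):        # group colors by cell
        cells.setdefault(k, []).append(c)
    centers = np.array([(np.array(k) + 0.5) * voxel for k in cells])
    cell_colors = np.array([np.mean(cs, axis=0) for cs in cells.values()])
    return centers, cell_colors
```

A real pipeline would additionally build mesh faces from the occupied cells (e.g. via marching cubes) before drawing the scene grid in the VR helmet.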
S300, a motion prediction method for the ROV body, designed according to the information acquired by the environment sensing and measuring device, is applied to assist the operator in maintenance operations. After reconstruction is finished, the laser sensor is closed while the binocular camera keeps its positioning mode running, and two pre-built virtual ROV models are placed in the VR three-dimensional space according to the camera's pose-matching information: one maps the pose of the ROV body in the real environment and is called the calibration ROV, and the other predicts the pose change of the ROV body in the real scene and is called the prediction ROV. Because certain errors may occur during three-dimensional reconstruction and repositioning, the inertial measurement unit is started after the repositioning mode is enabled to correct the pose of the calibration ROV. At this point the operator can switch to the mode-two state, which makes it convenient to observe the position relationship between the mechanical claw and the reconstructed target object from multiple angles. During grabbing and similar operations, the monocular camera at the end of the mechanical claw can be started for observation, helping the operator control the ROV body so that the mechanical claw is aligned with the target object.
In the mode-two state, the operator's viewing direction is no longer restricted: the operator can change the viewpoint by rotating the VR helmet, conveniently observing from multiple angles the spatial relationship between the virtual mechanical claw and the reconstructed target object, and finely adjusting the pose of the ROV body and the mechanical claw accordingly to achieve more accurate alignment.
Finally, the operator performs the grabbing operation of the mechanical claw through a handle with vibration. When the mechanical claw grips a target object, the pressure sensor on the claw returns real-time pressure data to the server, and the pressure data are mapped to the vibration amplitude of the handle, so that the operator can intuitively feel whether the object has been gripped.
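The pressure-to-vibration mapping can be sketched as a simple clamped linear map. The force range, dead zone, and function name here are illustrative assumptions, not values specified in the patent:

```python
def pressure_to_vibration(pressure_n, p_min=0.5, p_max=20.0):
    """Linearly map gripper pressure (newtons) to a handle vibration
    amplitude in [0, 1]. Readings below `p_min` are treated as sensor
    noise (no contact); readings above `p_max` saturate the motor.
    Illustrative sketch; thresholds are assumptions."""
    if pressure_n <= p_min:
        return 0.0
    frac = (pressure_n - p_min) / (p_max - p_min)
    return min(frac, 1.0)
```

The server would evaluate this on every pressure sample and forward the amplitude to the handle's vibration motor.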
Example 3
The remote detection and control system based on augmented reality proposed in chinese patent publication No. CN 108422435A uses an RGBD camera to shoot a depth image of a scene, extracts a three-dimensional point cloud model of the scene through a reconstruction module, and locally operates a manipulator model in combination with the point cloud model to plan an operation path, so as to drive the manipulator to move. The scheme does not consider the problems of non-intuitive operation, discontinuity and the like caused by time delay in a teleoperation system, and is not suitable for occasions with high requirements on control precision in underwater maintenance operation.
As shown in fig. 8 and 9, an embodiment of the present invention provides a method for predicting the motion of an ROV body, which addresses the waiting and discontinuous operation caused by control delay. By predicting and simulating the motion of the ROV in the real scene, the operator obtains real-time feedback immediately after performing a control action and can therefore work continuously, improving the continuous-operation capability of the system compared with other schemes. The motion prediction method in this embodiment is a concrete realization of the motion prediction method described in step S300 of embodiment 2.
After the reconstruction of the three-dimensional scene model is completed and before a specific work task is started, two virtual ROV models are placed in the three-dimensional space of the VR helmet, wherein one virtual ROV model is used for mapping the pose of an ROV body in a real environment and is called as a calibration ROV, and the other virtual ROV model is used for predicting the pose change of the ROV body in a real scene and is called as a prediction ROV. The specific motion prediction scheme is mainly divided into the following steps:
an operator sends an operation instruction to an ROV body and a prediction ROV through operation equipment, the ROV body moves in an underwater real scene after receiving the instruction, and the prediction ROV simulates the ROV body to move in a virtual scene after receiving the instruction;
the binocular camera maps the pose of the ROV body in the real scene to the calibration ROV through pose matching; the calibration ROV makes the same pose changes in the virtual scene as the ROV body, after which the pose of the calibration ROV is corrected according to the velocity and pose changes detected by the inertial measurement unit;
the prediction ROV responds immediately upon receiving the operation instruction and leaves a motion trajectory in the virtual scene. The calibration ROV then moves according to the pose information returned by the ROV body, and this is compared with the motion trajectory left by the prediction ROV during its movement to calculate and judge whether the user may be allowed to continue operating. The specific judgment method is as follows:
The motion trajectory mainly comprises a position part and an orientation part: the trajectory of the prediction ROV is recorded at regular intervals, storing the coordinates of its center position and the angles between its forward orientation vector and the x, y, and z axes.
The pose-matching information of the binocular camera and the changes in the inertial measurement unit data are read in real time; if the pose data change, the calibration ROV is judged to have started moving. The center coordinate of the calibration ROV is then compared with the first recorded center coordinate of the prediction ROV. If, within a set waiting time t, the difference between the two is smaller than a set threshold, the difference of their forward orientation vectors is further computed; if this is also smaller than the set threshold, the predicted trajectory is considered approximately equal to the trajectory of the ROV body, no correction is needed, and the user is allowed to continue operating.
If the value of the inertial measurement unit changes but the calibration ROV does not move to the first recorded trajectory position within the set waiting time t, the prediction ROV is considered to have deviated from the motion trajectory of the ROV body: the user is prompted to pause the operation, the pose of the prediction ROV is adjusted according to the pose of the calibration ROV, and the recorded trajectory data are cleared.
Similarly, if the difference of the center position coordinates or of the forward orientation vectors of the two is larger than the set threshold, the motion trajectory of the prediction ROV is considered to have deviated from that of the ROV body: the user is prompted to pause the operation, the pose of the prediction ROV is adjusted according to the pose of the calibration ROV, and the recorded trajectory data are cleared.
The above operations are repeated until the job ends.
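The judgment procedure of the preceding steps can be sketched as follows, assuming each pose is reduced to a center position and a unit forward vector. The tolerance values and the function name are illustrative assumptions; the patent leaves the thresholds unspecified:

```python
import math
import numpy as np

def trajectories_agree(cal_pose, pred_pose, pos_tol=0.1, ang_tol_deg=10.0):
    """Compare the calibration ROV's current pose against the first recorded
    waypoint of the prediction ROV's trajectory. Each pose is a pair
    (center_xyz, forward_unit_vector). Returns True when both the position
    difference and the forward-direction angle are within tolerance, i.e.
    the operator may continue; False means pause and resynchronize.
    Illustrative sketch; tolerances are assumptions."""
    cal_p, cal_f = cal_pose
    pred_p, pred_f = pred_pose
    # Position check: Euclidean distance between trajectory centers.
    if np.linalg.norm(np.asarray(cal_p) - np.asarray(pred_p)) > pos_tol:
        return False
    # Orientation check: angle between the two forward unit vectors.
    cosang = np.clip(np.dot(cal_f, pred_f), -1.0, 1.0)
    return math.degrees(math.acos(cosang)) <= ang_tol_deg
```

In the full scheme this check would run against each recorded waypoint in turn, with the waiting time t governing how long the calibration ROV may lag before a deviation is declared.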
Because the prediction ROV responds immediately upon receiving an operation instruction, the delay it introduces is far smaller than the delay before the calibration ROV responds after receiving an instruction. The operator therefore obtains real-time feedback from the prediction ROV immediately after acting, which reduces blind operation, non-intuitiveness, discontinuous operation, and similar problems caused by control delay.
Compared with the prior art, this embodiment combines the information acquired by the environment sensing and measuring device with the created virtual ROV models: the comparison between the calibration ROV and the motion trajectory left by the prediction ROV during movement is used to judge whether the prediction ROV has deviated from the motion trajectory of the ROV body, and the pose of the prediction ROV is adjusted according to the pose of the calibration ROV. This effectively improves the precision, efficiency, and safety of operation, reduces the non-intuitiveness and discontinuity caused by delay, and can fully satisfy applications with higher requirements on operating precision.
The above description is only for the purpose of illustrating the preferred embodiments of the present invention and is not to be construed as limiting the invention, and any modifications, equivalents and improvements made within the spirit and principle of the present invention are intended to be included within the scope of the present invention.

Claims (9)

1. A teleoperation system for an underwater maintenance robot, the teleoperation system comprising:
a working end; the working end comprises an ROV body; the ROV body is provided with a mechanical claw, a panoramic image acquisition device and a scene reconstruction device; the mechanical claw is used for performing maintenance operation on a target object; the panoramic image acquisition device is used for acquiring an underwater real scene image which takes the panoramic image acquisition device as a center and contains a surrounding 360-degree scene; the scene reconstruction device is used for performing three-dimensional reconstruction on the underwater main operation area;
an operation end; the operation end comprises a VR helmet and a remote control device; the VR helmet is used for displaying an underwater real scene image acquired by the panoramic image acquisition device, a three-dimensional reconstruction model obtained by three-dimensional reconstruction performed by the scene reconstruction device, and a virtual ROV model and a virtual gripper model generated by equal-proportion modeling according to an ROV body and a gripper; the remote control device is used for remotely controlling the running states of the ROV body and the mechanical claw;
the communication device is used for communication connection between the working end and the operation end;
the environment sensing and measuring module is used for acquiring the pose of the ROV body in an underwater scene and the distance information between the ROV body and surrounding objects;
the ROV body is also provided with a motion prediction module, the module carries out equal-proportion modeling according to a real ROV body and a mechanical claw to generate a virtual ROV model and a virtual mechanical claw model, and the motion prediction of the ROV body can be realized by combining the virtual ROV model and the pose of the ROV body acquired by the environment sensing and measuring module.
2. The teleoperation system of claim 1, wherein the communication device is a server; the server is in communication connection with the optical transceiver arranged on the ROV body.
3. The teleoperation system of claim 1, wherein the gripper comprises a monocular camera for aiding in determining alignment of the gripper with a work target, and a pressure sensor for detecting a pressure generated when the gripper grips.
4. The teleoperation system of claim 1, wherein the remote control device is selected from a control handle with a vibration function, a data glove, a remote control joystick, a gesture recognition device, and a motion sensing device.
5. A teleoperation system for an underwater maintenance robot according to claim 1, wherein the panoramic image acquisition means comprises two fisheye cameras.
6. The teleoperation system of claim 1, wherein the scene reconstruction device comprises:
a binocular camera and a laser sensor; the binocular camera and the laser sensor are used for performing three-dimensional reconstruction on a working area and generating a three-dimensional reconstruction model through the server; and the generated three-dimensional reconstruction model and the underwater real scene image acquired by the panoramic image acquisition device are superposed and displayed by the VR helmet.
7. The teleoperation system of claim 6, wherein the binocular camera and the laser sensor are used for three-dimensional reconstruction of a working area, and the generation of the three-dimensional reconstruction model by the server specifically comprises the following steps:
calibrating laser planes of the binocular camera and the laser sensor;
the method comprises the steps that a working area is mapped and positioned through a binocular camera, and three-dimensional point cloud and pose data of the binocular camera are generated in real time in the mapping process;
performing three-dimensional reconstruction on the operation area through a laser sensor, and performing matching calculation on the generated laser scanning point cloud information and the three-dimensional point cloud generated by the binocular camera through a server to obtain a three-dimensional point cloud model;
and meshing the three-dimensional point cloud model, and generating a three-dimensional reconstruction model in the VR helmet.
8. The teleoperation system of claim 1, wherein the environmental sensing and measuring module comprises:
the ultrasonic ranging sensor is used for detecting whether an obstacle exists in front of the ROV body and detecting the distance between the ROV body and the obstacle when the obstacle exists;
and the inertial measurement unit is used for acquiring the attitude information of the ROV body.
9. A teleoperation method of an underwater maintenance robot, the teleoperation method comprising:
acquiring an underwater real scene image which takes a panoramic image acquisition device as a center and contains a surrounding 360-degree scene through the panoramic image acquisition device;
performing three-dimensional reconstruction on the region of the mechanical claw subjected to maintenance operation through a scene reconstruction device;
displaying an underwater real scene image acquired by the panoramic image acquisition device, a virtual ROV model and a virtual gripper model generated by carrying out equal-proportion modeling according to an ROV body and a gripper, and a three-dimensional scene model obtained by carrying out three-dimensional reconstruction by the scene reconstruction device through a VR helmet;
according to an underwater real scene image, a virtual ROV model, a virtual gripper model and a three-dimensional scene model obtained by three-dimensional reconstruction of a scene reconstruction device, which are acquired by a panoramic image acquisition device displayed in a VR helmet, the running states of the ROV body and the gripper are controlled through a remote control device, so that the gripper performs maintenance operation on a target object, and the remote operation method further comprises the following steps:
acquiring pose information of an ROV body and distance information between the pose information and surrounding objects through an environment sensing and measuring module;
converting the information acquired by the environment perception and measurement module into enhanced control prompt information to be displayed in a VR helmet, and designing a motion prediction method of an ROV body according to the information acquired by the environment perception and measurement module;
the method for predicting the motion of the ROV body comprises the following steps:
placing two virtual ROV models in a three-dimensional space of a VR helmet, wherein one virtual ROV model is used for mapping the pose of an ROV body in a real environment and is called as a calibration ROV, and the other virtual ROV model is used for predicting the pose change of the ROV body in a real scene and is called as a prediction ROV;
sending an operation instruction to an ROV body and a prediction ROV; the environment sensing and measuring module acquires pose information of the ROV body after receiving the operation instruction; predicting that the ROV simulates the ROV body to move in the virtual scene after receiving an operation instruction, and recording a motion track left in the virtual scene;
and the calibration ROV moves according to the pose information acquired by the environment perception and measurement module, and is compared with the motion trail left by the prediction ROV in the moving process to calculate and judge whether the user can be allowed to continue operating.
CN202011642497.9A 2020-12-31 2020-12-31 Teleoperation system and method for underwater maintenance robot Active CN112634318B (en)

Publications (2)

Publication Number Publication Date
CN112634318A CN112634318A (en) 2021-04-09
CN112634318B true CN112634318B (en) 2022-11-08

Family

ID=75290205




Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant