CN113534824B - Visual positioning and close-range dense formation method for underwater robot clusters - Google Patents

Visual positioning and close-range dense formation method for underwater robot clusters

Info

Publication number
CN113534824B
CN113534824B
Authority
CN
China
Prior art keywords
underwater robot
underwater
carrier
vector
follower
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN202110824053.5A
Other languages
Chinese (zh)
Other versions
CN113534824A (en)
Inventor
杨翊
周星群
胡志强
范传智
王志超
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenyang Institute of Automation of CAS
Original Assignee
Shenyang Institute of Automation of CAS
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenyang Institute of Automation of CAS filed Critical Shenyang Institute of Automation of CAS
Priority to CN202110824053.5A
Publication of CN113534824A
Application granted
Publication of CN113534824B
Legal status: Active

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/04 Control of altitude or depth
    • G05D1/06 Rate of change of altitude or depth
    • G05D1/0692 Rate of change of altitude or depth specially adapted for under-water vehicles
    • Y GENERAL TAGGING OF NEW TECHNOLOGICAL DEVELOPMENTS; GENERAL TAGGING OF CROSS-SECTIONAL TECHNOLOGIES SPANNING OVER SEVERAL SECTIONS OF THE IPC; TECHNICAL SUBJECTS COVERED BY FORMER USPC CROSS-REFERENCE ART COLLECTIONS [XRACs] AND DIGESTS
    • Y02 TECHNOLOGIES OR APPLICATIONS FOR MITIGATION OR ADAPTATION AGAINST CLIMATE CHANGE
    • Y02A TECHNOLOGIES FOR ADAPTATION TO CLIMATE CHANGE
    • Y02A90/00 Technologies having an indirect contribution to adaptation to climate change
    • Y02A90/30 Assessment of water resources

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)
  • Image Processing (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The invention discloses a visual positioning and close-range dense formation method for an underwater robot cluster. The system comprises a vector and scalar marker light group module, an omnidirectional underwater high-definition camera module, and an image processing and formation control module. When a plurality of underwater robots execute a formation task, each underwater robot turns on its vector and scalar marker lamps and acquires, through the omnidirectional underwater high-definition camera set, the optical images formed by the vector or scalar marker lamps of adjacent underwater robots. The acquired images are processed, the spatial three-dimensional coordinates and attitudes of the adjacent underwater robot targets are calculated, and control instructions are computed according to the formation task to form a stable underwater robot formation. The method realizes high-precision visual positioning on the underwater robot carrier and, on this basis, stable underwater robot cluster formation without communication. An underwater robot cluster carrying the method can freely switch formations during autonomous navigation without modifying any hardware setting.

Description

Visual positioning and close-range dense formation method for underwater robot clusters
Technical Field
The invention relates to visual positioning and close-range dense formation of an underwater robot cluster, in particular to a visual positioning and close-range dense formation method for the underwater robot cluster.
Background
Compared with a single underwater robot, an underwater robot cluster system has great advantages in tasks such as submarine investigation and exploration. In addition, in fields such as large-scale collaborative detection, countermeasure, striking and information networking, an underwater robot cluster system can cooperatively complete tasks that a single underwater robot can hardly complete due to its own limitations.
Mutual perception and positioning among the underwater robots in a cluster system is a key link, but underwater communication and perception means are limited; acoustic positioning and visual positioning are the two methods commonly used at present. A stable underwater robot cluster system depends on stable and accurate mutual positioning, but the acoustic positioning method has poor anti-interference performance and low positioning precision, and is not suitable for high-precision cooperative motion control of multiple underwater robots at short range.
The visual positioning method obtains the pose of a target by algorithmic processing of acquired image information. A depth camera realizes distance measurement by emitting and receiving infrared light and is not suitable for the underwater environment. A binocular camera solves the target distance through the imaging relation of the left and right cameras, but it is large, demands high installation accuracy, and produces a large amount of image data, so it is not suitable for underwater robots. A monocular camera is small, flexible to install, and produces relatively little image data, meeting the basic requirement of performing visual positioning computation on a miniature data-processing platform. For large-field-of-view visual positioning, wide-angle or fisheye cameras suffer from large distortion and are limited by the sensor size, so they cannot achieve high-precision visual positioning. Thus, a multi-camera combination with a specific camera-enabling strategy is the best choice for large-field-of-view visual positioning.
Classical robot formation algorithms can be divided into the artificial potential field method, the virtual structure method, the behavior control method, the path following method, the information consistency method and the pilot-following method. The artificial potential field method realizes formation control by minimizing individual potential fields. The virtual structure method treats all members of the formation as a whole to realize robot formation. The behavior control method decomposes tasks into basic behaviors, such as driving to waypoints and formation holding, and realizes formation control through behavior fusion. The path following method decomposes tasks in space and time into spatial path-following tasks and temporal coordination-synchronization tasks, thereby achieving formation. The information consistency method makes the quantitative information of each agent in the system tend to consistency under a suitable control law. In the pilot-following method, a follower follows the pilot according to distance and azimuth information relative to the pilot to realize formation. Although various control algorithms exist in the field of robot formation, an accurate and stable formation function for underwater robots has not yet been realized. Current underwater robot formation has the following problems:
(1) In most underwater robot formation schemes, the mutual position information is acquired by acoustic positioning, but it is difficult for acoustic positioning to provide fast, stable and accurate positioning information at short range.
(2) Existing monocular visual positioning algorithms require very little hardware, but they can hardly acquire accurate three-dimensional coordinates and attitude information of the target, and the target is easily occluded, so three-dimensional underwater robot formation cannot be realized.
(3) Although various underwater robot formation control methods exist, they are either too complex or insufficiently robust, and a stable formation effect is difficult to obtain on a physical underwater robot.
Disclosure of Invention
Aiming at the above problems in underwater robot formation, a visual positioning and close-range dense formation method for an underwater robot cluster is provided. A laser vector marker light group is arranged on the underwater robot carrier and emits a plurality of laser rays as markers; scalar marker lamps are deployed on the bow and stern sections of the underwater robot; the high-definition cameras of the bow section acquire image information, from which the high-precision relative positions and attitudes of the other underwater robots are obtained; control instructions are then issued according to the formation, realizing the formation function of the underwater robots. The method solves the problem that accurate position information is difficult to acquire in the underwater environment, where communication is difficult; visual positioning with a monocular camera can be deployed on a small embedded platform with limited computing power, realizing underwater robot positioning and formation control in a very small hardware space.
The visual positioning and close-range dense formation method for the underwater robot cluster is characterized in that the underwater robot cluster is composed of at least 3 carriers, each carrying a vector marker light group, a scalar marker light group, underwater cameras, and an image processing and formation controller; one carrier is the navigator and the others are followers; the navigator navigates autonomously according to the navigation task and photographs the positions of the followers in the cluster with a rear-view camera to judge whether the followers are following; each follower photographs the navigator through the cameras of the bow and stern sections of its carrier, positions it, and calculates its own moving target position in the current cluster, realizing communication-free cluster formation based on optical-image position correction;
the vector marker lamp sets are arranged in the middle cabin section of the underwater robot carrier, and the lasers in the vector marker lamp sets penetrate through the carrier transmission window to emit laser to the surrounding water area environment, so that vector optical images are formed in the axial visual angle direction of the underwater robot carrier;
the scalar marker light group is an annular light source arranged on the carrier bow section and the stern section of the underwater robot and is used for forming scalar optical images representing the positions of the carrier bow section and the stern section;
the underwater cameras are arranged on the carrier bow section and the carrier stern section of the underwater robot and are used for shooting images;
the image processing and formation controller is used for outputting instructions to control the opening of the vector marker lamp group and the scalar marker lamp group, controlling the underwater camera of the navigator or the follower to shoot images, performing image processing on the images so as to position the pixel positions of the carriers in the images, judging whether the follower follows or not, calculating the movement control parameters of the follower in the current cluster by the navigator controller, and executing the calculation.
The vector marking lamp group comprises 1 laser and a beam splitter, the beam splitter splits laser into a plurality of beams in a set direction, and the split laser rays are positioned on the same cross-section circle of the carrier.
The vector marker lamp group is a plurality of lasers, and the laser light outlets are arranged on the same cross-section circle of the carrier, and the laser rays are on the same plane.
The vector marker lamp set comprises a plurality of combinations of lasers and prisms, the lasers emit laser to the prisms, and the prisms reflect the laser to the surrounding water area environment; the reflection points of the laser on the prism are arranged on the same cross-section circle of the carrier, and the laser rays are on the same plane.
The laser, the beam splitter and the prism are respectively arranged in the underwater robot cabin section through mounting seats according to preset angles, so that laser rays form preset included angles with the underwater robot carrier, and the laser rays form vector ray patterns; changing the pattern and the characteristics of the vector light by changing the angle of the mounting seat; the characteristic is the intersection point position of the vector rays;
the underwater robot carrier bow section is provided with front, upper, lower, left and right underwater cameras, and the stern section is provided with a rearview camera;
the vector marker lamp sets are four, any two adjacent underwater cameras form a group, the other two groups form a group, and the laser optical image comprises 4 rays and two intersection points.
The image processing and formation controller comprises a processor and a memory; a program is stored in the memory, and the processor loads the program to execute the following steps:
step 1, the controllers of the navigator and of each follower output instructions to turn on the vector marker light group and the scalar marker light group, and control the underwater cameras to shoot images;
step 2, the navigator executes the region-detection-priority step to shoot images, performs image processing and judges whether the followers are following; if a follower is not found for a long time or the follower distance is greater than a set value, the navigator decelerates;
step 3, each follower executes the region-detection-priority step to shoot images, performs image processing, locates the positions of the navigator and the other members in the cluster, and calculates its own movement control parameters in the current cluster;
the region detection priority step includes:
dividing the view angle of the underwater robot carrier into 3 areas: A, B and C;
the area A is a forward view angle, and a forward view camera of a carrier bow section is started by a current follower in the view angle range to shoot a pilot vector optical image;
the area B is a side view angle, and in the view angle range, the current follower starts the underwater cameras in the upper direction, the lower direction, the left direction and the right direction of the carrier bow section to shoot scalar optical images of the other followers;
and the area C is a rear view angle, and a navigator enables a rear view camera of the stern section of the carrier to shoot a follower vector optical image within the view angle range.
The image processing includes:
performing image correction on the current vector or scalar optical image according to the camera internal parameters;
removing interference color noise, and extracting a three-primary-color single-channel data image;
and carrying out mean value filtering, image enhancement, binarization and morphological operation to obtain a skeletonized mode diagram of the vector marker lamp.
After a follower shoots scalar optical images in area B and processes them, the three-dimensional coordinates of the target underwater robot are calculated from the pixel coordinates of the bow-section and stern-section scalar lamps combined with the camera intrinsic parameters.
After the navigator shoots the vector optical image of a follower's carrier middle cabin section in area C, or a follower shoots that of the navigator in area A, and the image is processed, the following vector positioning steps are executed:
a. fitting the straight lines present in the image multiple times with a fitting algorithm to obtain the equations of all laser rays in the image;
b. calculating the intersection coordinates of the vector rays two by two and, combining the three-dimensional coordinates of the actual ray intersections with the calibrated camera intrinsic parameters, calculating the position and attitude of the target underwater robot in the image relative to the underwater robot of the current view angle with the PnP (Perspective-n-Point) algorithm; the actual ray intersections are known, pre-calibrated intersection coordinates;
c. adjusting the target position of the current underwater robot in the formation according to the positions and attitudes of the other underwater robots under the current view angle; the current follower takes the adjusted target position as input, calculates its heading angle and forward speed through a PID algorithm as inputs to the dynamics controller, updates its motion state, and then repeats the detection and positioning cyclically to realize three-dimensional formation of the underwater robots.
The invention has the following beneficial effects and advantages:
1. Aiming at the problem that it is difficult for underwater robots to acquire stable and accurate positioning information through acoustic positioning, the invention combines vector lamps and scalar lamps to realize visual positioning of a target in any azimuth, achieves fast and accurate perception of target pose information from visual information, and provides the key information for underwater robot formation without communication.
2. The invention realizes visual positioning with easily mounted monocular cameras, which reduces hardware cost and, compared with a binocular camera, reduces the amount of input image data; the computation load is reduced, the efficiency is improved, and global visual positioning computation and the acquisition of three-dimensional position coordinates and attitude information are realized on a small embedded platform.
3. The invention adopts a pilot-following formation control algorithm with multiple contingency mechanisms for abnormal visual positioning results, can obtain an effective formation effect on a physical underwater robot, and has strong practicability.
Drawings
FIG. 1 is a flow chart of the visual positioning and formation of an underwater robot in an embodiment of the present invention;
FIG. 2 is a mechanical structure diagram of the laser marker cabin section of an underwater robot in an embodiment of the invention;
FIG. 3 is a structure diagram of the bow section of an underwater robot in an embodiment of the present invention;
FIG. 4 is a diagram of the overall structure of the underwater robot in an embodiment of the present invention;
FIG. 5 is a schematic view of visual positioning in different target areas according to an embodiment of the present invention;
FIG. 6 is a schematic view of the vector light emitted by the vector light source cabin section;
FIG. 7 is an image of the vector laser marker light acquired by a camera in an embodiment of the present invention;
FIG. 8 is a diagram of the ray-detection effect for the camera-acquired vector laser marker light image in an embodiment of the invention.
Detailed Description
In order that the above objects, features and advantages of the invention may be readily understood, a more particular description of the invention is given below with reference to the appended drawings. In the following description, numerous specific details are set forth to provide a thorough understanding of the present invention. The invention may, however, be embodied in many forms other than those described herein, and those skilled in the art can make similar modifications without departing from the spirit of the invention; the invention is therefore not limited to the specific embodiments disclosed below.
Unless defined otherwise, all technical and scientific terms used herein have the same meaning as commonly understood by one of ordinary skill in the art to which this invention belongs. The terminology used in the description of the invention herein is for the purpose of describing particular embodiments only and is not intended to be limiting of the invention.
In this embodiment, a self-developed autonomous underwater robot is used as the platform. A three-dimensional underwater robot formation method based on multiple monocular visual positionings solves the problem that accurate position information is difficult to obtain at short range in the underwater environment, and large-field high-precision visual positioning and three-dimensional formation are realized on a physical underwater robot.
Fig. 2 is a mechanical structure diagram of the vector light source cabin section of the underwater robot in an embodiment of the invention: light emitted by a laser is reflected by a prism and then emitted to the external environment, and the two groups of laser light emitted by the lasers form two vector patterns with adjustable angles. Fig. 3 is a mechanical structure diagram of the 5-camera high-definition device of the bow section: one main camera is installed in the middle, and the other 4 auxiliary cameras are located above, below, to the left and to the right of it, realizing global image information acquisition. Fig. 4 is the overall structure diagram of the underwater robot: the cameras of the bow section realize large-field visual information acquisition, annular scalar lamps are arranged on the bow and stern sections, the vector light source cabin section emits laser rays (shown in Fig. 6) to indicate the robot's position to other underwater robots, and the stern-section camera acquires image information of the area behind the robot to assist formation.
The underwater robot formation consists of any number (three or more) of underwater robots. Each underwater robot is equipped with six monocular high-definition cameras located at the bow and stern sections of the carrier for omnidirectional visual positioning, and with vector and scalar marker light groups on the carrier. Each robot performs pose calculation by visually capturing the images formed by the vector and scalar marker light groups of the other underwater robots, and controls its own pose, heading and speed according to the formation requirements of the current task to complete the underwater cluster formation of multiple underwater robots; the whole flow is shown in Fig. 1.
the process of positioning other underwater robots and forming a formation by the underwater robots is as follows:
the method comprises the steps that an underwater robot starts a marker lamp set, 6 high-definition cameras with an omnidirectional vision capturing function are installed on a bow section and a stern section of the underwater robot, and the visual field range comprises the whole spherical visual fields of the front, the left, the right, the upper, the lower and the rear of the underwater robot. The method comprises the steps that through five high-definition camera groups of a bow section and a stern section, rear-view high-definition cameras are installed, so that omnidirectional capturing of images of other robot marker lamp groups is realized;
in order to fully utilize the image decoding capability of the hardware platform and improve the frame rate of the acquired video information, the image information acquired in the camera is decoded in a hard decoding mode. Aiming at the problem that the acquired image has a certain degree of distortion, camera calibration and distortion correction are needed before the original image information is subjected to image processing. In addition, as the actual working environment of the underwater robot is underwater and the interface of air, glass and water exists between the camera lens and the shooting target, light rays can be refracted, and the visual positioning result can be greatly influenced, the underwater camera calibration is needed before the visual positioning is carried out. And placing a calibration plate in an underwater environment, shooting a calibration image and calculating internal parameters of the camera.
The self position of the underwater robot is indicated by a plurality of scalar marker light sources located in the bow section and the stern section.
The vector light rays and their extension lines, emitted by several marker lamps, form a closed figure. An annular scalar lamp is arranged on the bow section of the underwater robot, and upper, lower, left and right scalar lamps are arranged on the stern section. The vector marker light group consists of 4 laser emitters in the cabin; alternatively, the light of one laser emitter can be split into 4 laser rays. The middle of the vector light source cabin section holds the laser marker light group and reflecting prisms; the prisms refract the laser of the emitters to form the closed figure shown in Fig. 6, and by using mounting seats with different prism angles, the angles α, β, γ and δ of the rays on the two sides of the closed figure can be changed, enabling calculation of the heading attitude of the target underwater robot.
The image processing module reads the video stream data acquired by the camera group through the network interface. Among the multiple high-definition cameras, those at the positions corresponding to the region where the target lies are enabled to acquire image data: in the target-search stage all cameras are started; after the target position has been acquired, the main camera and the two auxiliary cameras in the target direction are started; in the target-tracking stage the cameras in the corresponding direction are enabled according to the target position; if the target is lost during tracking, the search is restarted.
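The camera-enabling policy just described can be made concrete with a small sketch; the camera names, the neighbour table and the three-stage split below are assumptions used for illustration.

```python
from enum import Enum, auto
from typing import Optional, Set

class Stage(Enum):
    SEARCH = auto()   # target position unknown: all cameras on
    ACQUIRE = auto()  # rough direction known: main camera + two neighbours
    TRACK = auto()    # target locked: only the camera facing it

ALL_CAMERAS = {"front", "up", "down", "left", "right", "rear"}
NEIGHBOURS = {"up": {"left", "right"}, "down": {"left", "right"},
              "left": {"up", "down"}, "right": {"up", "down"},
              "front": {"up", "right"}, "rear": set()}

def cameras_to_enable(stage: Stage, target_dir: Optional[str]) -> Set[str]:
    if stage is Stage.SEARCH or target_dir is None:
        return set(ALL_CAMERAS)
    if stage is Stage.ACQUIRE:
        # main (front) camera plus two auxiliaries near the target direction
        return {"front", target_dir} | NEIGHBOURS.get(target_dir, set())
    return {target_dir}  # TRACK; on target loss the caller falls back to SEARCH
```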
The image processing module performs noise filtering and image enhancement processing on images acquired by all cameras at the same time to obtain a single-channel image;
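A minimal sketch of such a preprocessing chain, assuming red/orange laser light so that the red channel is extracted (the channel choice, kernel sizes and thresholding scheme are all illustrative):

```python
import cv2
import numpy as np

def preprocess(frame_bgr: np.ndarray) -> np.ndarray:
    """Return a binary single-channel image in which the marker-light pixels survive."""
    channel = frame_bgr[:, :, 2]            # assumed red channel for red/orange lasers
    blurred = cv2.blur(channel, (5, 5))     # mean filtering
    enhanced = cv2.equalizeHist(blurred)    # simple contrast enhancement
    _, binary = cv2.threshold(enhanced, 0, 255,
                              cv2.THRESH_BINARY + cv2.THRESH_OTSU)
    kernel = cv2.getStructuringElement(cv2.MORPH_ELLIPSE, (3, 3))
    opened = cv2.morphologyEx(binary, cv2.MORPH_OPEN, kernel)  # drop speckle noise
    # cv2.ximgproc.thinning (opencv-contrib) could skeletonise `opened` if the
    # skeletonised pattern map mentioned in the disclosure is required.
    return opened
```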
the target underwater robot can be positioned visually through a scalar or vector marker lamp set in any direction, and no visual dead angle exists. When the target underwater robot is positioned in the area A and the area C, the pose information of the target underwater robot is calculated by adopting the vector marker lamp pattern; when the target underwater robot is located in the B region, the scalar marker light is used to calculate its position information, as shown in fig. 5.
The image processing module performs scalar and vector light source detection on the single-channel image obtained through pretreatment, and calculates relative three-dimensional coordinates and postures of other underwater robots;
in the scalar lamp detection mode, the original image is preprocessed and then is subjected to region extraction, the scalar lamp pixel coordinates of the bow section and the stern section are obtained, and the three-dimensional coordinates of the target underwater robot are calculated by combining the camera internal parameters.
In the vector lamp detection mode, laser ray detection is carried out after image preprocessing, and a curve-fitting algorithm fits the laser rays for the subsequent pose calculation. Because a laser ray has a simple straight-line shape, the random sample consensus algorithm (RANSAC) can remove the interference of noise points and is robust. Various unknown disturbances may exist in the actual underwater environment, causing optical markers to go missing or highlight areas of unknown shape to appear, which would make the detector output erroneous results. Therefore the RANSAC algorithm is applied several times to the lines possibly present in the image, yielding the positions and line equations of all laser rays in the image. Fig. 7 shows a camera image of the laser marker light after binarization; after laser ray detection, the detection effect shown in Fig. 8 is obtained. The intersection coordinates of the rays, taken two by two, are then calculated according to the mapping relation between each ray's image position and the actual rays in space.
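A hand-rolled sequential-RANSAC sketch of this multi-line detection step is given below; the inlier tolerance, iteration count and peel-off strategy are assumptions, and `points` stands for the bright pixel coordinates of the preprocessed binary image (e.g. from np.argwhere, swapped to (x, y) order).

```python
import numpy as np

def fit_lines_ransac(points, n_lines=4, tol=2.0, iters=300, seed=0):
    """Sequentially fit up to n_lines lines ax + by + c = 0 (with a^2 + b^2 = 1)
    to (N, 2) pixel coordinates, peeling off the inliers of each fitted line."""
    rng = np.random.default_rng(seed)
    pts = np.asarray(points, float)
    lines = []
    for _ in range(n_lines):
        if len(pts) < 2:
            break
        best_inliers, best_line = None, None
        for _ in range(iters):
            p, q = pts[rng.choice(len(pts), 2, replace=False)]
            a, b = q[1] - p[1], p[0] - q[0]          # normal of the line through p and q
            norm = np.hypot(a, b)
            if norm < 1e-9:
                continue                             # degenerate sample
            c = -(a * p[0] + b * p[1])
            dist = np.abs(pts @ np.array([a, b]) + c) / norm
            inliers = dist < tol
            if best_inliers is None or inliers.sum() > best_inliers.sum():
                best_inliers, best_line = inliers, np.array([a, b, c]) / norm
        if best_line is None:
            break
        lines.append(best_line)
        pts = pts[~best_inliers]                     # remove this ray, fit the next one
    return lines

def pairwise_intersections(lines):
    """Intersect the lines two by two via the cross product in homogeneous coordinates."""
    result = []
    for i in range(len(lines)):
        for j in range(i + 1, len(lines)):
            x, y, w = np.cross(lines[i], lines[j])
            if abs(w) > 1e-9:                        # skip (near-)parallel pairs
                result.append((x / w, y / w))
    return result
```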
After the pixel coordinates of the ray intersections have been calculated, the position and attitude of the target underwater robot relative to the observing camera are computed with the PnP (Perspective-n-Point) algorithm, combining the actual three-dimensional space coordinates of the ray intersections with the camera intrinsic parameters obtained by calibration. Because the actually installed lasers have certain angular deviations, the four laser rays are not coplanar in space, so four exactly coplanar intersection points do not exist. Therefore the installed lasers are calibrated first, the three-dimensional coordinates of the approximate intersection points of the four laser rays are computed, and the pose estimation is then performed.
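The pose-recovery step can be sketched with OpenCV's solvePnP, with the four pre-calibrated approximate ray intersections playing the role of the object points; every numeric value below (intrinsics, pixel and intersection coordinates) is a placeholder, not calibration data from the patent.

```python
import cv2
import numpy as np

K = np.array([[800.0, 0.0, 320.0],       # placeholder intrinsics from the
              [0.0, 800.0, 240.0],       # underwater calibration step
              [0.0, 0.0, 1.0]])
dist = np.zeros(5)                       # assume distortion already corrected

# pre-calibrated approximate 3-D intersections of the four rays, expressed in
# the target's body frame (placeholder values, in metres)
object_pts = np.array([[0.10, 0.00, 0.00],
                       [-0.10, 0.00, 0.00],
                       [0.00, 0.08, 0.02],
                       [0.00, -0.08, 0.02]], dtype=np.float32)
# the corresponding pairwise ray intersections detected in the image
image_pts = np.array([[310.0, 200.0], [330.0, 198.0],
                      [322.0, 240.0], [318.0, 282.0]], dtype=np.float32)

ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, K, dist,
                              flags=cv2.SOLVEPNP_EPNP)  # EPnP accepts 4 non-coplanar points
if ok:
    R, _ = cv2.Rodrigues(rvec)           # rotation: target body frame -> camera frame
    position_in_camera = tvec.ravel()    # relative position used by the formation controller
```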
The formation control module calculates the three-dimensional coordinates of the target according to the relative coordinates and the gestures of other underwater robots output by the image processing module and combined with the formation task, and further controls the underwater robots to move to the target position, so that a stable formation is obtained.
One underwater robot in the formation is the navigator and navigates autonomously according to the navigation task; the other underwater robots are followers, which calculate their target poses from the relative pose of the navigator and move toward them to form the task formation.
The detection priorities of the scalar and vector lamps are adjusted according to the formation designed in the formation task, and the detection mode of the area in which the target is most likely to appear is enabled preferentially.
The underwater robot calculates the relative distance and the rotation angle between the underwater robot and the target position according to the pose information of the target underwater robot and the space position of the underwater robot set in the formation task, and then controls the underwater robot to move to the target position.
If the formation task changes the formation after a certain time or at a task node, the underwater robot recalculates its target position under the new task to complete the formation transformation.
If the high-definition cameras installed on the underwater robot do not acquire the complete preset pattern formed by the vector marker lamps, the coarse visual positioning scheme for abnormal conditions is started, and the corresponding formation control instruction is calculated case by case (a schematic dispatch is sketched after this list):
1. if the acquired image contains only 1 laser ray, the direction of the target underwater robot is judged from the brightness change trend of the ray in the image, and a quantitative heading control instruction is sent;
2. if the acquired image contains only 2 laser rays, the direction of the target underwater robot is calculated from the brightness change trends of the rays and the intersection coordinates, and a quantitative heading control instruction is sent;
3. if the acquired image contains only 3 laser rays, the direction and relative distance of the target underwater robot are calculated from the spatial positions of the 3 rays and the coordinates of the two key points obtained by pairwise intersection, and heading and speed control instructions are sent.
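The patent states these fallback rules only qualitatively; the dispatch below is one possible runnable interpretation, in which `brighter_end_sign` guesses the target direction from the grey-level trend along a detected ray, and the yaw step and focal length are assumed values.

```python
import numpy as np

def brighter_end_sign(gray, line, n=100):
    """line: homogeneous (a, b, c); return +1/-1 for the brighter ray direction."""
    a, b, c = line
    d = np.array([-b, a]) / np.hypot(a, b)           # unit direction along the line
    h, w = gray.shape
    centre = np.array([w / 2.0, h / 2.0])
    # foot of the perpendicular from the image centre onto the line
    p0 = centre - (a * centre[0] + b * centre[1] + c) / (a * a + b * b) * np.array([a, b])
    samples = []
    for t in np.linspace(-min(h, w) / 3, min(h, w) / 3, n):
        x, y = (p0 + t * d).astype(int)
        if 0 <= x < w and 0 <= y < h:
            samples.append((t, gray[y, x]))
    ts, vs = np.array(samples).T
    return 1 if vs[ts > 0].mean() > vs[ts <= 0].mean() else -1

def coarse_heading_command(gray, lines, intersections, yaw_step_deg=10.0):
    """Fallback cases 1-3; with 4 rays the precise PnP pipeline is used instead."""
    if len(lines) in (1, 2):
        # case 2 would also use the intersection point; simplified here
        return {"yaw": yaw_step_deg * brighter_end_sign(gray, lines[0])}
    if len(lines) == 3 and len(intersections) >= 2:
        mid = np.mean(intersections[:2], axis=0)     # midpoint of the two key points
        bearing = np.degrees(np.arctan2(mid[0] - gray.shape[1] / 2, 500.0))  # assumed focal px
        return {"yaw": bearing, "speed": 0.5}        # placeholder speed command
    return None
```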
During autonomous navigation the navigator searches for the followers with its rear-view camera; if no follower is found for a long time, or a follower's distance is much greater than the set formation distance, the navigator decelerates.
the follower underwater robot positions the pilot underwater robot through the forward-looking camera, and if the pilot underwater robot cannot be positioned in the starting stage, the in-situ rotation detection is performed;
if the follower loses the target in the moving process, accelerating navigation by using the last positioning result, and if the navigator underwater robot is repositioned, continuing to carry out formation control; and if the navigator underwater robot still cannot be perceived, the navigator underwater robot rotates in situ to perceive or start the navigation task of the navigator underwater robot, and the navigator underwater robot navigates autonomously.
Mutual interference among the laser rays of multiple UUVs inevitably appears during formation. To distinguish the lights of different carriers, identification based on light brightness is currently adopted: only the light emitted by the nearest carrier is detected for visual positioning.
Since the visual positioning results are applied to dense formation of underwater robots, erroneous calculation results would strongly affect formation control, so the positioning data are filtered twice.
(1) Filtering out obviously erroneous data. During ray detection, pose calculation is not performed on detection frames in which the number of rays is not 4 or a line intersection lies outside the image; instead, the coarse positioning scheme for abnormal conditions is started and the formation control instruction is calculated directly. When the absolute distance of the target in a positioning result is too large or too small, the positioning is considered failed and data output stops.
(2) Judging the plausibility of the current calculation result from the historical positioning data. In practical application, the visual positioning algorithm works on real-time video of a continuously moving underwater robot, so the positioning result cannot jump abruptly; a rapidly changing result is scaled down toward the result of the previous moment, improving the stability of the continuous positioning data.
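One possible reading of this two-stage filtering as code, with the jump threshold and damping gain as assumed parameters:

```python
import numpy as np

class PositionFilter:
    """Hold the last plausible fix and damp implausibly large jumps."""
    def __init__(self, max_jump_m=0.5, damping=0.3):
        self.prev = None
        self.max_jump_m = max_jump_m
        self.damping = damping

    def update(self, fix_m):
        """fix_m: raw (3,) position from the vision pipeline, or None on failure."""
        if fix_m is None:
            return self.prev                          # stage 1 rejected the frame
        fix_m = np.asarray(fix_m, float)
        if self.prev is not None:
            jump = fix_m - self.prev
            if np.linalg.norm(jump) > self.max_jump_m:
                # stage 2: scale the sudden change down toward the previous result
                fix_m = self.prev + self.damping * jump
        self.prev = fix_m
        return fix_m
```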
The navigator executes the task navigation mode and runs autonomously according to the navigation task. A follower executes the formation control mode: based on the visual positioning result (a precise or a coarse one) and the pilot-following formation algorithm, it sends speed and heading-angle control instructions to the vehicle's main controller for execution. The follower obtains the position, speed, heading angle and other information of the other underwater robots from the visual positioning algorithm, takes its own target position in the formation as input, calculates its heading angle and forward speed through the PID control algorithm as inputs to the dynamics controller, and updates its motion state; this process is repeated cyclically, realizing the three-dimensional formation of the underwater robots.
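A minimal pilot-following control sketch matching that loop is given below; the PID gains, control period and formation-slot offset are illustrative assumptions, and the dynamics controller itself lies outside the sketch.

```python
import numpy as np

class PID:
    """Textbook PID; the gains used below are illustrative, not tuned values."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral, self.prev_err = 0.0, None

    def step(self, err):
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

heading_pid = PID(1.2, 0.0, 0.3, dt=0.1)   # persistent controllers for the loop
speed_pid = PID(0.8, 0.05, 0.0, dt=0.1)

def follower_command(leader_pos_xy, leader_yaw, slot_body=(-3.0, 1.5)):
    """leader_pos_xy: leader position in the follower's frame (from vision);
    slot_body: this follower's assigned slot in the leader's body frame."""
    c, s = np.cos(leader_yaw), np.sin(leader_yaw)
    # rotate the slot offset into the follower's frame and add the leader position
    target = np.asarray(leader_pos_xy) + np.array(
        [c * slot_body[0] - s * slot_body[1],
         s * slot_body[0] + c * slot_body[1]])
    heading_err = np.arctan2(target[1], target[0])   # bearing to the slot
    range_err = float(np.linalg.norm(target))        # distance to the slot
    return heading_pid.step(heading_err), max(0.0, speed_pid.step(range_err))
```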
The foregoing description of the embodiments illustrates the general principles of the invention and is not meant to limit it; any modifications, equivalent substitutions and the like that fall within the spirit and principles of the invention shall be included within its scope of protection.

Claims (8)

1. The visual positioning and close-range dense formation method for the underwater robot cluster is characterized in that the underwater robot cluster is composed of at least 3 carriers, each carrying a vector marker light group, a scalar marker light group, underwater cameras, and an image processing and formation controller; one carrier is the navigator and the others are followers; the navigator navigates autonomously according to the navigation task and photographs the positions of the followers in the cluster with a rear-view camera to judge whether the followers are following; each follower photographs the navigator through the cameras of the bow and stern sections of its carrier, positions it, and calculates its own moving target position in the current cluster, realizing communication-free cluster formation based on optical-image position correction;
the vector marker lamp set is arranged in the middle cabin section of the underwater robot carrier, and a laser in the vector marker lamp set penetrates through the carrier transmission window to emit laser to the surrounding water area environment, so that a vector optical image is formed in the axial visual angle direction of the underwater robot carrier;
the scalar marker light group is an annular light source arranged on the carrier bow section and the stern section of the underwater robot and is used for forming scalar optical images representing the positions of the carrier bow section and the stern section;
the underwater camera is arranged on the carrier bow section and the carrier stern section of the underwater robot and is used for shooting images;
the image processing and formation controller is used for outputting instructions to turn on the vector marker light group and the scalar marker light group, controlling the underwater cameras of the navigator or of a follower to shoot images, processing the images to locate the pixel positions of the carriers in the images, judging whether a follower is following, and calculating and executing the movement control parameters of the follower in the current cluster; the image processing and formation controller comprises a processor and a memory, a program is stored in the memory, and the processor loads the program to execute the following steps:
step 1, the controllers of the navigator and of each follower output instructions to turn on the vector marker light group and the scalar marker light group, and control the underwater cameras to shoot images;
step 2, the navigator executes the region-detection-priority step to shoot images, performs image processing and judges whether the followers are following; if a follower is not found for a long time or the follower distance is greater than a set value, the navigator decelerates;
step 3, each follower executes the region-detection-priority step to shoot images, performs image processing, locates the positions of the navigator and the other members in the cluster, and calculates its own movement control parameters in the current cluster;
after the navigator shoots the vector optical image of a follower's carrier middle cabin section in area C, or a follower shoots that of the navigator in area A, and the image is processed, the following vector positioning steps are executed:
a. fitting the straight lines present in the image multiple times with a fitting algorithm to obtain the equations of all laser rays in the image;
b. calculating the intersection coordinates of the vector rays two by two and, combining the three-dimensional coordinates of the actual ray intersections with the calibrated camera intrinsic parameters, calculating the position and attitude of the target underwater robot in the image relative to the underwater robot of the current view angle with the PnP (Perspective-n-Point) algorithm; the actual ray intersections are known, pre-calibrated intersection coordinates;
c. adjusting the target position of the current underwater robot in the formation according to the positions and attitudes of the other underwater robots under the current view angle; the current follower takes the adjusted target position as input, calculates its heading angle and forward speed through a PID algorithm as inputs to the dynamics controller, updates its motion state, and then repeats the detection and positioning cyclically to realize three-dimensional formation of the underwater robots.
2. The visual positioning and close-range dense formation method for the underwater robot clusters according to claim 1, wherein the vector marker lamp group comprises 1 laser and a beam splitter, the beam splitter splits laser into a plurality of beams in a set direction, and the split laser rays are positioned on the same cross-section circle of the carrier.
3. The visual positioning and close-range dense formation method for the underwater robot clusters according to claim 1, wherein the vector marker lamp group is a plurality of lasers, and the laser light outlets are arranged on the same cross-section circle of the carrier and the laser rays are on the same plane.
4. The method for visual positioning and close-range dense formation for underwater robot clusters according to claim 1, wherein the vector marker light group comprises a combination of a plurality of lasers and a prism, the lasers emit laser light to the prism, and the prism reflects the laser light to the surrounding water environment; the reflection points of the laser on the prism are arranged on the same cross-section circle of the carrier, and the laser rays are on the same plane.
5. The visual positioning and close-range dense formation method for the underwater robot clusters according to any one of claims 2-4, wherein the laser, the beam splitter and the prism are respectively arranged in the underwater robot cabin section through mounting seats according to preset angles, so that laser rays form preset included angles with the underwater robot carrier, and the laser beams form vector ray patterns; changing the pattern and the characteristics of the vector light by changing the angle of the mounting seat; the characteristic is the intersection point position of the vector rays;
the underwater robot carrier bow section is provided with front, upper, lower, left and right underwater cameras, and the stern section is provided with a rearview camera;
there are four vector marker lamps; any two adjacent ones form one group and the remaining two form another group, and the laser optical image comprises 4 rays and two intersection points.
6. The method for visual localization and close-up dense formation for underwater robot clusters according to claim 1, wherein said step of detecting the priority of the region comprises:
dividing the view angle of the underwater robot carrier into 3 areas: A, B and C;
the area A is a forward view angle, and a forward view camera of a carrier bow section is started by a current follower in the view angle range to shoot a pilot vector optical image;
the area B is a side view angle, and in the view angle range, the current follower starts the underwater cameras in the upper direction, the lower direction, the left direction and the right direction of the carrier bow section to shoot scalar optical images of the other followers;
and the area C is a rear view angle, and a navigator enables a rear view camera of the stern section of the carrier to shoot a follower vector optical image within the view angle range.
7. The underwater robot cluster-oriented visual positioning and close-up dense formation method of claim 1, wherein the image processing comprises:
performing image correction on the current vector or scalar optical image according to the camera internal parameters;
removing interference color noise, and extracting a three-primary-color single-channel data image;
and carrying out mean value filtering, image enhancement, binarization and morphological operation to obtain a skeletonized mode diagram of the vector marker lamp.
8. The method for visual positioning and close-range dense formation of underwater robot clusters according to claim 1, wherein the follower shoots scalar optical images in the area B and processes the images, and the three-dimensional coordinates of the target underwater robot are obtained by acquiring the scalar lamp pixel coordinates of the bow section and the stern section of the carrier and combining the camera internal parameters.
CN202110824053.5A 2021-07-21 2021-07-21 Visual positioning and close-range dense formation method for underwater robot clusters Active CN113534824B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110824053.5A CN113534824B (en) 2021-07-21 2021-07-21 Visual positioning and close-range dense formation method for underwater robot clusters

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN202110824053.5A CN113534824B (en) 2021-07-21 2021-07-21 Visual positioning and close-range dense formation method for underwater robot clusters

Publications (2)

Publication Number Publication Date
CN113534824A CN113534824A (en) 2021-10-22
CN113534824B true CN113534824B (en) 2023-04-25

Family

ID=78100656

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110824053.5A Active CN113534824B (en) 2021-07-21 2021-07-21 Visual positioning and close-range dense formation method for underwater robot clusters

Country Status (1)

Country Link
CN (1) CN113534824B (en)

Citations (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN109062204A (en) * 2018-07-25 2018-12-21 南京理工大学 It is a kind of based on follow pilotage people form into columns multiple mobile robot's control system

Family Cites Families (7)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4070437B2 (en) * 2001-09-25 2008-04-02 ダイハツ工業株式会社 Forward vehicle recognition device and recognition method
KR101863360B1 (en) * 2016-11-09 2018-07-05 (주)안세기술 3D laser scanning system using the laser scanner capable of tracking dynamic position in real time
CN109857102B (en) * 2019-01-21 2021-06-15 大连理工大学 Wheeled robot formation and tracking control method based on relative position
CN111721194A (en) * 2019-03-19 2020-09-29 北京伟景智能科技有限公司 Multi-laser-line rapid detection method
CN111498070B (en) * 2020-05-08 2021-06-08 中国科学院半导体研究所 Underwater vector light vision guiding method and device
CN112148023A (en) * 2020-10-10 2020-12-29 上海海事大学 Equal-plane underwater formation method for autonomous underwater robot
CN112665613A (en) * 2020-12-22 2021-04-16 三一重型装备有限公司 Pose calibration method and system of heading machine

Also Published As

Publication number Publication date
CN113534824A (en) 2021-10-22

Similar Documents

Publication Publication Date Title
US11237572B2 (en) Collision avoidance system, depth imaging system, vehicle, map generator and methods thereof
US10776939B2 (en) Obstacle avoidance system based on embedded stereo vision for unmanned aerial vehicles
CN110543859B (en) Sea cucumber autonomous identification and grabbing method based on deep learning and binocular positioning
CN106529495B (en) Obstacle detection method and device for aircraft
CN109211241B (en) Unmanned aerial vehicle autonomous positioning method based on visual SLAM
CN110068335B (en) Unmanned aerial vehicle cluster real-time positioning method and system under GPS rejection environment
US10444349B2 (en) Waypoint sharing systems and methods
US11187790B2 (en) Laser scanning system, laser scanning method, movable laser scanning system, and program
US20220033076A1 (en) System and method for tracking targets
US20170168159A1 (en) Augmented reality sonar imagery systems and methods
WO2018159168A1 (en) System and method for virtually-augmented visual simultaneous localization and mapping
WO2019152149A1 (en) Actively complementing exposure settings for autonomous navigation
US10642271B1 (en) Vehicle guidance camera with zoom lens
CN110887486B (en) Unmanned aerial vehicle visual navigation positioning method based on laser line assistance
CN110658916A (en) Target tracking method and system
CN111498070A (en) Underwater vector light vision guiding method and device
WO2021178603A1 (en) Water non-water segmentation systems and methods
Inzartsev et al. Underwater pipeline inspection method for AUV based on laser line recognition: Simulation results
CN113534824B (en) Visual positioning and close-range dense formation method for underwater robot clusters
Negahdaripour et al. A vision system for real-time positioning, navigation, and video mosaicing of sea floor imagery in the application of ROVs/AUVs
CN212193168U (en) Robot head with laser radars arranged on two sides
CN111798496A (en) Visual locking method and device
WO2021045679A1 (en) Method and system for object localization, and controlling movement of an autonomous underwater vehicle for object intervention
CN113034590A (en) AUV dynamic docking positioning method based on visual fusion
CN111152237B (en) Robot head with laser radars arranged on two sides and environment sampling method thereof

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant