CN117008622A - Visual robot underwater target identification tracking method and underwater visual robot thereof - Google Patents
- Publication number: CN117008622A
- Application number: CN202310366613.6A
- Authority: CN (China)
- Legal status: Pending (assumed; not a legal conclusion)
Abstract
The invention discloses a visual robot underwater target identification and tracking method and an underwater visual robot. A target image is acquired through a vision module, and the acquired image is preprocessed to obtain a multi-target feature set. Targets in the multi-target feature set are then identified and screened sequentially from near to far until a specific target is identified. During screening, each identified non-specific target is checked for interference: if there is no interference, the target is skipped and the next target is identified; if there is interference, the robot track is adjusted to a non-interference state, so that the obstacle target is avoided and the specific target can still be found. After the specific target is found, the underwater robot body system runs to the specific target along the planned track. According to the invention, the light falling on the underwater target is homogenized by a lattice light source, so that the target can be clearly found and identified by the vision module, realizing underwater target identification and tracking.
Description
Technical Field
The invention relates to the technical field of vision robots.
Background
At present, research on machine vision is carried out mainly on land robots. However, with the development of industrial robot technology and the needs of marine rescue, salvage and offshore oil exploitation, the demand for underwater vision robot systems of different types and purposes is growing. Existing underwater vision robots mainly use a monocular CCD camera as the vision sensor; under monocular conditions, pose measurement methods based on the PnP problem involve a relatively complex calculation process and easily produce false solutions, feeding wrong target position information to the underwater robot. Moreover, because illumination in the underwater environment is weak, existing identification and tracking systems struggle to extract information effectively when the light intensity is insufficient, so their identification and tracking performance also needs to be improved.
Disclosure of Invention
The invention aims to: overcome the defects in the prior art by providing a visual robot underwater target identification and tracking method and an underwater vision robot, in which the light falling on the underwater target is homogenized by a lattice light source, so that the target can be clearly found and identified by the vision module, realizing underwater target identification and tracking.
The technical scheme is as follows: to achieve the above purpose, the invention provides a visual robot underwater target identification and tracking method and an underwater vision robot, in which a specific target is identified through a vision module and a route is automatically planned to cruise toward and track the specific target, specifically comprising the following steps:
step I, detecting possible targets in the visual field and dividing multiple targets: firstly, acquiring a target image through the vision module, and then preprocessing the acquired image, wherein the preprocessing comprises graying; when the target image is acquired, the light in the detection field of view is homogenized through illumination, so that the image feature points of the same target are consistent on the grayed image; each area with consistent features on the image is then divided into a feature block, thereby obtaining a multi-target feature set formed by a plurality of feature blocks;

step II, sequentially identifying and screening the targets in the multi-target feature set from near to far until a specific target is identified; in the screening process, interference discrimination is carried out on the identified non-specific targets: if no interference exists, the target is directly skipped and the next target identification begins; if interference exists, the robot track is adjusted to a non-interference state, so that obstacle targets are avoided and the specific target is found;

step III, after the specific target is found, calculating and adjusting the posture of the robot body so that the specific target lies directly in front of the robot;

step IV, modeling the specific target: according to the extracted specific target features, reconstructing a three-dimensional model of the specific target using the stereoscopic vision principle, and establishing a three-dimensional model of the measured target or region;

step V, determining the space coordinates of the robot through a navigation and positioning algorithm, tracking the running track of the robot, and guiding the robot to move along the planned path, thereby tracking the specific target;

step VI, the underwater robot body system runs to the specific target along the planned track.
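The six-step flow above can be condensed into a short control-loop sketch. All names and data structures below are illustrative assumptions, not the patented implementation:

```python
# Hypothetical sketch of the six-step recognition-and-tracking flow
# (function names and data structures are assumed for illustration).

def preprocess(image):
    # Step I: gray the image; illumination is assumed already homogenized.
    return [[sum(px) // 3 for px in row] for row in image]

def segment_features(gray):
    # Step I: group pixels of consistent gray level into feature blocks
    # (a coarse gray-bin grouping stands in for real segmentation).
    blocks = {}
    for y, row in enumerate(gray):
        for x, g in enumerate(row):
            blocks.setdefault(g // 32, []).append((x, y))
    return list(blocks.values())

def track(feature_blocks, is_specific, has_interference):
    # Steps II-VI: scan blocks near-to-far, skip or avoid non-targets,
    # then drive toward the first specific target found.
    for block in feature_blocks:          # assumed sorted near-to-far
        if is_specific(block):
            return ("approach", block)    # steps III-VI: pose, model, navigate
        if has_interference(block):
            return ("avoid", block)       # adjust track to a non-interference state
    return ("continue_search", None)
```

Here `segment_features` stands in for step I's consistency-based division, and `track` collapses steps II-VI into a single decision per scan.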
Further, the detection field of view is illuminated by the dot matrix light source, whose point light sources are uniformly distributed over the image display area. The angle and illumination intensity of each point light source can be adjusted independently: the gray concentration degree of the grayed image is fed back to the lattice light source control module, and the illumination angle and intensity are adjusted for areas with higher gray concentration, so that the image features of different targets in those areas become clearly distinguishable.
Further, during identification and judgment, structured light is first emitted toward the targets covered by several adjacent feature blocks, and the pose of the target portion corresponding to each feature block is measured. The material corresponding to each feature block is judged by comparing the measured pose and the corresponding image features against a feature library. Then, according to the material features of the specific target, the qualifying feature blocks are combined into a new feature block, and the contour features of the image in the new feature block are compared against the feature library to judge whether it is the specific target.
Further, when the specific target is of a single material, after the material corresponding to each feature block has been judged, the feature blocks of the same material are combined into a new feature block, and the contour features of the target in the new feature block are compared against the feature library to judge whether it is the specific target.

Further, when the specific target is of multiple materials, after the material corresponding to each feature block has been judged, if no combination of blocks conforms to the material distribution rule of the specific target, each feature block is identified as a non-specific target; if a combination does conform, the feature blocks conforming to the material distribution rule of the specific target are combined into a new feature block, and the contour features of the target in the new feature block are compared against the feature library to judge whether it is the specific target.

Further, any one of the feature blocks to be combined is taken as the image feature reference; by adjusting the light source angles or illumination intensities corresponding to the remaining feature blocks, the feature points in those blocks are made consistent with the reference in the re-acquired and re-processed image, so that the several feature blocks are re-divided into one new feature block.

Further, if the multiple feature blocks cannot be combined by adjusting the light source under the current posture of the robot body, the posture of the robot body is adjusted by a calculated horizontal rotation or vertical lift, and feature-block combination is attempted again after the adjustment; this is repeated until the target is identified and judged.
Further, the robot comprises a robot body, the front end of the robot body is provided with the vision module, the periphery of the vision module is circumferentially provided with the lattice light source, the lattice light source and the vision module are respectively arranged on the front side mounting surface of the swing mounting seat, and the swing mounting seat is arranged in an up-and-down swing manner relative to the front end of the robot body.
Further, each light source unit of the lattice light source is arranged in a telescopic mode relative to the front side mounting surface of the swing mounting seat through a telescopic adjusting structure.
Further, each light source unit is installed at the end part of the telescopic adjusting structure through the multidirectional hinge structure.
Further, the robot body comprises a frame structure, a pressure-resistant bin is fixedly arranged in the middle of the frame structure and used for installing electric elements, a group of horizontal driving structures are symmetrically arranged on two sides of the front end and the rear end of the pressure-resistant bin, and a group of lifting driving structures are symmetrically arranged on two sides of the middle of the pressure-resistant bin.
Beneficial effects: the visual robot underwater target identification and tracking method and the underwater vision robot of the invention have at least the following advantages:
(1) The dot matrix light source cooperates with the vision module to clearly extract target features, and structured-light binocular vision measurement is used to achieve more accurate target identification and tracking, making the method suitable for operations such as underwater search and salvage, with wide application.
(2) The robot body moves flexibly in water by adopting a multipoint propulsion system model.
(3) The remote real-time control can be realized, the route can be automatically planned, the smooth proceeding of underwater operation is ensured, and the capability of coping with unexpected situations is provided.
(4) The optimal navigation and the optimal running path are realized, and accidents caused by obstruction in running are avoided.
Drawings
FIG. 1 is a block diagram of a visual robot underwater target recognition tracking method of the present invention;
FIG. 2 is a schematic diagram of the relative positions of the robot and the measured target in an underwater target recognition image acquisition example;
FIG. 3 is a schematic diagram of merging image feature intervals in the underwater target recognition image acquisition example;
FIG. 4 is a schematic view of the overall structure of an embodiment of an underwater vision robot;
FIG. 5 is a schematic diagram of a distribution position relationship of the lattice light source relative to the vision module in one embodiment of FIG. 4;
FIG. 6 is a schematic view showing the structure of a light source unit according to an embodiment;
FIG. 7 is a schematic diagram of an internal illuminant distribution of an embodiment of a single illuminant unit;
fig. 8 is a schematic view of a robot body structure of an embodiment of the underwater vision robot.
Detailed Description
The invention will be further described with reference to the accompanying drawings.
As shown in figs. 1-8, the visual robot underwater target identification and tracking method and the underwater vision robot identify a specific target through the vision module 2 and automatically plan a route to cruise toward and track the specific target, specifically comprising the following steps:
step I, detecting possible targets in the visual field and dividing multiple targets: firstly, acquiring a target image through the vision module 2, and then preprocessing the acquired image, wherein the preprocessing comprises graying; when the target image is acquired, the light in the detection field of view is homogenized through illumination, so that the image feature points of the same target are consistent on the grayed image; each area with consistent features on the image is then divided into a feature block, thereby obtaining a multi-target feature set formed by a plurality of feature blocks.

The detection field of view is illuminated by the dot matrix light source 3, whose point light sources are uniformly distributed over the image display area. The angle and illumination intensity of each point light source in the dot matrix light source 3 can be adjusted independently: the gray concentration degree of the grayed image is fed back to the lattice light source control module, and the illumination angle and intensity are adjusted for areas with higher gray concentration, so that the image features of different targets in those areas become clearly distinguishable.

This matters because the underwater environment is complex: light is dim and unevenly distributed, and each underwater target sits in a different posture. Even after noise reduction, pixel enhancement and similar processing, the target features may remain invisible in the collected image; after graying, uneven light easily produces locally concentrated gray levels in which the target features cannot be seen, making autonomous target identification impossible.
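The gray-concentration feedback just described can be sketched as follows; the histogram rule and threshold are assumptions used for illustration only:

```python
# Illustrative sketch of the gray-concentration feedback in step I: regions
# whose gray histogram is too concentrated are flagged so the corresponding
# point light sources can be re-aimed or re-powered (names are assumed).

def gray_concentration(region, bins=16):
    # Fraction of pixels falling into the single most-populated gray bin;
    # a value near 1.0 means the region is washed out and features are hidden.
    hist = [0] * bins
    for g in region:
        hist[min(g * bins // 256, bins - 1)] += 1
    return max(hist) / len(region)

def regions_needing_relight(regions, threshold=0.8):
    # Report to the lattice-light-source control module which regions
    # exceed the concentration threshold and need angle/intensity adjustment.
    return [i for i, r in enumerate(regions) if gray_concentration(r) > threshold]
```

A washed-out region (most pixels in one gray bin) is flagged, while a region whose grays spread across many bins is left alone.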
Step II, sequentially identifying and screening the targets in the multi-target feature set from near to far until a specific target is identified; in the screening process, interference discrimination is carried out on the identified non-specific targets: if no interference exists, the target is directly skipped and the next target identification begins; if interference exists, the robot track is adjusted to a non-interference state, so that obstacle targets are avoided and the specific target is found.

Considering the complexity of a target's structure, height differences across its light-receiving surfaces and differences in their materials lead to different gray levels for different parts of the target in the image, so a single target gets divided into several feature blocks. If the contour features within a single feature block are not distinct enough to stand in for the target features, the robot cannot autonomously and accurately identify the target by contour comparison. The scheme therefore introduces a preset material feature library, i.e. a pre-stored set of image features of various materials photographed in different underwater postures.

When a detected target is segmented into several feature blocks in the image and cannot be identified from any single block, structured light is first emitted toward the targets covered by the adjacent feature blocks, and the pose of the target portion corresponding to each feature block is measured. The material corresponding to each feature block is judged by comparing the measured pose and the corresponding image features against the feature library. Then, according to the material features of the specific target, the qualifying feature blocks are combined into a new feature block, and the contour features of the image in the new feature block are compared against the feature library to judge whether it is the specific target.
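A minimal sketch of this material-judgment-and-merge logic is given below. The library signatures, the cosine pose compensation, and the nearest-match rule are all assumptions standing in for the patent's feature-library comparison:

```python
import math

# Hypothetical material feature library: material -> pose-normalized gray
# signature (values are illustrative, not from the patent).
FEATURE_LIBRARY = {"steel": 60, "rubber": 120, "sediment": 200}

def judge_material(block_gray, tilt_deg):
    # Compensate the observed gray for the structured-light-measured tilt
    # (a Lambertian cosine-falloff assumption), then nearest-signature match.
    normalized = block_gray / max(math.cos(math.radians(tilt_deg)), 0.2)
    return min(FEATURE_LIBRARY, key=lambda m: abs(FEATURE_LIBRARY[m] - normalized))

def merge_blocks_of(material, blocks):
    # Combine every feature block judged to be the target material into one
    # new block, whose contour is then compared against the feature library.
    return [b for b in blocks if judge_material(*b) == material]
```

Each block is a `(mean_gray, measured_tilt)` pair; pose compensation is what lets two differently lit faces of one object be judged as the same material.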
When the specific target is of a single material, after the material corresponding to each feature block has been judged, the feature blocks of the same material are combined into a new feature block, and the contour features of the target in the new feature block are compared against the feature library to judge whether it is the specific target.

When the specific target is of multiple materials, after the material corresponding to each feature block has been judged, if no combination of blocks conforms to the material distribution rule of the specific target, each feature block is identified as a non-specific target; if a combination does conform, the feature blocks conforming to the material distribution rule of the specific target are combined into a new feature block, and the contour features of the target in the new feature block are compared against the feature library to judge whether it is the specific target.

Any one of the feature blocks to be combined is taken as the image feature reference; by adjusting the light source angles or illumination intensities corresponding to the remaining feature blocks, the feature points in those blocks are made consistent with the reference in the re-acquired and re-processed image, so that the several feature blocks are re-divided into one new feature block.

If the multiple feature blocks cannot be combined by adjusting the light source under the current posture of the robot body 1, the posture of the robot body 1 is adjusted by a calculated horizontal rotation or vertical lift, and feature-block combination is attempted again after the adjustment; this is repeated until the target is identified and judged.
Step III, after the specific target is found, calculating and adjusting the posture of the robot body 1 so that the specific target lies directly in front of the robot;

step IV, modeling the specific target: according to the extracted specific target features, reconstructing a three-dimensional model of the specific target using the stereoscopic vision principle, and establishing a three-dimensional model of the measured target or region;

step V, determining the space coordinates of the robot through a navigation and positioning algorithm, tracking the running track of the robot, and guiding the robot to move along the planned path, thereby tracking the specific target;

step VI, the underwater robot body system runs to the specific target along the planned track.
The specific target is the particular target for which the underwater target identification and tracking task is executed; all other targets are non-specific targets. The structured light is actively emitted laser structured light, which further improves the accuracy and adaptability of detection.
The underwater vision robot utilizing the underwater recognition tracking method comprises a robot body 1, wherein the front end of the robot body 1 is provided with a vision module 2, the periphery of the vision module 2 is circumferentially provided with a dot matrix light source 3, the dot matrix light source 3 and the vision module 2 are respectively arranged on a front side mounting surface of a swing mounting seat 4, and the swing mounting seat 4 is vertically arranged in a swinging manner relative to the front end of the robot body 1.
Each light source unit of the lattice light source 3 is arranged in a telescopic way relative to the front side mounting surface of the swing mounting seat 4 through a telescopic adjusting structure 5; the light source units are respectively arranged at the end parts of the telescopic adjusting structure 5 through the multidirectional hinging structure.
The robot body 1 comprises a frame structure 1-1, wherein a pressure-resistant bin 8 is fixedly arranged in the middle of the frame structure 1-1 and used for mounting electric elements, a group of horizontal driving structures 6 are symmetrically arranged on two sides of the front end and the rear end of the pressure-resistant bin 8, and a group of lifting driving structures 7 are symmetrically arranged on two sides of the middle of the pressure-resistant bin;
the horizontal driving structure and the lifting driving structure both adopt propellers, the robot moves underwater by using 6 propellers, 2 of the propellers are arranged in the vertical direction to move the underwater robot in the vertical direction, the rest 4 of the propellers are symmetrically arranged in the horizontal direction to move and drive the underwater robot in the horizontal direction, and the horizontally arranged 4 propellers surround the underwater robot body to form a diamond and are arranged in the same direction, so that the horizontal rotation adjustment and lifting air floatation adjustment of the underwater robot body can be realized.
The buoyancy of the underwater robot is trimmed by adding floating-body material, and matching counterweights are arranged at the bottom to improve the stability of the robot underwater.
As shown in fig. 2, when the underwater robot acquires an image of a polyhedral block-shaped target N resting on an inclined plane M in front of it underwater, the image shown on the left of fig. 3 is obtained after preliminary acquisition and preprocessing. Assume target N has a single-material structure. Because of its underwater pose and possible occlusion, the four visible end surfaces of target N receive light of different intensities, so the four surfaces a, b, c and d have different gray levels in the image, and target N is divided into several feature blocks. At this point, the lattice light source is adjusted to re-light each end surface in a targeted manner: taking surface c as the reference, the angles of the light source units illuminating surfaces a, b and d, their distances to the light-receiving surfaces, their distribution, the luminous intensity of the internal light source bodies and so on are changed until the gray levels of the four surfaces a, b, c and d are consistent in the image. Target N is thereby divided into a single feature block in the image and compared against the feature blocks in the feature library for identification.
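The worked example can be mimicked numerically: a per-face light gain stands in for the real angle/distance/intensity adjustments, with surface c as the reference (all numbers illustrative):

```python
# Sketch of the fig. 2/3 example: faces a, b, d are re-lit until their mean
# gray matches reference face c, after which the four face regions collapse
# into one feature block. A scalar "gain" per face is an assumption standing
# in for the lattice light source's angle/distance/intensity adjustments.

def relight_to_reference(face_grays, ref="c"):
    target = face_grays[ref]
    # gain each light source unit would need so its face matches face c
    return {f: target / g for f, g in face_grays.items()}

def merged_block_count(face_grays, tol=5):
    # After equalization, faces within tol gray levels form one block.
    values = sorted(face_grays.values())
    blocks = 1
    for a, b in zip(values, values[1:]):
        if b - a > tol:
            blocks += 1
    return blocks
```

Before relighting the four faces split into four blocks; after applying the gains, all four grays match surface c and a single feature block remains.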
The light source bodies inside a single light-emitting unit can adopt the distribution structure shown in fig. 7, where P is a fixed wide-area light source and Q is an adjustable wide-area light source. Several P sources are arranged circumferentially at uniform, equal spacing and determine the basic illumination field of the light source unit; Q is movably mounted within the dotted-line range in the figure and is used to balance or adjust the light intensity over the illumination range.
As a preferred embodiment, the vision module 2 consists of a pair of megapixel miniature industrial cameras. The cameras transmit image data over Ethernet; a video signal integrated circuit integrates the two video signal channels and connects to the vision detection information processing computer through a bus. A cradle-head (pan-tilt) assembly swings the cameras left and right, expanding the search space during underwater search and improving search efficiency. The swing driving circuit receives instructions from the signal conditioning control board and drives the swing mechanism; the driving circuit is designed with isolation protection and electromagnetic compatibility.
Firstly, distortion-free secondary sub-sampling is applied to the real-time binocular images, reducing the data volume while preserving the measured target information and enabling real-time target estimation; on this basis the images undergo fusion filtering;
in a binocular image pair, image noise is generally randomly distributed while the features of the measured target are essentially unchanged; exploiting this property, statistical calculation is performed on the binocular images to suppress noise pixels and enhance target pixels;
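This statistical suppression can be illustrated by a per-pixel mean over a short frame window (a simplification; the patent does not specify the exact statistic):

```python
# Noise varies randomly between frames while target pixels stay stable,
# so a per-pixel mean over a short window suppresses noise and keeps the
# target (a stand-in for the statistical calculation described above).

def temporal_mean(frames):
    # frames: list of equally sized gray images (lists of rows)
    n = len(frames)
    h, w = len(frames[0]), len(frames[0][0])
    return [[sum(f[y][x] for f in frames) / n for x in range(w)]
            for y in range(h)]
```

A stable target pixel keeps its value while a flickering noise pixel is averaged toward its (small) mean.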
on the basis of the filtered images, the presence of a target is judged from the image: if no obstacle target exists, image collection and target identification continue; if obstacle targets exist, the targets are segmented, the segmented targets are then identified and divided using a distance algorithm and other correlation identification algorithms, and finally the features of each target are output, providing basic data for subsequent processing;
the binocular vision module directly calculates the target distance by adopting the binocular parallax, and compared with monocular distance calculation, the binocular vision module is simpler and is not easy to generate error coordinates.
As an embodiment, when the underwater vision robot is applied to underwater target salvage, a five-degree-of-freedom mechanical arm fitted with a flexible manipulator is used to salvage the target object, with the following specific steps:

First, the underwater binocular robot is driven to an area where the target may exist. An image of the visible area is acquired by the binocular vision module, and the light intensity of the area is fed back from the image so that the illumination module can be adjusted, expanding the illumination range as far as the illumination conditions allow. The targets in the visible range are searched and identified sequentially from near to far, each being judged against the database by its contour features to decide whether it is the specific target. If the gray levels are too concentrated and the contour is unclear, the lattice light source is adjusted until the contour is clear before identification; if adjusting the lattice light source alone cannot clarify the contour feature points of the target image, the posture of the robot is adjusted and then the lattice light source is adjusted until the contour is clear. Each non-specific target is checked for interference, and if interference exists, the robot posture is adjusted to avoid it. Once the specific target is identified, the posture of the machine is adjusted so that the specific target lies directly in front of the robot; automatic path planning is achieved through target ranging, space modeling and robot navigation; and when the robot has been driven to the planned position, the arm system is operated to salvage the specific target. The salvage process is as follows:

First, the target to be grabbed is brought within the visible range of the binocular cameras, and the surface information of the object to be grabbed is collected. The central information processing part processes the data collected by the binocular cameras, obtains coordinates at which the manipulator can grab, converts them into control instructions and sends them to the mechanical arm and manipulator. The mechanical arm and manipulator move to the designated position and grab the target object according to the control instructions; after grabbing, the target is placed into the frame structure and the mechanical arm returns to its reset point, completing one salvage cycle.
The foregoing description is only of the preferred embodiments of the invention, it being noted that: it will be apparent to those skilled in the art that numerous modifications and adaptations can be made without departing from the principles of the invention described above, and such modifications and adaptations are intended to be comprehended within the scope of the invention.
Claims (11)
1. A visual robot underwater target identification tracking method is characterized in that: a specific target is identified through the vision module (2), and a route is automatically planned to cruise toward and track the specific target, specifically comprising the following steps:
step I, detecting possible targets in the visual field, and dividing multiple targets: firstly, acquiring a target image through a vision module (2), and then preprocessing the acquired image, wherein the image preprocessing comprises graying processing, and when the target image is acquired, homogenizing light in a detection view field through illumination, so that image characteristic points of the same target on the image after graying are identical, and then dividing an area with identical characteristics on the image into a characteristic block, thereby obtaining a multi-target characteristic set formed by a plurality of characteristic blocks;
step II, identifying and screening targets in the multi-target feature set in order from near to far until the specific target is identified; during screening, interference discrimination is performed on each identified non-specific target: if there is no interference, the target is skipped and the next target is identified; if there is interference, the robot trajectory is adjusted to a non-interfering state, so that obstacle targets are avoided while the specific target is sought;
step III, after the specific target is found, calculating and adjusting the posture of the robot body (1) so that the specific target is located directly in front of the robot;
step IV, modeling the specific target: according to the extracted features of the specific target, reconstructing the target using the stereoscopic vision principle to establish a three-dimensional model of the measured target or region;
step V, determining the spatial coordinates of the robot through a navigation and positioning algorithm, tracking the robot's running trajectory, and guiding the robot to move according to the plan, thereby tracking the specific target;
and step VI, the underwater robot body system travelling to the specific target along the planned trajectory.
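The segmentation in step I can be sketched as graying followed by grouping connected pixels of identical gray value into feature blocks. This is a simplified illustration under the claim's assumption that homogenized lighting makes a target's feature points identical; a real image would need tolerance bands and denoising. All function names are hypothetical.

```python
# Sketch of step I: gray the image, then group 4-connected pixels with
# identical gray values into feature blocks (sets of pixel coordinates).

def to_gray(rgb_image):
    """ITU-R BT.601 luma per pixel; rgb_image is a 2D grid of (r, g, b)."""
    return [[round(0.299 * r + 0.587 * g + 0.114 * b) for (r, g, b) in row]
            for row in rgb_image]

def feature_blocks(gray):
    """Return feature blocks: each a set of (row, col) pixels sharing one
    gray value and connected 4-directionally."""
    h, w = len(gray), len(gray[0])
    seen, blocks = set(), []
    for sr in range(h):
        for sc in range(w):
            if (sr, sc) in seen:
                continue
            block, stack, value = set(), [(sr, sc)], gray[sr][sc]
            while stack:  # iterative flood fill over equal-valued pixels
                r, c = stack.pop()
                if (r, c) in seen or not (0 <= r < h and 0 <= c < w):
                    continue
                if gray[r][c] != value:
                    continue
                seen.add((r, c))
                block.add((r, c))
                stack += [(r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)]
            blocks.append(block)
    return blocks

gray = to_gray([[(10, 10, 10)] * 2 + [(200, 200, 200)] * 2 for _ in range(2)])
blocks = feature_blocks(gray)
```

On this toy 2x4 image the left and right halves come out as two feature blocks, forming the multi-target feature set of step I.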
2. The visual robot underwater target identification and tracking method according to claim 1, characterized in that: the detection field of view is illuminated by the lattice light source (3); the lattice light source (3) is laid out to illuminate uniformly over the corresponding image display area, and the angle and illumination intensity of each point light source in the lattice light source (3) can be adjusted independently; the gray-level concentration of the grayed image is fed back to the lattice light source control module, and the illumination angle and intensity are adjusted for regions with higher gray-level concentration, so that the image features of different targets in those regions become clearly distinguishable.
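The gray-level concentration feedback of claim 2 can be sketched as a per-region histogram measure: a region whose pixels cluster on one gray value carries little feature contrast and is flagged for light adjustment. The concentration metric and the threshold value are hypothetical choices, not specified in the patent.

```python
# Sketch of claim 2's feedback: flag regions whose gray histogram is too
# concentrated so their point light sources can be re-angled/re-powered.

from collections import Counter

def gray_concentration(region_pixels):
    """Fraction of pixels at the single most common gray value; near 1.0
    means the region is almost one flat gray (features indistinct)."""
    counts = Counter(region_pixels)
    return counts.most_common(1)[0][1] / len(region_pixels)

def regions_needing_relight(regions, threshold=0.8):
    """Indices of regions whose concentration exceeds the (hypothetical)
    threshold, i.e. candidates for illumination adjustment."""
    return [i for i, pixels in enumerate(regions)
            if gray_concentration(pixels) > threshold]
```

The flagged indices would drive the lattice light source control module, which adjusts only the point sources covering those regions.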
3. The visual robot underwater target identification and tracking method according to claim 2, characterized in that: when a detected target is segmented into a plurality of feature blocks in the image and cannot be identified from a single block, structured light is first emitted onto the targets of several adjacent feature blocks and the pose of the target portion corresponding to each feature block is measured; the measured poses are compared with the corresponding image features in a feature library to judge the material corresponding to each feature block; a plurality of feature blocks meeting the requirements are then combined into a new feature block according to the material features of the specific target, and the contour features of the image in the new feature block are compared against the feature library to judge whether the detected target is the specific target.
4. The visual robot underwater target identification and tracking method according to claim 3, characterized in that: when the specific target is of a single material, after the material corresponding to each feature block has been judged, a plurality of feature blocks of the same material are combined into a new feature block, and the contour features of the target in the new feature block are compared against the feature library to judge whether it is the specific target.
5. The visual robot underwater target identification and tracking method according to claim 4, characterized in that: when the specific target comprises multiple materials, after the material corresponding to each feature block has been judged, if no combination of blocks conforms to the material distribution rule of the specific target, each feature block is identified as a non-specific target; if a combination conforming to the material distribution rule of the specific target exists, the feature blocks in that combination are combined into a new feature block, and the contour features of the target in the new feature block are compared against the feature library to judge whether it is the specific target.
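The material-based merging of claims 3 to 5 can be sketched as matching per-block material judgments against the target's required materials. Here the "material distribution rule" is simplified to a multiset of required materials, which is a hypothetical stand-in; the patent's rule presumably also constrains spatial arrangement.

```python
# Sketch of claims 3-5: merge feature blocks whose judged materials
# together satisfy the specific target's material requirements.

from collections import Counter

def merge_blocks_for_target(blocks, target_materials):
    """blocks: list of (block_id, material). Return the merged block ids
    if every required material can be matched, else None (the detected
    object is then treated as a non-specific target)."""
    needed = Counter(target_materials)
    merged = []
    for block_id, material in blocks:
        if needed[material] > 0:
            needed[material] -= 1
            merged.append(block_id)
    if any(needed.values()):   # distribution rule not satisfied
        return None
    return merged
```

The returned merged block would then go through the contour comparison against the feature library described in the claims.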
6. The visual robot underwater target identification and tracking method according to claim 5, characterized in that: any one of the feature blocks to be combined is taken as the image feature standard, and the angles or illumination intensities of the light sources corresponding to the remaining feature blocks are adjusted so that, in the re-acquired and re-processed image, the feature points in the remaining blocks are consistent with the image feature standard, whereupon the blocks are re-divided into a single new feature block.
7. The visual robot underwater target identification and tracking method according to claim 6, characterized in that: if the multiple feature blocks cannot be combined by adjusting the light sources under the current posture of the robot body (1), the posture of the robot body (1) is adjusted by horizontal rotation or lifting after calculation, and combination of the feature blocks is attempted again after the adjustment; this is repeated until an identification judgment is made on the target.
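The light-adjustment loop of claim 6 can be sketched as stepping the intensity behind each non-standard block until its mean gray matches the standard block's. The linear gray-versus-intensity response and the step, tolerance, and iteration limit are all simplifying hypothetical assumptions.

```python
# Sketch of claim 6: nudge every non-standard block's mean gray toward
# the standard block's mean, emulating per-source intensity adjustment.

def unify_blocks(block_means, standard_index, step=1.0, tolerance=0.5,
                 max_iters=500):
    """block_means: mutable list of per-block mean gray values. Returns
    iterations used once all blocks match the standard within tolerance,
    or -1 if not converged (claim 7 would then adjust the robot pose)."""
    target = block_means[standard_index]
    for it in range(max_iters):
        done = True
        for i, mean in enumerate(block_means):
            if i == standard_index:
                continue
            if abs(mean - target) > tolerance:
                done = False
                # emulate raising/lowering that block's light intensity
                block_means[i] += step if mean < target else -step
        if done:
            return it
    return -1

means = [120.0, 100.0, 130.0]
iters = unify_blocks(means, 0)
```

A `-1` return corresponds to the fallback in claim 7: the light sources alone cannot unify the blocks, so the robot body's posture is adjusted and the combination is retried.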
8. An underwater vision robot for the visual robot underwater target identification and tracking method according to any one of claims 1 to 7, characterized in that: it comprises a robot body (1), a vision module (2) being provided at the front end of the robot body (1); a lattice light source (3) is laid around the periphery of the vision module (2); the lattice light source (3) and the vision module (2) are both arranged on the front mounting surface of a swing mounting seat (4), and the swing mounting seat (4) is arranged to swing up and down relative to the front end of the robot body (1).
9. The underwater vision robot according to claim 8, characterized in that: the light source units (3-1) of the lattice light source (3) are each arranged telescopically relative to the front mounting surface of the swing mounting seat (4) through a telescopic adjusting structure (5).
10. The underwater vision robot according to claim 9, characterized in that: the light source units (3-1) are each mounted at the end of the telescopic adjusting structure (5) through a multidirectional hinge structure (3-2).
11. The underwater vision robot according to claim 10, characterized in that: the robot body (1) further comprises a frame structure (1-1); a pressure-resistant cabin (8) for mounting electrical components is fixedly arranged in the middle of the frame structure (1-1); a group of horizontal driving structures (6) is symmetrically arranged on each side of the front and rear ends of the pressure-resistant cabin (8), and a group of lifting driving structures (7) is symmetrically arranged on each side of its middle.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN202310366613.6A CN117008622A (en) | 2023-04-07 | 2023-04-07 | Visual robot underwater target identification tracking method and underwater visual robot thereof |
Publications (1)
Publication Number | Publication Date |
---|---|
CN117008622A (en) | 2023-11-07
Family
ID=88560725
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN202310366613.6A Pending CN117008622A (en) | 2023-04-07 | 2023-04-07 | Visual robot underwater target identification tracking method and underwater visual robot thereof |
Country Status (1)
Country | Link |
---|---|
CN (1) | CN117008622A (en) |
Cited By (2)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN117739994A (en) * | 2024-02-20 | 2024-03-22 | 广东电网有限责任公司阳江供电局 | Visual robot underwater target identification tracking method and system |
CN117739994B (en) * | 2024-02-20 | 2024-04-30 | 广东电网有限责任公司阳江供电局 | Visual robot underwater target identification tracking method and system |
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CA2950791C (en) | Binocular visual navigation system and method based on power robot | |
CN111461023B (en) | Method for quadruped robot to automatically follow pilot based on three-dimensional laser radar | |
CN112418103B (en) | Bridge crane hoisting safety anti-collision system and method based on dynamic binocular vision | |
CN103419944B (en) | Air bridge and automatic abutting method therefor | |
CN106950952B (en) | Farmland environment sensing method for unmanned agricultural machinery | |
CN109931909B (en) | Unmanned aerial vehicle-based marine fan tower column state inspection method and device | |
CN109773783B (en) | Patrol intelligent robot based on space point cloud identification and police system thereof | |
CN113085896B (en) | Auxiliary automatic driving system and method for modern rail cleaning vehicle | |
CN108693535A (en) | A kind of detection system for obstacle and detection method for underwater robot | |
CN105184816A (en) | Visual inspection and water surface target tracking system based on USV and detection tracking method thereof | |
CN114140439B (en) | Laser welding seam characteristic point identification method and device based on deep learning | |
CN109623815B (en) | Wave compensation double-robot system and method for unmanned salvage ship | |
CN110737271A (en) | Autonomous cruise system and method for water surface robots | |
CN117008622A (en) | Visual robot underwater target identification tracking method and underwater visual robot thereof | |
CN116255908B (en) | Underwater robot-oriented marine organism positioning measurement device and method | |
CN108106617A (en) | A kind of unmanned plane automatic obstacle-avoiding method | |
CN111572737A (en) | AUV capturing and guiding method based on acoustic and optical guidance | |
CN108564628A (en) | A kind of cutterhead vision positioning orientation system towards development machine automation | |
JP2003030792A (en) | Device and method for discriminating type of object | |
CN212623088U (en) | Iron tower attitude early warning device based on image recognition and laser ranging | |
CN105824024A (en) | Novel underwater gate anti-frogman three-dimensional early warning identification system | |
CN117197779A (en) | Track traffic foreign matter detection method, device and system based on binocular vision | |
CN115188091B (en) | Unmanned aerial vehicle gridding inspection system and method integrating power transmission and transformation equipment | |
CN110696003A (en) | Water side rescue robot based on SLAM technology and deep learning | |
Cui et al. | Recognition of indoor glass by 3D lidar |
Legal Events
Date | Code | Title | Description |
---|---|---|---|
PB01 | Publication | ||
SE01 | Entry into force of request for substantive examination | ||