CN110825088B - Multi-view vision guiding ship body cleaning robot system and cleaning method - Google Patents

Multi-view vision guiding ship body cleaning robot system and cleaning method

Info

Publication number
CN110825088B
CN110825088B (application CN201911201037.XA)
Authority
CN
China
Prior art keywords
cleaning
ship body
hull
module
cleaning robot
Prior art date
Legal status
Active
Application number
CN201911201037.XA
Other languages
Chinese (zh)
Other versions
CN110825088A (en)
Inventor
朱丹丹
张甫
Current Assignee
Yanshan University
Original Assignee
Yanshan University
Priority date
Filing date
Publication date
Application filed by Yanshan University filed Critical Yanshan University
Priority to CN201911201037.XA
Publication of CN110825088A
Application granted
Publication of CN110825088B

Classifications

    • G PHYSICS
    • G05 CONTROLLING; REGULATING
    • G05D SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 Control of position or course in two dimensions
    • G05D1/021 Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0231 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0238 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors
    • G05D1/024 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using obstacle or wall sensors in combination with a laser
    • G05D1/0242 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using non-visible light signals, e.g. IR or UV signals
    • G05D1/0246 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means
    • G05D1/0253 Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
    • G05D1/0257 Control of position or course in two dimensions specially adapted to land vehicles using a radar
    • G05D1/0276 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle
    • G05D1/0278 Control of position or course in two dimensions specially adapted to land vehicles using signals provided by a source external to the vehicle using satellite positioning signals, e.g. GPS
    • G06 COMPUTING; CALCULATING OR COUNTING
    • G06V IMAGE OR VIDEO RECOGNITION OR UNDERSTANDING
    • G06V20/00 Scenes; Scene-specific elements
    • G06V20/10 Terrestrial scenes
    • B PERFORMING OPERATIONS; TRANSPORTING
    • B63 SHIPS OR OTHER WATERBORNE VESSELS; RELATED EQUIPMENT
    • B63B SHIPS OR OTHER WATERBORNE VESSELS; EQUIPMENT FOR SHIPPING
    • B63B59/00 Hull protection specially adapted for vessels; Cleaning devices specially adapted for vessels
    • B63B59/06 Cleaning devices for hulls

Abstract

The invention relates to a multi-view vision-guided hull cleaning robot system and cleaning method. The system comprises a plurality of monocular cameras, a hull workstation module, a hull workstation control module and a cleaning robot module, connected in sequence. Each monocular camera is located beside the hull and collects images of the part of the hull surface within its visual range. The hull workstation module receives the hull surface images and transmits them to the hull workstation control module, which processes the hull surface image information to complete the stitching of the whole hull image, determines the whole hull image, and marks the hull surface cleaning area, the position information of the cleaning area and the cleaning path. The cleaning robot module cleans the hull according to the received hull surface cleaning area, position information and cleaning path. The invention improves both the efficiency and the effect of hull cleaning.

Description

Multi-view vision guiding ship body cleaning robot system and cleaning method
Technical Field
The invention relates to the field of hull cleaning, in particular to a multi-view vision guiding hull cleaning robot system and a cleaning method.
Background
When a ship operates at sea for long periods, prolonged immersion in and corrosion by seawater accelerates rusting, and large quantities of marine organisms attach to the hull. Rust and fouling slow the ship down and increase fuel consumption, which not only delays the voyage schedule but also shortens the service life of the vessel. Fouling and rust removal are therefore indispensable processes in the shipping industry. At present, however, they rely mainly on sand-blasting and manual scrubbing, which are time- and labour-consuming, inefficient, and give mediocre cleaning results; in severe cases they damage the hull coating and cause a degree of damage to the hull wall.
Disclosure of Invention
The invention aims to provide a multi-view vision-guided hull cleaning robot system and cleaning method that can improve the efficiency and the effect of hull cleaning.
In order to achieve the purpose, the invention provides the following scheme:
A multi-view vision-guided hull cleaning robot system comprising: a plurality of monocular cameras, a hull workstation module, a hull workstation control module and a cleaning robot module, connected in sequence. Each monocular camera is located beside the hull and collects images of the part of the hull surface within its visual range; the hull workstation module receives each partial hull surface image and transmits it to the hull workstation control module; the hull workstation control module processes the hull surface image information to complete the stitching of the whole hull image, determines the whole hull image, and marks the hull surface area to be cleaned, the position information of that area and the cleaning path; the cleaning robot module cleans the hull according to the received area to be cleaned, its position information and the cleaning path.
Optionally, the cleaning robot module includes a robot body, a lidar and an infrared sensor. The lidar collects information on obstacles on the hull surface and the infrared sensor collects the distance between the robot body and an obstacle; the hull workstation control module is connected to the lidar and the infrared sensor respectively and identifies obstacles according to the hull surface obstacle information and the distance information.
Optionally, the cleaning robot module includes a positioning sub-module connected to the hull workstation control module.
Optionally, the positioning sub-module comprises a gyroscope and a BeiDou navigation system used to locate the position of the robot body, and the hull workstation control module is connected to the gyroscope and the BeiDou navigation system respectively.
Optionally, the cleaning robot module includes a servo motor and a high-pressure water gun. The hull workstation control module is connected to the servo motor and the high-pressure water gun respectively; the high-pressure water gun cleans the hull surface area to be cleaned as determined by the hull workstation control module, and the servo motor provides power for the robot body.
Optionally, the high-pressure water gun can rotate 180 degrees horizontally and has adjustable pressure.
Optionally, the hull workstation control module adopts a multi-thread hierarchical cooperation control structure.
Optionally, the cleaning robot module has two cleaning modes: whole-hull cleaning and local cleaning. For whole-hull cleaning, the hull workstation control module controls each monocular camera to visually guide the cleaning robot module and ensures that the cleaning robot module completes the tasks of cleaning image-overlap areas and cross-area cooperative cleaning. For local cleaning, the hull workstation control module plans a route for the cleaning robot module to the designated position according to the size and position of the stain area, completes the local cleaning, and judges whether the cleaned area needs a second cleaning pass.
Optionally, the cleaning robot module's planning of the cleaning path is performed on a grid-based model.
A cleaning method for a multi-view vision-guided hull cleaning robot system, comprising:
acquiring the partial hull images collected by a plurality of monocular cameras;
stitching the partial hull images to obtain a whole hull image;
determining the area to be cleaned from the whole hull image;
selecting a cleaning mode according to the area to be cleaned;
planning a cleaning path according to the cleaning mode and the cleaning area;
controlling the cleaning robot body to travel along the cleaning path to the cleaning area and clean the hull;
judging whether the cleaned hull is up to standard;
if yes, stopping cleaning;
if not, continuing to acquire the partial hull images collected by the plurality of monocular cameras.
According to the specific embodiment provided by the invention, the invention discloses the following technical effects:
the present invention provides a multi-vision guiding hull cleaning robot system, comprising: the ship comprises a plurality of monocular cameras, a ship body workstation module, a ship body workstation control module and a cleaning robot module which are connected in sequence, wherein each monocular camera is positioned beside a ship body and used for collecting surface images of a part of the ship body in a visual range, the hull work station module is used for receiving each partial hull surface image and transmitting each hull surface image to the hull work station control module, the hull workstation control module processes the hull surface image information to complete the splicing of the whole hull image, determines the whole hull image and marks the hull surface area to be cleaned, the position information of the area to be cleaned and the cleaning path, the cleaning robot module is used for cleaning the ship body according to the received area to be cleaned on the surface of the ship body, the position information of the area to be cleaned and the cleaning path. According to the invention, the traditional manual cleaning mode is replaced by the mode of cleaning the ship body by the robot, so that the efficiency of cleaning the ship body is improved, and the effect of cleaning the ship body is improved.
Drawings
To illustrate the embodiments of the present invention or the technical solutions in the prior art more clearly, the drawings needed in the embodiments are briefly described below. Obviously, the drawings described below show only some embodiments of the present invention, and those skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a structural component diagram of a multi-view vision-guided hull cleaning robot system of the present invention;
FIG. 2 is a schematic diagram of the multi-vision guidance system of the present invention;
FIG. 3 is a schematic diagram of the image matching epipolar constraint of the present invention;
FIG. 4 is a flow chart of the cleaning method of the multi-view vision-guided hull cleaning robot system of the invention.
Detailed Description
The technical solutions in the embodiments of the present invention will be clearly and completely described below with reference to the drawings in the embodiments of the present invention, and it is obvious that the described embodiments are only a part of the embodiments of the present invention, and not all of the embodiments. All other embodiments, which can be derived by a person skilled in the art from the embodiments given herein without making any creative effort, shall fall within the protection scope of the present invention.
The invention aims to provide a multi-view vision-guided hull cleaning robot system that improves both the efficiency and the effect of hull cleaning.
In order to make the aforementioned objects, features and advantages of the present invention comprehensible, embodiments accompanied with figures are described in further detail below.
Fig. 1 is a structural composition diagram of the multi-view vision-guided hull cleaning robot system of the invention. As shown in fig. 1, the system includes: a plurality of monocular cameras 1, a hull workstation module 2, a hull workstation control module 3 and a cleaning robot module 4, connected in sequence. Each monocular camera 1 is located beside the hull and collects images of the part of the hull surface within its visual range. The hull workstation module 2 receives each partial hull surface image and transmits it to the hull workstation control module 3, which processes the hull surface image information to complete the stitching of the whole hull image, determines the whole hull image, and marks the hull surface area to be cleaned, the position information of that area and the cleaning path. The cleaning robot module 4 cleans the hull according to the received area to be cleaned, its position information and the cleaning path. The hull workstation module 2 also supplies power to the cleaning robot module 4; it monitors the robot's battery level in real time, predicts usage, and issues a voice prompt when a cleaning task is finished or the battery is low.
The cleaning robot module 4 includes a robot body, a lidar and an infrared sensor. The lidar collects information on obstacles on the hull surface, and the infrared sensor collects the distance between the robot body and an obstacle. The hull workstation control module 3 is connected to the lidar and the infrared sensor respectively, and identifies obstacles according to the hull surface obstacle information and the robot-obstacle distance information. The cleaning robot module 4 includes a positioning sub-module connected to the hull workstation control module 3; the sub-module comprises a gyroscope and a BeiDou navigation system used to locate the position of the robot body, each connected to the hull workstation control module 3. The cleaning robot module 4 also includes a servo motor and a high-pressure water gun, both connected to the hull workstation control module 3: the high-pressure water gun cleans the hull surface area to be cleaned as determined by the hull workstation control module 3, and the servo motor provides power for the robot body. The high-pressure water gun can rotate 180 degrees horizontally and has adjustable pressure. The robot body uses a permanent-magnet-adhesion, all-terrain, double-track chassis.
The hull workstation control module 3 adopts a multi-threaded hierarchical cooperative control structure comprising a main thread, a device cooperation layer, a path planning layer and an action execution layer. The main thread controls the cleaning robot module 4 through several different sub-threads to complete the cleaning work; the device cooperation layer handles the communication and data transfer between the threads and assists the sub-threads in processing data; the path planning layer plans the cleaning route of the cleaning robot module 4, including autonomously planned local cleaning routes; and the action execution layer carries out the commands issued by the sub-threads.
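As an illustration of this layering, a minimal Python sketch follows; the queue-based message passing standing in for the device cooperation layer and the helper names plan_cleaning_route and execute_path are assumptions made for illustration, not the patent's implementation.

import queue
import threading

# The queues stand in for the device-cooperation layer: inter-thread
# communication and data transfer between the main thread and the layers.
plan_q = queue.Queue()
action_q = queue.Queue()

def plan_cleaning_route(region):
    # hypothetical planner for the path-planning layer
    return [region["start"], region["target"]]

def execute_path(path):
    # hypothetical robot command interface for the action-execution layer
    print("executing path:", path)

def path_planning_layer():
    while (region := plan_q.get()) is not None:
        action_q.put(plan_cleaning_route(region))
    action_q.put(None)            # propagate shutdown downstream

def action_execution_layer():
    while (path := action_q.get()) is not None:
        execute_path(path)

# Main thread: dispatches cleaning work to the sub-threads and shuts down.
threads = [threading.Thread(target=path_planning_layer),
           threading.Thread(target=action_execution_layer)]
for t in threads:
    t.start()
plan_q.put({"start": (0.0, 0.0), "target": (5.0, 3.0)})
plan_q.put(None)                  # end-of-work sentinel
for t in threads:
    t.join()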
The cleaning robot module 4 has two cleaning modes: whole-hull cleaning and local cleaning. For whole-hull cleaning, the hull workstation control module 3 controls each monocular camera to visually guide the cleaning robot module 4 and ensures that it completes the tasks of cleaning image-overlap areas and cross-area cooperative cleaning. For local cleaning, the hull workstation control module 3 plans a route for the cleaning robot module 4 to the designated position according to the size and position of the stain area, completes the local cleaning, and judges whether the cleaned area needs a second cleaning pass.
Fig. 2 is a schematic diagram of the multi-view vision guidance system of the present invention.
The cleaning robot module 4 plans the cleaning path on a grid-based model, a planning method that represents the environment with a grid array while handling changes in the obstacles. The environmental information around the cleaning robot module 4 is stored as occupied grid cells in a two-dimensional circular buffer, which is updated cyclically as the cleaning robot module 4 moves; the trajectory is represented by a uniform B-spline and optimized in a nonlinear manner.
The local trajectory planning problem is characterized as a B-spline optimization problem, and the spline value is calculated by the following formula:

$$p(t) = \sum_{i} p_i B_{i,k}(t)$$

where $p_i$ is the control point corresponding to time $t$ and $B_{i,k}(t)$ is a basis function, computable with the de Boor-Cox recursion formula. For a uniform B-spline the time interval between control points is fixed.
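For concreteness, here is a minimal Python sketch of this spline evaluation using the de Boor-Cox recursion on a uniform knot vector; the degree k and the knot spacing dt are free parameters, and the function names are illustrative rather than taken from the patent.

import numpy as np

def bspline_basis(i, k, t, knots):
    """De Boor-Cox recursion: value of the basis function B_{i,k} at time t."""
    if k == 0:
        return 1.0 if knots[i] <= t < knots[i + 1] else 0.0
    left = right = 0.0
    if knots[i + k] > knots[i]:
        left = (t - knots[i]) / (knots[i + k] - knots[i]) \
               * bspline_basis(i, k - 1, t, knots)
    if knots[i + k + 1] > knots[i + 1]:
        right = (knots[i + k + 1] - t) / (knots[i + k + 1] - knots[i + 1]) \
                * bspline_basis(i + 1, k - 1, t, knots)
    return left + right

def spline_point(ctrl_pts, k, t, dt=1.0):
    """p(t) = sum_i p_i * B_{i,k}(t) over a uniform knot vector (spacing dt)."""
    n = len(ctrl_pts)
    knots = np.arange(n + k + 1) * dt   # uniform: fixed interval between knots
    return sum(p * bspline_basis(i, k, t, knots) for i, p in enumerate(ctrl_pts))

# Example: a cubic (k = 3) planar spline through four control points.
pts = [np.array(p, dtype=float) for p in [(0, 0), (1, 2), (3, 2), (4, 0)]]
print(spline_point(pts, 3, t=3.5))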
To avoid obstacles en route, the cleaning robot module 4 represents the environment map with a 2D circular buffer. For convenient querying, the plane is discretized into small squares of size r, which establishes a mapping from any point p in the plane to a specific square index x, and the inverse mapping. The circular buffer consists of a contiguous array of size N and an offset index o that fixes the position of the coordinate system. The cleaning robot module 4 can check whether the square corresponding to any point in the plane lies within the range represented by the circular buffer, and find its specific storage location.
Limiting the size of the array to $N = 2^p$, both operations reduce to bit masks:

insideVolume(x) = !((x - o) & (~(2^p - 1)))
address(x) = (x - o) & (2^p - 1)
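In Python these bit masks carry over directly; the sketch below adds the per-axis mapping from a coordinate to its size-r square, with the exponent p = 7, the offset, and the cell size chosen purely for illustration.

P = 7
N = 2 ** P            # array size: a power of two, as required above
MASK = N - 1          # 2**p - 1

def grid_index(coord: float, r: float) -> int:
    """Map a point coordinate to the index x of its size-r square (per axis)."""
    return int(coord // r)

def inside_volume(x: int, o: int) -> bool:
    """True when square x lies within the N cells the circular buffer covers."""
    return ((x - o) & ~MASK) == 0

def address(x: int, o: int) -> int:
    """Wrap square x into its storage slot in the contiguous array."""
    return (x - o) & MASK

o = 40                                       # offset index fixing the coordinate system
x = grid_index(12.8, r=0.25)                 # -> square 51
print(inside_volume(x, o), address(x, o))    # True, 11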
Starting from the center point of the sensor, the map is updated by ray casting, and a Euclidean Distance Transform (EDT) over the map is used to query the distance from any point within the map range to the nearest obstacle, together with the gradient of that distance.
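A minimal sketch of such a query, assuming SciPy's distance_transform_edt as the EDT implementation and a made-up occupancy grid; the cell size r and the obstacle placement are illustrative only.

import numpy as np
from scipy.ndimage import distance_transform_edt

r = 0.25                                   # edge length of a grid square (assumed)
free = np.ones((128, 128), dtype=bool)     # True = free cell, False = obstacle
free[60:68, 60:68] = False                 # an 8x8 obstacle block (illustrative)

# EDT: per-cell distance to the nearest obstacle cell, scaled to metres.
dist = distance_transform_edt(free) * r
grad_y, grad_x = np.gradient(dist)         # gradient of the distance change

def obstacle_query(ix: int, iy: int):
    """Distance to the nearest obstacle and its gradient at cell (ix, iy)."""
    return dist[iy, ix], (grad_x[iy, ix], grad_y[iy, ix])

print(obstacle_query(50, 50))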
The B-spline optimization is expressed as a nonlinear optimization whose objective function is:

$$E_{total} = E_{ep} + E_c + E_q$$
said EepDissipation function representing global path tracking error
Figure BDA0002295877420000064
And p (t) is a sample value.
Said EcIs a dissipation function of the distance to the obstacle
Figure BDA0002295877420000061
Figure BDA0002295877420000062
Said EqIs a dissipation function of smoothness
Figure BDA0002295877420000063
The above optimization yields the global path, after which iteration proceeds from the current position. At each moment the tracked target point is computed as the input of the global path and used as the parameter of the tracking-error dissipation function; the parameters of the obstacle dissipation function come from the circular buffer and the EDT. After each optimization, the first of the control points currently being optimized is fixed and passed to the controller to compute the new control input; a new control point is then appended and the cycle repeats, producing the cleaning path of the cleaning robot module 4.
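The sketch below mirrors this receding-horizon loop in Python. Because the three dissipation terms survive only as images in the source, the quadratic tracking error, the penalty inside an assumed 0.3 m clearance, and the second-difference smoothness term are stand-in assumptions; the control points are also optimised directly (rather than through the full spline), with scipy.optimize.minimize playing the role of the nonlinear optimiser.

import numpy as np
from scipy.optimize import minimize

def e_total(flat, targets, dist_fn, w_ep=1.0, w_c=10.0, w_q=0.1):
    """Assumed stand-in for E_total = E_ep + E_c + E_q."""
    pts = flat.reshape(-1, 2)
    e_ep = np.sum((pts - targets) ** 2)               # tracking-error term
    d = np.array([dist_fn(p) for p in pts])           # distances from EDT/buffer
    e_c = np.sum(np.maximum(0.3 - d, 0.0) ** 2)       # penalise < 0.3 m clearance
    e_q = np.sum(np.diff(pts, n=2, axis=0) ** 2)      # smoothness: 2nd differences
    return w_ep * e_ep + w_c * e_c + w_q * e_q

def replan_step(ctrl_pts, targets, dist_fn):
    """One cycle: optimise, fix the first control point, hand it to the controller."""
    res = minimize(e_total, ctrl_pts.ravel(), args=(targets, dist_fn))
    new_pts = res.x.reshape(-1, 2)
    fixed_for_controller, remaining = new_pts[0], new_pts[1:]
    return fixed_for_controller, remaining            # append a new point next cycle

# Illustrative call: track a straight line past an obstacle at (1, 0.1).
targets = np.linspace([0.0, 0.0], [2.0, 0.0], 6)
dist_fn = lambda p: np.linalg.norm(p - np.array([1.0, 0.1]))
print(replan_step(targets.copy(), targets, dist_fn))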
Each of the monocular cameras beside the hull serves as its own coordinate system, and the common area observed by adjacent cameras serves as the matching area. The images are preprocessed, Hessian-affine feature detection is used to extract feature points, and a compatibility-mining method searches for consistent neighbouring points for image matching. In image matching, if two pairs of points correspond correctly, the local affine transformations of the two pairs should be close.
From the images acquired by the monocular cameras, corner points are detected on the scale-space image with the Harris-Laplace scale-invariant operator, and scale parameters are added. Each candidate point on the current scale image is searched and its Laplacian response value computed; feature points satisfying the condition that the maximum of the Harris matrix exceeds a given threshold are retained:

$$F(x, y, \sigma_n) = \sigma_n^2 \left| L_{xx}(x, y, \sigma_n) + L_{yy}(x, y, \sigma_n) \right| \geq threshold_L$$

where $\sigma_n$ is the scale factor of each image layer and $threshold_L$ is the threshold condition.
The detected corner points are then compared with the Laplacian response values of the adjacent upper and lower layers; the response of the current layer must exceed both:

$$F(x, y, \sigma_n) > F(x, y, \sigma_{n-1}) \quad \text{and} \quad F(x, y, \sigma_n) > F(x, y, \sigma_{n+1})$$

The points satisfying both steps are the scale-invariant feature points extracted in scale space, and the local affine transformation information of each feature point is obtained at the same time as extraction.
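A minimal Python sketch of this scale-selection step follows, with scipy.ndimage.gaussian_laplace supplying L_xx + L_yy. For simplicity it applies the sigma-squared-normalised threshold and the adjacent-scale comparison at every pixel, whereas the full Harris-Laplace operator would restrict this to Harris corner candidates; the scales and threshold are illustrative values.

import numpy as np
from scipy.ndimage import gaussian_laplace

def scale_responses(img, sigmas):
    """F(x, y, sigma_n) = sigma_n^2 * |L_xx + L_yy| for each scale layer."""
    return np.stack([(s ** 2) * np.abs(gaussian_laplace(img, s)) for s in sigmas])

def scale_invariant_points(img, sigmas, threshold_l):
    F = scale_responses(img.astype(float), sigmas)
    points = []
    for n in range(1, len(sigmas) - 1):      # compare with upper and lower layers
        keep = (F[n] >= threshold_l) & (F[n] > F[n - 1]) & (F[n] > F[n + 1])
        ys, xs = np.nonzero(keep)
        points += [(x, y, sigmas[n]) for x, y in zip(xs, ys)]
    return points

# Illustrative run on a synthetic image containing one bright blob.
img = np.zeros((64, 64))
img[30:34, 30:34] = 1.0
print(len(scale_invariant_points(img, sigmas=[1.2, 1.7, 2.4, 3.4], threshold_l=0.1)))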
The local affine transformation information is the matrix $A$ (its explicit form is reproduced only as an image in the original document).
From the local affine transformation, the conversion relation between matching points can be obtained, where $A_i$ is the local affine information and $K_i$ the coordinates of the feature points (the explicit relation is reproduced only as an image in the original document).
For a pair of feature-point correspondences, the conversion relations between the matched points are computed, and the similarity of the two conversion relations serves as the consistency measure of the point correspondences, where $\rho$ denotes the transformed coordinates and $e$ the similarity of the transformed corresponding points; the measure is normalized with a Gaussian kernel. (The similarity and normalization formulas are reproduced only as images in the original document.)
For any point correspondence $c_i$, the closest k point correspondences found by the above method form a graph $G_i$, i.e., the set of locally consistent point correspondences for that point.
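One way to realise this neighbour search is sketched below. Since the similarity and normalisation formulas survive only as images, the prediction of a transformed point's position from the local affine A_i, the Gaussian bandwidth sigma, and the array shapes are all assumptions made for illustration.

import numpy as np

def local_consistency_sets(coords1, coords2, affines, k=5, sigma=1.0):
    """
    For each correspondence c_i, score every other correspondence by how well
    c_i's local affine A_i predicts it, normalise with a Gaussian kernel, and
    keep the k best as the locally consistent set G_i.
    coords1, coords2: (n, 2) matched points; affines: (n, 2, 2) local affines.
    """
    n = len(coords1)
    sets = []
    for i in range(n):
        # predicted position in image 2 of every point, under c_i's transform
        predicted = coords2[i] + (affines[i] @ (coords1 - coords1[i]).T).T
        rho = np.linalg.norm(predicted - coords2, axis=1)   # prediction error
        e = np.exp(-(rho ** 2) / (2.0 * sigma ** 2))        # Gaussian-kernel score
        order = np.argsort(-e)
        sets.append([int(j) for j in order if j != i][:k])  # G_i
    return sets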
FIG. 3 is a schematic diagram of the image matching epipolar constraint of the present invention. As shown in fig. 3, the points $o_1$, $o_2$ and $P$ define a plane called the epipolar plane. The line $o_1 o_2$ intersects the image planes $I_1$ and $I_2$ at $e_1$ and $e_2$ respectively; $e_1$ and $e_2$ are called the epipoles, and $o_1 o_2$ is called the baseline. The intersection lines $l_1$ and $l_2$ between the epipolar plane and the two image planes are the epipolar lines. From the image, the feature point in $I_2$ corresponding to $p_1$ must lie on the epipolar line $l_2$.
Let $p_1$ have homogeneous coordinates $(x, y, 1)^T$ and $p_2$ have homogeneous coordinates $(x', y', 1)^T$. With the known fundamental matrix $F$, the epipolar line corresponding to $p_1$ is $l = F p_1 = (a, b, c)^T$.
The image feature-point error is assumed to follow the normal distribution $N(u, \sigma)$, and $p_2$ must lie on the epipolar line $l$, so the distance from the point $p_2$ to the epipolar line $l$ is required to be less than $3\sigma$:

$$d(p_2, l) = \frac{|a x' + b y' + c|}{\sqrt{a^2 + b^2}} < 3\sigma$$

where

$$a = f_{00}x + f_{01}y + f_{02}, \quad b = f_{10}x + f_{11}y + f_{12}, \quad c = f_{20}x + f_{21}y + f_{22}$$
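Under the reconstruction above (the line l = F p_1 = (a, b, c)^T plus the standard point-to-line distance), the 3-sigma gate can be sketched in a few lines; sigma defaults to an assumed one pixel.

import numpy as np

def epipolar_inlier(p1, p2, F, sigma=1.0):
    """Accept the match only if p2 lies within 3*sigma of the epipolar line F @ p1."""
    x, y = p1
    a, b, c = F @ np.array([x, y, 1.0])             # l = (a, b, c): a*x' + b*y' + c = 0
    xp, yp = p2
    d = abs(a * xp + b * yp + c) / np.hypot(a, b)   # point-to-line distance
    return d < 3.0 * sigma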
Through the above steps the matching points between the images are obtained, completing the visual stitching and yielding the whole hull image.
Fig. 4 is a flow chart of the cleaning method of the multi-view vision-guided hull cleaning robot system of the invention. As shown in fig. 4, the cleaning method comprises:
step 101: and acquiring partial hull images acquired by a plurality of monocular cameras.
Step 102: and splicing the whole ship images according to the partial ship images to obtain the whole ship image, wherein the ship workstation module receives the partial ship images collected by the monocular cameras for splicing.
Step 103: and determining an area to be cleaned according to the integral image of the ship body, specifically determining the area size and the position information of the area to be cleaned.
Step 104: selecting a cleaning mode according to the area to be cleaned, wherein the cleaning mode specifically comprises two modes: cleaning the whole ship body and cleaning the local part of the ship body; for the integral cleaning of the ship body, controlling each monocular camera to perform visual guidance on the cleaning robot module, and ensuring that the cleaning robot module completes the tasks of cleaning an image overlapping area and cross-area cooperative cleaning; and for local cleaning of the ship body, planning the cleaning robot module to reach a designated position according to the size and the position of the spot area, finishing the local cleaning, and judging whether secondary cleaning is needed for the cleaned area.
Step 105: and planning a cleaning path according to the cleaning mode and the cleaning area.
Step 106: and controlling the cleaning robot body to reach the cleaning area according to the cleaning path to clean the ship body.
Step 107: and judging whether the cleaned ship body is qualified, specifically, judging whether the ship body is not cleaned in place by acquiring an image of the ship body.
Step 108: and if the cleaned ship body is qualified, stopping cleaning.
Step 109: and if the cleaned ship body is unqualified, continuously acquiring partial ship body images acquired by the plurality of monocular cameras, namely performing secondary cleaning.
The embodiments in the present description are described in a progressive manner, each embodiment focuses on differences from other embodiments, and the same and similar parts among the embodiments are referred to each other.
The principles and embodiments of the present invention have been described herein using specific examples, which are provided only to help understand the method and the core concept of the present invention; meanwhile, for a person skilled in the art, according to the idea of the present invention, the specific embodiments and the application range may be changed. In view of the above, the present disclosure should not be construed as limiting the invention.

Claims (6)

1. A multi-view vision-guided hull cleaning robot system, comprising: a plurality of monocular cameras, a hull workstation module, a hull workstation control module and a cleaning robot module, connected in sequence, wherein each monocular camera is located beside the hull and collects images of the part of the hull surface within its visual range; the hull workstation module receives each partial hull surface image and transmits it to the hull workstation control module; the hull workstation control module processes the hull surface image information to complete the stitching of the whole hull image, determines the whole hull image, and marks the hull surface area to be cleaned, the position information of that area and the cleaning path; and the cleaning robot module cleans the hull according to the received area to be cleaned, its position information and the cleaning path;
the cleaning robot module comprises a robot body, a lidar and an infrared sensor, wherein the lidar is used for collecting information on obstacles on the hull surface and the infrared sensor is used for collecting the distance between the robot body and an obstacle; the hull workstation control module is connected to the lidar and the infrared sensor respectively, and identifies obstacles according to the hull surface obstacle information and the robot-obstacle distance information;
the cleaning robot module comprises a servo motor and a high-pressure water gun; the hull workstation control module is connected to the servo motor and the high-pressure water gun respectively; the high-pressure water gun is used for cleaning the hull surface area to be cleaned as determined by the hull workstation control module, and the servo motor is used for providing power for the robot body;
the cleaning robot module has two cleaning modes: whole-hull cleaning and local cleaning; for whole-hull cleaning, the hull workstation control module controls each monocular camera to visually guide the cleaning robot module and ensures that the cleaning robot module completes the tasks of cleaning image-overlap areas and cross-area cooperative cleaning; for local cleaning, the hull workstation control module plans a route for the cleaning robot module to the designated position according to the size and position of the stain area, completes the local cleaning, and judges whether the cleaned area needs a second cleaning pass;
the cleaning robot module's planning of the cleaning path is performed on a grid-based model.
2. The multi-view vision-guided hull cleaning robot system of claim 1, wherein the cleaning robot module includes a positioning sub-module connected to the hull workstation control module.
3. The multi-view vision-guided hull cleaning robot system according to claim 2, wherein the positioning sub-module comprises a gyroscope and a BeiDou navigation system used to locate the position of the robot body, and the hull workstation control module is connected to the gyroscope and the BeiDou navigation system respectively.
4. The multi-view vision-guided hull cleaning robot system according to claim 3, wherein the high-pressure water gun can rotate 180 degrees horizontally and has adjustable pressure.
5. The multi-view vision-guided hull cleaning robot system according to claim 1, wherein the hull workstation control module adopts a multi-threaded hierarchical cooperative control structure.
6. A cleaning method for a multi-view vision-guided hull cleaning robot system, characterized in that the method is applied to the multi-view vision-guided hull cleaning robot system of any one of claims 1-5, and comprises:
acquiring the partial hull images collected by a plurality of monocular cameras;
stitching the partial hull images to obtain a whole hull image;
determining the area to be cleaned from the whole hull image;
selecting a cleaning mode according to the area to be cleaned;
planning a cleaning path according to the cleaning mode and the cleaning area;
controlling the cleaning robot body to travel along the cleaning path to the cleaning area and clean the hull;
judging whether the cleaned hull is up to standard;
if yes, stopping cleaning;
if not, continuing to acquire the partial hull images collected by the plurality of monocular cameras.
CN201911201037.XA 2019-11-29 2019-11-29 Multi-view vision guiding ship body cleaning robot system and cleaning method Active CN110825088B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201911201037.XA CN110825088B (en) 2019-11-29 2019-11-29 Multi-view vision guiding ship body cleaning robot system and cleaning method


Publications (2)

Publication Number Publication Date
CN110825088A CN110825088A (en) 2020-02-21
CN110825088B (en) 2021-10-01

Family

ID=69543210

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201911201037.XA Active CN110825088B (en) 2019-11-29 2019-11-29 Multi-view vision guiding ship body cleaning robot system and cleaning method

Country Status (1)

Country Link
CN (1) CN110825088B (en)

Families Citing this family (4)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN111352426B (en) * 2020-03-17 2021-03-02 广西柳工机械股份有限公司 Vehicle obstacle avoidance method, vehicle obstacle avoidance device, vehicle obstacle avoidance system and vehicle
CN112362062B (en) * 2020-11-07 2022-11-11 福州旺星人智能科技有限公司 Building property intelligent management method and system
CN113359780A (en) * 2021-07-28 2021-09-07 陕西欧卡电子智能科技有限公司 Unmanned ship cleaning path planning method and device, computer equipment and storage medium
CN117381803B (en) * 2023-12-13 2024-02-13 深圳市辉熙智能科技有限公司 Automatic cleaning method of cleaning robot and cleaning robot

Citations (21)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
WO2002075469A1 (en) * 2001-03-15 2002-09-26 Aktiebolaget Electrolux Method and device for determining position of an autonomous apparatus
WO2004018158A2 (en) * 2002-08-21 2004-03-04 Neal Solomon Organizing groups of self-configurable mobile robotic agents
CN102033222A (en) * 2010-11-17 2011-04-27 吉林大学 Large-scale multiple-object ultrasonic tracking and locating system and method
CN103179401A (en) * 2013-03-19 2013-06-26 燕山大学 Processing method and device for multi-agent cooperative video capturing and image stitching
WO2014043414A2 (en) * 2012-09-14 2014-03-20 Raytheon Company Hull robot with hull separation countermeasures
CN204059320U (en) * 2014-03-19 2014-12-31 燕山大学 Robot for water surface cleaner
CN104463786A (en) * 2014-12-03 2015-03-25 中国科学院自动化研究所 Mobile robot figure stitching method and device
CN104535047A (en) * 2014-09-19 2015-04-22 燕山大学 Multi-agent target tracking global positioning system and method based on video stitching
CN105068550A (en) * 2015-08-21 2015-11-18 燕山大学 Auction mode-based underwater robot multi-target selection strategy
CN105302131A (en) * 2014-07-22 2016-02-03 德国福维克控股公司 Method for cleaning or processing a room using an automatically moved device
CN105339551A (en) * 2013-06-23 2016-02-17 阿迪博茨有限公司 Methods and apparatus for mobile additive manufacturing
CN205050322U (en) * 2015-06-24 2016-02-24 燕山大学 Centralized control formula multirobot motion control wireless communication device
CN105607635A (en) * 2016-01-05 2016-05-25 东莞市松迪智能机器人科技有限公司 Panoramic optic visual navigation control system of automatic guided vehicle and omnidirectional automatic guided vehicle
WO2017018848A1 (en) * 2015-07-29 2017-02-02 Lg Electronics Inc. Mobile robot and control method thereof
CN106709868A (en) * 2016-12-14 2017-05-24 云南电网有限责任公司电力科学研究院 Image stitching method and apparatus
CN106843242A (en) * 2017-03-21 2017-06-13 天津海运职业学院 A kind of multi-robots system of under-water body cleaning
CN107471218A (en) * 2017-09-07 2017-12-15 南京理工大学 A kind of tow-armed robot hand eye coordination method based on multi-vision visual
EP3327697A1 (en) * 2010-09-24 2018-05-30 iRobot Corporation Systems and methods for vslam optimization
CN109565548A (en) * 2016-08-17 2019-04-02 三星电子株式会社 It controls the method for more view field images and supports the electronic equipment of this method
CN109938650A (en) * 2019-05-20 2019-06-28 尚科宁家(中国)科技有限公司 A kind of panoramic shooting mould group and the sweeping robot based on the camera module
CN109978767A (en) * 2019-03-27 2019-07-05 集美大学 The ground laser SLAM drawing method based on multirobot collaboration

Family Cites Families (6)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
US9038557B2 (en) * 2012-09-14 2015-05-26 Raytheon Company Hull robot with hull separation countermeasures
KR102326479B1 (en) * 2015-04-16 2021-11-16 삼성전자주식회사 Cleaning robot and controlling method thereof
WO2018143620A2 (en) * 2017-02-03 2018-08-09 Samsung Electronics Co., Ltd. Robot cleaner and method of controlling the same
CN109460040A (en) * 2018-12-28 2019-03-12 珠海凯浩电子有限公司 It is a kind of that map system and method are established by mobile phone shooting photo array floor
CN109782772A (en) * 2019-03-05 2019-05-21 浙江国自机器人技术有限公司 A kind of air navigation aid, system and cleaning robot
CN110063694A (en) * 2019-04-28 2019-07-30 彭春生 A kind of binocular sweeping robot and working method

Non-Patent Citations (1)

* Cited by examiner, † Cited by third party
Title
Analysis of key technologies of hull cleaning robots based on high-pressure water jets; Chen Guangming et al.; Fluid Machinery (流体机械); 2019-09-30; Vol. 47, No. 9; pp. 56-62 *

Also Published As

Publication number Publication date
CN110825088A (en) 2020-02-21

Similar Documents

Publication Publication Date Title
CN110825088B (en) Multi-view vision guiding ship body cleaning robot system and cleaning method
US11567502B2 (en) Autonomous exploration framework for indoor mobile robotics using reduced approximated generalized Voronoi graph
CN109720340B (en) Automatic parking system and method based on visual identification
CN106950985B (en) Automatic delivery method and device
CN108958282B (en) Three-dimensional space path planning method based on dynamic spherical window
CN114998276B (en) Robot dynamic obstacle real-time detection method based on three-dimensional point cloud
Chen et al. An enhanced dynamic Delaunay triangulation-based path planning algorithm for autonomous mobile robot navigation
Meng et al. Efficient and reliable LiDAR-based global localization of mobile robots using multiscale/resolution maps
Agrawal et al. PCE-SLAM: A real-time simultaneous localization and mapping using LiDAR data
CN111679664A (en) Three-dimensional map construction method based on depth camera and sweeping robot
CN111964680A (en) Real-time positioning method of inspection robot
CN111721279A (en) Tail end path navigation method suitable for power transmission inspection work
Sun et al. Visual measurement and control for underwater robots: A survey
Li et al. A mobile robotic arm grasping system with autonomous navigation and object detection
CN111380535A (en) Navigation method and device based on visual label, mobile machine and readable medium
CN115542896A (en) Robot path generation method, system and storage medium
CN106482711A (en) A kind of indoor orientation method being extracted based on gray feature with dense optical flow method
CN110728684B (en) Map construction method and device, storage medium and electronic equipment
Fan et al. Long-awaited next-generation road damage detection and localization system is finally here
Chen et al. Multiple-object tracking based on monocular camera and 3-D lidar fusion for autonomous vehicles
CN114511590A (en) Intersection multi-guide-line construction method based on monocular vision 3D vehicle detection and tracking
Rahman et al. Simple near-realtime crane workspace mapping using machine vision
Huang et al. Multi-object Detection, Tracking and Prediction in Rugged Dynamic Environments
Ray et al. Simultaneous Localisation and Image Intensity Based Occupancy Grid Map Building--A New Approach
CN112666937B (en) Optimal path planning method combined with image framework

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant