CN116125973A - Regression docking method and system of self-moving robot and self-moving robot


Info

Publication number
CN116125973A
Authority
CN
China
Prior art keywords
self
base station
moving robot
robot
visual
Prior art date
Legal status
Pending
Application number
CN202211474914.2A
Other languages
Chinese (zh)
Inventor
翁禹来
刘汶欣
谢雪堃
李宽
袁鹏
Current Assignee
Shenzhen Laiyufei Intelligent Technology Co ltd
Original Assignee
Shenzhen Laiyufei Intelligent Technology Co ltd
Priority date
Filing date
Publication date
Application filed by Shenzhen Laiyufei Intelligent Technology Co ltd filed Critical Shenzhen Laiyufei Intelligent Technology Co ltd
Priority to CN202211474914.2A
Publication of CN116125973A
Legal status: Pending (current)

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0225: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving docking at a fixed facility, e.g. base station or loading bay
    • G05D1/0231: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means
    • G05D1/0246: Control of position or course in two dimensions specially adapted to land vehicles using optical position detecting means using a video camera in combination with image processing means

Abstract

The embodiment of the invention provides a regression docking method and system for a self-moving robot, and the self-moving robot itself. During return docking, the self-moving robot quickly navigates to a guiding position within a first range of the base station, and a visual docking program is then used at the guiding position to adjust the pose of the self-moving robot relative to the base station, so that the self-moving robot and the base station can be docked quickly and accurately.

Description

Regression docking method and system of self-moving robot and self-moving robot
Technical Field
The invention belongs to the technical field of self-moving robots, and particularly relates to a regression docking method and system of a self-moving robot and the self-moving robot.
Background
With the development of science and technology, intelligent self-moving robots have become well known. Because a self-moving robot can execute preset tasks according to a preset program without manual operation or intervention, self-moving robots are widely used in industrial applications and household products. Industrial applications include robots that perform various functions; household products include intelligent mowers, vacuum cleaners and the like. Self-moving robots greatly save people's time and bring great convenience to industrial production and household life. However, because these self-moving robots are powered by batteries, they cannot operate once the battery is exhausted. It is therefore common to arrange that, when the battery level of the self-moving robot falls below a set value, the control program can direct the self-moving robot to return to the base station to charge the battery.
Currently, the prior art ensures that the self-moving robot can accurately return to and dock with the base station by adopting a complex base-station docking structure or by adding an additional guiding magnetic stripe. The guiding magnetic stripe can only be sensed when the self-moving robot drives over it, is troublesome to lay, is costly, and makes it difficult to improve precision during close-range docking.
Accordingly, an improvement over the prior art is needed to overcome these deficiencies.
Disclosure of Invention
Therefore, the invention aims to solve the technical problem of the low return-docking precision of existing self-moving robots.
In order to solve the technical problems, the invention provides a regression docking method of a self-moving robot, which comprises the following steps:
responding to the regression instruction, and controlling the self-mobile robot to start a base station regression program; wherein, the base station is provided with a visual mark;
controlling the self-moving robot to move to a guiding position within a first range from the base station;
controlling the self-moving robot to start a visual docking program at the guiding position, and extracting and processing visual marks in the acquired image information according to a preset algorithm to acquire pose information of the self-moving robot relative to the base station;
and adjusting the pose of the self-moving robot according to the pose information, and then controlling the self-moving robot to dock with the base station.
In one embodiment, the controlling the mobile robot to move to a guiding position within a first range from the base station comprises:
and obtaining the position coordinates of the base station in the navigation map by adopting a global navigation algorithm, and then controlling the self-moving robot to move towards the direction of the position coordinates.
In one embodiment, the controlling the mobile robot to move to a guiding position within a first range from the base station comprises:
determining the guiding position according to the position coordinates of the base station in the navigation map; wherein the guiding position is located in front of the base station, within the first range from the base station.
In one embodiment, the controlling the self-moving robot to enable the visual docking procedure at the guidance location includes:
and acquiring the image information of the surrounding panorama of the self-moving robot through a panorama shooting unit.
In one embodiment, the extracting and processing the visual sign in the obtained image information according to a preset algorithm to obtain pose information of the self-mobile robot relative to the base station includes:
detecting and obtaining a target image containing the visual mark;
identifying actual corner coordinates of the visual mark in the target image;
obtaining a coordinate system conversion corresponding relation between the camera and the visual mark through a visual optimization algorithm;
and acquiring pose information of the self-mobile robot relative to the base station according to the conversion corresponding relation.
In one embodiment, the identifying the actual corner coordinates of the visual mark in the target image includes:
and filtering the acquired image data of the visual mark, and taking the processed data as actual corner coordinates. .
In one embodiment, the pose of the self-moving robot is adjusted, and then the self-moving robot is controlled to dock the base station.
In one embodiment, the adjusting the pose of the self-mobile robot according to the pose information, and then controlling the self-mobile robot to dock the base station includes:
according to the pose information and the currently planned path, adjusting the moving direction of the self-moving robot so that the moving direction directly faces the center position of the visual mark;
and controlling the self-moving robot to move toward the center position until it docks with the base station.
In one embodiment, the adjusting of the moving direction of the self-moving robot according to the pose information so that the moving direction directly faces the center position of the visual mark
includes the following steps:
based on the pose information and the preset path pose deviation, a PID algorithm is adopted to obtain an adjustment instruction of the moving direction of the self-moving robot;
and adjusting the moving direction of the self-moving robot based on the adjusting instruction.
The embodiment of the invention also provides a regression docking system of the self-moving robot. The regression docking system of the self-moving robot includes:
the regression module is used for responding to the regression instruction and controlling the self-mobile robot to start base station regression; wherein, the base station is provided with a visual mark;
the positioning navigation module is used for controlling the self-moving robot to move to a guiding position within a first range from the base station;
the docking navigation module is used for controlling the self-mobile robot to start a visual docking program at the guiding position, and extracting and processing visual marks in the acquired image information according to a preset algorithm so as to acquire pose information of the self-mobile robot relative to the base station;
and the docking control module is used for controlling the self-moving robot to adjust the pose according to the pose information and controlling the self-moving robot to dock to the base station according to the adjusted pose.
The embodiment of the invention also provides the self-moving robot. The self-moving robot includes:
a robot main body;
the controller is arranged on the robot main body;
wherein the controller is configured to perform the following:
responding to the regression instruction, and controlling the self-mobile robot to start base station regression; wherein, the base station is provided with a visual mark;
controlling the self-moving robot to move to a guiding position within a first range from the base station;
controlling the self-moving robot to start a visual docking program at the guiding position, and extracting and processing visual marks in the acquired image information according to a preset algorithm to acquire pose information of the self-moving robot relative to the base station;
and adjusting the pose of the self-moving robot according to the pose information, and then controlling the self-moving robot to dock with the base station.
The technical scheme provided by the invention has the following advantages:
according to the regression docking method and system for the self-mobile robot and the self-mobile robot, in the regression docking process, the self-mobile robot is quickly navigated to the guiding position in the first range from the base station, and then the visual docking program is adopted at the guiding position to adjust the pose of the self-mobile robot relative to the base station, so that the self-mobile robot and the base station can be quickly and accurately docked.
Drawings
In order to more clearly illustrate the embodiments of the present invention or the technical solutions in the prior art, the drawings needed in the description of the embodiments are briefly described below. The drawings described below show only some embodiments of the present invention, and a person skilled in the art can derive other drawings from them without inventive effort.
FIG. 1 is a flowchart of a regression docking method of a self-moving robot according to an embodiment of the present invention;
fig. 2 is a schematic view of a scenario of a regression docking system of a self-mobile robot according to an embodiment of the present invention;
FIG. 3 is a block diagram of a regressive docking system of the self-moving robot of the embodiment of FIG. 2;
FIG. 4 is a schematic diagram of the embodiment shown in FIG. 2 in a successful docking state between the mobile robot and the base station;
fig. 5 is a schematic perspective view of a base station in the embodiment shown in fig. 2;
fig. 6 is a schematic perspective view of a self-moving robot according to an embodiment of the present invention.
Detailed Description
The following describes the embodiments of the present invention clearly and completely with reference to the accompanying drawings, in which some, but not all, embodiments of the invention are shown. The invention is described in detail hereinafter with reference to the drawings and in conjunction with embodiments. It should be noted that, provided there is no conflict, the embodiments of the present invention and the features of the embodiments may be combined with each other.
It should be noted that the terms "first," "second," and the like in the description and the claims of the present invention and the above figures are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order.
In the present invention, unless otherwise indicated, the following examples are merely illustrative and are not intended to limit the embodiments to the particular steps, values, conditions, data, sequences, etc. described. Those skilled in the art can, upon reading the present specification, use the concepts of the invention to construct further embodiments not mentioned in the specification.
The embodiment provides a regression docking method for a self-moving robot. In a specific implementation, the method is applied to a self-moving robot that autonomously executes tasks, which may be an automatic mowing robot, a mopping robot, or a sweeping-and-mopping robot. Of course, the self-moving robots listed above are only illustrative. Depending on the specific application scenario and processing requirements, the self-moving robot may also be a patrol robot, a nursing robot, and the like; the present specification is not limited in this respect. The task to be performed in the area to be worked may also differ for different self-moving robots. For example, when the self-moving robot is a sweeping robot, the task to be performed in the target area may be a sweeping task. As another example, when the self-moving robot is a patrol robot, the task to be performed in the target area may be a patrol task. In the following, the working scenario of an automatic mowing robot is taken as the illustrative case.
In the prior art, when the base-station return docking task is started, the automatic mowing robot moves toward the base station in a boundary-following mode or by GPS positioning. Because the docking structures on the base station and the self-moving robot are small, the self-moving robot often needs to attempt docking many times after moving near the base station, continuously adjusting its approach pose toward the base station, so the whole return-docking process can take a long time.
In order to solve the above-mentioned problems, the present embodiment provides a regression docking method of a self-moving robot. As shown in fig. 1, when the method is implemented, the method may include the following steps:
and S01, responding to the regression instruction, and controlling the self-mobile robot to start a base station regression program.
When the work task of the self-moving robot is completed, a regression instruction is issued; or, when the battery level of the self-moving robot is less than or equal to a preset threshold, a regression instruction is issued. In this embodiment, a visual mark is provided on the base station to support the return docking of the self-moving robot.
S02, controlling the self-moving robot to move to a guiding position within a first range from the base station.
S03, controlling the self-mobile robot to start a visual docking program at the guiding position, and extracting and processing visual marks in the acquired image information according to a preset algorithm to acquire pose information of the self-mobile robot relative to the base station.
And S04, adjusting the pose of the self-moving robot according to the pose information, and then controlling the self-moving robot to dock with the base station.
In this embodiment, during the return process, the self-moving robot quickly navigates to the guiding position within the first range of the base station, and a visual docking program is then used at the guiding position to adjust the pose of the self-moving robot relative to the base station, so that docking between the self-moving robot and the base station can be achieved quickly and accurately.
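As a rough illustration of steps S01 to S04, the following Python sketch outlines the return-and-dock loop; all names on the `robot` object are hypothetical placeholders rather than functions defined by this disclosure, and the range value is an assumed parameter.

```python
def return_and_dock(robot, base_station):
    """Illustrative outline of steps S01-S04 (all robot methods are hypothetical)."""
    robot.start_return_program()                        # S01: respond to the regression instruction
    robot.navigate_near(base_station.map_coords,
                        first_range_m=3.0)              # S02: coarse global navigation to the guiding position
    while not robot.is_docked():
        frame = robot.panoramic_camera.capture()        # S03: visual docking program acquires image information
        pose = robot.estimate_pose_from_marker(frame)   #      extract the visual mark and compute the relative pose
        robot.adjust_heading(pose)                      # S04: align with the marker centre and approach
        robot.step_forward()
```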
In one embodiment, the controlling the self-moving robot to move to the guiding position within the first range from the base station in step S02 includes: obtaining the position coordinates of the base station in the navigation map by adopting a global positioning algorithm, and then controlling the self-moving robot to move towards those position coordinates. The global positioning algorithm may be GPS, BeiDou, UWB, radar, inertial measurement, wheel odometry, external sensors, or the like, or a combination of the above positioning methods; this is not limiting here.
Preferably, the GPS positioning mode specifically includes RTK-assisted positioning and navigation, which effectively reduces the positioning error of the global navigation satellite system. As shown in fig. 2 and 5, the base station 200 and the self-moving robot 100 form an RTK system: an RTK antenna 270 is arranged on the base station 200 and serves as the reference station of the RTK system, and a corresponding RTK antenna is provided on the self-moving robot 100 as the rover of the RTK system.
Optionally, in the process of obtaining the position coordinates of the base station in the navigation map by the GPS positioning mode, inertial measurement or wheel odometry can be used as an aid to further improve the accuracy of the obtained base station position coordinates.
In other alternative embodiments, the position coordinates of the base station in the navigation map are obtained using only one of GPS positioning, inertial measurement positioning or wheel odometry, so that global positioning and navigation are realized at minimal cost and the robot quickly navigates to the guiding position within the first range of the base station.
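Where positioning modes are combined, one simple (assumed) way to do so is a complementary filter that dead-reckons with wheel odometry and corrects with absolute RTK-GPS fixes; the patent does not prescribe a particular fusion scheme, and the blend weight below is an illustrative tuning value.

```python
import numpy as np

class SimplePoseFusion:
    """Minimal complementary filter: wheel-odometry prediction, RTK-GPS correction."""
    def __init__(self, gps_weight=0.2):
        self.w = gps_weight          # assumed blend weight for GPS corrections
        self.pose_xy = np.zeros(2)   # planar position estimate in the map frame

    def predict(self, odom_delta_xy):
        # Dead-reckon with the incremental displacement reported by wheel odometry.
        self.pose_xy += np.asarray(odom_delta_xy, dtype=float)
        return self.pose_xy

    def correct(self, gps_xy):
        # Pull the estimate toward the absolute RTK-GPS fix.
        self.pose_xy = (1.0 - self.w) * self.pose_xy + self.w * np.asarray(gps_xy, dtype=float)
        return self.pose_xy
```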
The extraction and processing of the visual mark in the acquired image information according to a preset algorithm mainly includes the following: candidate marker data is extracted from the image information and verified to determine whether it is the visual mark; if it matches the preset visual mark data, the visual mark is considered to have been detected. The position of the visual mark in the image information is then converted into the position of the visual mark relative to the self-moving robot; since the pose transformation of the visual mark relative to the base station is also known, the pose of the self-moving robot relative to the base station can be obtained.
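A minimal detection-and-verification sketch is given below, assuming the visual mark is an ArUco-style fiducial and using OpenCV's ArUco module (OpenCV >= 4.7); the patent does not name a specific marker type or library, so the dictionary and expected IDs are illustrative assumptions.

```python
import cv2

# Illustrative assumption: the base-station mark uses a 4x4 ArUco dictionary.
DICTIONARY = cv2.aruco.getPredefinedDictionary(cv2.aruco.DICT_4X4_50)
EXPECTED_IDS = {0, 1}   # assumed IDs of the (symmetric) codes on the base station

def detect_station_marker(image_bgr):
    """Return {marker_id: 4x2 corner pixels} for markers matching the preset data."""
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    detector = cv2.aruco.ArucoDetector(DICTIONARY, cv2.aruco.DetectorParameters())
    corners, ids, _rejected = detector.detectMarkers(gray)
    if ids is None:
        return {}
    return {int(i): c.reshape(4, 2)
            for i, c in zip(ids.flatten(), corners)
            if int(i) in EXPECTED_IDS}   # verification against the preset marker data
```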
In an alternative embodiment, the controlling the mobile robot to move to the guiding position within the first range from the base station in step S02 includes:
determining the guiding position according to the position coordinates of the base station in the navigation map; wherein the guiding position is located in front of the base station, within the first range from the base station.
In a specific embodiment, the guiding position corresponding to each return charging of the self-moving robot may be different. However, to make it convenient to obtain a visual image of the base station, the guiding position needs to be located in front of the base station, with the distance between the guiding position and the base station within the first range. While the self-moving robot moves toward the position in front of the base station using global positioning navigation, the controller updates the position of the self-moving robot relative to the base station in real time; the first time the distance is less than or equal to the first range and the robot is in front of the base station, the current position of the self-moving robot is defined as the guiding position. At the guiding position, the visual docking program can be started to perform the precise docking process.
As shown in fig. 2, the first range is a sector-shaped area centered on the base station. The first range may be set according to the visual mark of the base station and the clarity of the image information that the self-moving robot needs to acquire, so that within the first range the self-moving robot can meet the image-clarity requirements of the visual docking process.
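A small geometric check for whether the robot has entered such a sector-shaped first range is sketched below; the 3 m radius and 45-degree half-angle are illustrative values, not figures taken from this disclosure.

```python
import math

def in_guidance_sector(robot_xy, station_xy, station_heading_rad,
                       max_range_m=3.0, half_angle_deg=45.0):
    """True when the robot lies inside the sector-shaped 'first range' in front of the base station."""
    dx = robot_xy[0] - station_xy[0]
    dy = robot_xy[1] - station_xy[1]
    if math.hypot(dx, dy) > max_range_m:
        return False
    bearing = math.atan2(dy, dx)   # direction from the station toward the robot
    rel = (bearing - station_heading_rad + math.pi) % (2 * math.pi) - math.pi
    return abs(rel) <= math.radians(half_angle_deg)
```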
In a preferred embodiment, the controlling the self-moving robot to enable the visual docking program at the guiding position in step S03 includes: acquiring image information of the panorama around the self-moving robot through a panoramic camera unit. The panoramic camera unit can acquire 360-degree environmental image information around the self-moving robot, so that no matter in which direction the self-moving robot is located relative to the base station, the base station information can be recognized, and the pose information of the self-moving robot relative to the base station can then be determined.
In an optional embodiment, in step S03, extracting and processing the visual sign in the obtained image information according to a preset algorithm to obtain pose information of the self-mobile robot relative to the base station, including:
s031, detecting and obtaining a target image containing the visual mark.
S032, identifying the actual corner coordinates of the visual mark in the target image.
S033, obtaining the coordinate system conversion corresponding relation between the camera and the visual mark through a visual optimization algorithm.
S034, according to the conversion corresponding relation, pose information of the self-moving robot relative to the base station is obtained.
In this embodiment, the pre-stored visual mark information corresponds to the visual mark set on the base station and may be pre-stored in the form of an image. Each corner coordinate in the pre-stored visual mark information is a pre-stored corner coordinate value observed when the self-moving robot is successfully docked with the base station. The actual corner coordinates of the target image and of its visual mark acquired by the self-moving robot vary with the pose of the self-moving robot relative to the base station. Using the difference between the pre-stored corner coordinates and the actual corner coordinates, a PnP (Perspective-n-Point) algorithm can be used to obtain the transformation correspondence, and thus the actual pose information of the self-moving robot relative to the base station at that moment. The pose of the self-moving robot is then adjusted so that it is in a pose from which it can successfully dock with the base station, and the self-moving robot is then controlled to dock with the base station.
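The following sketch shows one common way to recover the camera-to-marker transform with Perspective-n-Point, here via OpenCV's `solvePnP`; the square-marker corner ordering, marker side length and camera intrinsics are assumptions for illustration, not values from this disclosure.

```python
import numpy as np
import cv2

def marker_pose(actual_corners_px, marker_side_m, camera_matrix, dist_coeffs):
    """Camera <- marker transform from four detected corner pixels (PnP)."""
    half = marker_side_m / 2.0
    # Corner ordering required by SOLVEPNP_IPPE_SQUARE: TL, TR, BR, BL on the marker plane (z = 0).
    object_pts = np.array([[-half,  half, 0.0],
                           [ half,  half, 0.0],
                           [ half, -half, 0.0],
                           [-half, -half, 0.0]], dtype=np.float32)
    image_pts = np.asarray(actual_corners_px, dtype=np.float32)
    ok, rvec, tvec = cv2.solvePnP(object_pts, image_pts, camera_matrix, dist_coeffs,
                                  flags=cv2.SOLVEPNP_IPPE_SQUARE)
    if not ok:
        return None
    rotation, _ = cv2.Rodrigues(rvec)   # 3x3 rotation, marker frame -> camera frame
    return rotation, tvec               # tvec: marker origin expressed in the camera frame
```

Chaining this camera-to-marker transform with the known marker-to-base-station transform (and the camera-to-robot mounting transform) yields the robot's pose relative to the base station.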
In an actual application scenario, the part of the acquired image information containing the visual mark may be defective, for example occluded by contamination on the lens, so that the returned image information contains errors, which in turn causes pose calculation errors. To reduce the error in the pose information of the self-moving robot caused by such detection errors of the visual mark, in a preferred embodiment, the identifying of the actual corner coordinates of the visual mark in the target image in step S032 includes:
and filtering the acquired image data of the visual mark, and taking the processed data as actual corner coordinates to reduce the error of the pose information of the self-moving robot caused by the detection error of the visual mark.
In an optional embodiment, the adjusting the pose of the self-mobile robot according to the pose information in step S04, and then controlling the self-mobile robot to dock the base station includes:
s041, according to the pose information and the current planning path, adjusting the moving direction of the self-moving robot, and enabling the moving direction to be opposite to the central position of the visual sign.
And S042, controlling the self-moving robot to move along the central position until the base station is docked.
In this embodiment, by using the visual docking program, the pose of the self-moving robot is adjusted to a pose from which it can successfully dock with the base station, so that docking navigation is achieved quickly.
In a preferred embodiment, the adjusting the moving direction of the self-moving robot according to the pose information in step S041, so that the moving direction faces the center position of the visual mark, includes: obtaining an adjustment instruction for the moving direction of the self-moving robot by a PID algorithm, based on the pose information and the preset path pose deviation; and adjusting the moving direction of the self-moving robot based on the adjustment instruction.
In particular, the adjustment instruction may be a steering, reversing and/or advancing instruction for the self-moving robot. In this embodiment, using a PID algorithm effectively enhances the robustness of the docking of the self-moving robot with the base station.
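A minimal PID loop on the heading deviation is sketched below; the gains and the interpretation of the output as an angular-velocity command are assumptions, since the disclosure only states that a PID algorithm is used.

```python
class HeadingPID:
    """PID controller on the deviation between the robot's heading and the marker centre line."""
    def __init__(self, kp=1.2, ki=0.0, kd=0.1):   # illustrative gains, not from the patent
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_err = None

    def step(self, heading_error_rad, dt):
        """Return a steering (angular-velocity) command for one control period dt."""
        self.integral += heading_error_rad * dt
        deriv = 0.0 if self.prev_err is None else (heading_error_rad - self.prev_err) / dt
        self.prev_err = heading_error_rad
        return self.kp * heading_error_rad + self.ki * self.integral + self.kd * deriv
```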
As shown in fig. 2 to 5, the embodiment of the invention further provides a regression docking system of the self-mobile robot. The regressive docking system 300 of the self-mobile robot includes a regressive module 310, a positioning navigation module 320, a docking navigation module 330, and a docking control module 340.
The regression module 310 controls the autonomous mobile robot 100 to initiate regression of the base station 200 in response to the regression instruction. Wherein, when the task of the self-mobile robot 100 is completed, the regression module 310 starts a regression instruction; alternatively, the regression module 310 initiates the regression instruction when the power of the autonomous mobile robot 100 is less than or equal to the preset power threshold.
The base station 200 is provided with a visual mark 210. In particular, the visual mark 210 may carry certain information or identification data and may be a two-dimensional code or a bar code. With continued reference to fig. 5, the visual mark 210 in this embodiment is a two-dimensional code. Preferably, the visual mark 210 includes two two-dimensional codes symmetrically arranged on the base station 200. The symmetric arrangement not only doubles the number of feature points (corner coordinates), it also makes it more convenient to define the center position of the visual mark 210.
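Given the two symmetrically placed codes, the center line can be taken as the midpoint of the two marker centroids; the sketch below assumes each code is described by its four detected corner pixels.

```python
import numpy as np

def marker_pair_centre(corners_left_px, corners_right_px):
    """Pixel coordinates of the centre point between two symmetric 2D codes."""
    centre_left = np.mean(np.asarray(corners_left_px, dtype=np.float32), axis=0)
    centre_right = np.mean(np.asarray(corners_right_px, dtype=np.float32), axis=0)
    return (centre_left + centre_right) / 2.0
```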
The positioning navigation module 320 controls the self-moving robot 100 to move to a guiding position within the first range of the base station. The positioning navigation module 320 specifically includes one of, or a combination of, a GPS positioning module, an inertial measurement positioning module, and a wheel-odometry positioning module.
Preferably, the GPS positioning module further supports RTK-assisted positioning and navigation, which effectively reduces the positioning error of the global navigation satellite system. As shown in fig. 2 and 5, the base station 200 and the self-moving robot 100 form an RTK system: an RTK antenna 270 is arranged on the base station 200 and serves as the reference station of the RTK system, and a corresponding RTK antenna is provided on the self-moving robot 100 as the rover of the RTK system.
In an alternative embodiment, the current position of the self-mobile robot 100 is defined as the guiding position when the distance from the self-mobile robot 100 to the base station 200 is less than or equal to the first range during the movement of the self-mobile robot 100 to the base station 200.
In a specific embodiment, the guiding position corresponding to each return docking of the self-moving robot 100 may be different. While the self-moving robot 100 moves toward the base station 200 using global positioning navigation, the controller updates the distance between the self-moving robot 100 and the base station 200 in real time; the first time the distance is less than or equal to the first range, the current position of the self-moving robot 100 is defined as the guiding position. At the guiding position, the visual docking program can be started to perform the precise docking process.
The first range may be set according to the visual mark 210 of the base station 200 and the clarity of the image information that the self-moving robot 100 needs to acquire, so that within the first range the self-moving robot 100 can meet the image-clarity requirements of the visual docking process.
The docking navigation module 330 controls the self-mobile robot 100 to start the visual docking program at the guiding position, and extracts and processes the visual mark in the obtained image information according to a preset algorithm to obtain the pose information of the self-mobile robot 100 relative to the base station 200.
Preferably, the docking navigation module 330 includes the panoramic camera unit 110. The panoramic camera unit 110 can acquire 360-degree environmental image information around the self-moving robot 100, which ensures that the base station 200 can be recognized no matter in which direction the self-moving robot 100 is located relative to the base station 200, so that the pose information of the self-moving robot 100 relative to the base station 200 can then be determined.
In an embodiment, the pre-stored visual mark information corresponds to the visual mark 210 set on the base station 200 and may be pre-stored in the form of an image. Each corner coordinate in the pre-stored visual mark information is a pre-stored corner coordinate value observed when the self-moving robot 100 is successfully docked with the base station 200. The actual corner coordinates of the target image and of its visual mark acquired in real time by the self-moving robot 100 vary with the pose of the self-moving robot 100 relative to the base station 200. Using the difference between the pre-stored corner coordinates and the actual corner coordinates, a PnP (Perspective-n-Point) algorithm can be used to obtain the transformation correspondence, and thus the actual pose information of the self-moving robot 100 relative to the base station 200 at that moment. The pose of the self-moving robot 100 is then adjusted so that it is in a pose from which it can successfully dock with the base station 200, and the self-moving robot 100 is then controlled to dock with the base station 200.
The docking control module 340 controls the self-mobile robot 100 to adjust its own pose according to the pose information and controls the self-mobile robot 100 to dock to the base station 200 according to the adjusted pose.
Specifically, the docking control module 340 adjusts the moving direction of the self-moving robot 100 according to the pose information so that the moving direction directly faces the center position of the visual mark 210. At this time, the moving direction of the self-moving robot 100 is aligned with the center position of the visual mark 210, that is, the docking port 150 of the self-moving robot 100 directly faces the docking post 250 of the base station 200. The self-moving robot 100 is then controlled to move along this center line until it docks with the base station 200.
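For a conventional forward-facing camera, the heading deviation fed to the PID controller can be approximated from the horizontal pixel offset of the marker centre; the pinhole approximation and field-of-view value below are assumptions (a panoramic unit would need its own projection model).

```python
import math

def heading_error_from_marker(marker_centre_x_px, image_width_px,
                              horizontal_fov_rad=math.radians(90)):
    """Approximate bearing of the marker centre relative to the optical axis (radians)."""
    offset = marker_centre_x_px - image_width_px / 2.0
    return (offset / image_width_px) * horizontal_fov_rad   # positive means the marker is to the right
```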
Preferably, the docking control module 340 obtains an adjustment instruction for the moving direction of the self-moving robot 100 by a PID algorithm, based on the pose information and a preset path pose deviation, and adjusts the moving direction of the self-moving robot 100 based on the adjustment instruction. In particular, the adjustment instruction may be a steering, reversing and/or advancing instruction for the self-moving robot. In this embodiment, using a PID algorithm effectively enhances the robustness of the docking of the self-moving robot with the base station.
In this embodiment, by using visual docking, the pose of the self-moving robot is adjusted to a pose from which it can successfully dock with the base station, so that docking navigation is achieved quickly.
In a preferred embodiment, the bottom of the base station 200 is provided with a guide groove 230 whose width narrows along the docking direction, so as to guide the self-moving robot 100 to move precisely toward the docking post 250 of the base station 200.
As shown in fig. 6, the embodiment of the present invention further provides a self-moving robot 100. The self-moving robot 100 includes: a robot main body; and a controller arranged on the robot main body.
Wherein the controller is configured to perform the following:
s01, responding to a regression instruction, and controlling the self-mobile robot to start base station regression; wherein, the base station is provided with a visual mark.
S02, controlling the self-moving robot to move to a guiding position within a first range from the base station.
S03, controlling the self-mobile robot to start a visual docking program at the guiding position, and extracting and processing visual marks in the acquired image information according to a preset algorithm to acquire pose information of the self-mobile robot relative to the base station.
And S04, adjusting the pose of the self-moving robot according to the pose information, and then controlling the self-moving robot to dock with the base station.
In this embodiment, during the return docking process, the self-moving robot quickly navigates to the guiding position within the first range of the base station, and a visual docking program is then used at the guiding position to adjust the pose of the self-moving robot relative to the base station, so that the self-moving robot and the base station can be docked quickly and accurately.
The controller in this embodiment executes the regression docking method provided in any of the above embodiments; reference may be made to the relevant content of the above method, which is not repeated here.
It will be appreciated by those skilled in the art that all or part of the flows of the above method embodiments may be implemented by a computer program instructing related hardware. The program may be stored in a computer-readable storage medium and, when executed, may include the flows of the above method embodiments. The storage medium may be a magnetic disk, an optical disc, a Read-Only Memory (ROM), a Random Access Memory (RAM), a Flash Memory, a Hard Disk Drive (HDD), or a Solid State Drive (SSD); the storage medium may also include a combination of the above kinds of memory.
It will be appreciated by those skilled in the art that the modules or steps of the invention described above may be implemented by a general-purpose computing device. They may be centralized on a single computing device or distributed across a network of computing devices, and they may be implemented in program code executable by computing devices, so that they may be stored in a storage device and executed by the computing devices. In some cases, the steps shown or described may be performed in an order different from that described here, or the modules or steps may be made into individual integrated-circuit modules, or multiple modules or steps among them may be made into a single integrated-circuit module. Thus, the present invention is not limited to any specific combination of hardware and software.
It will be apparent that the embodiments described above are merely some, but not all, embodiments of the invention. Based on the embodiments of the present invention, other changes or modifications made by those skilled in the art without creative effort shall fall within the protection scope of the present invention.

Claims (10)

1. A method of regressive docking of a self-moving robot, the method comprising:
responding to the regression instruction, and controlling the self-mobile robot to start a base station regression program; wherein, the base station is provided with a visual mark;
controlling the self-moving robot to move to a guiding position within a first range from the base station;
controlling the self-moving robot to start a visual docking program at the guiding position, and extracting and processing visual marks in the acquired image information according to a preset algorithm to acquire pose information of the self-moving robot relative to the base station;
and adjusting the pose of the self-moving robot according to the pose information, and then controlling the self-moving robot to dock with the base station.
2. The method of regressive docking of a self-moving robot of claim 1, wherein said controlling movement of the self-moving robot to a guiding position within a first range from a base station comprises:
and obtaining the position coordinates of the base station in the navigation map by adopting a global positioning algorithm, and then controlling the self-moving robot to move towards the direction of the position coordinates.
3. The method of regressive docking of a self-moving robot of claim 2, wherein said controlling movement of the self-moving robot to a guiding position within a first range from a base station comprises:
determining the guiding position according to the position coordinates of the base station in the navigation map; wherein the guiding position is located in front of the base station, within the first range from the base station.
4. The method of regressive docking of a self-moving robot according to claim 1, wherein said controlling the self-moving robot to enable a visual docking procedure at said guiding location comprises:
and acquiring the image information of the surrounding panorama of the self-moving robot through a panorama shooting unit.
5. The regression docking method of the self-moving robot according to claim 4, wherein the extracting and processing the visual mark in the obtained image information according to the preset algorithm to obtain the pose information of the self-moving robot relative to the base station comprises:
detecting and obtaining a target image containing the visual mark;
identifying actual corner coordinates of a visual marker in the target image;
obtaining a coordinate system conversion corresponding relation between the camera and the visual mark through a visual optimization algorithm;
and acquiring pose information of the self-mobile robot relative to the base station according to the conversion corresponding relation.
6. The method of regressive docking of a self-moving robot according to claim 5, wherein said identifying actual corner coordinates of a visual marker in said target image comprises:
and filtering the acquired image data of the visual mark, and taking the processed data as actual corner coordinates.
7. The method of claim 1, wherein adjusting the pose of the self-mobile robot according to the pose information, and then controlling the self-mobile robot to dock the base station comprises:
according to the pose information and the currently planned path, adjusting the moving direction of the self-moving robot so that the moving direction directly faces the center position of the visual mark;
and then controlling the self-moving robot to move toward the center position until it docks with the base station.
8. The regression docking method of the self-moving robot according to claim 7, wherein the adjusting of the moving direction of the self-moving robot according to the pose information so that the moving direction directly faces the center position of the visual mark
comprises the following steps:
based on the pose information and the preset path pose deviation, a PID algorithm is adopted to obtain an adjustment instruction of the moving direction of the self-moving robot;
and adjusting the moving direction of the self-moving robot based on the adjusting instruction.
9. A regressive docking system for a self-moving robot, the system comprising:
the regression module is used for responding to the regression instruction and controlling the self-mobile robot to start base station regression; wherein, the base station is provided with a visual mark;
the positioning navigation module is used for controlling the self-moving robot to move to a guiding position within a first range from the base station;
the docking navigation module is used for controlling the self-mobile robot to start a visual docking program at the guiding position, and extracting and processing visual marks in the acquired image information according to a preset algorithm so as to acquire pose information of the self-mobile robot relative to the base station;
and the docking control module is used for controlling the self-moving robot to adjust the pose according to the pose information and controlling the self-moving robot to dock to the base station according to the adjusted pose.
10. A self-moving robot, comprising:
a robot main body;
the controller is arranged on the robot main body;
wherein the controller is configured to perform the following:
responding to the regression instruction, and controlling the self-mobile robot to start base station regression; wherein, the base station is provided with a visual mark;
controlling the self-moving robot to move to a guiding position within a first range from the base station;
controlling the self-moving robot to start a visual docking program at the guiding position, and extracting and processing visual marks in the acquired image information according to a preset algorithm to acquire pose information of the self-moving robot relative to the base station;
and adjusting the pose of the self-moving robot according to the pose information, and then controlling the self-moving robot to dock with the base station.

Priority Applications (1)

Application Number | Priority Date | Filing Date | Title
CN202211474914.2A | 2022-11-23 | 2022-11-23 | Regression docking method and system of self-moving robot and self-moving robot

Applications Claiming Priority (1)

Application Number | Priority Date | Filing Date | Title
CN202211474914.2A | 2022-11-23 | 2022-11-23 | Regression docking method and system of self-moving robot and self-moving robot

Publications (1)

Publication Number | Publication Date
CN116125973A | 2023-05-16

Family

ID=86294529

Family Applications (1)

Application Number | Title | Priority Date | Filing Date | Status
CN202211474914.2A (CN116125973A) | Regression docking method and system of self-moving robot and self-moving robot | 2022-11-23 | 2022-11-23 | Pending

Country Status (1)

Country | Publication
CN | CN116125973A (en)


Legal Events

Code | Description
PB01 | Publication
SE01 | Entry into force of request for substantive examination