CN110032196B - Robot recharging method and device - Google Patents
Robot recharging method and device
- Publication number: CN110032196B (application CN201910372448.9A)
- Authority: CN (China)
- Legal status: Active (the status is an assumption, not a legal conclusion; Google has not performed a legal analysis)
Classifications
- G—PHYSICS; G05—CONTROLLING; REGULATING; G05D—SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES; G05D1/021—Control of position or course in two dimensions specially adapted to land vehicles:
- G05D1/0221—with means for defining a desired trajectory involving a learning process
- G05D1/0225—with means for defining a desired trajectory involving docking at a fixed facility, e.g. base station or loading bay
- G05D1/024—using optical position detecting means: obstacle or wall sensors in combination with a laser
- G05D1/0253—using optical position detecting means: a video camera with image processing, extracting relative motion information from a plurality of images taken successively, e.g. visual odometry, optical flow
- G05D1/0257—using a radar
- G05D1/0276—using signals provided by a source external to the vehicle
Abstract
The application provides a robot recharging method and device. The method includes: acquiring the current state type of the robot; if the state type indicates that the robot is in an idle state, acquiring environmental data of the current environment where the robot is located; calculating the robot's current position and the accuracy grade of that position from the environmental data; when the accuracy grade is lower than a preset grade, determining a plurality of data acquisition moments according to a preset data acquisition interval and controlling the robot to move; acquiring the environmental data corresponding to each data acquisition moment during the move and determining the robot's target position from that data; and controlling the robot to return to the charging pile according to the target position and a pre-stored map. With this method, a robot in an idle state can be controlled to return to the charging pile for charging.
Description
Technical Field
The application relates to the technical field of data processing, in particular to a robot recharging method and device.
Background
With the continuous development of science and technology, more and more intelligent robots are developed and bring great convenience to the life and work of people. For example, a sweeping robot may perform a cleaning task, a delivery robot may perform a delivery task, and so on.
Existing robots are generally powered by rechargeable batteries, but battery capacity is limited, typically sustaining only 2 to 3 hours of operation. The robot therefore needs to recharge when its battery is low or when it is not executing a task; that is, it returns to the charging pile to charge, so that it has enough power to take on a new task at any time.
Normally, a robot can successfully return to the charging pile and charge according to a preset map and a preset algorithm. However, if the robot is moved by an external force while returning to the charging pile or while charging, it ends up in an idle state (neither executing a task nor charging) and can no longer successfully return, or return again, to the charging pile. Its battery then runs down, affecting its next use.
Disclosure of Invention
In view of this, an object of the embodiments of the present application is to provide a robot recharging method and apparatus that can control a robot in an idle state to return to the charging pile for charging, so that low battery does not affect the robot's next use.
In a first aspect, an embodiment of the present application provides a robot recharging method, where the method includes:
acquiring the current state type of the robot;
if the state type indicates that the robot is in an idle state, acquiring environmental data of the current environment where the robot is located;
calculating to obtain the current position of the robot according to the environmental data of the current environment of the robot, and determining the accuracy grade of the position according to the environmental data of the current environment of the robot;
when the accuracy grade is smaller than a preset grade, determining a plurality of data acquisition moments according to a preset data acquisition interval, and controlling the robot to move;
acquiring environmental data corresponding to each data acquisition time of the robot in the moving process, and determining the target position of the robot according to the environmental data corresponding to each data acquisition time;
and controlling the robot to return to the charging pile according to the target position of the robot and a prestored map.
With reference to the first aspect, an embodiment of the present application provides a first possible implementation manner of the first aspect, where determining the accuracy grade of the position according to the environmental data of the current environment where the robot is located includes:
determining the amount of the environmental data of the robot's current environment that is also contained in the environmental data corresponding to the pre-stored map;
and determining the accuracy grade of the current position of the robot according to the data volume.
With reference to the first aspect, an embodiment of the present application provides a second possible implementation manner of the first aspect, where the acquiring environmental data corresponding to each data acquisition time of the robot in the moving process, and determining the target position of the robot according to the environmental data corresponding to each data acquisition time includes:
acquiring environmental data corresponding to each data acquisition moment of the robot in the moving process;
calculating a candidate position of the robot at each data acquisition time by using the environmental data corresponding to each data acquisition time;
for each candidate position, calculating the candidate data amount, namely the amount of the environmental data acquired by the robot at that candidate position that is also contained in the environmental data corresponding to the pre-stored map, and determining the accuracy grade of the candidate position according to the candidate data amount;
and screening the candidate position with the highest accuracy grade from all the candidate positions and taking it as the target position of the robot.
With reference to the first aspect, an embodiment of the present application provides a third possible implementation manner of the first aspect, where the controlling, according to the target location of the robot and a pre-stored map, the robot to return to the charging pile includes:
matching the target position against all map position information included in the pre-stored map to obtain the map position information that best matches the target position;
and controlling the robot to return to the charging pile according to the map position information and the pre-stored map.
With reference to the first aspect, an embodiment of the present application provides a fourth possible implementation manner of the first aspect, where when the accuracy level is greater than or equal to the preset level, the method further includes:
and controlling the robot to return to the charging pile according to the current position of the robot and the pre-stored map.
With reference to the first aspect, an embodiment of the present application provides a fifth possible implementation manner of the first aspect, where the method further includes:
recording a plurality of groups of environmental data acquired in the process that the robot returns to the charging pile from the target position;
and updating the pre-stored map by using the plurality of groups of environmental data.
With reference to the fifth possible implementation manner of the first aspect, an embodiment of the present application provides a sixth possible implementation manner of the first aspect, where the updating the pre-stored map by using the multiple sets of environment data includes:
taking every two groups of continuous environmental data in the plurality of groups of environmental data as a candidate data group;
for each candidate data group, obtaining the displacement of the robot according to the two groups of environmental data included in the candidate data group and a preset algorithm; the displacement is the linear distance between the positions of the robot when it respectively collected the two groups of environmental data included in the candidate data group;
updating the pre-stored map by using a preset number of candidate data groups with displacement smaller than a preset threshold value; wherein the predetermined number of candidate data sets are consecutive candidate data sets.
In a second aspect, an embodiment of the present application further provides a robot recharging apparatus, including:
the state type acquisition module is used for acquiring the current state type of the robot;
the environment data acquisition module is used for acquiring the environment data of the current environment of the robot if the state type indicates that the robot is in an idle state;
the accuracy grade determining module is used for calculating the current position of the robot according to the environmental data of the current environment of the robot and determining the accuracy grade of the position according to the environmental data of the current environment of the robot;
the control module is used for determining a plurality of data acquisition moments according to a preset data acquisition interval and controlling the robot to move when the accuracy grade is lower than a preset grade;
the target position determining module is used for acquiring environmental data corresponding to each data acquisition time of the robot in the moving process and determining the target position of the robot according to the environmental data corresponding to each data acquisition time;
and the first returning module is used for controlling the robot to return to the charging pile according to the target position of the robot and a pre-stored map.
With reference to the second aspect, the present application provides a first possible implementation manner of the second aspect, where the implementation manner includes:
the accuracy grade determining module is specifically configured to determine the amount of the environmental data of the robot's current environment that is also contained in the environmental data corresponding to the pre-stored map;
and determining the accuracy grade of the current position of the robot according to the data volume.
With reference to the second aspect, embodiments of the present application provide a second possible implementation manner of the second aspect, where the implementation manner includes:
the target position determining module is specifically used for acquiring environment data corresponding to each data acquisition moment of the robot in the moving process;
calculating a candidate position of the robot at each data acquisition time by using the environmental data corresponding to each data acquisition time;
for each candidate position, calculating the candidate data amount, namely the amount of the environmental data acquired by the robot at that candidate position that is also contained in the environmental data corresponding to the pre-stored map, and determining the accuracy grade of the candidate position according to the candidate data amount;
and screening the candidate position with the highest accuracy grade from all the candidate positions and taking it as the target position of the robot.
The robot recharging method and device provided by the embodiments of the present application work as follows: acquire the current state type of the robot; if the state type indicates that the robot is in an idle state, acquire environmental data of the current environment where the robot is located; calculate the robot's current position from that environmental data and determine the accuracy grade of the position from the same data; when the accuracy grade is lower than a preset grade, determine a plurality of data acquisition moments according to a preset data acquisition interval and control the robot to move; acquire the environmental data corresponding to each data acquisition moment during the move and determine the robot's target position from that data; and control the robot to return to the charging pile according to the target position and a pre-stored map. With the robot recharging method provided by the embodiments of the present application, a robot in an idle state can be controlled to return to the charging pile for charging, so that low battery does not affect its next use.
In order to make the aforementioned objects, features and advantages of the present application more comprehensible, preferred embodiments accompanied with figures are described in detail below.
Drawings
In order to illustrate the technical solutions of the embodiments of the present application more clearly, the drawings required in the embodiments are briefly described below. It should be understood that the following drawings illustrate only some embodiments of the present application and should therefore not be regarded as limiting the scope; those skilled in the art can derive other related drawings from them without inventive effort.
Fig. 1 shows a flowchart of a robot recharging method provided by an embodiment of the present application;
FIG. 2 illustrates a flow chart of another robot recharging method provided by embodiments of the present application;
FIG. 3 illustrates a flow chart of another robot recharging method provided by embodiments of the present application;
fig. 4 is a schematic structural diagram illustrating a robot recharging device provided in an embodiment of the present application;
fig. 5 shows a schematic structural diagram of an electronic device provided in an embodiment of the present application.
Detailed Description
In order to make the objects, technical solutions and advantages of the embodiments of the present application clearer, the technical solutions in the embodiments will be described clearly and completely below with reference to the drawings. Obviously, the described embodiments are only a part of the embodiments of the present application, not all of them. The components of the embodiments of the present application, as generally described and illustrated in the figures herein, can be arranged and designed in a wide variety of different configurations. Thus, the following detailed description of the embodiments, presented in the accompanying drawings, is not intended to limit the scope of the claimed application, but is merely representative of selected embodiments of the application. All other embodiments derived by a person skilled in the art from the embodiments of the present application without creative effort shall fall within the protection scope of the present application.
At present, a robot can successfully return to the charging pile and charge according to a preset map and a preset algorithm. However, if the robot is moved by an external force while returning to the charging pile or while charging, it can no longer successfully return, or return again, to the charging pile, so its battery runs down and its next use is affected. To solve this problem, the robot recharging method and device provided by the embodiments of the present application can control a robot in an idle state to return to the charging pile for charging, preventing low battery from affecting the robot's next use.
For the convenience of understanding the embodiments of the present application, a robot recharging method disclosed in the embodiments of the present application will be described in detail first.
As shown in fig. 1, which is a flowchart of a robot recharging method according to an embodiment of the present application, with a controller included in the robot as the execution subject, the method includes the following specific steps:
and S101, acquiring the current state type of the robot.
In a particular implementation, the status types of the robot may include an idle status, a task status, and a charging status.
In the idle state, the robot is neither charging nor executing a task; in the task state, the robot is executing a task; in the charging state, the robot is charging at the charging pile.
The controller can check the robot's current change in battery level, current change in position, and so on to determine the robot's current state type.
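As a rough illustration of this check (the state names, argument meanings, and decision order are assumptions, not specified by the patent):

```python
# Hypothetical state classifier: names and thresholds are illustrative
# assumptions, not taken from the patent text.
IDLE, TASK, CHARGING = "idle", "task", "charging"

def classify_state(charge_delta: float, position_delta: float) -> str:
    """Infer the robot's current state type from the recent change in
    battery level and the recent change in position (step S101)."""
    if charge_delta > 0:       # battery level rising: docked and charging
        return CHARGING
    if position_delta > 0:     # moving: executing a task
        return TASK
    return IDLE                # neither charging nor executing a task
```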
And S102, if the state type indicates that the robot is in an idle state, acquiring the environmental data of the current environment where the robot is located.
In a specific implementation, when the state type indicates that the robot is in an idle state, the controller may control a laser radar installed on the robot to acquire environmental data where the robot is currently located. The environmental data collected by the laser radar may include line number, point density, horizontal and vertical viewing angle, detection distance, scanning frequency, accuracy, and the like.
Of course, the controller may also control a camera, a photosensor, and the like installed on the robot to acquire environmental data of the current environment in which the robot is located.
S103, calculating to obtain the current position of the robot according to the environmental data of the current environment of the robot, and determining the accuracy grade of the position according to the environmental data of the current environment of the robot.
In specific implementation, the current position of the robot can be calculated according to environment data of the current environment of the robot, which is acquired by a laser radar or a camera.
The controller compares the environmental data of the robot's current environment with the environmental data corresponding to the pre-stored map, determines the amount of the current environmental data that is contained in the map's environmental data, and determines the accuracy grade of the robot's current position according to that amount.
Specifically, each accuracy level corresponds to a percentage range. The controller calculates the matched data amount as a percentage of the environmental data of the robot's current environment, finds the percentage range into which that percentage falls, and takes the corresponding accuracy level as the accuracy level of the robot's current position.
For example, suppose the first accuracy level corresponds to 0-20%, the second to 30-40%, the third to 50-60%, the fourth to 70-80%, and the fifth to 90-100%. If 98% of the environmental data of the robot's current environment is contained in the environmental data corresponding to the pre-stored map, the accuracy level of the robot's current position is determined to be five; if only 20% is contained, the accuracy level is determined to be one, and so on.
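A minimal sketch of this lookup, using the example ranges above (the function name and the handling of percentages that fall between the example bands are assumptions):

```python
# Accuracy-level bands from the example above: (level, low %, high %), inclusive.
LEVEL_BANDS = [
    (1, 0.0, 20.0),
    (2, 30.0, 40.0),
    (3, 50.0, 60.0),
    (4, 70.0, 80.0),
    (5, 90.0, 100.0),
]

def accuracy_level(matched: int, total: int):
    """Return the accuracy level for the share of the current environmental
    data that is also contained in the pre-stored map's environmental data."""
    pct = 100.0 * matched / total
    for level, low, high in LEVEL_BANDS:
        if low <= pct <= high:
            return level
    return None  # the example bands leave gaps (e.g. between 20% and 30%)
```

With the figures from the example, `accuracy_level(98, 100)` yields level 5 and `accuracy_level(20, 100)` yields level 1.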
And S104, when the accuracy grade is less than the preset grade, determining a plurality of data acquisition moments according to preset data acquisition intervals, and controlling the robot to move.
In specific implementation, when the accuracy grade of the robot's current position is lower than a preset grade, a plurality of data acquisition moments are determined according to a preset data acquisition interval and the robot is controlled to move along a preset route. The preset route may be a circle of radius 0.5 m centered on the current position, or a square path of side length 1 m that starts from the current position and returns to it.
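One of the preset routes described above, a circle of radius 0.5 m around the current position, can be sketched as a list of waypoints (the waypoint count is an illustrative choice):

```python
import math

def circular_route(cx, cy, radius=0.5, n=8):
    """Evenly spaced waypoints on a circle of the given radius centered
    on the robot's current position (cx, cy)."""
    return [(cx + radius * math.cos(2 * math.pi * k / n),
             cy + radius * math.sin(2 * math.pi * k / n))
            for k in range(n)]
```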
In the moving process, the environmental data may be collected according to a plurality of data collection times, or the environmental data may be collected continuously, which is not specifically limited in this embodiment of the present application.
And when the accuracy grade is greater than or equal to the preset grade, controlling the robot to return to the charging pile according to the current position of the robot and a pre-stored map.
And S105, acquiring environment data corresponding to each data acquisition time of the robot in the moving process, and determining the target position of the robot according to the environment data corresponding to each data acquisition time.
In specific implementation, after the controller acquires the environmental data corresponding to each data acquisition moment during the move, it can calculate the robot's position at each data acquisition moment and the accuracy level of that position from the corresponding environmental data.
And screening out the position with the highest accuracy grade from all the positions as the target position of the robot.
The specific process is described in detail below, and will not be described in detail herein.
And S106, controlling the robot to return to the charging pile according to the target position of the robot and a prestored map.
In specific implementation, after the controller determines the target position of the robot, the target position is matched against all map position information included in the pre-stored map to obtain the map position information that best matches the target position. The pre-stored map includes multiple entries of map position information, environmental data, environment models corresponding to the environmental data, and so on.
The controller controls the robot to move to the position corresponding to the best-matching map position information, takes that position as the starting point, and controls the robot to return to the charging pile according to the pre-stored map, so that the robot charges.
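A minimal sketch of the matching step in S106; using Euclidean distance as the "position matching degree" is an assumption, since the patent does not specify the metric:

```python
import math

def best_map_match(target, map_positions):
    """Return the stored map position that best matches the computed
    target position; the robot then navigates home from that point.
    Here 'best match' is taken to mean smallest Euclidean distance."""
    return min(map_positions, key=lambda p: math.dist(target, p))
```

For instance, `best_map_match((1.0, 1.0), [(0.0, 0.0), (1.0, 2.0), (5.0, 5.0)])` returns `(1.0, 2.0)`.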
Through the robot recharging method provided by the embodiments of the present application, a robot in an idle state can be controlled to return to the charging pile for charging, so that low battery does not affect the robot's next use.
Specifically, the target position of the robot is determined according to the method shown in fig. 2, wherein the specific steps are as follows:
s201, acquiring environment data corresponding to each data acquisition moment of the robot in the moving process;
s202, calculating candidate positions of the robot at each data acquisition time by using the environment data corresponding to each data acquisition time;
s203, for each candidate position, calculating the candidate data amount, namely the amount of the environmental data acquired by the robot at that candidate position that is also contained in the environmental data corresponding to the pre-stored map, and determining the accuracy grade of the candidate position according to the candidate data amount;
s204, the candidate position with the highest accuracy grade is screened from all the candidate positions, and the candidate position with the highest accuracy grade is used as the target position of the robot.
In specific implementation, the controller controls the laser radar to collect environmental data at the robot's position at each data acquisition moment while the robot moves. After obtaining the environmental data corresponding to each data acquisition moment, the controller calculates the robot's candidate position at each data acquisition moment from the corresponding environmental data.
For each candidate position, the controller compares the environmental data acquired by the robot at that candidate position with the environmental data corresponding to the pre-stored map, determines the amount of the acquired data that is also contained in the map's environmental data, and determines the accuracy grade of the candidate position according to that amount.
And screening the candidate position with the highest accuracy grade from all the candidate positions, and taking the candidate position with the highest accuracy grade as the target position of the robot.
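The candidate scoring of S201 to S204 can be sketched in a few lines. This is a hypothetical illustration, not the patent's implementation: it assumes scans and the pre-stored map are represented as sets of occupied grid cells, and the grade thresholds (90%, 60%, 30%) are invented for the example.

```python
# Minimal sketch of S201-S204: each candidate position is scored by how
# much of its scan also appears in the pre-stored map, and the candidate
# with the highest accuracy grade becomes the target position.

def matched_count(scan, map_cells):
    """Amount of the scan's environment data included in the map data."""
    return len(set(scan) & map_cells)

def accuracy_grade(scan, map_cells, ranges=((0.9, 3), (0.6, 2), (0.3, 1))):
    """Map the matched percentage onto a discrete accuracy grade;
    the percentage ranges here are invented for illustration."""
    pct = matched_count(scan, map_cells) / max(len(scan), 1)
    for lower, grade in ranges:
        if pct >= lower:
            return grade
    return 0

def best_target(candidates, map_cells):
    """candidates: list of (position, scan) pairs; returns the position
    whose scan matches the pre-stored map best (step S204)."""
    return max(candidates, key=lambda c: accuracy_grade(c[1], map_cells))[0]

map_cells = {(x, y) for x in range(10) for y in range(10)}  # toy map
candidates = [
    ((0, 0), [(0, 0), (1, 1), (20, 20)]),  # 2 of 3 cells matched
    ((5, 5), [(5, 5), (5, 6), (6, 5)]),    # 3 of 3 cells matched
]
print(best_target(candidates, map_cells))  # -> (5, 5)
```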
In a specific implementation, multiple groups of environment data collected while the robot returns from the target position to the charging pile are recorded. Each group is recorded only when the accuracy grade of the robot's position at collection time is high and the rotation angles computed from every two consecutive groups of environment data by the Iterative Closest Point (ICP) algorithm are all larger than a certain threshold.
After acquiring the multiple groups of environment data, the controller updates the pre-stored map using them.
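The patent relies on the ICP family of algorithms to relate two consecutive scans. As an illustration of the underlying geometry, the snippet below computes the best-fit 2D rotation and translation between two point sets with known correspondences; this is the core alignment step that full ICP iterates after re-matching nearest neighbours. It is a textbook construction under those assumptions, not code from the patent.

```python
import math

# Best-fit rigid transform (rotation angle + translation) between two
# corresponding 2D point sets, the step that ICP repeats per iteration.

def rigid_align_2d(src, dst):
    """Return (theta, (tx, ty)) mapping src onto dst in a least-squares
    sense, assuming src[i] corresponds to dst[i]."""
    n = len(src)
    csx = sum(p[0] for p in src) / n; csy = sum(p[1] for p in src) / n
    cdx = sum(p[0] for p in dst) / n; cdy = sum(p[1] for p in dst) / n
    # accumulate the cos/sin terms of the optimal rotation (2D Kabsch)
    s_cos = s_sin = 0.0
    for (sx, sy), (dx, dy) in zip(src, dst):
        ax, ay = sx - csx, sy - csy   # src point, centered
        bx, by = dx - cdx, dy - cdy   # dst point, centered
        s_cos += ax * bx + ay * by
        s_sin += ax * by - ay * bx
    theta = math.atan2(s_sin, s_cos)
    # translation takes the rotated src centroid onto the dst centroid
    tx = cdx - (csx * math.cos(theta) - csy * math.sin(theta))
    ty = cdy - (csx * math.sin(theta) + csy * math.cos(theta))
    return theta, (tx, ty)

# a scan rotated by 90 degrees about the origin, no translation
src = [(1.0, 0.0), (0.0, 1.0), (-1.0, 0.0)]
dst = [(0.0, 1.0), (-1.0, 0.0), (0.0, -1.0)]
theta, t = rigid_align_2d(src, dst)
print(round(math.degrees(theta)))  # -> 90
```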
Specifically, the pre-stored map is updated according to the method shown in fig. 3, which includes the following steps:
S301, taking every two consecutive groups of environment data among the multiple groups of environment data as a candidate data group;
S302, for each candidate data group, obtaining the displacement of the robot according to the two groups of environment data included in the candidate data group and a preset algorithm; the displacement is the straight-line distance between the positions of the robot when it collected those two groups of environment data;
S303, updating the pre-stored map by using a preset number of consecutive candidate data groups whose displacements are all smaller than a preset threshold.
For example, suppose that after the robot returns to the charging pile for charging, 10 groups of environment data have been acquired; taking every two consecutive groups as a candidate data group yields 9 candidate data groups. For each candidate data group, the displacement of the robot is obtained from the two groups of environment data included in the group and a preset algorithm; the displacement is the straight-line distance between the positions at which the two groups were collected. The preset algorithm may include the ICP algorithm.
If the preset number is 5 and the computed displacements corresponding to the 2nd, 3rd, 4th, 5th, 6th, 7th and 8th candidate data groups are all smaller than the preset threshold, the pre-stored map is updated using the environment data included in those groups, i.e., the 2nd through 9th groups of environment data.
If only the displacements corresponding to the 2nd, 3rd, 6th and 7th candidate data groups are smaller than the preset threshold, the number of consecutive qualifying groups is less than the preset number, so the pre-stored map is not updated and all the environment data are deleted.
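The filtering rule of S301 to S303 and the two numeric examples above can be sketched as follows. The displacement values are invented for illustration, and 0-based indices are used, so group 1 below corresponds to the patent's 2nd candidate data group.

```python
# Sketch of S301-S303: keep only a run of at least `preset_number`
# consecutive candidate data groups whose displacement is below the
# threshold, and update the map from the scans that run spans.

def scans_for_update(displacements, preset_number, threshold):
    """displacements[i] is the robot displacement for candidate group i
    (formed from scans i and i+1). Returns the 0-based scan indices to
    update the map with (first long-enough run), or [] if none."""
    run_start = None
    for i, d in enumerate(displacements + [threshold]):  # sentinel ends last run
        if d < threshold:
            if run_start is None:
                run_start = i
        else:
            if run_start is not None and i - run_start >= preset_number:
                # groups run_start..i-1 cover scans run_start..i
                return list(range(run_start, i + 1))
            run_start = None
    return []

# 9 groups from 10 scans; groups 1..7 (the patent's 2nd..8th groups)
# qualify, so the map is updated from scans 1..8 (the 2nd..9th scans)
disp = [0.9, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.1, 0.9]
print(scans_for_update(disp, preset_number=5, threshold=0.5))

# only two short qualifying runs -> fewer consecutive groups than the
# preset number, so the map is not updated
disp2 = [0.9, 0.1, 0.1, 0.9, 0.9, 0.1, 0.1, 0.9, 0.9]
print(scans_for_update(disp2, preset_number=5, threshold=0.5))  # -> []
```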
Based on the same inventive concept, an embodiment of the present application further provides a robot recharging device corresponding to the robot recharging method above. Since the device solves the problem on a principle similar to that of the method, its implementation may refer to the implementation of the method, and repeated details are omitted.
Referring to fig. 4, a robot recharging apparatus according to another embodiment of the present application includes:
a state type obtaining module 401, configured to obtain a current state type of the robot;
an environment data obtaining module 402, configured to obtain, if the state type indicates that the robot is in an idle state, environment data of an environment where the robot is currently located;
an accuracy level determining module 403, configured to calculate a current location of the robot according to environment data of a current environment where the robot is located, and determine an accuracy level of the location according to the environment data of the current environment where the robot is located;
the control module 404 is configured to control the robot to move when the accuracy level is smaller than a preset level, and determine a plurality of data acquisition moments according to a preset data acquisition interval;
a target position determining module 405, configured to obtain environment data corresponding to each data acquisition time of the robot during a moving process, and determine a target position of the robot according to the environment data corresponding to each data acquisition time;
and the first returning module 406 is used for controlling the robot to return to the charging pile according to the target position of the robot and a pre-stored map.
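A hypothetical end-to-end sketch of the decision flow implemented by modules 401 to 406: check the idle state, localize, and either return directly when the accuracy grade is high enough or pick the best candidate position found while moving. The functional style, data shapes, and grade thresholds are assumptions made for illustration, not the patent's API.

```python
# Sketch of the module 401-406 flow: idle check -> localize -> either
# trust the current position or select the best candidate seen while
# moving. Scans and the map are modeled as sets of grid cells.

def accuracy_grade(scan, map_cells):
    """Illustrative percentage-range grading (thresholds invented)."""
    pct = len(set(scan) & map_cells) / max(len(scan), 1)
    return 3 if pct >= 0.9 else (2 if pct >= 0.6 else 1)

def recharge_target(state, current, moved_candidates, map_cells, preset_grade=3):
    """current and each moved_candidates entry are (position, scan)
    pairs. Returns the position from which the robot should navigate
    back to the pile, or None if the robot is not idle."""
    if state != "idle":
        return None                 # module 401/402: only recharge when idle
    pos, scan = current
    if accuracy_grade(scan, map_cells) >= preset_grade:
        return pos                  # localization trusted: return directly
    # accuracy too low: use the best-localized position seen while moving
    return max(moved_candidates,
               key=lambda c: accuracy_grade(c[1], map_cells))[0]

map_cells = {(x, y) for x in range(5) for y in range(5)}
current = ((0, 0), [(0, 0), (9, 9), (8, 8)])       # 1 of 3 matched: grade 1
moved = [((1, 1), [(1, 1), (1, 2), (9, 9)]),        # 2 of 3 matched: grade 2
         ((2, 2), [(2, 2), (2, 3), (3, 2)])]        # 3 of 3 matched: grade 3
print(recharge_target("idle", current, moved, map_cells))  # -> (2, 2)
```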
In an embodiment, the accuracy level determining module 403 is specifically configured to:
determining the amount of the environment data of the robot's current environment that is included in the environment data corresponding to the pre-stored map;
and determining the accuracy grade of the robot's current position according to that data amount.
In another embodiment, the target position determining module 405 is specifically configured to:
acquiring the environment data corresponding to each data acquisition time of the robot during the moving process;
calculating a candidate position of the robot at each data acquisition time by using the environment data corresponding to that time;
for each candidate position, calculating the candidate data amount, i.e., the amount of the environment data acquired by the robot at the candidate position that is included in the environment data corresponding to the pre-stored map, and determining the accuracy grade of the candidate position according to the candidate data amount;
and screening the candidate position with the highest accuracy grade from all the candidate positions and taking it as the target position of the robot.
In an embodiment, the first returning module 406 is specifically configured to:
matching the target position with all the map position information included in the pre-stored map to obtain the map position information with the highest degree of matching to the target position;
and controlling the robot to return to the charging pile according to the map position information and the pre-stored map.
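The matching performed by the first returning module can be realized in many ways. One simple assumed realization treats the matching degree as inverse Euclidean distance and returns the stored map position closest to the target; this is an illustration only, since the patent does not define the matching degree.

```python
import math

# Assumed realization of "highest matching degree": the stored map
# position nearest (in Euclidean distance) to the target position.

def best_map_position(target, map_positions):
    """map_positions: iterable of (x, y) positions from the pre-stored
    map. Returns the entry closest to the target position."""
    return min(map_positions, key=lambda p: math.dist(p, target))

map_positions = [(0.0, 0.0), (2.0, 3.0), (5.0, 1.0)]
print(best_map_position((1.8, 2.6), map_positions))  # -> (2.0, 3.0)
```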
In another embodiment, the robot recharging apparatus further includes:
and a second returning module 407, configured to control the robot to return to the charging pile according to the current location of the robot and the pre-stored map.
In still another embodiment, the robot recharging apparatus further includes:
the recording module 408 is configured to record multiple sets of environmental data collected during the process that the robot returns to the charging pile from the target position;
and an updating module 409, configured to update the pre-stored map by using the multiple sets of environmental data.
In another embodiment, the update module 409 is specifically configured to:
taking every two consecutive groups of environment data among the multiple groups of environment data as a candidate data group;
for each candidate data group, obtaining the displacement of the robot according to the two groups of environment data included in the candidate data group and a preset algorithm; the displacement is the straight-line distance between the positions of the robot when it collected those two groups of environment data;
and updating the pre-stored map by using a preset number of consecutive candidate data groups whose displacements are all smaller than a preset threshold.
Fig. 5 illustrates the structure of an electronic device 500 according to an embodiment of the present invention. The electronic device 500 includes: at least one processor 501, at least one network interface 504 or other user interface 503, a memory 505, and at least one communication bus 502. The communication bus 502 provides communication connections between these components. The electronic device 500 optionally contains a user interface 503 including a display (e.g., touchscreen, LCD, CRT, holographic display, or projector) and a keyboard or pointing device (e.g., mouse, trackball, touch pad, or touchscreen).
In some embodiments, memory 505 stores the following elements, executable modules or data structures, or a subset thereof, or an expanded set thereof:
an operating system 5051, which includes various system programs for implementing various basic services and processing hardware-based tasks;
the application module 5052 contains various applications, such as a desktop (launcher), a Media Player (Media Player), a Browser (Browser), etc., for implementing various application services.
In an embodiment of the present invention, processor 501, by invoking programs or instructions stored by memory 505, is configured to:
acquiring the current state type of the robot;
if the state type indicates that the robot is in an idle state, acquiring environmental data of the current environment where the robot is located;
calculating to obtain the current position of the robot according to the environmental data of the current environment of the robot, and determining the accuracy grade of the position according to the environmental data of the current environment of the robot;
when the accuracy grade is smaller than a preset grade, determining a plurality of data acquisition moments according to a preset data acquisition interval, and controlling the robot to move;
acquiring environmental data corresponding to each data acquisition time of the robot in the moving process, and determining the target position of the robot according to the environmental data corresponding to each data acquisition time;
and controlling the robot to return to the charging pile according to the target position of the robot and a prestored map.
Optionally, in the method executed by the processor 501, determining the accuracy grade of the position according to the environment data of the environment in which the robot is currently located includes:
determining the amount of the environment data of the robot's current environment that is included in the environment data corresponding to the pre-stored map;
and determining the accuracy grade of the robot's current position according to that data amount.
Optionally, in the method executed by the processor 501, the acquiring environmental data corresponding to each data acquisition time of the robot in the moving process, and determining the target position of the robot according to the environmental data corresponding to each data acquisition time includes:
acquiring the environment data corresponding to each data acquisition time of the robot during the moving process;
calculating a candidate position of the robot at each data acquisition time by using the environment data corresponding to that time;
for each candidate position, calculating the candidate data amount, i.e., the amount of the environment data acquired by the robot at the candidate position that is included in the environment data corresponding to the pre-stored map, and determining the accuracy grade of the candidate position according to the candidate data amount;
and screening the candidate position with the highest accuracy grade from all the candidate positions and taking it as the target position of the robot.
Optionally, in the method executed by the processor 501, the controlling the robot to return to the charging pile according to the target position of the robot and a pre-stored map includes:
matching the target position with all the map position information included in the pre-stored map to obtain the map position information with the highest degree of matching to the target position;
and controlling the robot to return to the charging pile according to the map position information and the pre-stored map.
Optionally, the method executed by the processor 501 further includes, when the accuracy grade is greater than or equal to the preset grade:
and controlling the robot to return to the charging pile according to the current position of the robot and the pre-stored map.
Optionally, the method executed by the processor 501 further includes:
recording a plurality of groups of environmental data acquired in the process that the robot returns to the charging pile from the target position;
and updating the pre-stored map by using the plurality of groups of environmental data.
Optionally, in the method executed by the processor 501, updating the pre-stored map by using the multiple groups of environment data includes:
taking every two consecutive groups of environment data among the multiple groups of environment data as a candidate data group;
for each candidate data group, obtaining the displacement of the robot according to the two groups of environment data included in the candidate data group and a preset algorithm; the displacement is the straight-line distance between the positions of the robot when it collected those two groups of environment data;
and updating the pre-stored map by using a preset number of consecutive candidate data groups whose displacements are all smaller than a preset threshold.
The computer program product of the robot recharging method and device provided in the embodiments of the present application includes a computer-readable storage medium storing program code; the instructions included in the program code may be used to execute the methods described in the foregoing method embodiments. For specific implementation, refer to those method embodiments; details are not repeated here.
Specifically, the storage medium may be a general storage medium, such as a removable disk or a hard disk. When the computer program on the storage medium is executed, the robot recharging method can be performed, so that the robot is controlled to return to the charging pile for charging while idle and is not left unusable at its next use because of insufficient battery charge.
The functions, if implemented in the form of software functional units and sold or used as a stand-alone product, may be stored in a non-volatile computer-readable storage medium executable by a processor. Based on such understanding, the technical solution of the present application or portions thereof that substantially contribute to the prior art may be embodied in the form of a software product stored in a storage medium and including instructions for causing a computer device (which may be a personal computer, a server, or a network device) to execute all or part of the steps of the method according to the embodiments of the present application. And the aforementioned storage medium includes: various media capable of storing program codes, such as a usb disk, a removable hard disk, a Read-Only Memory (ROM), a Random Access Memory (RAM), a magnetic disk, or an optical disk.
Finally, it should be noted that the above-mentioned embodiments are only specific embodiments of the present application, used to illustrate rather than limit its technical solutions, and the scope of the present application is not limited thereto. Although the present application has been described in detail with reference to the foregoing embodiments, those skilled in the art should understand that anyone familiar with the art may, within the technical scope disclosed herein, modify the technical solutions described in the foregoing embodiments, readily conceive of changes, or make equivalent substitutions of some technical features; such modifications, changes, or substitutions do not depart from the spirit and scope of the embodiments of the present application and are intended to be covered by its scope. Therefore, the protection scope of the present application shall be subject to the protection scope of the claims.
Claims (8)
1. A robot recharging method, comprising:
acquiring the current state type of the robot;
if the state type indicates that the robot is in an idle state, acquiring environmental data of the current environment where the robot is located;
calculating to obtain the current position of the robot according to the environmental data of the current environment of the robot, and determining the accuracy grade of the position according to the environmental data of the current environment of the robot;
when the accuracy grade is smaller than a preset grade, determining a plurality of data acquisition moments according to a preset data acquisition interval, and controlling the robot to move;
acquiring environmental data corresponding to each data acquisition time of the robot in the moving process, and determining the target position of the robot according to the environmental data corresponding to each data acquisition time;
controlling the robot to return to the charging pile according to the target position of the robot and a prestored map;
wherein said determining a level of accuracy of said position from environmental data of an environment in which said robot is currently located comprises:
determining the amount of the environment data of the environment where the robot is currently located that is included in the environment data corresponding to the pre-stored map;
calculating the percentage of the data volume in the environmental data of the environment where the robot is currently located;
searching a percentage range in which the percentage falls, and taking an accuracy grade corresponding to the percentage range as an accuracy grade of the current position of the robot; wherein each accuracy level corresponds to a percentage range.
2. The method of claim 1, wherein the acquiring environmental data corresponding to each data acquisition time of the robot in the moving process, and determining the target position of the robot according to the environmental data corresponding to each data acquisition time comprises:
acquiring the environment data corresponding to each data acquisition time of the robot during the moving process;
calculating a candidate position of the robot at each data acquisition time by using the environment data corresponding to that time;
for each candidate position, calculating the candidate data amount, i.e., the amount of the environment data acquired by the robot at the candidate position that is included in the environment data corresponding to the pre-stored map, and determining the accuracy grade of the candidate position according to the candidate data amount;
and screening the candidate position with the highest accuracy grade from all the candidate positions and taking it as the target position of the robot.
3. The method of claim 1, wherein the controlling the robot to return to the charging pile according to the target position of the robot and a pre-stored map comprises:
matching the target position with all the map position information included in the pre-stored map to obtain the map position information with the highest degree of matching to the target position;
and controlling the robot to return to the charging pile according to the map position information and the pre-stored map.
4. The method of claim 1, further comprising, when the accuracy level is greater than or equal to the preset level:
and controlling the robot to return to the charging pile according to the current position of the robot and the pre-stored map.
5. The method of claim 1, further comprising:
recording a plurality of groups of environmental data acquired in the process that the robot returns to the charging pile from the target position;
and updating the pre-stored map by using the plurality of groups of environmental data.
6. The method of claim 5, wherein said updating said pre-stored map with said plurality of sets of environmental data comprises:
taking every two consecutive groups of environment data among the multiple groups of environment data as a candidate data group;
for each candidate data group, obtaining the displacement of the robot according to the two groups of environment data included in the candidate data group and a preset algorithm; the displacement is the straight-line distance between the positions of the robot when it collected those two groups of environment data;
and updating the pre-stored map by using a preset number of consecutive candidate data groups whose displacements are all smaller than a preset threshold.
7. A robot recharging device, comprising:
the state type acquisition module is used for acquiring the current state type of the robot;
the environment data acquisition module is used for acquiring the environment data of the current environment of the robot if the state type indicates that the robot is in an idle state;
the accuracy grade determining module is used for calculating the current position of the robot according to the environmental data of the current environment of the robot and determining the accuracy grade of the position according to the environmental data of the current environment of the robot;
the control module is used for controlling the robot to move when the accuracy grade is smaller than a preset grade, and determining a plurality of data acquisition moments according to a preset data acquisition interval;
the target position determining module is used for acquiring environmental data corresponding to each data acquisition time of the robot in the moving process and determining the target position of the robot according to the environmental data corresponding to each data acquisition time;
the first returning module is used for controlling the robot to return to the charging pile according to the target position of the robot and a prestored map;
wherein, the accuracy grade determining module is specifically configured to:
determining the amount of the environment data of the current environment where the robot is located that is included in the environment data corresponding to the pre-stored map;
calculating the percentage of the data volume in the environmental data of the current environment where the robot is located;
searching a percentage range in which the percentage falls, and taking an accuracy grade corresponding to the percentage range as an accuracy grade of the current position of the robot; wherein each accuracy level corresponds to a percentage range.
8. The robot recharging device of claim 7, wherein the target location determining module is specifically configured to:
acquiring the environment data corresponding to each data acquisition time of the robot during the moving process;
calculating a candidate position of the robot at each data acquisition time by using the environment data corresponding to that time;
for each candidate position, calculating the candidate data amount, i.e., the amount of the environment data acquired by the robot at the candidate position that is included in the environment data corresponding to the pre-stored map, and determining the accuracy grade of the candidate position according to the candidate data amount;
and screening the candidate position with the highest accuracy grade from all the candidate positions and taking it as the target position of the robot.
Priority Applications (1)
Application Number | Priority Date | Filing Date | Title |
---|---|---|---|
CN201910372448.9A CN110032196B (en) | 2019-05-06 | 2019-05-06 | Robot recharging method and device |
Publications (2)
Publication Number | Publication Date |
---|---|
CN110032196A (en) | 2019-07-19
CN110032196B (en) | 2022-03-29
Family
ID=67241392
Family Applications (1)
Application Number | Title | Priority Date | Filing Date |
---|---|---|---|
CN201910372448.9A (Active) | Robot recharging method and device | 2019-05-06 | 2019-05-06
Country Status (1)
Country | Link |
---|---|
CN (1) | CN110032196B (en) |
Families Citing this family (3)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
CN110880798A (en) * | 2019-11-26 | 2020-03-13 | 爱菲力斯(深圳)科技有限公司 | Robot charging method, robot charging device, robot and system |
CN113031613A (en) * | 2021-03-11 | 2021-06-25 | 上海有个机器人有限公司 | Automatic charging method of robot, robot and waybill scheduling system |
CN114789440B (en) * | 2022-04-22 | 2024-02-20 | 深圳市正浩创新科技股份有限公司 | Target docking method, device, equipment and medium based on image recognition |
Citations (12)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
JP2007322138A (en) * | 2006-05-30 | 2007-12-13 | Toyota Motor Corp | Moving device, and own position estimation method for moving device |
US9744665B1 (en) * | 2016-01-27 | 2017-08-29 | X Development Llc | Optimization of observer robot locations |
CN107154664A (en) * | 2017-07-13 | 2017-09-12 | 湖南万为智能机器人技术有限公司 | Multirobot automatic charging dispatching method |
CN107636548A (en) * | 2015-05-12 | 2018-01-26 | 三星电子株式会社 | Robot and its control method |
CN107689075A (en) * | 2017-08-30 | 2018-02-13 | 北京三快在线科技有限公司 | Generation method, device and the robot of navigation map |
CN107817801A (en) * | 2017-11-03 | 2018-03-20 | 深圳市杉川机器人有限公司 | Robot control method, device, robot and cradle |
CN107945233A (en) * | 2017-12-04 | 2018-04-20 | 深圳市沃特沃德股份有限公司 | Vision sweeping robot and its recharging method |
CN107943054A (en) * | 2017-12-20 | 2018-04-20 | 北京理工大学 | Automatic recharging method based on robot |
CN108829111A (en) * | 2018-08-07 | 2018-11-16 | 北京云迹科技有限公司 | Multirobot uses the dispatching method and device of more charging piles |
CN108983761A (en) * | 2017-06-01 | 2018-12-11 | 深圳乐动机器人有限公司 | Method, system and the robot of robot searching charging unit |
CN109211237A (en) * | 2018-08-07 | 2019-01-15 | 北京云迹科技有限公司 | Robot location's bearing calibration and device based on more charging piles |
CN109460020A (en) * | 2018-10-31 | 2019-03-12 | 北京猎户星空科技有限公司 | Robot map sharing method, device, robot and system |
Family Cites Families (7)
Publication number | Priority date | Publication date | Assignee | Title |
---|---|---|---|---|
EP1541295A1 (en) * | 2002-08-26 | 2005-06-15 | Sony Corporation | Environment identification device, environment identification method, and robot device |
US8180486B2 (en) * | 2006-10-02 | 2012-05-15 | Honda Motor Co., Ltd. | Mobile robot and controller for same |
US9233472B2 (en) * | 2013-01-18 | 2016-01-12 | Irobot Corporation | Mobile robot providing environmental mapping for household environmental control |
EP3103043B1 (en) * | 2014-09-05 | 2018-08-15 | SZ DJI Technology Co., Ltd. | Multi-sensor environmental mapping |
CN106743321B (en) * | 2016-11-16 | 2020-02-28 | 京东方科技集团股份有限公司 | Carrying method and carrying system |
US10222215B2 (en) * | 2017-04-21 | 2019-03-05 | X Development Llc | Methods and systems for map generation and alignment |
US10761541B2 (en) * | 2017-04-21 | 2020-09-01 | X Development Llc | Localization with negative mapping |
Non-Patent Citations (2)
Title |
---|
"An Automatic Recharging Method for Sweeping Robots Based on Map Construction and Angle Sensors"; Xiao Qijun et al.; Machinery & Electronics; Feb. 2019; pp. 78-80 *
"Design of a Coal Mine Detection Robot Capable of Automatic Retraction"; Wang Zhitong et al.; Industry and Mine Automation; May 2018; pp. 6-12 *
Similar Documents
Publication | Publication Date | Title |
---|---|---|
CN110032196B (en) | Robot recharging method and device | |
CN104699099A (en) | Method and apparatus for simultaneous localization and mapping of mobile robot environment | |
CN108241370B (en) | Method and device for acquiring obstacle avoidance path through grid map | |
CN102520858B (en) | Mobile terminal application control method and device | |
CN109087335A (en) | A kind of face tracking method, device and storage medium | |
CN113361999A (en) | Information generation method and device | |
KR102118357B1 (en) | System for structuring observation data and platform for mobile mapping or autonomous vehicle | |
CN110824491A (en) | Charging pile positioning method and device, computer equipment and storage medium | |
CN108398945A (en) | A kind of method and apparatus executing task for mobile robot | |
US20160147373A1 (en) | Input device, and control method and program therefor | |
CN104062669A (en) | Positioning processing apparatus and positioning processing method | |
CN106021007A (en) | Method for detecting fault of terminal and terminal | |
CN110850882A (en) | Charging pile positioning method and device of sweeping robot | |
CN112886670A (en) | Charging control method and device for robot, robot and storage medium | |
CN104951055A (en) | Method and device for setting operation mode of equipment | |
CN112084853A (en) | Footprint prediction method, footprint prediction device and humanoid robot | |
CN104135718A (en) | Position information obtaining method and device | |
CA2894863A1 (en) | Indoor localization using crowdsourced data | |
CN112540613A (en) | Method and device for searching recharging seat position and mobile robot | |
US9927917B2 (en) | Model-based touch event location adjustment | |
JP2021135473A (en) | Search support system, search support method | |
CN110556893A (en) | Robot automatic charging method, system and control background | |
JP2012185990A (en) | Secondary battery replacing method and device for acquiring secondary battery for replacement | |
CN111541844B (en) | Object distance prediction method and device for pan-tilt control camera and storage equipment | |
CN114355903A (en) | Robot automatic charging method and device, computer equipment and storage medium |
Legal Events
Date | Code | Title | Description
---|---|---|---
| PB01 | Publication | |
| SE01 | Entry into force of request for substantive examination | |
| CB02 | Change of applicant information | Address after: Room 702, 7th floor, No. 67, Beisihuan West Road, Haidian District, Beijing 100089; Applicant after: Beijing Yunji Technology Co., Ltd. Address before: Room 201, Building 4, Courtyard 8, Dongbeiwang West Road, Haidian District, Beijing; Applicant before: BEIJING YUNJI TECHNOLOGY Co., Ltd. |
| GR01 | Patent grant | |