CN113138596A - Robot automatic charging method, system, terminal device and storage medium

Info

Publication number
CN113138596A
Authority
CN
China
Prior art keywords
point cloud
map
robot
charging
preset reference
Prior art date
Legal status
Pending
Application number
CN202110344749.8A
Other languages
Chinese (zh)
Inventor
李强
毕艳飞
柴黎林
李贝
Current Assignee
Ubtech Robotics Corp
Original Assignee
Ubtech Robotics Corp
Application filed by Ubtech Robotics Corp
Priority to CN202110344749.8A
Publication of CN113138596A

Classifications

    • G: PHYSICS
    • G05: CONTROLLING; REGULATING
    • G05D: SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00: Control of position, course, altitude or attitude of land, water, air or space vehicles, e.g. using automatic pilots
    • G05D1/02: Control of position or course in two dimensions
    • G05D1/021: Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0225: Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving docking at a fixed facility, e.g. base station or loading bay
    • G05D1/0257: Control of position or course in two dimensions specially adapted to land vehicles using a radar

Landscapes

  • Engineering & Computer Science (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application provides a robot automatic charging method, system, terminal device and storage medium. The method comprises: generating a charging navigation path and controlling the robot to move to its path end point; acquiring a point cloud map of the environment in which the robot is currently located; calculating point cloud errors between the point cloud map and different preset reference maps; and calculating attitude transformation parameters between the point cloud map and the preset reference map corresponding to the minimum point cloud error, then controlling the robot to transform its attitude according to those parameters. Because the robot positions itself against the charging pile by transforming its attitude according to the calculated parameters, no marker features need to be arranged on the charging pile, i.e. no feature structure has to be deployed and installed on it, which simplifies the user's operation and improves the user experience.

Description

Robot automatic charging method, system, terminal device and storage medium
Technical Field
The application belongs to the technical field of automatic charging, and particularly relates to an automatic robot charging method, an automatic robot charging system, a terminal device and a storage medium.
Background
With the development of science and technology, more and more robots are entering public view: they can be seen in banks, restaurants, hospitals and the like, and some service robots have even entered people's homes. Since service robots are mostly powered by storage batteries, in daily use the robot must dock with a charging pile to recharge whenever its battery is exhausted or about to be exhausted. Having to charge the robot manually several times a day greatly degrades the user's experience, so an automatic charging function is highly necessary.
In existing automatic robot charging schemes, the robot and the charging pile are positioned for charging by means of marker features; for example, mechanical structure features such as an arc or a deep groove are arranged on the charging pile to realize charging positioning between the robot and the pile. Because positioning relies on marker features, the charging pile must be installed with these structural arrangements, which makes the user's operation cumbersome and degrades the user experience.
Disclosure of Invention
The embodiments of the present application provide a robot automatic charging method, system, terminal device and storage medium, aiming to solve the problem that, in existing automatic robot charging schemes, realizing charging positioning between the robot and the charging pile by means of marker features makes the user's operation cumbersome and degrades the user experience.
In a first aspect, an embodiment of the present application provides an automatic robot charging method, where the method includes:
if a charging instruction is received, generating a charging navigation path, and controlling the robot to move to a path end point of the charging navigation path according to the charging navigation path;
acquiring a point cloud map of an environment where the robot is currently located, wherein the point cloud map comprises first data points, and the first data points are data vectors corresponding to obstacles in the current environment;
calculating point cloud errors between the point cloud map and different preset reference maps, wherein each preset reference map is the point cloud map captured when the robot and one charging pile are in the charging docking state, the point cloud error represents the error between the average point cloud distance of the point cloud map and that of the preset reference map, and the point cloud distance represents the distance between an obstacle and the map origin;
and calculating attitude transformation parameters between the point cloud map and the preset reference map corresponding to the minimum point cloud error, and controlling the robot to carry out attitude transformation according to the attitude transformation parameters.
Compared with the prior art, the embodiments of the present application have the following advantages. If a charging instruction is received, the robot can be effectively controlled to move to the path end point by generating a charging navigation path. At the path end point, calculating the point cloud errors between the point cloud map and the different preset reference maps yields the errors between the average point cloud distance of the point cloud map and that of each preset reference map. Since a preset reference map is the point cloud map captured when the robot and a charging pile are in the charging docking state, the preset reference map corresponding to the minimum point cloud error is the one most similar to the current point cloud map. By calculating the attitude transformation parameters between the point cloud map and that preset reference map, the attitude transformation parameters between the robot's current attitude and the standard attitude in that reference map can be effectively obtained, and the robot is controlled to transform its attitude accordingly, achieving automatic charging positioning with respect to the charging pile.
Further, the calculating the point cloud error between the point cloud map and different preset reference maps comprises:
aiming at different preset reference maps, point cloud pairs are constructed according to the first data points and second data points in the preset reference maps, the second data points are data vectors corresponding to obstacles in the preset reference maps, and the similarity between the first data points and the second data points in the same point cloud pair is greater than a similarity threshold value;
calculating a rotation and translation matrix corresponding to the point cloud pair according to a least square function, wherein the rotation and translation matrix comprises a rotation matrix and a translation vector, and calculating the variation of the rotation and translation matrix;
if the variation of the rotation and translation matrix is smaller than a variation threshold, outputting the rotation and translation matrix;
if the variation of the rotation and translation matrix is larger than or equal to the variation threshold, performing position transformation on a first data point in the point cloud map according to the rotation and translation matrix obtained through calculation;
according to the data points in the point cloud map after the position transformation, continuing to execute the step of constructing point cloud pairs according to the first data points and the second data points in the preset reference map until the variation of the rotation-translation matrix is smaller than the variation threshold, and outputting the rotation-translation matrix;
and calculating point cloud errors between the point cloud map and different preset reference maps according to the output rotation and translation matrix.
Further, the calculation formula for calculating the point cloud error between the point cloud map and the different preset reference maps according to the output rotation and translation matrix is as follows:
E = P_ref - (R * P_sim + t)
where E is the point cloud error, P_ref is the average point cloud distance in the preset reference map corresponding to the minimum point cloud error, P_sim is the average point cloud distance in the point cloud map, R is the rotation matrix in the output rotation-translation matrix, and t is the translation vector in the output rotation-translation matrix.
Further, before calculating the point cloud error between the point cloud map and different preset reference maps, the method further comprises:
docking the robot with different charging piles for charging, and scanning radar images at different preset scanning angles to obtain radar point cloud images;
aiming at the same charging pile, combining different radar point cloud images obtained at different preset scanning angles to obtain a preset reference map, and obtaining an average value of point cloud distances in different radar point cloud images;
calculating the average of the point cloud distances in each radar point cloud image to obtain a reference average value, and setting the mean of these reference average values as the average point cloud distance of the preset reference map.
Further, the generating the charging navigation path includes:
acquiring the position coordinates of the robot, and calculating the moving distances between the position coordinates and different charging piles;
setting the charging pile corresponding to the minimum moving distance as the target charging pile, and acquiring a path end point lying in the direction between the target charging pile and the robot, wherein the distance between the path end point and the charging pile is equal to a preset distance;
and generating the charging navigation path according to the coordinates of the path end point and the position coordinates of the robot.
Further, the acquiring a point cloud map of an environment where the robot is currently located includes:
and controlling the robot to rotate by a preset angle at a preset angular resolution while scanning radar images, so as to obtain the point cloud map of the current environment.
Further, the calculating an attitude transformation parameter between the point cloud map and the preset reference map corresponding to the minimum point cloud error, and controlling the robot to perform attitude transformation according to the attitude transformation parameter includes:
acquiring horizontal transformation parameters, vertical transformation parameters and rotation angles in the output rotation and translation matrix to obtain the attitude transformation parameters;
and controlling the robot to perform horizontal posture transformation, vertical posture transformation and angle posture transformation according to the horizontal transformation parameters, the vertical transformation parameters and the rotation angle respectively.
In a second aspect, an embodiment of the present application provides an automatic robot charging system, including:
the navigation module is used for generating a charging navigation path if a charging instruction is received, and controlling the robot to move to a path end point of the charging navigation path according to the charging navigation path;
the point cloud map acquisition module is used for acquiring a point cloud map of the environment where the robot is currently located, wherein the point cloud map comprises first data points, and the first data points are data vectors corresponding to obstacles in the current environment;
the point cloud error calculation module is used for calculating point cloud errors between the point cloud map and different preset reference maps, wherein each preset reference map is the point cloud map captured when the robot and one charging pile are in the charging docking state, the point cloud error represents the error between the average point cloud distance of the point cloud map and that of the preset reference map, and the point cloud distance represents the distance between an obstacle and the map origin;
and the attitude transformation module is used for calculating an attitude transformation parameter between the point cloud map and the preset reference map corresponding to the minimum point cloud error, and controlling the robot to carry out attitude transformation according to the attitude transformation parameter.
In a third aspect, an embodiment of the present application provides a terminal device, which includes a memory, a processor, and a computer program stored in the memory and executable on the processor, and the processor executes the computer program to implement the method described above.
In a fourth aspect, the present application provides a storage medium storing a computer program, and when the computer program is executed by a processor, the computer program implements the method as described above.
In a fifth aspect, the present application provides a computer program product, when the computer program product runs on a terminal device, the terminal device is caused to execute the robot automatic charging method according to any one of the first aspect.
It is understood that the beneficial effects of the second aspect to the fifth aspect can be referred to the related description of the first aspect, and are not described herein again.
Drawings
In order to more clearly illustrate the technical solutions in the embodiments of the present application, the drawings used in the embodiments or the description of the prior art will be briefly described below.
Fig. 1 is a flowchart of an automatic robot charging method according to a first embodiment of the present disclosure;
fig. 2 is a flowchart of an automatic robot charging method according to a second embodiment of the present disclosure;
fig. 3 is a schematic structural diagram of an automatic robot charging system according to a third embodiment of the present disclosure;
fig. 4 is a schematic structural diagram of a terminal device according to a fourth embodiment of the present application.
Detailed Description
In the following description, for purposes of explanation and not limitation, specific details are set forth, such as particular system structures, techniques, etc. in order to provide a thorough understanding of the embodiments of the present application. It will be apparent, however, to one skilled in the art that the present application may be practiced in other embodiments that depart from these specific details. In other instances, detailed descriptions of well-known systems, devices, circuits, and methods are omitted so as not to obscure the description of the present application with unnecessary detail.
It will be understood that the terms "comprises" and/or "comprising," when used in this specification and the appended claims, specify the presence of stated features, integers, steps, operations, elements, and/or components, but do not preclude the presence or addition of one or more other features, integers, steps, operations, elements, components, and/or groups thereof.
It should also be understood that the term "and/or" as used in this specification and the appended claims refers to and includes any and all possible combinations of one or more of the associated listed items.
As used in this specification and the appended claims, the term "if" may be interpreted contextually as "when", "upon", "in response to determining" or "in response to detecting". Similarly, the phrase "if it is determined" or "if a [described condition or event] is detected" may be interpreted contextually to mean "upon determining", "in response to determining", "upon detecting [the described condition or event]" or "in response to detecting [the described condition or event]".
Furthermore, in the description of the present application and the appended claims, the terms "first," "second," "third," and the like are used for distinguishing between descriptions and not necessarily for describing or implying relative importance.
Reference throughout this specification to "one embodiment" or "some embodiments," or the like, means that a particular feature, structure, or characteristic described in connection with the embodiment is included in one or more embodiments of the present application. Thus, appearances of the phrases "in one embodiment," "in some embodiments," "in other embodiments," or the like, in various places throughout this specification are not necessarily all referring to the same embodiment, but rather "one or more but not all embodiments" unless specifically stated otherwise. The terms "comprising," "including," "having," and variations thereof mean "including, but not limited to," unless expressly specified otherwise.
Example one
Referring to fig. 1, a flowchart of an automatic robot charging method according to a first embodiment of the present application is shown, including the steps of:
step S10, if a charging instruction is received, a charging navigation path is generated, and the robot is controlled to move to the path end point of the charging navigation path according to the charging navigation path;
when the fact that the electric quantity of the robot is smaller than the electric quantity threshold value is detected, the fact that the charging instruction is received is judged, a charging navigation path is generated, or the charging instruction sent manually by a user is received, the charging navigation path is used for controlling the robot to move to a corresponding posture change point, the posture change point is a path end point of the charging navigation path, and the robot is controlled to move to the corresponding posture change point, so that the space required by the follow-up robot during posture change is guaranteed.
Specifically, in this step, the generating a charging navigation path includes:
acquiring the position coordinates of the robot, and calculating the moving distances between the position coordinates and different charging piles;
the robot is provided with a positioning device, and based on the positioning device, the position coordinate of the robot can be effectively obtained;
setting the charging pile corresponding to the minimum moving distance as the target charging pile, and acquiring the path end point lying in the direction between the target charging pile and the robot;
wherein the distance between the path end point and the charging pile equals a preset distance, which can be set as required; in this embodiment, the preset distance may be set to at least 1 cm and at most 30 cm. In this step, by acquiring the path end point in the direction between the target charging pile and the robot, the position of the charging pile itself is effectively prevented from being set directly as the path end point, ensuring the space required for the robot's subsequent posture transformation.
And generating the charging navigation path according to the coordinates of the path end point and the position coordinates of the robot, wherein the robot can be effectively controlled to move to the path end point according to the generated charging navigation path.
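As a minimal illustrative sketch of this end-point selection (the straight-line assumption, the function name and the 0.3 m default preset distance are assumptions, not from the patent text), the computation might look like:

```python
import numpy as np

def plan_charging_end_point(robot_xy, pile_positions, preset_distance=0.3):
    """Pick the charging pile with the minimum moving distance and place the
    path end point preset_distance away from it, on the line toward the
    robot. Sketch only; a real planner would also route around obstacles."""
    robot_xy = np.asarray(robot_xy, dtype=float)
    piles = np.asarray(pile_positions, dtype=float)
    moving_distances = np.linalg.norm(piles - robot_xy, axis=1)
    target = piles[np.argmin(moving_distances)]        # target charging pile
    direction = (robot_xy - target) / np.linalg.norm(robot_xy - target)
    end_point = target + preset_distance * direction   # path end point
    return target, end_point

# e.g. two piles; the closer one at (2, 0) is chosen
target, end = plan_charging_end_point([0.0, 0.0], [[2.0, 0.0], [5.0, 5.0]])
```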
Step S20, acquiring a point cloud map of the current environment of the robot;
in the step, the point cloud map is obtained by controlling the image scanning device to perform radar scanning on the current environment of the robot, and the point cloud map includes first data points, where the first data points are data vectors corresponding to obstacles in the current environment.
Specifically, in this step, acquiring the point cloud map of the environment in which the robot is currently located comprises: taking the path end point as the origin and a preset direction as the positive direction, controlling the robot to rotate by a preset angle at a preset angular resolution while scanning radar images, thereby obtaining the point cloud map of the current environment. The preset direction, preset angular resolution and preset angle can all be set as required; for example, the counterclockwise direction can be set as the positive direction and 360° as the preset angle.
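For illustration only, a 360° sweep at a 1° angular resolution could be turned into such a point cloud as follows (the function name and the assumption of one range reading per angular step are not from the patent):

```python
import numpy as np

def scan_to_point_cloud(ranges, angular_resolution_deg=1.0):
    """Convert one full rotation of radar range readings into a 2D point
    cloud, with the path end point as the origin (0, 0) and the preset
    positive direction as 0 degrees. Invalid (infinite) returns are dropped."""
    ranges = np.asarray(ranges, dtype=float)
    angles = np.deg2rad(np.arange(len(ranges)) * angular_resolution_deg)
    valid = np.isfinite(ranges)
    return np.stack([ranges[valid] * np.cos(angles[valid]),
                     ranges[valid] * np.sin(angles[valid])], axis=1)
```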
Step S30, calculating point cloud errors between the point cloud map and different preset reference maps;
the system comprises a robot, a preset reference map, a point cloud error and a point cloud distance, wherein the preset reference map is the point cloud map when the robot and a charging pile are in a charging butt joint state, the point cloud error is used for representing a distance error of an average value of point cloud distances between the point cloud map and the preset reference map, and the point cloud distance is used for representing a distance between an obstacle and a map origin.
Further, in this embodiment, before calculating the point cloud error between the point cloud map and the different preset reference maps, the method further includes:
the robot and the different charging piles are charged and butted, radar image scanning is carried out according to different preset scanning angles, a radar point cloud image is obtained, the preset scanning angles can be set according to requirements, and in the step, the maximum preset scanning Angle is AnglemaxThe minimum preset scan Angle is AngleminFrom AngleminTo AnglemaxRespectively acquiring radar point cloud images corresponding to one frame;
for the same charging pile, the different radar point cloud images obtained at the different preset scanning angles are combined to obtain a preset reference map, and the average point cloud distance in each radar point cloud image is obtained. That is, for each charging pile the frames acquired from Angle_min to Angle_max are combined into the preset reference map corresponding to that pile, so that each charging pile corresponds to one preset reference map;
the average of the point cloud distances corresponding to the obstacles in each radar point cloud image is calculated to obtain a reference average value, and, for the same preset reference map, the mean of these reference average values is set as the average point cloud distance of that preset reference map;
for example, for charging post A1, from AngleminTo AnglemaxThe radar point cloud image obtained after radar scanning comprises a radar point cloud image a1, a radar point cloud image a2 and a radar point cloud image a3, the radar point cloud image a1, the radar point cloud image a2 and the radar point cloud image a3 are combined to obtain a preset reference map C1, average values of the point cloud distances corresponding to obstacles in the radar point cloud image a1, the radar point cloud image a2 and the radar point cloud image a3 are respectively calculated to obtain a reference average value b1, a reference average value b2 and a reference average value b3, and the average values among the reference average value b1, the reference average value b2 and the reference average value b3 are set as the average value of the point cloud distances in the preset reference map C1.
Step S40, calculating attitude transformation parameters between the point cloud map and the preset reference map corresponding to the minimum point cloud error, and controlling the robot to perform attitude transformation according to the attitude transformation parameters;
the preset reference map is the point cloud map when the robot and the charging pile are in a charging butt joint state, when the point cloud error is the smallest, the point cloud map is judged to be more similar to the preset reference map corresponding to the smallest point cloud error, namely, the current attitude of the robot is more similar to the standard attitude in the preset reference map, and the attitude transformation parameter between the current attitude of the robot and the standard attitude in the preset reference map corresponding to the smallest point cloud error can be effectively calculated by calculating the attitude transformation parameter between the point cloud map and the preset reference map corresponding to the smallest point cloud error.
Specifically, in this step, the calculating an attitude transformation parameter between the point cloud map and the preset reference map corresponding to the minimum point cloud error, and controlling the robot to perform attitude transformation according to the attitude transformation parameter includes:
the robot is controlled to perform horizontal posture transformation, vertical posture transformation and angle posture transformation according to the horizontal transformation parameter, the vertical transformation parameter and the rotation angle respectively, so that the robot can be accurately in charging butt joint with a charging pile, and the accuracy of charging positioning in the automatic charging process of the robot is improved.
In this embodiment, if a charging instruction is received, the robot can be effectively controlled to move to the path end point by generating a charging navigation path. Calculating the point cloud errors between the point cloud map and the different preset reference maps yields the errors between their average point cloud distances. Since a preset reference map is the point cloud map captured when the robot and a charging pile are in the charging docking state, the preset reference map corresponding to the minimum point cloud error is the one most similar to the current point cloud map. The attitude transformation parameters between the robot's current attitude and the standard attitude in that reference map can therefore be effectively calculated, and the robot is controlled to transform its attitude accordingly, achieving automatic charging positioning with respect to the charging pile. The embodiment of the present application requires no marker features on the charging pile, i.e. no feature structure needs to be deployed and installed on the charging pile, which simplifies the user's operation and improves the user experience.
Example two
Referring to fig. 2, it is a flowchart of an automatic robot charging method according to a second embodiment of the present application, where the second embodiment is used to refine step S30, and includes:
step S31, aiming at different preset reference maps, point cloud pairs are constructed according to the first data points and second data points in the preset reference maps;
for example, when the preset reference map includes a preset reference map C1, a preset reference map C2 and a preset reference map C3, point-to-point construction is performed on the first data point and second data points in the preset reference map C1, the preset reference map C2 and the preset reference map C3 respectively to obtain a point cloud pair e1, a point cloud pair e2 and a point cloud pair e 3;
specifically, the similarity between a first data point and a second data point in the same point cloud pair is greater than a similarity threshold, in the step, the distances (similarities) between the first data point and different second data points are respectively calculated according to an Euclidean distance formula, the second data point with the similarity greater than the similarity threshold is set as a matching point of the first data point, and the point cloud pair is constructed according to the first data point and the matching point.
Optionally, in this step, for the same preset reference map, a second data point corresponding to the maximum similarity of the first data point in the preset reference map is set as a matching point of the first data point, and the point cloud pair is constructed according to the first data point and the matching point.
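A brute-force sketch of this pair construction, using Euclidean distance as the similarity measure as the embodiment describes (smaller distance means higher similarity, so the similarity threshold becomes a maximum matching distance; the 0.5 m default is an assumption):

```python
import numpy as np

def build_point_cloud_pairs(source, reference, max_match_distance=0.5):
    """Match each first data point (row of source) to its nearest second data
    point (row of reference) and keep only pairs closer than the threshold.
    O(N*M) for clarity; a KD-tree would be used in practice."""
    dists = np.linalg.norm(source[:, None, :] - reference[None, :, :], axis=2)
    nearest = dists.argmin(axis=1)
    keep = dists[np.arange(len(source)), nearest] < max_match_distance
    return source[keep], reference[nearest[keep]]
```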
Step S32, calculating a corresponding rotation and translation matrix of the point cloud pair according to a least square function, and calculating the variation of the rotation and translation matrix;
in the step, whether parameters in the rotation and translation matrix are converged can be effectively judged by calculating the variation of the rotation and translation matrix, that is, when the variation of the rotation and translation matrix is smaller than a variation threshold, the rotation and translation matrix is judged to be converged, and when the variation of the rotation and translation matrix is larger than or equal to the variation threshold, the rotation and translation matrix is judged to be not converged.
Specifically, in this step, for different preset reference maps, a corresponding rotation-translation matrix of the corresponding point cloud pair is calculated according to a least square function, and a variation of the rotation-translation matrix is calculated.
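The patent only names a least square function; the closed-form SVD solution below (Kabsch/Umeyama) is the standard way to minimise that least-squares objective for matched pairs, and is offered as an assumed implementation, not as the patent's exact computation:

```python
import numpy as np

def fit_rigid_transform(src, dst):
    """Least-squares rotation matrix R and translation vector t aligning
    matched point pairs src -> dst (both (N, 2) arrays)."""
    src_c, dst_c = src.mean(axis=0), dst.mean(axis=0)
    H = (src - src_c).T @ (dst - dst_c)     # cross-covariance of centred pairs
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against a reflection
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = dst_c - R @ src_c
    return R, t
```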
Step S33, if the variation of the rotation and translation matrix is smaller than the variation threshold, outputting the rotation and translation matrix;
if the variation of the rotation and translation matrix is smaller than the variation threshold, the variation of the first data point in the point cloud map after being changed by the rotation and translation matrix is judged to be smaller than the variation threshold, namely, the posture of the robot after being changed by the corresponding posture change parameter of the rotation and translation matrix is judged, and the posture similarity between the robot and the standard posture of the robot in the preset reference map is larger than the preset similarity, so that the accuracy of subsequent robot posture change is improved by outputting the rotation and translation matrix.
Step S34, if the variation of the rotation and translation matrix is larger than or equal to the variation threshold, performing position transformation on a first data point in the point cloud map according to the calculated rotation and translation matrix;
the position of the first data point in the point cloud map is transformed according to the calculated rotation and translation matrix, so that whether the subsequent rotation and translation matrix is converged or not is guaranteed.
Step S35, according to the data point in the point cloud map after the position transformation, continuing to execute the step of constructing a point cloud pair according to the first data point and the second data point in the preset reference map until the variation of the rotation-translation matrix is smaller than the variation threshold, and outputting the rotation-translation matrix;
the step of constructing the point cloud pair according to the first data point and the second data point is continuously executed according to the data points in the point cloud map after the position transformation, so that the effect of iterative updating on the rotation and translation matrix can be effectively achieved, and the accuracy of the rotation and translation matrix is improved.
Specifically, in this step, for different preset reference maps, the corresponding converged rotation and translation matrices are output.
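Putting steps S31 to S35 together yields an ICP-style loop. The sketch below reuses build_point_cloud_pairs and fit_rigid_transform from the earlier sketches; the convergence test (maximum absolute change between successive matrices) is one plausible reading of the "variation" the patent describes, not a detail it specifies:

```python
import numpy as np

def align_until_converged(source, reference, variation_threshold=1e-4,
                          max_iters=50):
    """Iterate pair construction and least-squares fitting, position-
    transforming the first data points each round, until the rotation-
    translation matrix varies less than the threshold. Returns the
    accumulated (R, t)."""
    total = np.eye(3)
    prev = np.eye(3)
    for _ in range(max_iters):
        src_m, ref_m = build_point_cloud_pairs(source, reference)
        R, t = fit_rigid_transform(src_m, ref_m)
        T = np.eye(3)
        T[:2, :2], T[:2, 2] = R, t
        source = source @ R.T + t          # transform the first data points
        total = T @ total                  # accumulate the output matrix
        if np.abs(T - prev).max() < variation_threshold:
            break                          # variation below threshold: converged
        prev = T
    return total[:2, :2], total[:2, 2]
```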
Step S36, calculating point cloud errors between the point cloud map and different preset reference maps according to the output rotation and translation matrix;
specifically, the calculation formula for calculating the point cloud error between the point cloud map and the different preset reference maps according to the output rotational translation matrix is as follows:
E = P_ref - (R * P_sim + t)
where E is the point cloud error, P_ref is the average point cloud distance in the preset reference map corresponding to the minimum point cloud error, P_sim is the average point cloud distance in the point cloud map, R is the rotation matrix in the output rotation-translation matrix, and t is the translation vector in the output rotation-translation matrix.
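The formula mixes a scalar average with a matrix product, so its exact evaluation is underspecified in the text. One hedged reading, sketched below, transforms the point cloud map by the output (R, t), recomputes its average point cloud distance, and subtracts it from the reference map's stored average; the preset reference map minimising |E| would then be selected:

```python
import numpy as np

def point_cloud_error(reference_avg, source_points, R, t):
    """One interpretation of E = P_ref - (R * P_sim + t): P_sim is taken as
    the average point cloud distance of the (R, t)-transformed point cloud
    map, compared against the reference map's stored average distance."""
    transformed = source_points @ R.T + t
    p_sim = np.linalg.norm(transformed, axis=1).mean()
    return reference_avg - p_sim

# e.g. best = min(candidates, key=lambda c: abs(point_cloud_error(*c)))
```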
In this embodiment, constructing point cloud pairs from the first data points and their corresponding second data points effectively improves the accuracy of calculating the rotation-translation matrix and its variation. By calculating the variation of the rotation-translation matrix, whether its parameters have converged can be effectively judged: if the variation is smaller than the variation threshold, the change of the first data points in the point cloud map after transformation is below the threshold, i.e. the rotation-translation matrix has converged. By transforming the positions of the first data points in the point cloud map according to the calculated rotation-translation matrix, and then repeating the pair-construction step with the transformed data points, the rotation-translation matrix is iteratively updated, improving its accuracy.
EXAMPLE III
Fig. 3 shows a schematic structural diagram of an automatic robot charging system 100 provided in a third embodiment of the present application, corresponding to the automatic robot charging method described in the above embodiments, and only the parts related to the embodiments of the present application are shown for convenience of description.
Referring to fig. 3, the system includes: navigation module 10, point cloud map acquisition module 11, point cloud error calculation module 12 and attitude transformation module 13, wherein:
and the navigation module 10 is configured to generate a charging navigation path if a charging instruction is received, and control the robot to move to a path end point of the charging navigation path according to the charging navigation path.
Wherein the navigation module 10 is further configured to: acquiring the position coordinates of the robot, and calculating the moving distances between the position coordinates and different charging piles;
setting the charging pile corresponding to the minimum moving distance as the target charging pile, and acquiring a path end point lying in the direction between the target charging pile and the robot, wherein the distance between the path end point and the charging pile is equal to a preset distance;
and generating the charging navigation path according to the coordinates of the path end point and the position coordinates of the robot.
The point cloud map acquiring module 11 is configured to acquire a point cloud map of an environment where the robot is currently located, where the point cloud map includes a first data point, and the first data point is a data vector corresponding to an obstacle in the current environment.
Wherein, the point cloud map acquiring module 11 is further configured to: control the robot to rotate by a preset angle at a preset angular resolution while scanning radar images, so as to obtain the point cloud map of the current environment.
The point cloud error calculation module 12 is configured to calculate point cloud errors between the point cloud map and different preset reference maps, where each preset reference map is the point cloud map captured when the robot and one charging pile are in the charging docking state, the point cloud error represents the error between the average point cloud distance of the point cloud map and that of the preset reference map, and the point cloud distance represents the distance between an obstacle and the map origin.
Wherein, the point cloud error calculating module 12 is further configured to: aiming at different preset reference maps, point cloud pairs are constructed according to the first data points and second data points in the preset reference maps, the second data points are data vectors corresponding to obstacles in the preset reference maps, and the similarity between the first data points and the second data points in the same point cloud pair is greater than a similarity threshold value;
calculating a rotation and translation matrix corresponding to the point cloud pair according to a least square function, wherein the rotation and translation matrix comprises a rotation matrix and a translation vector, and calculating the variation of the rotation and translation matrix;
if the variation of the rotation and translation matrix is smaller than a variation threshold, outputting the rotation and translation matrix;
if the variation of the rotation and translation matrix is larger than or equal to the variation threshold, performing position transformation on a first data point in the point cloud map according to the rotation and translation matrix obtained through calculation;
according to the data points in the point cloud map after the position transformation, continuing to execute the step of constructing point cloud pairs according to the first data points and the second data points in the preset reference map until the variation of the rotation-translation matrix is smaller than the variation threshold, and outputting the rotation-translation matrix;
and calculating point cloud errors between the point cloud map and different preset reference maps according to the output rotation and translation matrix.
Specifically, the calculation formula for calculating the point cloud error between the point cloud map and the different preset reference maps according to the output rotational translation matrix is as follows:
E = P_ref - (R * P_sim + t)
where E is the point cloud error, P_ref is the average point cloud distance in the preset reference map corresponding to the minimum point cloud error, P_sim is the average point cloud distance in the point cloud map, R is the rotation matrix in the output rotation-translation matrix, and t is the translation vector in the output rotation-translation matrix.
And the attitude transformation module 13 is configured to calculate an attitude transformation parameter between the point cloud map and the preset reference map corresponding to the minimum point cloud error, and control the robot to perform attitude transformation according to the attitude transformation parameter.
Wherein, the posture transformation module 13 is further configured to: acquiring horizontal transformation parameters, vertical transformation parameters and rotation angles in the output rotation and translation matrix to obtain the attitude transformation parameters;
and controlling the robot to perform horizontal posture transformation, vertical posture transformation and angle posture transformation according to the horizontal transformation parameters, the vertical transformation parameters and the rotation angle respectively.
Further, the robot automatic charging system 100 further includes:
the reference map generation module 14 is configured to perform charging docking on the robot and different charging piles, and perform radar image scanning according to different preset scanning angles to obtain a radar point cloud image;
aiming at the same charging pile, combining different radar point cloud images obtained at different preset scanning angles to obtain a preset reference map, and obtaining an average value of point cloud distances in different radar point cloud images;
calculating the average of the point cloud distances in each radar point cloud image to obtain a reference average value, and setting the mean of these reference average values as the average point cloud distance of the preset reference map.
In this embodiment, if a charging instruction is received, the robot can be effectively controlled to move to the path end point by generating a charging navigation path. Calculating the point cloud errors between the point cloud map and the different preset reference maps yields the errors between their average point cloud distances. Since a preset reference map is the point cloud map captured when the robot and a charging pile are in the charging docking state, the preset reference map corresponding to the minimum point cloud error is the one most similar to the current point cloud map. The attitude transformation parameters between the robot's current attitude and the standard attitude in that reference map can therefore be effectively calculated, and the robot is controlled to transform its attitude accordingly, achieving automatic charging positioning with respect to the charging pile. The embodiment of the present application requires no marker features on the charging pile, i.e. no feature structure needs to be deployed and installed on the charging pile, which simplifies the user's operation and improves the user experience.
It should be noted that, for the information interaction, execution process, and other contents between the above-mentioned devices/modules, the specific functions and technical effects thereof are based on the same concept as those of the embodiment of the method of the present application, and reference may be made to the part of the embodiment of the method specifically, and details are not described here.
Fig. 4 is a schematic structural diagram of a terminal device 2 according to a fourth embodiment of the present application. As shown in fig. 4, the terminal device 2 of this embodiment includes: at least one processor 20 (only one processor is shown in fig. 4), a memory 21, and a computer program 22 stored in the memory 21 and executable on the at least one processor 20, the steps of any of the various method embodiments described above being implemented when the computer program 22 is executed by the processor 20.
The terminal device 2 may be a desktop computer, a notebook, a palm computer, a cloud server, or other computing devices. The terminal device may include, but is not limited to, a processor 20, a memory 21. Those skilled in the art will appreciate that fig. 4 is merely an example of the terminal device 2, and does not constitute a limitation of the terminal device 2, and may include more or less components than those shown, or combine some components, or different components, such as an input-output device, a network access device, and the like.
The Processor 20 may be a Central Processing Unit (CPU), and the Processor 20 may be other general purpose Processor, a Digital Signal Processor (DSP), an Application Specific Integrated Circuit (ASIC), a Field Programmable Gate Array (FPGA) or other Programmable logic device, discrete Gate or transistor logic device, discrete hardware component, or the like. A general purpose processor may be a microprocessor or the processor may be any conventional processor or the like.
The memory 21 may in some embodiments be an internal storage unit of the terminal device 2, such as a hard disk or a memory of the terminal device 2. The memory 21 may also be an external storage device of the terminal device 2 in other embodiments, such as a plug-in hard disk, a Smart Media Card (SMC), a Secure Digital (SD) Card, a Flash memory Card (Flash Card), and the like, which are provided on the terminal device 2. Further, the memory 21 may also include both an internal storage unit and an external storage device of the terminal device 2. The memory 21 is used for storing an operating system, an application program, a BootLoader (BootLoader), data, and other programs, such as program codes of the computer program. The memory 21 may also be used to temporarily store data that has been output or is to be output.
It will be apparent to those skilled in the art that, for convenience and brevity of description, only the above-mentioned division of the functional units and modules is illustrated, and in practical applications, the above-mentioned function distribution may be performed by different functional units and modules according to needs, that is, the internal structure of the apparatus is divided into different functional units or modules to perform all or part of the above-mentioned functions. Each functional unit and module in the embodiments may be integrated in one processing unit, or each unit may exist alone physically, or two or more units are integrated in one unit, and the integrated unit may be implemented in a form of hardware, or in a form of software functional unit. In addition, specific names of the functional units and modules are only for convenience of distinguishing from each other, and are not used for limiting the protection scope of the present application. The specific working processes of the units and modules in the system may refer to the corresponding processes in the foregoing method embodiments, and are not described herein again.
An embodiment of the present application further provides a network device, where the network device includes: at least one processor, a memory, and a computer program stored in the memory and executable on the at least one processor, the processor implementing the steps of any of the various method embodiments described above when executing the computer program.
The present application further provides a storage medium, which may be a computer-readable storage medium, where a computer program is stored, and when the computer program is executed by a processor, the computer program implements the steps in the above method embodiments.
The embodiments of the present application provide a computer program product, which when running on a mobile terminal, enables the mobile terminal to implement the steps in the above method embodiments when executed.
The integrated unit, if implemented in the form of a software functional unit and sold or used as a stand-alone product, may be stored in a computer readable storage medium. Based on such understanding, all or part of the processes in the methods of the embodiments described above can be implemented by a computer program, which can be stored in a computer-readable storage medium and, when executed by a processor, implements the steps of the method embodiments described above. The computer program comprises computer program code, which may be in the form of source code, object code, an executable file or some intermediate form. The computer readable medium may include at least: any entity or device capable of carrying computer program code to a photographing apparatus/terminal apparatus, a recording medium, computer memory, Read-Only Memory (ROM), Random Access Memory (RAM), an electrical carrier signal, a telecommunications signal, and a software distribution medium, such as a USB disk, a removable hard disk, a magnetic disk or an optical disk. In certain jurisdictions, computer-readable media may not be an electrical carrier signal or a telecommunications signal in accordance with legislative and patent practice.
In the above embodiments, the descriptions of the respective embodiments have respective emphasis, and reference may be made to the related descriptions of other embodiments for parts that are not described or illustrated in a certain embodiment.
Those of ordinary skill in the art will appreciate that the various illustrative elements and algorithm steps described in connection with the embodiments disclosed herein may be implemented as electronic hardware or combinations of computer software and electronic hardware. Whether such functionality is implemented as hardware or software depends upon the particular application and design constraints imposed on the implementation. Skilled artisans may implement the described functionality in varying ways for each particular application, but such implementation decisions should not be interpreted as causing a departure from the scope of the present application.
In the embodiments provided in the present application, it should be understood that the disclosed apparatus/network device and method may be implemented in other ways. For example, the above-described apparatus/network device embodiments are merely illustrative, and for example, the division of the modules or units is only one logical division, and there may be other divisions when actually implementing, for example, a plurality of units or components may be combined or integrated into another system, or some features may be omitted, or not implemented. In addition, the shown or discussed mutual coupling or direct coupling or communication connection may be an indirect coupling or communication connection through some interfaces, devices or units, and may be in an electrical, mechanical or other form.
The units described as separate parts may or may not be physically separate, and parts displayed as units may or may not be physical units, may be located in one place, or may be distributed on a plurality of network units. Some or all of the units can be selected according to actual needs to achieve the purpose of the solution of the embodiment.
The above-mentioned embodiments are only used for illustrating the technical solutions of the present application, and not for limiting the same; although the present application has been described in detail with reference to the foregoing embodiments, it should be understood by those of ordinary skill in the art that: the technical solutions described in the foregoing embodiments may still be modified, or some technical features may be equivalently replaced; such modifications and substitutions do not substantially depart from the spirit and scope of the embodiments of the present application and are intended to be included within the scope of the present application.

Claims (10)

1. A method for robotic automatic charging, the method comprising:
if a charging instruction is received, generating a charging navigation path, and controlling the robot to move to a path end point of the charging navigation path according to the charging navigation path;
acquiring a point cloud map of an environment where the robot is currently located, wherein the point cloud map comprises first data points, and the first data points are data vectors corresponding to obstacles in the current environment;
calculating point cloud errors between the point cloud map and different preset reference maps, wherein each preset reference map is the point cloud map captured when the robot and one charging pile are in the charging docking state, the point cloud error represents the error between the average point cloud distance of the point cloud map and that of the preset reference map, and the point cloud distance represents the distance between an obstacle and the map origin;
and calculating attitude transformation parameters between the point cloud map and the preset reference map corresponding to the minimum point cloud error, and controlling the robot to carry out attitude transformation according to the attitude transformation parameters.
2. The robot automatic charging method of claim 1, wherein the calculating of the point cloud error between the point cloud map and different preset reference maps comprises:
aiming at different preset reference maps, point cloud pairs are constructed according to the first data points and second data points in the preset reference maps, the second data points are data vectors corresponding to obstacles in the preset reference maps, and the similarity between the first data points and the second data points in the same point cloud pair is greater than a similarity threshold value;
calculating a rotation and translation matrix corresponding to the point cloud pair according to a least square function, wherein the rotation and translation matrix comprises a rotation matrix and a translation vector, and calculating the variation of the rotation and translation matrix;
if the variation of the rotation and translation matrix is smaller than a variation threshold, outputting the rotation and translation matrix;
if the variation of the rotation and translation matrix is larger than or equal to the variation threshold, performing position transformation on a first data point in the point cloud map according to the rotation and translation matrix obtained through calculation;
according to the data points in the point cloud map after the position transformation, continuing to execute the step of constructing point cloud pairs according to the first data points and the second data points in the preset reference map until the variation of the rotation-translation matrix is smaller than the variation threshold, and outputting the rotation-translation matrix;
and calculating point cloud errors between the point cloud map and different preset reference maps according to the output rotation and translation matrix.
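Claim 2 describes an iterative-closest-point (ICP) style registration loop. The following self-contained 2-D NumPy sketch illustrates it; the nearest-neighbour pairing rule (standing in for the claim's unspecified similarity threshold), the convergence test, and all names and parameter values are illustrative assumptions, not the patent's implementation.

```python
# Illustrative 2-D ICP loop in the spirit of claim 2; pairing uses nearest
# neighbours because the patent does not define its similarity threshold.
import numpy as np

def icp_2d(source, target, max_iter=50, change_thresh=1e-6):
    """Align source (Nx2) to target (Mx2); return (R, t, mean residual)."""
    R_total, t_total = np.eye(2), np.zeros(2)
    src = source.astype(float).copy()
    for _ in range(max_iter):
        # construct point cloud pairs: closest target point for each source point
        d = np.linalg.norm(src[:, None, :] - target[None, :, :], axis=2)
        pairs = target[np.argmin(d, axis=1)]
        # least-squares rotation/translation via the SVD (Kabsch) solution
        src_c, tgt_c = src.mean(axis=0), pairs.mean(axis=0)
        H = (src - src_c).T @ (pairs - tgt_c)
        U, _, Vt = np.linalg.svd(H)
        R = Vt.T @ U.T
        if np.linalg.det(R) < 0:        # guard against reflections
            Vt[-1, :] *= -1
            R = Vt.T @ U.T
        t = tgt_c - R @ src_c
        src = src @ R.T + t             # position-transform the first data points
        R_total, t_total = R @ R_total, R @ t_total + t
        # stop once the incremental rotation-translation barely changes anything
        if np.linalg.norm(R - np.eye(2)) + np.linalg.norm(t) < change_thresh:
            break
    residual = float(np.mean(np.linalg.norm(src - pairs, axis=1)))
    return R_total, t_total, residual
```

As a quick sanity check, aligning a cloud against a rigidly moved copy of itself should recover the motion: with `tgt = src @ R_true.T + t_true`, `icp_2d(src, tgt)` returns values close to `R_true` and `t_true`.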
3. The robot automatic charging method according to claim 2, wherein the point cloud error between the point cloud map and the different preset reference maps is calculated from the output rotation and translation matrix according to the following formula:
E = P_ref - (R * P_sim + t)
where E is the point cloud error, P_ref is the average value of the point cloud distances in the preset reference map corresponding to the minimum point cloud error, P_sim is the average value of the point cloud distances in the point cloud map, R is the rotation matrix in the output rotation and translation matrix, and t is the translation vector in the output rotation and translation matrix.
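Read literally, the product R * P_sim is not well formed if P_ref and P_sim are scalar distance averages, so a natural reading treats them as the mean points (centroids) of the two clouds. A short sketch under that assumption:

```python
# Evaluating E = P_ref - (R * P_sim + t) from claim 3, under the assumption
# that P_ref and P_sim are cloud centroids; the scalar-average reading in the
# claim text would leave the matrix product undefined.
import numpy as np

def point_cloud_error(ref_cloud, cur_cloud, R, t):
    p_ref = ref_cloud.mean(axis=0)    # centroid of the preset reference map
    p_sim = cur_cloud.mean(axis=0)    # centroid of the current point cloud map
    return float(np.linalg.norm(p_ref - (R @ p_sim + t)))  # scalar error
```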
4. The robot automatic charging method according to claim 1, wherein before the calculating of the point cloud error between the point cloud map and different preset reference maps, the method further comprises:
docking the robot for charging with different charging piles, and scanning radar images at different preset scanning angles to obtain radar point cloud images;
for the same charging pile, combining the radar point cloud images obtained at the different preset scanning angles to obtain a preset reference map, and obtaining the average value of the point cloud distances in each radar point cloud image;
taking the average values of the point cloud distances in the different radar point cloud images as reference average values, and setting the mean of the reference average values as the average value of the point cloud distances in the preset reference map.
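A minimal sketch of this reference-map construction, assuming each scan is already an Nx2 array of points expressed relative to the map origin; the merging-by-stacking step and all names are illustrative:

```python
# Build a preset reference map from scans taken at different preset angles
# while docked; merging is plain stacking here, which is an assumption.
import numpy as np

def build_reference_map(scans):
    reference_map = np.vstack(scans)                    # combined point cloud
    reference_avgs = [np.linalg.norm(s, axis=1).mean()  # per-scan average distance
                      for s in scans]                   # to the map origin
    map_avg = float(np.mean(reference_avgs))            # mean of reference averages
    return reference_map, map_avg
```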
5. The robot automatic charging method of claim 1, wherein the generating a charging navigation path comprises:
acquiring the position coordinates of the robot, and calculating the moving distances between the position coordinates and different charging piles;
setting the charging pile corresponding to the minimum moving distance as a target charging pile, and acquiring a path end point on the path between the target charging pile and the robot, wherein the distance between the path end point and the target charging pile is equal to a preset distance;
and generating the charging navigation path according to the coordinates of the path end point and the position coordinates of the robot.
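The endpoint selection in claim 5 amounts to stopping a preset distance short of the nearest pile, on the line from the pile toward the robot. A sketch, with illustrative coordinates and distance:

```python
# Pick the nearest charging pile and place the path end point a preset
# distance away from it, toward the robot; assumes the robot is not already
# at the pile (the direction vector would otherwise be undefined).
import numpy as np

def charging_path_endpoint(robot_xy, pile_positions, preset_dist=0.5):
    robot_xy = np.asarray(robot_xy, dtype=float)
    piles = np.asarray(pile_positions, dtype=float)
    dists = np.linalg.norm(piles - robot_xy, axis=1)   # moving distances
    target = piles[np.argmin(dists)]                   # target charging pile
    direction = (robot_xy - target) / np.linalg.norm(robot_xy - target)
    return target + preset_dist * direction            # path end point

print(charging_path_endpoint([2.0, 1.0], [[0.0, 0.0], [5.0, 5.0]]))
```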
6. The robot automatic charging method of claim 1, wherein the obtaining of the point cloud map of the environment in which the robot is currently located comprises:
controlling the robot to rotate through a preset angle at a preset angular resolution while scanning radar images, so as to obtain the point cloud map of the current environment.
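One way to realize this rotate-and-scan step is to sweep the robot through the preset angle in angular-resolution increments, converting each range reading into a map point; the `read_range` sensor call below is a hypothetical stand-in for the radar driver:

```python
# Sweep through a preset angle at a preset angular resolution, turning each
# radar range reading into a Cartesian map point; read_range is hypothetical.
import math

def scan_point_cloud(read_range, sweep_deg=360.0, resolution_deg=1.0):
    points = []
    for i in range(int(sweep_deg / resolution_deg)):
        theta = math.radians(i * resolution_deg)
        r = read_range(theta)          # distance to the obstacle at heading theta
        points.append((r * math.cos(theta), r * math.sin(theta)))
    return points
```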
7. The robot automatic charging method according to claim 2, wherein the calculating of the attitude transformation parameter between the point cloud map and the preset reference map corresponding to the minimum point cloud error and the controlling of the robot for attitude transformation according to the attitude transformation parameter comprises:
acquiring a horizontal transformation parameter, a vertical transformation parameter and a rotation angle from the output rotation and translation matrix to obtain the attitude transformation parameters;
and controlling the robot to perform horizontal, vertical and angular attitude transformations according to the horizontal transformation parameter, the vertical transformation parameter and the rotation angle, respectively.
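For a planar robot the three parameters can be read straight off a homogeneous rotation-translation matrix; the 3x3 layout below is an assumption, since the patent does not fix a matrix convention:

```python
# Recover horizontal offset, vertical offset and rotation angle from a 3x3
# homogeneous transform [[R, t], [0, 0, 1]]; the layout is an assumption.
import math
import numpy as np

def pose_params(T):
    tx, ty = T[0, 2], T[1, 2]              # horizontal / vertical parameters
    theta = math.atan2(T[1, 0], T[0, 0])   # rotation angle from the R block
    return tx, ty, theta

T = np.array([[0.0, -1.0, 0.3],            # a 90-degree turn plus a shift
              [1.0,  0.0, 0.1],
              [0.0,  0.0, 1.0]])
print(pose_params(T))                      # (0.3, 0.1, 1.5707963...)
```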
8. A robotic automatic charging system, comprising:
a navigation module, used for generating a charging navigation path if a charging instruction is received, and controlling the robot to move along the charging navigation path to its path end point;
a point cloud map acquisition module, used for acquiring a point cloud map of the environment where the robot is currently located, wherein the point cloud map comprises first data points, and the first data points are data vectors corresponding to obstacles in the current environment;
a point cloud error calculation module, used for calculating point cloud errors between the point cloud map and different preset reference maps, wherein each preset reference map is the point cloud map captured when the robot is in a charging docking state with one charging pile, the point cloud error represents the error between the average point cloud distance of the point cloud map and that of the preset reference map, and the point cloud distance represents the distance between an obstacle and the map origin;
and an attitude transformation module, used for calculating an attitude transformation parameter between the point cloud map and the preset reference map corresponding to the minimum point cloud error, and controlling the robot to carry out attitude transformation according to the attitude transformation parameter.
9. A terminal device comprising a memory, a processor and a computer program stored in the memory and executable on the processor, characterized in that the processor implements the method according to any of claims 1 to 7 when executing the computer program.
10. A storage medium storing a computer program, characterized in that the computer program, when executed by a processor, implements the method according to any one of claims 1 to 7.
CN202110344749.8A 2021-03-31 2021-03-31 Robot automatic charging method, system, terminal device and storage medium Pending CN113138596A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202110344749.8A CN113138596A (en) 2021-03-31 2021-03-31 Robot automatic charging method, system, terminal device and storage medium

Publications (1)

Publication Number Publication Date
CN113138596A true CN113138596A (en) 2021-07-20

Family

ID=76810192

Family Applications (1)

Application Number Title Priority Date Filing Date
CN202110344749.8A Pending CN113138596A (en) 2021-03-31 2021-03-31 Robot automatic charging method, system, terminal device and storage medium

Country Status (1)

Country Link
CN (1) CN113138596A (en)

Cited By (1)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN114895686A (en) * 2022-05-27 2022-08-12 广州高新兴机器人有限公司 Method and system for charging pile by robot

Citations (9)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN107945233A (en) * 2017-12-04 2018-04-20 深圳市沃特沃德股份有限公司 Vision sweeping robot and its recharging method
CN109211236A (en) * 2017-06-30 2019-01-15 沈阳新松机器人自动化股份有限公司 navigation locating method, device and robot
CN109648602A (en) * 2018-09-11 2019-04-19 深圳优地科技有限公司 Automatic recharging method, device and terminal device
CN109683605A (en) * 2018-09-25 2019-04-26 上海肇观电子科技有限公司 Robot and its automatic recharging method, system, electronic equipment, storage medium
CN111546333A (en) * 2020-04-24 2020-08-18 深圳市优必选科技股份有限公司 Robot and automatic control method and device thereof
US20200276713A1 (en) * 2019-02-28 2020-09-03 Intelligrated Headquarters, Llc Vision calibration system for robotic carton unloading
CN112147994A (en) * 2019-06-28 2020-12-29 深圳市优必选科技股份有限公司 Robot and recharging control method and device thereof
CN112346453A (en) * 2020-10-14 2021-02-09 深圳市杉川机器人有限公司 Automatic robot recharging method and device, robot and storage medium
CN112561998A (en) * 2020-12-16 2021-03-26 国网江苏省电力有限公司检修分公司 Robot positioning and autonomous charging method based on three-dimensional point cloud registration

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination