CN112147995B - Robot motion control method and device, robot and storage medium - Google Patents

Robot motion control method and device, robot and storage medium

Info

Publication number
CN112147995B
Authority
CN
China
Prior art keywords
robot
motion
module
error
yaw angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Active
Application number
CN201910579972.3A
Other languages
Chinese (zh)
Other versions
CN112147995A (en)
Inventor
刘洋
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Makeblock Co Ltd
Original Assignee
Makeblock Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Makeblock Co Ltd filed Critical Makeblock Co Ltd
Priority to CN201910579972.3A priority Critical patent/CN112147995B/en
Publication of CN112147995A publication Critical patent/CN112147995A/en
Application granted granted Critical
Publication of CN112147995B publication Critical patent/CN112147995B/en
Active legal-status Critical Current
Anticipated expiration legal-status Critical

Classifications

    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0221 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving a learning process
    • G - PHYSICS
    • G05 - CONTROLLING; REGULATING
    • G05D - SYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00 - Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02 - Control of position or course in two dimensions
    • G05D1/021 - Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223 - Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle

Landscapes

  • Engineering & Computer Science (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • Physics & Mathematics (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Manipulator (AREA)

Abstract

Robot motion control method and device, robot, and storage medium. The invention discloses a robot motion control method and device. In a static state, the method corrects the measurement error of a micromechanical inertial element using the static posture information output by the micromechanical inertial module and the map area code read by a photosensitive module. After the robot switches from the static state to a motion state according to a triggered motion control signal, the micromechanical inertial module can therefore resolve the posture from the accurate measurement results output by the micromechanical inertial element, so that the current motion posture information of the robot is obtained accurately. During motion, the motion posture of the robot is then corrected accurately in real time according to the map area code read by the photosensitive module or the motion posture information output by the micromechanical inertial module, ensuring that the robot moves over a plurality of spliced puzzles along the motion path indicated by the motion control signal.

Description

Robot motion control method and device, robot and storage medium
Technical Field
The present invention relates to the field of communications technologies, and in particular, to a method and apparatus for controlling motion of a robot, and a computer readable storage medium.
Background
When a robot moves over a long distance, it may deviate from the preset movement path because of poor road conditions, deviations in the robot itself, and similar factors.
In the prior art, a camera installed on the robot captures the current position at preset time intervals to acquire the robot's current position feature information, which is sent to a processing module that re-plans a motion path for the robot; by moving along the re-planned motion path, the robot avoids deviating from the preset motion path.
However, this scheme places strict requirements on factors such as the installation position and number of the cameras and the external environment. For example, the camera installation position and the ambient light affect the accuracy of imaging the current position and hence the accuracy of the obtained position feature information; and although more cameras yield more accurate position feature information, they also increase the amount of computation, which easily degrades the robot's performance. It is therefore difficult to control the robot's motion path accurately.
Therefore, how to accurately control the motion path of the robot is a problem to be solved.
Disclosure of Invention
In order to solve the technical problem in the prior art that the motion path of a robot cannot be accurately controlled, the invention provides a robot motion control method and device, a robot, and a computer-readable storage medium, which correct the motion gesture of the robot in real time so that the robot moves along a preset motion path.
The technical scheme adopted by the invention is as follows:
a motion control method of a robot, comprising:
a motion control apparatus of a robot, comprising:
a robot comprising a processor and a memory for storing executable instructions of the processor; wherein the processor is configured to perform the above-described robot motion control method via execution of the executable instructions.
A computer readable storage medium having stored thereon a computer program which, when executed by a processor, implements the above-described robot motion control method.
According to the technical scheme, in the static state the measurement error of the micromechanical inertial element is corrected according to the static posture information output by the micromechanical inertial module and the map area code read by the photosensitive module. After the robot switches from the static state to the motion state according to the triggered motion control signal, the micromechanical inertial module can therefore resolve the posture from the accurate measurement results output by the micromechanical inertial element, and the current motion posture information of the robot is obtained accurately. During motion, the motion posture of the robot can then be corrected accurately in real time according to the map area code read by the photosensitive module or the motion posture information output by the micromechanical inertial module, ensuring that the robot moves over a plurality of spliced puzzles along the motion path indicated by the motion control signal.
Compared with the prior art, the current position information of the robot is obtained accurately from the map area code read by the photosensitive module or the motion attitude information output by the micromechanical inertial module. It is not affected by factors such as cameras or the external environment, so the motion attitude of the robot can be corrected accurately and the accuracy of its motion path is ensured.
It is to be understood that both the foregoing general description and the following detailed description are exemplary and explanatory only and are not restrictive of the invention as claimed.
Drawings
The accompanying drawings, which are incorporated in and constitute a part of this specification, illustrate embodiments consistent with the invention and together with the description, serve to explain the principles of the invention.
FIG. 1 is a schematic illustration of an environment in which the present invention may be practiced;
FIG. 2 is a schematic illustration of a road map related to the implementation environment shown in FIG. 1;
FIG. 3 is a schematic illustration of an angle code contained in a puzzle related to the implementation environment shown in FIG. 1;
FIG. 4 is a schematic illustration of a grid map related to the implementation environment shown in FIG. 1;
FIG. 5 is a block diagram of the hardware architecture of a robot shown in accordance with an exemplary embodiment;
FIG. 6 is a flow chart illustrating a method of controlling motion of a robot in accordance with an exemplary embodiment;
FIG. 7 is a flow chart of step 210 in one embodiment of the corresponding embodiment of FIG. 6;
FIG. 8 is a flow chart of step 250 in one embodiment of the corresponding embodiment of FIG. 6;
FIG. 9 is a flow chart of step 253 in one embodiment of the corresponding embodiment of FIG. 8;
FIG. 10 is a flow chart of step 250 in another embodiment of the corresponding embodiment of FIG. 6;
fig. 11 is a flowchart of a motion control method of a robot shown in another exemplary embodiment;
fig. 12 is a block diagram of a motion control apparatus of a robot shown in an exemplary embodiment.
Specific embodiments of the invention have been shown in the drawings and will be described hereinafter, with the understanding that the present disclosure is to be considered in all respects as illustrative and not restrictive, the scope of the inventive concepts being indicated by the appended claims.
Detailed Description
Reference will now be made in detail to exemplary embodiments, examples of which are illustrated in the accompanying drawings. When the following description refers to the accompanying drawings, the same numbers in different drawings refer to the same or similar elements, unless otherwise indicated. The implementations described in the following exemplary examples do not represent all implementations consistent with the invention. Rather, they are merely examples of apparatus and methods consistent with aspects of the invention as detailed in the accompanying claims.
Referring to fig. 1, fig. 1 is a schematic diagram of an implementation environment according to the present invention. As shown in fig. 1, the implementation environment of the present invention includes several tiles that are spliced adjacently, and the robot moves on the spliced tiles. It should be noted that, the robot described in the present invention refers to a machine device that automatically performs a moving process.
As shown in fig. 1, in an embodiment, the spliced jigsaw puzzle is a road map, and a road pattern and a road edge pattern are laid on each road map, and the road map is spliced according to the road pattern to obtain a movement path of the robot, so that the robot moves along the road pattern.
Fig. 2 shows three different types of road maps: puzzle 5 is a straight road map, puzzle 6 is a cross road map, and puzzle 7 is a curved road map. Road maps of the same or different types can be combined and spliced to obtain different movement paths. For example, as shown in fig. 1, puzzle 1 and puzzle 2 are both straight road maps and are spliced to obtain a straight path, while puzzle 2 and puzzle 4 are connected through puzzle 3 to obtain a turning path; in this motion the robot first moves straight along puzzle 1 and puzzle 2 to the center of puzzle 3, then rotates 90° to the right, and then continues straight along puzzle 4.
Each road map is divided into at least three map areas, namely a road area, public boundary areas and edge areas; different types of road maps are divided into different numbers of public boundary areas and edge areas. As shown in fig. 2, the straight road map comprises 2 public boundary areas and upper and lower edge areas, the cross road map comprises 4 public boundary areas and 4 corner edge areas, and the curved road map comprises 2 public boundary areas and 3 edge areas.
The different map areas of a road map also contain corresponding map area codes. The angle code shown in fig. 3 is laid over the whole road map, and road codes are laid in the road areas, so that different road maps can be distinguished by their road codes. Public boundary codes are laid in the public boundary areas, and the public boundary code laid in the public boundary area of every road map is the same. Edge area codes are laid in the edge areas; an edge area code is related to the position of its edge area relative to the road area, and edge area codes corresponding to the same position relative to the road area are the same.
It should be noted that, the angle codes shown in fig. 3 are laid on each road map, and when the road maps are spliced, only the road patterns laid on the road maps need to be considered, and the laying direction of the angle codes does not need to be ensured to be consistent.
In one exemplary embodiment, the map area codes are printed on the road map with a special paint that reflects red light, and the robot identifies and reads them through its own configured photosensitive module.
In another embodiment, the stitched tiles are grid maps as shown in FIG. 4. As shown in fig. 4, a scene pattern or a character pattern is laid on the grid map, for example, a grass pattern in a forest exploration scene is laid on the jigsaw puzzle 8, and an animal pattern is laid on the jigsaw puzzle 9, and when the grid map is spliced, the grid map can be arbitrarily combined according to the scene pattern and the character pattern.
The grid map is at least divided into a central area, a public boundary area, an edge area and other map areas which are respectively positioned at the upper/lower/left/right side of the central area, and when the robot moves on the spliced grid map, the robot can directly move along the central area of the grid map, can also rotate in situ for a certain angle in the central area of the grid map, or is a combination of the two.
The map areas divided by the grid map also contain corresponding map area codes, wherein the angle codes shown in fig. 3 are paved on the whole grid map, the edge area codes are paved on the edge areas correspondingly, and the common boundary codes are paved on the common boundary areas. Unlike road maps, the central area of the grid map is also laid with a position code or a central area code, for example, the central area shown by the tiles 8 is laid with a position code, and the central area shown by the tiles 9 is laid with a central area code, wherein the position code refers to the position coordinates where the central area is located.
In one exemplary embodiment, the edge region codes are set according to the angle code laid on the grid map. As shown in fig. 4 for tiles 8 and 9, the edge region code on the side of the X-axis minimum is set to the minimum value X, the edge region code on the side of the Y-axis minimum is set to X+1, the edge region code on the side of the X-axis maximum is set to X+2, and the edge region code on the side of the Y-axis maximum is set to X+3. When a center region code is laid in the center region, the center region code is correspondingly X+4.
In an exemplary embodiment, the minimum value X of the edge region code is set according to the scene pattern or character pattern laid on the grid map, for example, the grassland pattern in the forest quest scene shown in the jigsaw 8 is inconsistent with the X value corresponding to the animal pattern shown in the jigsaw 9.
In another embodiment, the spliced puzzles can be blank maps, and when the robot moves on the spliced blank maps, the robot moves according to a movement strategy on the road map or the grid map, so that the movement process of the robot occurs in the central area of the blank maps.
In another embodiment, the spliced tiles may be a combination of the road map, the grid map and the blank map, which is not limited herein.
Fig. 5 is a block diagram of a hardware structure of a robot according to an exemplary embodiment. It should be noted that the robot is only an example adapted to the present invention, and should not be construed as providing any limitation on the scope of use of the present invention. Nor should the robot be interpreted as necessarily relying on or necessarily having one or more of the components of the exemplary robot shown in fig. 5.
As shown in fig. 5, the robot includes a processor 101, a memory 102, a photosensitive module 104, a micromechanical inertia module 105, a remote control receiving module 103, a driving motor 106, and an encoder 107.
The processor 101 is used as a core module for data processing of the robot, and is used for calculating data stored in the memory 102.
The memory 102 is used to store computer readable instructions and modules, such as those corresponding to the motion control method of the robot shown in the exemplary embodiment of the present invention, and the processor 101 performs various functions and data processing, for example, to implement motion control of the robot by executing the computer readable instructions stored in the memory 102. The memory 102 may be a random access memory, such as a high speed random access memory, a non-volatile memory, or one or more magnetic storage devices, flash memory, or other solid state memory. The memory 102 may be stored in a temporary or permanent manner.
The remote control receiving module 103 is configured to receive a motion control signal transmitted by triggering the remote controller, and the processor 101 obtains control information for controlling a motion state of the robot by processing the motion control signal, so as to control the robot to move on the jigsaw according to a motion path indicated by the motion control signal, or control the robot to stop moving.
The photosensitive module 104 is also called a photosensitive pen or an optical recognition instrument, and in the process that the robot moves on the jigsaw, the photosensitive module 104 contacts with the surface of the jigsaw and reads the map area codes corresponding to the map areas divided by the jigsaw to obtain the special information of the robot such as position information, yaw angle and the like, so that the motion gesture of the robot is corrected in real time according to the obtained special information, and the robot is prevented from deviating from the motion path indicated by the motion control signal.
When the motion of the robot deviates from the motion path indicated by the motion control signal, the motion path needs to be corrected in the motion process after correcting the current yaw angle of the robot because the robot cannot translate.
The micromechanical inertial module 105, also called a MEMS inertial module, includes a micromechanical inertial element and a pose resolving unit. The micromechanical inertial element comprises a gyroscope and an accelerometer and is used for measuring the angular velocity and the acceleration of the robot's movement in real time, thereby producing the measurement result of the micromechanical inertial element. The gesture resolving unit is provided with computer readable instructions which, when executed, resolve the gesture of the robot according to the measurement result output by the micromechanical inertial element, thereby obtaining gesture information of the robot.
The gesture information includes rolling angle, pitch angle, yaw angle and other angle information of the robot, and further includes current position information and speed information of the robot, wherein the yaw angle refers to angle information of the robot in a horizontal direction, and the angle information can be used for determining a real-time direction of the robot. The robot can also correct the motion gesture in real time according to the gesture information output by the micro-mechanical inertia module 105, so as to avoid the robot from deviating from the motion path indicated by the motion control signal.
The driving motor 106 is used for driving the joints of the robot to move according to the control information output by the processor 101, so as to drive the robot to perform the motion actions such as walking, rotating and the like.
The encoder 107 is a sensor for acquiring real-time motion information of the robot, the encoding scale of the encoder 107 is used for recording the motion distance of the robot, and the motion speed of the robot can be obtained according to the motion time and the motion distance of the robot. By the motion information output from the encoder 107, the robot can control its own position and speed, thereby ensuring the accuracy of the conventional motion of the robot.
In order to avoid the robot deviating from a preset motion path when moving on the puzzles, an exemplary embodiment provides a motion control method of the robot which, as shown in fig. 6, comprises at least the following steps:
step 210, in the static state, the robot corrects the measurement error of the micro-mechanical inertial element in the micro-mechanical inertial module according to the static attitude information output by the micro-mechanical inertial module and the map area code read by the photosensitive module.
As described above, in the process of moving the robot on the jigsaw, the moving gesture of the robot needs to be corrected in real time according to the map area code read by the photosensitive module or the gesture information output by the micromechanical inertial module, so as to avoid the deviation of the robot from the predetermined moving path. In order to accurately correct the motion gesture of the robot, the micro-mechanical inertial module is required to output accurate gesture information, and the photosensitive module accurately reads the map region code on the jigsaw.
The attitude information output by the attitude calculation unit comprises a roll angle, a pitch angle and a yaw angle. The roll angle and the pitch angle can be corrected through the acceleration information in three axial directions output by the accelerometer in the micromechanical inertial element, but the yaw angle cannot, so the attitude information output by the attitude calculation unit deviates from the real attitude of the robot.
Since the posture information output by the posture resolving unit is obtained by resolving the measurement result of the micromechanical inertial element, the essential reason for the error in that posture information is that the micromechanical inertial element has a measurement error. This measurement error accumulates over time, so the error in the posture information resolved from the measurement result grows larger and larger; the measurement error of the micromechanical inertial element therefore needs to be corrected.
When the robot is in a static state its theoretical speed is zero. At this time, the errors between the static posture information obtained by resolving the measurement result of the micromechanical inertial element and the position information, yaw angle and other information obtained by reading the map area code with the photosensitive module reflect the position error, yaw angle error and speed error of the robot. From these errors, the constant drift of the gyroscope and the zero offset of the accelerometer in the micromechanical inertial element can be estimated, giving the measurement error of the micromechanical inertial element.
Therefore, in a static state, the robot can obtain the measurement error of the micro-mechanical inertial element in the micro-mechanical inertial module according to the static attitude information output by the micro-mechanical inertial module and the map area code read by the photosensitive module.
The process of error correction for the micromechanical inertial element according to the obtained measurement error means that after the micromechanical inertial element obtains the corresponding angular velocity and acceleration by measurement, the obtained angular velocity and acceleration need to be subtracted by the measurement error, and the obtained difference is output as a measurement result.
It will be seen that the process of error correction of the micromechanical inertial element must be performed by resting the robot on a puzzle containing a map region code, such as any of the road maps or grid maps described above, without limitation.
Step 230, switching the robot from the stationary state to the moving state according to the triggered movement control signal, which indicates the robot to move through the designated map area.
After error correction is carried out on the micro-mechanical inertial element in a static state, the micro-mechanical inertial element can output an accurate measurement result, so that the gesture resolving unit outputs accurate gesture information, and the motion path of the robot can be corrected according to the gesture information output by the micro-mechanical inertial module configured by the robot.
In an exemplary embodiment, the triggered motion control signal is received by the robot through the remote control receiving module shown in fig. 5. Illustratively, a user causes the remote control to generate a corresponding motion control signal by triggering a manipulation device (e.g., a manipulation button, a joystick, etc.) provided on the remote control and to transmit the motion control signal to the remote control receiving module. The robot moves on the jigsaw according to the movement control signal received by the remote control receiving module, so that the robot is switched from a static state to a moving state.
In other embodiments, the motion control signal may be obtained by other means, without limitation. The robot itself is illustratively provided with a motion path presetter, and the user presets a motion path for the robot by triggering the motion path presetter so that the robot moves on the puzzle according to the preset motion path.
The motion control signal is used for indicating the robot to move the designated map area on the jigsaw. For example, in the case where the robot moves on the road map, the movement control signal instructs the robot to perform a linear movement (including forward or backward) of the road region of the adjacently spliced straight road map, to perform a rotation of a fixed angle when the robot moves to the center position of the cross road map or the curved road map, or to stop the movement when the robot moves to a designated place.
In the case where the robot moves on the grid map, the movement control signal may instruct the robot to perform a linear movement along the central region of the grid map or instruct the robot to rotate in place at a fixed angle in the central region of the grid map.
After the robot is switched to a motion state, the robot moves on the jigsaw according to a motion path indicated by the motion control signal. In the motion process of the robot, the robot deviates from the motion path indicated by the motion control signal due to the fact that the surface of the jigsaw may be uneven or the robot itself has deviation, so that the motion gesture of the robot needs to be corrected in real time in the motion process.
And 250, correcting the motion gesture of the robot in real time according to the map area code read by the photosensitive module or the motion gesture information output by the micro-mechanical inertial module, wherein the motion gesture information is obtained by performing gesture calculation on a measurement result output by the micro-mechanical inertial element by the micro-mechanical inertial module.
As described above, in the motion of the robot, the motion gesture of the robot refers to the yaw angle of the robot, and when the motion path of the robot deviates from the motion path indicated by the motion control signal, the motion path indicated by the motion control signal can be gradually restored by correcting the current yaw angle of the robot during the motion. That is, the correction of the motion gesture of the robot can be realized by correcting the yaw angle of the robot in the motion process in real time.
Since the map area code read by the photosensitive module and the motion gesture information output by the micromechanical inertial module can be used for correcting the motion gesture of the robot, the robot needs to select a proper mode to execute the correction of the motion gesture.
In one exemplary embodiment, the robot reads the map area code on the current puzzle by the photosensitive module preferentially, and when the photosensitive module reads the map area code, the robot corrects the motion gesture of the robot in real time according to the read map area code.
And if the photosensitive module cannot read the map region code, the robot executes real-time correction of the motion gesture according to the motion gesture information output by the micro-mechanical inertia module. For example, when the map region code laid by the tiles is worn or the robot moves on the tiles of the blank map, the photosensitive module cannot read the corresponding map region code.
In other embodiments, the robot may choose any one of the ways to correct the motion gesture of the robot, which is not limited herein.
In step 210, since the measurement error of the micromechanical inertial element is corrected, the measurement error of the micromechanical inertial element is removed from the measurement result output by the micromechanical inertial element during the movement of the robot, and the movement posture information obtained by the calculation of the measurement result by the posture calculation unit is accurate posture information, which can be used for accurately correcting the movement posture of the robot.
According to the embodiment, the measuring error of the micro-mechanical inertial element is corrected in the static state, so that after the robot is switched from the static state to the motion state, the micro-mechanical inertial module can accurately obtain the current motion gesture information of the robot, and the motion gesture information which can be read by the photosensitive module or output by the micro-mechanical inertial module in the motion process of the robot is also accurate, therefore, the motion gesture of the robot can be accurately corrected in real time, and the robot is ensured to move on a plurality of spliced puzzles according to the motion path indicated by the motion control signal.
FIG. 7 is a flow chart of step 210 in one embodiment of the corresponding embodiment of FIG. 6. As shown in fig. 7, the process of correcting the measurement error of the micromechanical inertial element in the static state comprises at least the following steps:
step 211, in a static state, the robot acquires static posture information output by the micro-mechanical inertia module and map area codes read by the photosensitive module.
As described above, since the theoretical speed is zero when the robot is in the stationary state, the stationary posture information output by the micromechanical inertial module and the map area code read by the photosensitive module reflect the position error, yaw angle error and speed error of the robot, and the measurement error of the micromechanical inertial element can be obtained according to the position error, yaw angle error and speed error. Therefore, it is necessary to acquire the static attitude information output by the micromechanical inertial module and the map area code read by the photosensitive module.
The static gesture information output by the gesture resolving unit comprises current position information, yaw angle and speed information of the robot, and the photosensitive module obtains the current position information and yaw angle of the robot by reading map area codes on the jigsaw.
Because the map region code is correspondingly paved on the jigsaw after being accurately designed, the position information and the yaw angle obtained by the photosensitive module through reading the map region code are also accurate, and therefore, the information error between the static attitude information output by the micro-mechanical inertial module and the map region code read by the photosensitive module is caused by the measurement error of the micro-mechanical inertial module, and the measurement error of the micro-mechanical inertial element can be correspondingly obtained through the information error.
And step 213, calculating the difference between the static attitude information and the map region code, and filtering the calculation result to obtain the measurement error of the micromechanical inertial element in the micromechanical inertial module.
Calculating the difference between the static attitude information and the map region code means calculating, respectively, the differences between the position information, yaw angle and speed information corresponding to the static attitude information and to the map region code, thereby obtaining the position error, yaw angle error and speed error of the robot.
The position error, yaw angle error and speed error of the robot are input into a filter for filtering processing, so that the filter can well estimate constant drift of a gyroscope and zero offset of an accelerometer in the micromechanical inertial element, and measurement error of the micromechanical inertial element is obtained.
The filter may include any one of a kalman filter, an adaptive filter, or other filters, and in this embodiment, the kalman filter may be selected and used, which is not limited herein. The output result of the filter is the measurement error of the micromechanical inertial element.
Step 215, calculating the difference between the measured information of the micro-mechanical inertial element and the measurement error, and obtaining the difference as the measurement result output by the micro-mechanical inertial element.
After the measuring error of the micro-mechanical inertial element is obtained through the filter, the difference between the information obtained by measuring the micro-mechanical inertial element and the measuring error is calculated, and the difference is obtained as a measuring result output by the micro-mechanical inertial element.
That is, after the micromechanical inertial element measures the corresponding angular velocity and acceleration, the measurement error needs to be subtracted from the obtained angular velocity and acceleration, and the obtained difference is output as the measurement result.
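As an illustrative sketch of steps 211-215 (not the patent's actual implementation), the Python snippet below replaces the Kalman filter described above with a simple first-order low-pass estimate of the gyroscope constant drift and accelerometer zero offset, and reduces the state to a single angular-rate and a single acceleration channel; all class, method and parameter names are hypothetical.
```python
# A minimal, illustrative sketch of the static-state correction in steps
# 211-215. A first-order low-pass estimate stands in for the Kalman filter
# described above; all names and values are hypothetical.

class StaticBiasEstimator:
    """Estimates the gyroscope constant drift and accelerometer zero offset
    while the robot is stationary, then removes them from later readings."""

    def __init__(self, gain: float = 0.05):
        self.gain = gain          # filter gain (stand-in for the Kalman update)
        self.gyro_bias = 0.0      # estimated constant drift of the gyroscope
        self.accel_bias = 0.0     # estimated zero offset of the accelerometer

    def update_static(self, raw_yaw_rate: float, raw_accel: float) -> None:
        # In the static state the true angular velocity and acceleration are
        # zero, so the raw readings themselves expose the measurement error.
        self.gyro_bias += self.gain * (raw_yaw_rate - self.gyro_bias)
        self.accel_bias += self.gain * (raw_accel - self.accel_bias)

    def correct(self, raw_yaw_rate: float, raw_accel: float) -> tuple[float, float]:
        # Step 215: subtract the estimated error from the raw measurement and
        # output the difference as the corrected measurement result.
        return raw_yaw_rate - self.gyro_bias, raw_accel - self.accel_bias


if __name__ == "__main__":
    est = StaticBiasEstimator()
    for _ in range(200):                                   # robot held stationary
        est.update_static(raw_yaw_rate=0.8, raw_accel=0.02)
    print(est.correct(raw_yaw_rate=10.8, raw_accel=1.02))  # approx. (10.0, 1.0)
```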
In this embodiment, the measurement error of the micromechanical element is corrected in the static state, so that after the robot is switched to the motion state, the measurement result output by the micromechanical inertial element is an accurate result obtained by removing the constant drift of the gyroscope and the zero offset of the accelerometer, and therefore, the motion gesture information output by the micromechanical inertial module is also accurate, and a data basis is laid for accurate correction of the motion gesture of the robot.
FIG. 8 is a flow chart of step 250 in one embodiment of the corresponding embodiment of FIG. 6. As shown in fig. 8, the motion state to which the robot switches according to the motion control signal includes linear motion through the designated map area, and the process of correcting the motion gesture of the robot in real time described in step 250 at least includes the following steps:
step 251, according to the map area code read by the photosensitive module or the motion gesture information output by the micromechanical inertial module, the position information and yaw angle of the robot on the jigsaw are obtained.
The position information of the robot on the jigsaw is used for reflecting the degree of the robot deviating from a motion path indicated by the motion control signal, the yaw angle of the robot represents the current motion gesture of the robot, and when the motion gesture of the robot is corrected, the yaw angle needs to be corrected in real time according to the current position information of the robot.
The motion attitude information output by the micro-mechanical inertia module contains the current position information and yaw angle of the robot, but the current position information and yaw angle of the robot cannot be directly obtained by reading the map area code by the photosensitive module, so that the position information and yaw angle are required to be correspondingly obtained according to the read map area code.
The detailed process of acquiring the position information and yaw angle of the robot on the tiles according to the map area code read by the photosensitive module will be described below taking a grid map as an example.
As described above, since the angle code shown in fig. 3 is laid on the entire grid map, when the robot moves to any position on the grid jigsaw, the photosensitive module can read the angle code corresponding to the current position of the robot, and the angle code is the current yaw angle of the robot.
When the robot moves to the public boundary area or the edge area, the map area code read by the photosensitive module comprises the public boundary code and the edge area code besides the angle code; when the robot moves to the central area, the map area code read by the photosensitive module comprises a position code or a central area code besides an angle code, and the type of the map area code corresponding to the read central area depends on the type of the grid map.
Under the condition that the central area contains a position code and the robot moves in the central area, the photosensitive module can directly read the current position information of the robot.
In other cases, the map area code read by the photosensitive module cannot directly reflect the current position information of the robot. In an exemplary embodiment, the process of acquiring the current position information of the robot according to the map area code read by the photosensitive module is as follows:
after the map region code corresponding to the map region where the robot is currently located is read through the photosensitive module, calculating a difference value of the minimum value of the read map region code and the edge region code in the grid map, and then taking the remainder of the difference value on the total number of the edge regions or the total number of the edge regions and the central region, so that the current position coordinate of the robot is set through the obtained remainder, and the current position information of the robot is obtained.
If the center area of the grid map contains a position code, the difference between the minimum values of the map area code and the edge area code is made to have a remainder of 4; if the center region of the trellis map contains a center region code, the difference is caused to take the remainder of 5.
When the resulting remainder is 0, indicating that the robot is currently located in an edge region on one side of the X-axis minimum value, and the robot moves in the X-axis direction, setting the X-axis position coordinates to the minimum value; similarly, the Y-axis position coordinate is set to the minimum value when the remainder is 1, the X-axis position coordinate is set to the maximum value when the remainder is 2, and the Y-axis position coordinate is set to the maximum value when the remainder is 3.
It should be noted that, the maximum value and the minimum value of the X-axis coordinate and the Y-axis coordinate are preset values, for example, the range of the position coordinates of the preset X-axis coordinate and the Y-axis coordinate is 30-110, the maximum value of the X-axis coordinate and the Y-axis coordinate is 110, the minimum value is 30, and the central position coordinates of the grid jigsaw are (70, 70).
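The following Python sketch illustrates this coordinate lookup under the assumptions above; the coordinate range 30-110 and center (70, 70) follow the example in the text, while the function and variable names are hypothetical.
```python
# Illustrative sketch of the coordinate lookup described above: the remainder
# of (map_code - X) over 4 or 5 selects which preset coordinate to pin.

COORD_MIN, COORD_MAX = 30, 110     # preset coordinate range of the grid puzzle

def position_from_code(map_code: int, x_min_code: int, has_position_code: bool,
                       current: tuple[int, int]) -> tuple[int, int]:
    # 4 edge areas; one more possibility when the center carries its own code.
    modulus = 4 if has_position_code else 5
    remainder = (map_code - x_min_code) % modulus
    x, y = current
    if remainder == 0:      # edge area on the X-axis minimum side
        x = COORD_MIN
    elif remainder == 1:    # edge area on the Y-axis minimum side
        y = COORD_MIN
    elif remainder == 2:    # edge area on the X-axis maximum side
        x = COORD_MAX
    elif remainder == 3:    # edge area on the Y-axis maximum side
        y = COORD_MAX
    # remainder == 4: central area code, keep the current estimate
    return x, y

# The photosensitive module reads code X+2 while the robot moves along the
# X axis, so the X coordinate is pinned to the maximum value.
print(position_from_code(map_code=42, x_min_code=40, has_position_code=True,
                         current=(70, 70)))   # -> (110, 70)
```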
Step 253, calculating the motion error amount of the robot performing the linear motion by the position information and the yaw angle.
The motion error amount of the robot in linear motion comprises a position error and a first yaw angle error, wherein the position error refers to an error between current position information of the robot and the center position of the jigsaw, and the first yaw angle error refers to an error between the current yaw angle of the robot and a target yaw angle.
The target yaw angle is obtained according to the angle code read by the photosensitive module and represents the target motion direction of the robot. As shown in fig. 3, the angle code laid on the grid map is marked with an X axis and a Y axis. When the current yaw angle of the robot, corresponding to the angle code read by the photosensitive module, is 45°-135°, the robot is moving along the X axis and its target motion direction is the positive direction of the X axis, so the target yaw angle is 90°. Similarly, the target yaw angle is 180° when the current yaw angle is 135°-225°, 270° when the current yaw angle is 225°-315°, and 0° when the current yaw angle is 0°-45° or 315°-360°.
Therefore, the target yaw angle of the robot is related to the current yaw angle through the preset mapping relation, and the target yaw angle can be correspondingly obtained according to the current yaw angle of the robot.
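A minimal Python sketch of this preset mapping is given below; the function name is a hypothetical illustration, not taken from the patent.
```python
# Illustrative sketch of the preset mapping from the current yaw angle (read
# from the angle code) to the target yaw angle described above.

def target_yaw(current_yaw_deg: float) -> float:
    yaw = current_yaw_deg % 360.0
    if 45.0 <= yaw < 135.0:
        return 90.0     # target motion direction: positive X axis
    if 135.0 <= yaw < 225.0:
        return 180.0
    if 225.0 <= yaw < 315.0:
        return 270.0
    return 0.0          # 0°-45° or 315°-360°

print(target_yaw(100.0))   # -> 90.0
print(target_yaw(330.0))   # -> 0.0
```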
In an exemplary embodiment, as shown in fig. 9, calculating the motion error amount of the robot performing the linear motion according to the position information of the robot and the yaw angle includes at least the steps of:
in step 2531, a difference between the position information and the center position of the puzzle and a difference between the yaw angle and a target yaw angle, which is associated with the yaw angle through a preset mapping relationship, are calculated, respectively, to obtain a position error and a first yaw angle error of the robot.
Wherein, since the current position information of the robot is obtained according to the description in step 251, the current position error of the robot can be obtained by calculating the difference between the position information and the center position of the jigsaw where the robot is located.
When the robot moves on the jigsaw along the X-axis direction, the position error of the robot is reflected by the X-axis position coordinate, and the position error of the robot can be obtained by calculating the difference between the X-axis position coordinate and the center position coordinate of the jigsaw. Similarly, when the robot moves on the jigsaw along the Y-axis direction, the position error of the robot is reflected by the Y-axis position coordinate, and the position error of the robot can be obtained by calculating the difference between the Y-axis position coordinate and the center position coordinate of the jigsaw.
The current first yaw angle error of the robot can be obtained by calculating the difference between the current yaw angle of the robot and the target yaw angle. The process of acquiring the current yaw angle and the target yaw angle of the robot is described in the foregoing, and details are not repeated here.
In step 2533, the position error is converted into a second yaw angle error according to the set conversion rule.
The position error is the deviation between the current position of the robot and the center position of the jigsaw, and the robot cannot translate in the correction process, so that the position error needs to be corrected in the motion process after rotating for a certain angle, and the position error of the robot needs to be converted into a second yaw angle error to correct the motion gesture of the robot.
For example, the motion error amount of the robot may be represented by the formula σ = k1·σp + σa, where σ denotes the motion error amount, σp the position error, σa the first yaw angle error, and k1 the set conversion rule. The magnitude of k1 is a set value determined according to actual debugging experience and correction effect, for example 0.9, while the sign of k1 depends on the target yaw angle and the movement direction of the robot.
As shown in fig. 3, the position coordinate values along a coordinate axis direction in the puzzle increase gradually; the yaw angle increases gradually when the robot rotates to the left, and the motion error amount increases with it, while the yaw angle decreases gradually when the robot rotates to the right, and the motion error amount decreases with it. This embodiment therefore sets the sign of k1 according to the situation, as follows.
In the forward state of the robot, k1 is taken positive if the target yaw angle is 0° or 270°, and negative if the target yaw angle is 90° or 180°. In the backward state of the robot, k1 is taken negative if the target yaw angle is 0° or 270°, and positive if the target yaw angle is 90° or 180°.
Thus, after the sign of k1 is determined, the second yaw angle error is obtained by calculating the product of the robot's position error and k1.
In step 2535, a motion error amount is obtained by calculating a sum of the first yaw angle error and the second yaw angle error.
Still referring to the calculation formula of the motion error amount, after the second yaw angle error is obtained, the sum of the first yaw angle error and the second yaw angle error is calculated, so as to obtain the motion error amount of the robot.
Therefore, the motion error quantity comprehensively considers the current position error and yaw angle error of the robot, and the current attitude error of the robot can be accurately described.
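Putting steps 2531-2535 together, the Python sketch below computes σ = k1·σp + σa with the sign of k1 chosen as described above; the magnitude 0.9 follows the example in the text, and all other names are hypothetical.
```python
# Illustrative sketch of steps 2531-2535: the motion error amount
# sigma = k1 * sigma_p + sigma_a, with the sign of k1 taken from the target
# yaw angle and the movement direction.

K1_MAGNITUDE = 0.9   # set value from debugging experience and correction effect

def k1_sign(target_yaw_deg: float, moving_forward: bool) -> int:
    sign = 1 if target_yaw_deg in (0.0, 270.0) else -1  # forward-state rule
    return sign if moving_forward else -sign            # sign flips when backing up

def motion_error(position: float, puzzle_center: float,
                 yaw_deg: float, target_yaw_deg: float,
                 moving_forward: bool = True) -> float:
    sigma_p = position - puzzle_center                  # position error
    sigma_a = yaw_deg - target_yaw_deg                  # first yaw angle error
    k1 = k1_sign(target_yaw_deg, moving_forward) * K1_MAGNITUDE
    return k1 * sigma_p + sigma_a                       # motion error amount

# The robot is 4 units past the puzzle center line and 2° off the target yaw.
print(motion_error(position=74, puzzle_center=70, yaw_deg=92, target_yaw_deg=90))
```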
And 255, controlling the robot to correct the motion gesture in real time according to the value of the motion error amount and the positive and negative directions, wherein the positive and negative directions indicate the motion gesture correction direction of the robot.
The magnitude of the motion error amount reflects the error degree of the current motion gesture of the robot, and when the magnitude of the motion error amount is larger than a set value (for example, 5 degrees), the error degree of the current gesture of the robot is higher, and the motion gesture needs to be corrected.
The positive and negative directions of the motion error amount are used for indicating the direction of correcting the motion gesture of the robot so as to gradually reduce the value of the motion error amount of the robot in the motion process until the value of the motion error amount is smaller than a set value. Illustratively, the robot is controlled to rotate to the right when the amount of motion error is greater than 5 ° and to rotate to the left when the amount of motion error is less than-5 °.
In another exemplary embodiment, during correction of the motion gesture, the robot stops correcting when the magnitude of the motion error amount falls below a first set value, where the first set value is smaller than the aforementioned set value. For example, with a first set value of 3°, when the motion error amount is between -3° and 3° the motion posture error of the robot is small, no correction is needed, and the robot is controlled to continue its linear motion.
It should be noted that the first set value is set slightly smaller than the set value in order to avoid frequent switching between correction and linear motion when the yaw angle error is near the critical value.
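A minimal Python sketch of this hysteresis is shown below, assuming the 5° and 3° example thresholds; the command strings and function names are hypothetical.
```python
# Illustrative sketch of the hysteresis described above: correction starts when
# the motion error amount exceeds the set value (5°) and stops once it falls
# back below the smaller first set value (3°); the sign chooses the direction.

START_THRESHOLD = 5.0   # set value: begin correcting beyond this error
STOP_THRESHOLD = 3.0    # first set value: resume straight motion below this

def correction_command(error_deg: float, correcting: bool) -> tuple[str, bool]:
    """Return (steering command, updated correcting flag) for one control cycle."""
    if not correcting and abs(error_deg) > START_THRESHOLD:
        correcting = True
    elif correcting and abs(error_deg) < STOP_THRESHOLD:
        correcting = False
    if not correcting:
        return "straight", correcting
    return ("turn_right" if error_deg > 0 else "turn_left"), correcting

state = False
for err in (6.2, 4.0, 2.5, -1.0):
    cmd, state = correction_command(err, state)
    print(err, cmd)   # 6.2 turn_right, 4.0 turn_right, 2.5 straight, -1.0 straight
```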
It should be further noted that, for the road map, the motion gesture of the robot is still corrected in real time according to the above-described process, which is not described herein.
Therefore, the method provided by the embodiment can control the robot to correct the motion gesture in real time in the process of performing linear motion on the jigsaw, so that the robot can move along the central area of the grid map or the road area of the road map in the process of performing linear motion.
In another exemplary embodiment, the robot can also accurately stop at the center position of the puzzle according to the acquired motion stop signal while the robot is performing the linear motion of the puzzle. As shown in fig. 10, in an exemplary embodiment, a method of controlling a robot to accurately stop at a center position of a puzzle during a linear motion includes at least the steps of:
in the linear motion performed, the robot acquires a triggered motion stop signal and performs reading of the common boundary code in the map area code according to the motion stop signal, step 310.
Wherein the motion stop signal acquired by the robot can still be received by the remote control receiving module shown in fig. 5. When a user triggers a stop button on the remote controller, the remote controller generates a motion stop signal and sends the motion stop signal to the remote control receiving module, so that the robot acquires the motion stop signal.
In order to ensure that the robot accurately stops at the center of the jigsaw, the robot needs to read the public boundary code in the map area code through the photosensitive module after acquiring the motion stop signal. The purpose of the robot reading the common boundary code is to obtain a common boundary area of the tiles, the common boundary area being an adjacent area of each tile splice, the common boundary area being used to locate a stop point of the robot.
And 330, if the common boundary code is read, controlling the robot to stop moving when it moves to the central position of the puzzle, or calculating, with the encoder configured on the robot, the movement distance after the common boundary code is read, and controlling the robot to stop moving when the movement distance reaches half the length of the puzzle.
If the robot moves to enter the public boundary area of the current jigsaw, the robot correspondingly reads the public boundary code, and the robot is controlled to stop moving when moving to the central position of the current jigsaw. If the robot moves to the common boundary area moving out of the current jigsaw and still reads the common boundary code, the robot is controlled to stop moving when moving to the center position of the next jigsaw.
It should be noted that, since the adjacent spliced tiles each have a common boundary area, if the robot reads the common boundary area code again in the motion continued after reading the common boundary code, it means that the robot moves to the common boundary area moving out of the current tile when the common boundary area is read for the first time, otherwise, it is determined that the robot moves to the common boundary area entering the current tile.
If the robot can read the position coordinates of the central position of the jigsaw, the robot is controlled to stop moving when the photosensitive module reads the position coordinates of the central position, so that the robot stops moving at an accurate place.
If the robot cannot read the position coordinates of the center position of the puzzle, it is necessary to acquire the center position of the puzzle through an encoder. Because the distance from the public boundary area of the jigsaw to the central position is half of the length of the jigsaw, and the length of each jigsaw is fixed, when the robot reads the public boundary code, the movement distance after the public boundary code is calculated through the encoder, and when the movement distance reaches half of the length of the jigsaw, the robot is controlled to stop moving, so that the robot can be ensured to stop at the central position of the jigsaw.
In an exemplary embodiment, the set stopping distance will be slightly greater than half the length of the puzzle, thereby avoiding the robot from stopping movement in advance.
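The encoder-based branch of step 330 can be sketched as below; the puzzle length and margin values are assumptions for illustration, not values from the patent.
```python
# Illustrative sketch of the encoder-based stop in step 330: once the common
# boundary code has been read, the distance traveled is counted and the robot
# stops just past half a puzzle length; the small margin avoids stopping early,
# as noted above.

PUZZLE_LENGTH_MM = 160.0
STOP_MARGIN_MM = 2.0      # slightly more than half a puzzle, per the note above

def should_stop(distance_since_boundary_mm: float) -> bool:
    return distance_since_boundary_mm >= PUZZLE_LENGTH_MM / 2 + STOP_MARGIN_MM

print(should_stop(70.0))   # False: still short of the puzzle center
print(should_stop(83.0))   # True: at (or just past) the center
```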
And 350, otherwise, calculating, with the encoder, the movement distance after the robot acquires the motion stop signal, and stopping the motion when the movement distance reaches an integer multiple of the puzzle length.
If the map region code laid on the puzzle is worn, or the puzzle is a blank map, so that the robot cannot read the common boundary code, the encoder calculates the movement distance after the robot acquires the motion stop signal, and the robot stops moving when the movement distance reaches an integer multiple of the puzzle length.
When the robot moves on the jigsaw worn by the map area codes or moves on the blank map, the position where the robot starts to move is the center position of the jigsaw, the robot starts to measure the movement distance from the beginning of movement, after the movement stop signal is obtained, the robot is controlled to stop moving when the movement distance reaches the integral multiple of the length of the jigsaw, and therefore the robot is controlled to move according to the movement strategy of the grid map and the road map.
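Step 350 can be sketched as follows in Python; the puzzle length and names are assumed values used only for illustration.
```python
# Illustrative sketch of step 350: with no readable codes the robot starts from
# a puzzle center, measures distance from the start of motion, and after the
# stop signal keeps moving until the total distance is an integer multiple of
# the puzzle length.

import math

PUZZLE_LENGTH_MM = 160.0

def stop_distance(distance_at_stop_signal_mm: float) -> float:
    """Total travel distance at which to stop: the next integer multiple of
    the puzzle length at or beyond the point where the stop signal arrived."""
    multiples = math.ceil(distance_at_stop_signal_mm / PUZZLE_LENGTH_MM)
    return multiples * PUZZLE_LENGTH_MM

print(stop_distance(250.0))   # -> 320.0: stop at the second puzzle center ahead
```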
Therefore, the method provided by the embodiment can be suitable for the motion stopping strategies of the robot under different conditions, so that the robot is controlled to accurately stop at the center position of the jigsaw.
In another exemplary embodiment, as shown in fig. 11, when the robot performs a rotational motion in place in a center area of the puzzle, a process of correcting a motion gesture of the robot in real time includes at least the following steps:
in step 410, the robot acquires the yaw angle in real time through map region coding or motion gesture information in the in-situ rotation motion.
It should be noted that the rotational movement performed by the robot is also performed based on the triggered movement control signal. For example, in the implementation environment shown in fig. 1, after the robot moves to the center position of the jigsaw 3, the user controls the robot to rotate 90 degrees to the right in situ at the center position by triggering a rotation button on the remote controller.
In step 430, a difference between the yaw angle and a target rotation angle contained in the motion control signal is calculated.
The target rotation angle included in the motion control signal is a target angle for controlling the robot to rotate in situ, and in the in-situ rotation motion of the robot, when the current yaw angle of the robot is consistent with the target rotation angle, the rotation of the robot reaches the target gesture, so that the rotation motion can be stopped.
Since the target rotation angle generally refers to an angle value rotated in a certain direction, it is necessary to convert the yaw angle or the target rotation angle into each other before calculating the difference between the yaw angle and the target rotation angle. For example, a target yaw angle at which the robot rotates to reach a target attitude may be obtained from the target rotation angle, and then a difference between a current yaw angle of the robot and the target yaw angle during rotation may be calculated.
Alternatively, when the target rotation angle included in the motion control signal is itself a target yaw angle, the difference between the robot's current yaw angle and the target yaw angle is calculated directly.
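The conversion and the difference calculation can be illustrated with the short Python sketch below. The clockwise sign convention and the function names are assumptions; angles are kept in degrees and wrapped to [0, 360).

def target_yaw_from_rotation(current_yaw_deg, rotation_deg):
    # Yaw the robot should reach after rotating rotation_deg to the right
    # (positive rotation_deg assumed to mean a clockwise turn).
    return (current_yaw_deg + rotation_deg) % 360.0

def yaw_difference(yaw_deg, target_yaw_deg):
    # Smallest absolute difference between two yaw angles, in degrees.
    diff = (yaw_deg - target_yaw_deg) % 360.0
    return min(diff, 360.0 - diff)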
In step 450, the robot is controlled to stop the in-situ rotational motion when the difference is smaller than a set threshold.
When the difference between the robot's current yaw angle and the target yaw angle is smaller than the set threshold, the rotation performed by the robot has reached the target posture, and the robot is controlled to stop the in-situ rotational motion.
In one exemplary embodiment, the current yaw angle of the robot is obtained under a dual constraint of the photosensitive module and the micromechanical inertial module. Under this dual constraint, the robot obtains the current yaw angle in real time through the photosensitive module and the micromechanical inertial module separately and calculates each difference from the target yaw angle; the robot is considered to have rotated to the target position as soon as the difference obtained from either the photosensitive module or the micromechanical inertial module is smaller than the set threshold.
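A minimal sketch of this dual-constraint check is given below; the threshold value and the helper are assumptions used only to illustrate the "either source within threshold" rule.

SET_THRESHOLD_DEG = 2.0   # illustrative threshold; no value is fixed by this embodiment

def _wrapped_diff(a_deg, b_deg):
    d = (a_deg - b_deg) % 360.0
    return min(d, 360.0 - d)

def rotation_reached(yaw_photosensor, yaw_imu, target_yaw):
    # Dual constraint: the target posture is considered reached as soon as
    # the difference reported by EITHER the photosensitive module OR the
    # micromechanical inertial module falls below the set threshold.
    return (_wrapped_diff(yaw_photosensor, target_yaw) < SET_THRESHOLD_DEG
            or _wrapped_diff(yaw_imu, target_yaw) < SET_THRESHOLD_DEG)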
Therefore, the method provided by the embodiment can control the robot to accurately perform the in-situ rotation motion.
In another exemplary embodiment, the method for controlling the motion of the robot further includes a process of correcting the motion speed of the robot in real time, which specifically includes the following steps:
in the motion state, the robot corrects the motion speed error in real time according to the coding scale of the configured encoder so that the motion speed of the robot is consistent with the target motion speed set in the motion control signal.
The motion speed error of the robot is the difference between the real-time speed of the robot and the target speed contained in the motion control signal. As previously described, the real-time speed of the robot is obtained from the encoded scale values acquired in real-time by the encoder.
After the real-time speed of the robot is obtained through the encoder, the PWM value of the driving motor is adjusted according to the difference value between the real-time speed and the target speed, so that the movement speed of the robot is adjusted to be consistent with the target speed, and the robot moves on the jigsaw according to the target speed set by the movement control signal.
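The speed correction can be pictured as a simple feedback loop. The proportional update below is only an illustrative choice: the embodiment states that the PWM value is adjusted according to the speed difference, but does not name a specific control law, gain, or PWM range.

class SpeedCorrector:
    # Adjusts the drive-motor PWM so the encoder-measured speed tracks the
    # target speed set in the motion control signal.
    def __init__(self, kp=5.0, pwm_min=0, pwm_max=255):
        self.kp = kp              # assumed proportional gain
        self.pwm_min = pwm_min
        self.pwm_max = pwm_max

    def update(self, pwm, real_time_speed, target_speed):
        error = target_speed - real_time_speed      # speed error from encoder
        pwm += int(self.kp * error)                 # proportional correction
        return max(self.pwm_min, min(self.pwm_max, pwm))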
Fig. 12 shows a motion control apparatus of a robot according to an exemplary embodiment. It should be noted that the movement of the robot is performed on a plurality of spliced puzzles, at least one puzzle is divided into a plurality of map areas, the map areas contain map area codes, and the robot is configured with a photosensitive module and a micromechanical inertial module.
As shown in fig. 12, the apparatus shown in this exemplary embodiment includes a measurement error correction module 510, a motion state switching module 530, and a motion posture correction module 550.
The measurement error correction module 510 is configured to control the robot to correct a measurement error of a micromechanical inertial element in the micromechanical inertial module according to the static attitude information output by the micromechanical inertial module and the map area code read by the photosensitive module in a static state.
The motion state switching module 530 is configured to control the robot to switch from the stationary state to the motion state according to a triggered motion control signal, where the motion control signal indicates the robot to move through the designated map area.
The motion posture correction module 550 is configured to correct the motion posture of the robot in real time according to the map area code read by the photosensitive module or the motion posture information output by the micromechanical inertial module, where the motion posture information is obtained by the micromechanical inertial module performing posture calculation on the measurement result output by the micromechanical inertial element.
In another exemplary embodiment, the measurement error correction module 510 includes a first information acquisition unit, a measurement error acquisition unit, and an error correction unit.
The first information acquisition unit is used for controlling the robot to acquire static attitude information output by the micro-mechanical inertia module and map area codes read by the photosensitive module in a static state.
The measuring error obtaining unit is used for obtaining the measuring error of the micro-mechanical inertial element in the micro-mechanical inertial module by carrying out difference value calculation on the static attitude information and the map region code and carrying out filtering processing on the calculation result.
The error correction unit is used for calculating the difference value between the information obtained by measuring the micromechanical inertial element and the measuring error, and obtaining the difference value as the measuring result output by the micromechanical inertial element.
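To illustrate how these units might cooperate, the following sketch estimates and applies the measurement error. The averaging used as the filtering step, the yaw-only representation, and the function names are assumptions; the embodiment does not specify a particular filter.

def estimate_measurement_error(static_attitude_samples, map_code_attitude):
    # Difference between the attitude output at rest and the reference
    # attitude derived from the map area code, followed by a simple
    # averaging filter (illustrative choice of filter).
    diffs = [s - map_code_attitude for s in static_attitude_samples]
    return sum(diffs) / len(diffs)

def corrected_measurement(raw_value, measurement_error):
    # Output of the error correction unit: the difference between the raw
    # measurement of the inertial element and the estimated error.
    return raw_value - measurement_error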
In another exemplary embodiment, the motion state to which the robot is switched includes the robot moving linearly through the designated map area, and the motion posture correction module 550 includes a second information acquisition unit, a motion error amount acquisition unit, and a motion control unit.
The second information acquisition unit is used for controlling the robot to acquire its position information and yaw angle on the jigsaw according to the map area code or the motion posture information.
The motion error amount acquisition unit is used for calculating the motion error amount of the robot's linear motion from the position information and the yaw angle.
The motion control unit is used for controlling the robot to correct its motion posture in real time according to the numerical value and the positive or negative sign of the motion error amount, where the sign indicates the direction of the motion posture correction.
In another exemplary embodiment, the motion error amount acquisition unit includes an information calculation subunit, an error conversion subunit, and an error calculation subunit.
The information calculation subunit is used for respectively calculating the difference between the position information and the central position of the jigsaw and the difference between the yaw angle and the target yaw angle, so as to obtain the position error and the first yaw angle error of the robot, and the target yaw angle is related to the yaw angle through a preset mapping relation.
The error conversion subunit is used for converting the position error into a second yaw angle error according to a set conversion rule.
The error calculation subunit is configured to obtain the motion error amount by calculating the sum of the first yaw angle error and the second yaw angle error.
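A minimal sketch of this error combination follows. The linear conversion rule and its gain are assumptions, since the conversion from position error to the second yaw angle error is only said to follow a "set rule"; the parameter names are likewise illustrative.

def motion_error_amount(position, center, yaw_deg, target_yaw_deg, k_position=1.0):
    # first yaw angle error: deviation of the heading from the target yaw
    first_yaw_error = yaw_deg - target_yaw_deg
    # position error: lateral deviation from the jigsaw centre line
    position_error = position - center
    # second yaw angle error: position error converted by an assumed
    # linear rule with gain k_position (degrees per metre)
    second_yaw_error = k_position * position_error
    # the sign of the sum indicates the correction direction
    return first_yaw_error + second_yaw_error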
In another exemplary embodiment, the motion posture correction module 550 further includes a motion stop signal processing unit, a first motion control unit, and a second motion control unit.
The motion stop signal processing unit is used for controlling the robot to acquire a triggered motion stop signal in the linear motion, and reading the common boundary code in the map region code according to the motion stop signal.
The first motion control unit is used for, when the common boundary code is read, controlling the robot to stop when it moves to the center position of the jigsaw, or calculating the movement distance after the common boundary code through the encoder configured on the robot and controlling the robot to stop moving when the movement distance reaches half the jigsaw length.
The second motion control unit is used for, when the common boundary code is not read, calculating through the encoder the movement distance after the robot acquires the motion stop signal, and stopping the motion when the movement distance reaches an integral multiple of the jigsaw length.
In another exemplary embodiment, the motion state to which the robot is switched further includes an in-situ rotational motion of the robot on the designated map area, and the motion posture correction module 550 includes a third information acquisition unit, a yaw angle difference calculation unit, and a rotation control unit.
The third information acquisition unit is used for controlling the robot to acquire the yaw angle in real time through the map area code or the motion posture information during the in-situ rotational motion.
The yaw angle difference calculation unit is used for calculating a difference between the yaw angle and a target rotation angle contained in the motion control signal.
The rotation control unit is used for controlling the robot to stop the in-situ rotation movement when the difference value is smaller than a set threshold value.
It should be noted that the apparatus provided in the foregoing embodiments and the method provided in the foregoing embodiments belong to the same concept; the specific manner in which each module performs its operations has been described in detail in the method embodiments and is not repeated here.
In an exemplary embodiment, a robot includes a processor and a memory, wherein the memory is configured to store executable instructions of the processor, the processor configured to perform the method of controlling movement of the robot described in any of the above exemplary embodiments via execution of the executable instructions.
In an exemplary embodiment, a computer readable storage medium has stored thereon a computer program which, when executed by a processor, implements the method for controlling the motion of a robot described in any of the above exemplary embodiments.
The foregoing is merely a preferred exemplary embodiment of the present application and is not intended to limit the embodiments of the present application, and those skilled in the art may make various changes and modifications according to the main concept and spirit of the present application, so that the protection scope of the present application shall be subject to the protection scope of the claims.

Claims (10)

1. A method for controlling movement of a robot, wherein movement of the robot is performed on a plurality of tiles spliced, at least one tile is divided into a plurality of map areas, the map areas contain map area codes, the robot is configured with a photosensitive module and a micromechanical inertial module, the method comprising:
in a static state, the robot corrects the measurement error of a micro-mechanical inertial element in the micro-mechanical inertial module according to static attitude information output by the micro-mechanical inertial module and map area codes read by the photosensitive module;
According to the triggered motion control signal, the robot is switched from the static state to the motion state, and the motion control signal indicates the robot to move through the designated map area;
correcting the motion gesture of the robot in real time according to the map area code read by the photosensitive module or the motion gesture information output by the micro-mechanical inertial module, wherein the motion gesture information is obtained by performing gesture calculation on a measurement result output by a micro-mechanical inertial element by the micro-mechanical inertial module;
the motion state comprises that the robot carries out linear motion through the designated map area, and the correcting the motion gesture of the robot in real time according to the map area code read by the photosensitive module or the motion gesture information output by the micromechanical inertial module comprises the following steps:
the robot obtains position information and yaw angle on the jigsaw according to the map region code or the motion gesture information;
calculating a motion error amount of the robot performing the linear motion by the position information and the yaw angle;
and controlling the robot to correct the motion gesture in real time according to the numerical value of the motion error amount and the positive and negative directions, wherein the positive and negative directions indicate the motion gesture correction direction of the robot.
2. The method according to claim 1, wherein the correcting the measurement error of the micromechanical inertial element in the micromechanical inertial module by the robot in the stationary state according to the stationary posture information output by the micromechanical inertial module and the map area code read by the photosensitive module includes:
in a static state, the robot acquires static attitude information output by the micro-mechanical inertial module and map area codes read by the photosensitive module;
calculating the difference value between the static attitude information and the map region code, and filtering the calculation result to obtain a measurement error of a micromechanical inertial element in the micromechanical inertial module;
and calculating a difference value between the information obtained by measuring the micromechanical inertial element and the measurement error, and obtaining the difference value as a measurement result output by the micromechanical inertial element.
3. The method according to claim 1, wherein the real-time correction of the motion gesture of the robot based on the map area code read by the photosensitive module or the motion gesture information output by the micromechanical inertial module comprises:
When the photosensitive module reads the map region code, the robot corrects the motion gesture of the robot in real time according to the map region code;
otherwise, the robot executes real-time correction of the motion gesture according to the motion gesture information output by the micro mechanical inertia module.
4. The method according to claim 1, wherein the calculating a motion error amount of the robot performing the linear motion from the position information and the yaw angle includes:
respectively calculating the difference between the position information and the central position of the jigsaw and the difference between the yaw angle and a target yaw angle to obtain a position error and a first yaw angle error of the robot, wherein the target yaw angle is related to the yaw angle through a preset mapping relation;
converting the position error into a second yaw angle error according to a set conversion rule;
the motion error amount is obtained by calculating a sum of the first yaw angle error and the second yaw angle error.
5. The method according to claim 1, wherein the method further comprises:
in the linear motion, the robot acquires a triggered motion stop signal, and reads a common boundary code in the map region code according to the motion stop signal;
if the common boundary code is read, controlling the robot to stop moving when moving to the central position of the jigsaw; or calculating the movement distance after the common boundary code by an encoder configured on the robot, and controlling the robot to stop moving when the movement distance reaches half the length of the jigsaw;
otherwise, calculating the movement distance of the robot after acquiring the movement stop signal through the encoder, and stopping movement when the movement distance reaches an integral multiple of the jigsaw length.
6. The method of claim 1, wherein the motion state includes a rotational motion of the robot in place on a designated map area, and the real-time correction of the motion pose of the robot based on the map area code read by the photosensitive module or the motion pose information output by the micromechanical inertial module includes:
the robot acquires a yaw angle in real time through the map region code or the motion gesture information in the in-situ rotation motion;
calculating a difference between the yaw angle and a target rotation angle contained by the motion control signal;
And controlling the robot to stop the in-situ rotation movement when the difference value is smaller than a set threshold value.
7. The method according to claim 1, wherein the method further comprises:
in the motion state, the robot corrects the motion speed error in real time according to the coding scale of the configured encoder, so that the motion speed of the robot is consistent with the target motion speed set in the motion control signal.
8. A motion control device for a robot, wherein the motion of the robot is performed on a plurality of tiles that are tiled, at least one tile being divided into a plurality of map areas and the map areas containing map area codes, the robot being configured with a photosensitive module and a micromechanical inertial module, the device comprising:
the measuring error correction module is used for controlling the robot to correct the measuring error of the micro-mechanical inertial element in the micro-mechanical inertial module according to the static attitude information output by the micro-mechanical inertial module and the map area code read by the photosensitive module in a static state;
the motion state switching module is used for controlling the robot to switch from the static state to the motion state according to a triggered motion control signal, and the motion control signal indicates the robot to move through the designated map area;
The motion gesture correction module is used for correcting the motion gesture of the robot in real time according to the map region code read by the photosensitive module or the motion gesture information output by the micromechanical inertial module, wherein the motion gesture information is obtained by performing gesture calculation on a measurement result output by the micromechanical inertial element by the micromechanical inertial module;
the motion state comprises that the robot carries out linear motion through the designated map area, and the motion gesture correction module is used for acquiring position information and yaw angle on the jigsaw according to the map area code or the motion gesture information when correcting the motion gesture of the robot in real time according to the map area code read by the photosensitive module or the motion gesture information output by the micromechanical inertia module; calculating a motion error amount of the robot performing the linear motion by the position information and the yaw angle; and controlling the robot to correct the motion gesture in real time according to the numerical value of the motion error amount and the positive and negative directions, wherein the positive and negative directions indicate the motion gesture correction direction of the robot.
9. A robot, comprising:
A processor; and
a memory for storing executable instructions of the processor;
wherein the processor is configured to perform the method of any one of claims 1 to 7 via execution of the executable instructions.
10. A computer readable storage medium, on which a computer program is stored, characterized in that the computer program, when being executed by a processor, implements the method of any of claims 1 to 7.
CN201910579972.3A 2019-06-28 2019-06-28 Robot motion control method and device, robot and storage medium Active CN112147995B (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN201910579972.3A CN112147995B (en) 2019-06-28 2019-06-28 Robot motion control method and device, robot and storage medium

Applications Claiming Priority (1)

Application Number Priority Date Filing Date Title
CN201910579972.3A CN112147995B (en) 2019-06-28 2019-06-28 Robot motion control method and device, robot and storage medium

Publications (2)

Publication Number Publication Date
CN112147995A CN112147995A (en) 2020-12-29
CN112147995B true CN112147995B (en) 2024-02-27

Family

ID=73891609

Family Applications (1)

Application Number Title Priority Date Filing Date
CN201910579972.3A Active CN112147995B (en) 2019-06-28 2019-06-28 Robot motion control method and device, robot and storage medium

Country Status (1)

Country Link
CN (1) CN112147995B (en)

Families Citing this family (3)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN113029200A (en) * 2021-03-29 2021-06-25 上海景吾智能科技有限公司 Method, system and medium for testing course angle and accuracy based on robot sensor
CN114571482B (en) * 2022-03-30 2023-11-03 长沙朗源电子科技有限公司 Painting robot system and control method of painting robot
CN117086868B (en) * 2023-08-09 2024-04-09 北京小米机器人技术有限公司 Robot, control method and device thereof, and storage medium

Family Cites Families (2)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
JP4300199B2 (en) * 2005-06-13 2009-07-22 株式会社東芝 Mobile robot, mobile robot position and orientation calculation method, mobile robot autonomous traveling system
EP3428885A4 (en) * 2016-03-09 2019-08-14 Guangzhou Airob Robot Technology Co., Ltd. Map construction method, and correction method and apparatus

Patent Citations (8)

* Cited by examiner, † Cited by third party
Publication number Priority date Publication date Assignee Title
CN205049153U (en) * 2015-09-24 2016-02-24 北京理工大学 Sustainable navigation data collection system of vehicle under environment of photoelectric type GPS blind area
CN106382934A (en) * 2016-11-16 2017-02-08 深圳普智联科机器人技术有限公司 High-precision moving robot positioning system and method
CN106780325A (en) * 2016-11-29 2017-05-31 维沃移动通信有限公司 A kind of picture joining method and mobile terminal
CN106647814A (en) * 2016-12-01 2017-05-10 华中科技大学 System and method of unmanned aerial vehicle visual sense assistant position and flight control based on two-dimensional landmark identification
CN107782305A (en) * 2017-09-22 2018-03-09 郑州郑大智能科技股份有限公司 A kind of method for positioning mobile robot based on digital alphabet identification
CN108592906A (en) * 2018-03-30 2018-09-28 合肥工业大学 AGV complex navigation methods based on Quick Response Code and inertial sensor
CN108592914A (en) * 2018-04-08 2018-09-28 河南科技学院 The positioning of complex region inspecting robot, navigation and time service method under no GPS scenario
CN109752003A (en) * 2018-12-26 2019-05-14 浙江大学 A kind of robot vision inertia dotted line characteristic positioning method and device

Non-Patent Citations (2)

* Cited by examiner, † Cited by third party
Title
Application of the MEMS Inertial Sensor ADIS16355 in Attitude Measurement; Li Yongjian et al.; Journal of Data Acquisition and Processing (04); 501-507 *
AGV Fusion Navigation System Based on Inertial Navigation, RFID and Image Recognition; Wang Sidi et al.; Hoisting and Conveying Machinery (08); 81-84 *

Also Published As

Publication number Publication date
CN112147995A (en) 2020-12-29

Similar Documents

Publication Publication Date Title
CN112147995B (en) Robot motion control method and device, robot and storage medium
KR100772915B1 (en) Apparatus and method for correcting bias of gyroscope on a moving robot
CN106708051B (en) Navigation system and method based on two-dimensional code, navigation marker and navigation controller
KR100886340B1 (en) Apparatus and method for calibrating gyro-sensor of mobile robot
CN103033184B (en) Error correction method, device and system for inertial navigation system
CN108731673B (en) Autonomous navigation positioning method and system for robot
JP5610870B2 (en) Unmanned traveling vehicle guidance device and unmanned traveling vehicle guidance method
JP6977921B2 (en) Mapping method, image collection processing system and positioning method
CN102666032A (en) Slip detection apparatus and method for a mobile robot
CN112113582A (en) Time synchronization processing method, electronic device, and storage medium
US10365101B2 (en) Movable marking system, controlling method for movable marking apparatus, and computer readable recording medium
JP2019056571A (en) Survey system
CN111427361A (en) Recharging method, recharging device and robot
JP2018184815A (en) Calibration device of imaging device, work machine and calibration method
WO2020264089A1 (en) Gyroscope and optical flow sensor scale calibration
CN116382315B (en) Picture construction method and system thereof, underwater robot, storage medium and electronic equipment
KR20110081701A (en) Calibration apparatus for gyro sensor
CN103542864B (en) A kind of inertial navigation fall into a trap step method and device
CN105806331A (en) Positioning method for indoor robot and indoor robot
CN115990880A (en) Robot course adjustment method, robot, device and computer storage medium
JP6734764B2 (en) Position estimation device, map information preparation device, moving body, position estimation method and program
CN114789439B (en) Slope positioning correction method, device, robot and readable storage medium
CN110989596B (en) Pile alignment control method and device, intelligent robot and storage medium
CN114066963A (en) Drawing construction method and device and robot
US20240083036A1 (en) Method and apparatus for robot system management

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination
GR01 Patent grant