CN116300873A - Robot movement control method, device, equipment and storage medium

Info

Publication number
CN116300873A
CN116300873A (application CN202310009748.7A)
Authority
CN
China
Prior art keywords
distance
mobile robot
target
sensor
attitude angle
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202310009748.7A
Other languages
Chinese (zh)
Inventor
李勇
成慧
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Sun Yat Sen University
Original Assignee
Guangzhou Shiyuan Electronics Thecnology Co Ltd
Sun Yat Sen University
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Guangzhou Shiyuan Electronics Thecnology Co Ltd, Sun Yat Sen University filed Critical Guangzhou Shiyuan Electronics Thecnology Co Ltd
Priority to CN202310009748.7A
Publication of CN116300873A
Legal status: Pending

Classifications

    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0212Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory
    • G05D1/0223Control of position or course in two dimensions specially adapted to land vehicles with means for defining a desired trajectory involving speed control of the vehicle
    • GPHYSICS
    • G05CONTROLLING; REGULATING
    • G05DSYSTEMS FOR CONTROLLING OR REGULATING NON-ELECTRIC VARIABLES
    • G05D1/00Control of position, course or altitude of land, water, air, or space vehicles, e.g. automatic pilot
    • G05D1/02Control of position or course in two dimensions
    • G05D1/021Control of position or course in two dimensions specially adapted to land vehicles
    • G05D1/0259Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means
    • G05D1/0263Control of position or course in two dimensions specially adapted to land vehicles using magnetic or electromagnetic means using magnetic strips

Landscapes

  • Engineering & Computer Science (AREA)
  • Physics & Mathematics (AREA)
  • Aviation & Aerospace Engineering (AREA)
  • Radar, Positioning & Navigation (AREA)
  • Remote Sensing (AREA)
  • General Physics & Mathematics (AREA)
  • Automation & Control Theory (AREA)
  • Electromagnetism (AREA)
  • Control Of Position, Course, Altitude, Or Attitude Of Moving Bodies (AREA)

Abstract

The application discloses a robot movement control method, device, equipment and storage medium, relating to the technical field of mobile robots. The technical scheme provided by the application includes: determining the distance between a sensor module and a reference object according to the electric signal generated by the sensor module; determining a target attitude angle and a target lateral offset of the mobile robot relative to the reference object according to the distance; solving a preset nonlinear control model according to the target lateral offset and the target attitude angle to obtain the movement control parameters of the mobile robot, and controlling the mobile robot to move according to the movement control parameters; the nonlinear control model is constructed based on an error dynamic model and a Lyapunov function. By these technical means, the problem of low movement control precision of mobile robots in the prior art is solved, the navigation accuracy of the mobile robot is improved, and the operation precision of the mobile robot is further improved.

Description

Robot movement control method, device, equipment and storage medium
Technical Field
The present disclosure relates to the field of mobile robots, and in particular, to a method, an apparatus, a device, and a storage medium for controlling movement of a robot.
Background
With the rapid development of automatic navigation technology, mobile robots are widely used in various scenarios, such as cleaning operations in homes or carrying operations in factory workshops. When a mobile robot performs cleaning operations indoors, it generally navigates automatically by laser SLAM (Simultaneous Localization and Mapping). However, because the environment in a factory workshop is open and changes frequently, laser SLAM is not well suited to that scenario, so the mobile robot can instead navigate automatically based on line-following navigation technology.
In the existing line-following navigation technology, a magnetic stripe is laid on the floor of the factory workshop, an electric signal is collected by a magnetic stripe sensor installed on the mobile robot, and the mobile robot is controlled to move along the magnetic stripe according to the electric signal. This control scheme uses a traditional PID (Proportional-Integral-Derivative) controller to move the mobile robot; its movement control precision is low, so the navigation accuracy of the mobile robot is poor, which affects the operation precision of the mobile robot.
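As context for the PID baseline described above, the following is a minimal, generic discrete PID controller sketch in Python. It is illustrative only — the class name, gains and sample time are hypothetical and not taken from the patent:

```python
class PID:
    """Minimal discrete PID controller (generic illustration, not the patent's scheme)."""

    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0      # accumulated error
        self.prev_error = 0.0    # error at previous step

    def update(self, error):
        """Return the control output for the current tracking error."""
        self.integral += error * self.dt
        derivative = (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In a line-following loop of this kind, `error` would be the measured lateral deviation from the magnetic stripe, and the output a steering correction.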
Disclosure of Invention
The application provides a robot movement control method, device, equipment and storage medium to solve the problem of low movement control precision of mobile robots in the prior art, improve the navigation accuracy of the mobile robot, and further improve the operation precision of the mobile robot.
In a first aspect, the present application provides a robot movement control method, including:
determining the distance between the sensor module and a reference object according to the electric signal generated by the sensor module;
determining a target attitude angle and a target lateral offset of the mobile robot relative to the reference object according to the distance;
solving a preset nonlinear control model according to the target lateral offset and the target attitude angle to obtain the movement control parameters of the mobile robot, and controlling the mobile robot to move according to the movement control parameters; the nonlinear control model is constructed based on an error dynamic model and a Lyapunov function.
In a second aspect, the present application provides a robot movement control device, including:
a distance determining module configured to determine the distance of the sensor module relative to a reference object based on an electric signal generated by the sensor module;
a target parameter determination module configured to determine a target attitude angle and a target lateral offset of the mobile robot relative to the reference object according to the distance;
a movement control module configured to solve a preset nonlinear control model according to the target lateral offset and the target attitude angle to obtain the movement control parameters of the mobile robot, and to control the mobile robot to move according to the movement control parameters; the nonlinear control model is constructed based on an error dynamic model and a Lyapunov function.
In a third aspect, the present application provides a robot movement control device, comprising:
one or more processors; and a storage device storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the robot movement control method according to the first aspect.
In a fourth aspect, the present application provides a storage medium containing computer executable instructions for performing the robot movement control method according to the first aspect when executed by a computer processor.
In the present application, the distance between the sensor module and the reference object is determined according to the electric signal generated by the sensor module; the target attitude angle and the target lateral offset of the mobile robot relative to the reference object are determined according to the distance; a preset nonlinear control model is solved according to the target lateral offset and the target attitude angle to obtain the movement control parameters of the mobile robot, and the mobile robot is controlled to move according to the movement control parameters; the nonlinear control model is constructed based on an error dynamic model and a Lyapunov function. By these technical means, the target attitude angle and target lateral offset of the mobile robot relative to the reference object are estimated from the distances measured by the sensor module, and they can be regarded as the error between the current pose and the desired pose of the mobile robot. When the nonlinear control model is designed through the error dynamic model and the Lyapunov function, the pose error is treated as the unknown, the pose error model is differentiated to obtain the error dynamic model, and the pose error and the error dynamic equations are substituted into the derivative of the Lyapunov function to construct the nonlinear control model. A nonlinear control model designed on the basis of a Lyapunov function can drive the pose deviation of the mobile robot to converge asymptotically; its control precision and stability are high, so the mobile robot can be controlled to move accurately along the reference object, the navigation accuracy of the mobile robot is improved, and the operation precision of the mobile robot is further improved.
Drawings
Fig. 1 is a flowchart of a robot movement control method provided in an embodiment of the present application;
fig. 2 is a schematic structural diagram of a mobile robot according to an embodiment of the present application;
FIG. 3 is a flow chart for determining a target attitude angle provided by an embodiment of the present application;
FIG. 4 is a schematic diagram of a target attitude angle and a target lateral offset provided by an embodiment of the present application;
FIG. 5 is a flow chart for determining a target lateral offset provided by an embodiment of the present application;
FIG. 6 is a schematic diagram of a mobile robot provided in an embodiment of the present application in a world coordinate system;
FIG. 7 is a flow chart for determining movement control parameters provided by an embodiment of the present application;
FIG. 8 is a flow chart for controlling movement of a mobile robot based on a linear control model provided in an embodiment of the present application;
fig. 9 is a schematic structural diagram of a robot movement control device according to an embodiment of the present application;
fig. 10 is a schematic structural diagram of a robot movement control device according to an embodiment of the present application.
Detailed Description
For the purpose of making the objects, technical solutions and advantages of the present application more apparent, the following detailed description of specific embodiments thereof is given with reference to the accompanying drawings. It is to be understood that the specific embodiments described herein are merely illustrative of the application and not limiting thereof. It should be further noted that, for convenience of description, only some, but not all of the matters related to the present application are shown in the accompanying drawings. Before discussing exemplary embodiments in more detail, it should be mentioned that some exemplary embodiments are described as processes or methods depicted as flowcharts. Although a flowchart depicts operations (or steps) as a sequential process, many of the operations can be performed in parallel, concurrently, or at the same time. Furthermore, the order of the operations may be rearranged. The process may be terminated when its operations are completed, but may have additional steps not included in the figures. The processes may correspond to methods, functions, procedures, subroutines, and the like.
The terms first, second and the like in the description and in the claims, are used for distinguishing between similar objects and not necessarily for describing a particular sequential or chronological order. It is to be understood that the data so used may be interchanged, as appropriate, such that embodiments of the present application may be implemented in sequences other than those illustrated or described herein, and that the objects identified by "first," "second," etc. are generally of a type and not limited to the number of objects, e.g., the first object may be one or more. Furthermore, in the description and claims, "and/or" means at least one of the connected objects, and the character "/", generally means that the associated object is an "or" relationship.
The robot movement control method provided in this embodiment may be performed by a robot movement control device, which may be implemented in software and/or hardware, and may be configured by two or more physical entities or may be configured by one physical entity. For example, the robot movement control device may be a mobile robot or a processor of a mobile robot.
The robot movement control device runs at least one type of operating system, including but not limited to Android, Linux and Windows. Based on the operating system, the device may install at least one application program, which may be built into the operating system or downloaded from a third-party device or server. In this embodiment, the robot movement control device at least has an application program capable of executing the robot movement control method.
For ease of understanding, the present embodiment will be described taking a mobile robot as an example of a main body that performs a robot movement control method.
In one scenario, a mobile robot performs carrying operations in a factory workshop where a magnetic stripe is laid on the floor and two magnetic stripe sensors are installed on the robot. The mobile robot determines the distance between each magnetic stripe sensor and the magnetic stripe through the sensors, estimates the pose error of the mobile robot relative to the magnetic stripe from these distances and the structure of the robot, inputs the pose error into a PID controller, obtains the movement control parameters output by the PID controller, and controls the mobile robot to move along the magnetic stripe as closely as possible. The PID controller is a linear controller that adjusts control parameters according to the deviation between the desired and current states; it is simple, flexible and easy to tune. However, the motion of a mobile robot is essentially nonlinear due to factors such as its mechanical structure. When the linear control of a PID controller is used to approximate the nonlinear control of the mobile robot, the movement control accuracy may degrade, resulting in poor navigation accuracy and affecting the operation accuracy of the mobile robot.
In order to solve the above problems, the present embodiment provides a method for controlling movement of a robot, so as to improve the navigation accuracy of the mobile robot, and further improve the operation accuracy of the mobile robot.
Fig. 1 is a flowchart of a method for controlling movement of a robot according to an embodiment of the present application. Referring to fig. 1, the robot movement control method specifically includes:
s110, determining the distance between the sensor module and the reference object according to the electric signal generated by the sensor module.
In this embodiment, the reference object refers to a navigation band, such as a magnetic stripe or a color band, that guides the movement of the mobile robot. The sensor module is a sensor mounted on the mobile robot for measuring the distance to the reference object. When the reference object is a magnetic stripe, the sensor module is a magnetic stripe sensor; when the reference object is a color band, the sensor module is a color band sensor.
The present embodiment is described taking the reference object as a magnetic stripe and the sensor module as a magnetic stripe sensor. For example, assume a mobile robot performs carrying operations in a factory workshop where a magnetic stripe is laid; the mobile robot moves over or alongside the magnetic stripe to carry articles to a specific location. As the mobile robot moves, the magnetic stripe sensor generates an electric signal when it is near the magnetic stripe, and the electric signal is converted into the distance between the magnetic stripe sensor and the magnetic stripe.
In one embodiment, the sensor module comprises two magnetic stripe sensors arranged respectively on the front and rear sides of the datum line of the mobile robot, and the distances between the two magnetic stripe sensors and the magnetic stripe are determined respectively from the electric signals they generate. The datum line is a line segment representing the attitude of the mobile robot, generally the line connecting the centers of the front and rear wheels. The datum line and the magnetic stripe form two similar triangles, and based on the principle of similar triangles, the attitude angle and lateral offset of the mobile robot relative to the reference object can be estimated from the distances between the two magnetic stripe sensors and the magnetic stripe. However, when the magnetic stripe is nonlinear, the similar-triangle relationship between the datum line and the magnetic stripe no longer holds, and the attitude angle and lateral offset estimated from the two distances are not accurate enough, which affects the accuracy and stability of subsequent movement control.
In another embodiment, the sensor module includes a first sensor, a second sensor and a third sensor. A first, second and third distance of the first, second and third sensors relative to the reference object are determined from the electric signals generated by the respective sensors. The three sensors are arranged respectively at the front, middle and rear of the datum line, and the attitude angle and lateral offset of the mobile robot relative to the reference object are estimated from the distances detected by the three sensors. This eliminates the estimation errors in the included angle and the offset that arise when the magnetic stripe is nonlinear, improves the accuracy of the pose deviation, and improves the accuracy and stability of subsequent movement control.
In this embodiment, the mobile robot includes a steering wheel and driven wheels; the steering wheel is disposed at the front of the mobile robot and the driven wheels at the rear. The first sensor is located in front of the steering wheel, the third sensor behind the driven wheels, and the second sensor between the steering wheel and the driven wheels; the center points of the first sensor, the second sensor, the third sensor, the steering wheel and the driven wheels lie on the same straight line. Fig. 2 is a schematic structural diagram of a mobile robot according to an embodiment of the present application. As shown in fig. 2, the driven wheels include a left driven wheel 15 and a right driven wheel 16; the center point O4 between them, the center point O1 of the first sensor 17, the center point O0 of the steering wheel 12, the center point O2 of the second sensor 13 and the center point O3 of the third sensor 14 are located on the center line of the mobile robot. The line connecting the center point O4 and the center point O0 is the datum line of the mobile robot.
S120, determining a target attitude angle and a target lateral offset of the mobile robot relative to the reference object according to the distance.
The target attitude angle refers to the error between the current attitude angle and the desired attitude angle of the mobile robot, and the target lateral offset refers to the lateral error between the current position and the desired position of the mobile robot. Referring to fig. 2, if the mobile robot 11 should move along the reference object 18, the position and tangential direction of each track point on the reference object 18 are the desired position and desired attitude angle of the mobile robot. The included angle between the datum line and the reference object 18 can therefore be regarded as the error between the current and desired attitude angles, and the distance between the center point O4 and the reference object 18 as the lateral error between the current and desired positions. Correspondingly, the included angle between the datum line and the reference object 18 is the target attitude angle θ_e, and the perpendicular distance from the center point O4 to the reference object 18 is the target lateral offset y_e. Based on the principle of similar triangles, θ_e and y_e can be estimated from the distances measured by the sensor module.
In one embodiment, fig. 3 is a flowchart for determining a target attitude angle according to an embodiment of the present application. As shown in fig. 3, the step of determining the target attitude angle specifically includes S1201-S1204:
S1201, acquiring a fourth distance between the first sensor and the second sensor, solving a preset first similar-triangle model according to the first distance, the second distance and the fourth distance, and determining a first attitude angle according to a first calculation result of the first similar-triangle model.
Fig. 4 is a schematic diagram of the target attitude angle and the target lateral offset provided in an embodiment of the present application. Referring to fig. 2 and 4, the fourth distance L1 between the first sensor 17 and the second sensor, the fifth distance L2 between the second sensor and the center point O4, and the sixth distance L3 between the center point O4 and the third sensor are measured in advance. The first sensor 17 measures the distance from its center point O1 to the vertical point A on the reference object 18 as the first distance d1; the second sensor 13 measures the distance from the center point O2 to the vertical point B on the reference object 18 as the second distance d2; the third sensor 14 measures the distance from the center point O3 to the vertical point D on the reference object 18 as the third distance d3; and the distance from the center point O4 to the vertical point C on the reference object 18 is the target lateral offset y_e. The datum line intersects the reference object 18 at point E. The vertical point A, the center point O1 and the point E form a first triangle; the vertical point B, the center point O2 and the point E form a second triangle; the vertical point C, the center point O4 and the point E form a third triangle; and the vertical point D, the center point O3 and the point E form a fourth triangle.
In this embodiment, the expression of the first similar-triangle model is as follows:

$$\frac{d_1}{x_1} = \frac{d_2}{L_1 - x_1}$$

where x1 is the distance from the center point O1 to the point E, i.e. the first calculation result. It can be understood that the first triangle and the second triangle are regarded as two similar triangles, and the first similar-triangle model is obtained from the properties of similar triangles. Substituting the first distance d1, the second distance d2 and the fourth distance L1 into the first similar-triangle model yields the first calculation result x1 = d1·L1/(d1+d2).
Further, substituting the first calculation result x1 into the first inverse sine function $\theta_{e1} = \sin^{-1}(d_1/x_1) = \sin^{-1}\left(d_2/(L_1 - x_1)\right)$ yields the first attitude angle θ_e1.
S1202, obtaining a fifth distance between the second sensor and the driven wheel, obtaining a sixth distance between the driven wheel and the third sensor, and determining total distances of the fifth distance, the sixth distance and the fourth distance.
S1203, solving a preset second similar-triangle model according to the first distance, the third distance and the total distance, and determining a second attitude angle according to a second calculation result of the second similar-triangle model.
Illustratively, the expression of the second similar-triangle model is as follows:

$$\frac{d_1}{x_2} = \frac{d_3}{L_1 + L_2 + L_3 - x_2}$$

where x2 is the distance from the center point O1 to the point E, i.e. the second calculation result. The first triangle and the fourth triangle are regarded as two similar triangles, and the second similar-triangle model is obtained from the properties of similar triangles. Substituting the first distance d1, the third distance d3 and the total distance L1+L2+L3 (the sum of the fourth, fifth and sixth distances) into the second similar-triangle model yields the second calculation result x2 = d1·(L1+L2+L3)/(d1+d3).
Further, substituting the second calculation result x2 into the second inverse sine function $\theta_{e2} = \sin^{-1}(d_1/x_2) = \sin^{-1}\left(d_3/(L_1 + L_2 + L_3 - x_2)\right)$ yields the second attitude angle θ_e2.
S1204, taking the average of the first attitude angle and the second attitude angle as the target attitude angle.
It can be understood that, in this embodiment, the attitude angle error is determined once from the similar first and second triangles and once from the similar first and fourth triangles using the distances measured by the three sensors, and the average of the two is taken as the target attitude angle, so as to eliminate the error introduced when the reference object is nonlinear and improve the estimation accuracy of the target attitude angle.
As can be obtained from the above embodiments, the calculation formula of the target attitude angle θ_e can be derived as:

$$\theta_e = \frac{1}{2}\left[\sin^{-1}\!\left(\frac{d_1 + d_2}{L_1}\right) + \sin^{-1}\!\left(\frac{d_1 + d_3}{L_1 + L_2 + L_3}\right)\right]$$

For example, after the sensor module measures the first distance d1, the second distance d2 and the third distance d3, they are substituted into the above formula to calculate the target attitude angle θ_e.
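Under the stated similar-triangle assumptions, the averaged attitude-angle estimate can be sketched in code. The helper below is hypothetical (the function name and use of Python's `math.asin` are illustrative, not from the patent); for a straight reference line the two estimates coincide:

```python
import math

def target_attitude_angle(d1, d2, d3, L1, L2, L3):
    """Average the two similar-triangle attitude-angle estimates.

    d1, d2, d3: perpendicular distances measured by the three sensors;
    L1: spacing between the first and second sensors; L2, L3: the
    remaining spacings along the datum line, as in the formula above.
    """
    theta_e1 = math.asin((d1 + d2) / L1)               # front pair of triangles
    theta_e2 = math.asin((d1 + d3) / (L1 + L2 + L3))   # full-length pair
    return 0.5 * (theta_e1 + theta_e2)
```

With a straight reference line crossing the datum line at an angle of 0.1 rad, both estimates recover exactly 0.1 rad.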
In one embodiment, fig. 5 is a flowchart for determining the target lateral offset according to an embodiment of the present application. As shown in fig. 5, the step of determining the target lateral offset specifically includes S1205-S1207:
S1205, solving a preset third similar-triangle model according to the fifth distance, the first calculation result and the second distance to obtain a first lateral offset.
In this embodiment, the expression of the third similar-triangle model is as follows:

$$\frac{d_2}{L_1 - x_1} = \frac{y_{e1}}{L_1 + L_2 - x_1}$$

It will be appreciated that the second triangle and the third triangle are regarded as two similar triangles, and the third similar-triangle model can be derived from the properties of similar triangles. Substituting the second distance d2, the fourth distance L1, the fifth distance L2 and the first calculation result x1 into the third similar-triangle model yields the first lateral offset y_e1.
And S1206, dissociating a preset fourth similar triangle model according to the total distance, the second calculation result, the sixth distance and the third distance to obtain a second transverse offset.
In this embodiment, the expression of the fourth similar-triangle model is as follows:

$$\frac{d_3}{L_1 + L_2 + L_3 - x_2} = \frac{y_{e2}}{L_1 + L_2 - x_2}$$

It will be appreciated that the fourth triangle and the third triangle are regarded as two similar triangles, and the fourth similar-triangle model can be derived from the properties of similar triangles. Substituting the third distance d3, the sixth distance L3, the total distance L1+L2+L3 and the second calculation result x2 into the fourth similar-triangle model yields the second lateral offset y_e2.
S1207, taking the average value of the first lateral offset and the second lateral offset as the target lateral offset.
It can be understood that, in this embodiment, the lateral error is determined once from the similar second and third triangles and once from the similar third and fourth triangles using the distances measured by the three sensors, and the average of the two is taken as the target lateral offset, so as to eliminate the error introduced when the reference object is nonlinear and improve the estimation accuracy of the target lateral offset.
As can be obtained from the above embodiments, the calculation formula of the target lateral offset y_e can be derived as:

$$y_e = \frac{1}{2}\left[\frac{d_2 L_1 + (d_1 + d_2) L_2}{L_1} + \frac{d_3 (L_1 + L_2) - d_1 L_3}{L_1 + L_2 + L_3}\right]$$

For example, after the sensor module measures the first distance d1, the second distance d2 and the third distance d3, they are substituted into the above formula to calculate the target lateral offset y_e.
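The averaged lateral-offset estimate can likewise be checked numerically. The helper below is a hypothetical sketch of the formula above (names and structure are illustrative, not from the patent):

```python
def target_lateral_offset(d1, d2, d3, L1, L2, L3):
    """Average of the two similar-triangle lateral-offset estimates
    of the center point O4 (distances as defined in the text)."""
    y_e1 = (d2 * L1 + (d1 + d2) * L2) / L1                  # from triangles 2 and 3
    y_e2 = (d3 * (L1 + L2) - d1 * L3) / (L1 + L2 + L3)      # from triangles 4 and 3
    return 0.5 * (y_e1 + y_e2)
```

For a straight reference line, both partial estimates agree with the true perpendicular distance of O4 from the line, so the average is exact.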
S130, solving a preset nonlinear control model according to the target lateral offset and the target attitude angle to obtain the movement control parameters of the mobile robot, and controlling the mobile robot to move according to the movement control parameters; the nonlinear control model is constructed based on an error dynamic model and a Lyapunov function.
Fig. 6 is a schematic diagram of the mobile robot in a world coordinate system according to an embodiment of the present application. As shown in fig. 6, assume that the current steering wheel linear velocity of the mobile robot is v_i, the steering wheel angular velocity is ω_i, the steering wheel heading angle is β, and the rotation center of the mobile robot is the point O. Denoting by L the distance between the steering wheel center O0 and the driven-wheel center O4, the turning radius R and the turning angular velocity ω of the mobile robot are respectively:

$$R = \frac{L}{\tan\beta}$$

$$\omega = \frac{v_i \sin\beta}{L}$$
the vehicle body forward speed v of the mobile robot and the speed components of the X-axis and the Y-axis in the world coordinate system can be expressed as:
$$v = v_i\cos\beta, \qquad v_x = v\cos\theta, \qquad v_y = v\sin\theta$$
and θ is the included angle between the body of the mobile robot and the X axis. Let the coordinates of the center point O4 of the mobile robot at the initial time in the world coordinate system be (x 0 ,y 0 ) The included angle between the body of the mobile robot and the X axis is theta 0 . Integrating the expressions of the revolving angular speed omega, the vehicle body advancing speed v and the speed component to obtain a kinematic model of the mobile robot, wherein the kinematic model is as follows:
x = x_0 + ∫ v cos θ dt

y = y_0 + ∫ v sin θ dt

θ = θ_0 + ∫ ω dt
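The kinematic model above can be sketched numerically with a simple Euler integration; this helper and its parameter names are illustrative, not part of the patent:

```python
import math

def integrate_kinematics(x0, y0, theta0, v_i, omega, beta, dt, steps):
    """Euler-integrate the kinematic model: the body forward speed is
    v = v_i * cos(beta), and the pose evolves as
    x' = v*cos(theta), y' = v*sin(theta), theta' = omega."""
    x, y, theta = x0, y0, theta0
    v = v_i * math.cos(beta)
    for _ in range(steps):
        x += v * math.cos(theta) * dt
        y += v * math.sin(theta) * dt
        theta += omega * dt
    return x, y, theta
```

With β = 0 and ω = 0 the robot simply drives straight along its heading, which is a quick sanity check of the model.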
Defining the control input of the mobile robot as u = [v, ω]^T and the state variable as q = [x, y, θ]^T, the pose model of the mobile robot can be deduced from the above expressions as
dq/dt = S(q)u
where
S(q) = [[cos θ, 0], [sin θ, 0], [0, 1]]
Defining the desired state variable of the mobile robot as q_d = [x_d, y_d, θ_d]^T, the corresponding state variable error q_e = [x_e, y_e, θ_e]^T can be expressed as q_e = T(q_d − q), where x_e is the longitudinal error between the current position and the desired position of the mobile robot.
where
T = [[cos θ, sin θ, 0], [−sin θ, cos θ, 0], [0, 0, 1]]
T is the pose transformation matrix between the world coordinate system and the vehicle body coordinate system. In the desired state variable q_d, (x_d, y_d) is the desired position of the mobile robot and θ_d is the desired attitude angle of the mobile robot.
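Assuming the standard rotation-based form of T (the exact matrix is a formula image in the original), the state variable error q_e = T(q_d − q) can be computed as follows; the function name is illustrative:

```python
import math

def state_error(q, q_d):
    """Compute q_e = T(q_d - q), with T taken as the standard rotation
    from the world frame into the vehicle body frame (an assumption)."""
    x, y, th = q
    xd, yd, thd = q_d
    dx, dy = xd - x, yd - y
    x_e = math.cos(th) * dx + math.sin(th) * dy    # longitudinal error
    y_e = -math.sin(th) * dx + math.cos(th) * dy   # lateral error
    theta_e = thd - th                             # attitude angle error
    return x_e, y_e, theta_e
```

At zero heading the error is simply the position difference; at a 90-degree heading the world-frame y offset becomes a purely longitudinal error in the body frame.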
Further, the error dynamic model of the mobile robot can be deduced as follows:
dx_e/dt = ω y_e − v + v_d cos θ_e

dy_e/dt = −ω x_e + v_d sin θ_e

dθ_e/dt = ω_d − ω
where v_d is the vehicle body forward speed of the mobile robot when it satisfies the desired state variable q_d, and ω_d is the rotation angular velocity of the mobile robot when it satisfies the desired state variable q_d.
In this embodiment, an output feedback controller of the mobile robot can be designed based on the Lyapunov function so that the state variable error q_e of the mobile robot converges asymptotically. To this end, the Lyapunov function can be designed as shown in the following expression:
V = k(x_e^2 + y_e^2)/2 + 2 sin^2(θ_e/2)
where the state variable error q_e is the argument of the Lyapunov function V. Differentiating the Lyapunov function V yields:
dV/dt = k x_e (v_d cos θ_e − v) + k y_e v_d sin θ_e + (ω_d − ω) sin θ_e
When

dV/dt ≤ 0

the state variable error q_e is asymptotically convergent. Thus, a control model can be constructed:
v = v_d cos θ_e + k_1 x_e

ω = ω_d + k v_d y_e + k_2 sin θ_e

(with this choice, dV/dt = −k k_1 x_e^2 − k_2 sin^2 θ_e ≤ 0)
where k_1 and k_2 are constants whose values can be determined through practical experiments.
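The control model above appears as a formula image in the original; the following is a sketch of a Kanayama-style tracking law with the structure described in the text (the forward speed driven by the longitudinal error, the turning rate by the lateral error and the sine of the attitude error). The specific expressions and the gains k, k1, k2 are illustrative assumptions:

```python
import math

def track(q, q_d, v_d, omega_d, k=1.0, k1=0.8, k2=1.5, dt=0.02, steps=600):
    """Simulate an assumed Kanayama-style tracking controller and return the
    remaining position error after `steps` Euler steps."""
    x, y, th = q
    xd, yd, thd = q_d
    for _ in range(steps):
        # body-frame state variable error q_e = T(q_d - q)
        dx, dy = xd - x, yd - y
        x_e = math.cos(th) * dx + math.sin(th) * dy
        y_e = -math.sin(th) * dx + math.cos(th) * dy
        th_e = thd - th
        # assumed control model: v = v_d cos(th_e) + k1 x_e,
        #                        w = w_d + k v_d y_e + k2 sin(th_e)
        v = v_d * math.cos(th_e) + k1 * x_e
        w = omega_d + k * v_d * y_e + k2 * math.sin(th_e)
        # integrate robot pose and reference pose
        x += v * math.cos(th) * dt
        y += v * math.sin(th) * dt
        th += w * dt
        xd += v_d * math.cos(thd) * dt
        yd += v_d * math.sin(thd) * dt
        thd += omega_d * dt
    return math.hypot(xd - x, yd - y)
```

Starting with a lateral and heading offset relative to a straight reference trajectory, the remaining error shrinks toward zero, which illustrates the asymptotic convergence argued via the Lyapunov function.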
Since the control inputs of the above control model are the vehicle body forward speed v and the rotation angular speed ω, which are difficult to obtain directly, this embodiment defines the control input as u_i = [v_i, ω_i]^T. Based on the relation v = v_i cos β between the vehicle body forward speed v and the steering wheel linear speed v_i, and the relation ω_i = ω between the rotation angular speed ω and the steering wheel angular speed ω_i, the above control model is converted to obtain a nonlinear control model:
v_i = (v_d cos θ_e + k_1 x_e) / cos β

ω_i = ω_d + k v_d y_e + k_2 sin θ_e
In this embodiment, the movement control parameters are the steering wheel linear speed and the steering wheel angular speed, which can be determined according to the nonlinear control model. Fig. 7 is a flowchart illustrating determining movement control parameters according to an embodiment of the present application. As shown in fig. 7, the step of determining the movement control parameters specifically includes S1301-S1302:
S1301, converting a preset desired position and desired speed from the world coordinate system to the Frenet coordinate system to obtain a desired linear speed and a desired angular speed.
Illustratively, the desired state variable q_d = [x_d, y_d, θ_d]^T at the current time is converted into the Frenet coordinate system to obtain the desired linear velocity v_d and the desired angular velocity ω_d.
S1302, acquiring the steering wheel course angle of the mobile robot, and solving the nonlinear control model according to the desired linear speed, the desired angular speed, the target lateral offset, the target attitude angle, the steering wheel course angle and a preset longitudinal offset to obtain the steering wheel linear speed and the steering wheel angular speed.
In this embodiment, the longitudinal offset is a constant determined through practical experiments. After the current steering wheel course angle of the mobile robot is acquired and the target attitude angle and target lateral offset are calculated from the distances measured by the sensor module, the desired linear speed, the desired angular speed, the target lateral offset, the target attitude angle, the steering wheel course angle and the preset longitudinal offset are substituted into the nonlinear control model to calculate the steering wheel linear speed and the steering wheel angular speed. The steering wheel is then controlled to move at the calculated linear speed and angular speed so as to drive the mobile robot to move along the reference object as closely as possible. According to the expression of the nonlinear control model, when the mobile robot has no parameter uncertainty or uncertain external disturbance and its parameters are known, the movement control error of the mobile robot is bounded. This ensures that the movement control error does not grow too large, greatly improving the control precision and control stability of the mobile robot.
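Using only the two relations stated in the text, v = v_i cos β and ω_i = ω, the body-frame commands produced by the control model can be converted into steering wheel commands. The helper below is an illustrative sketch (the guard against a near-perpendicular steering wheel is an added assumption):

```python
import math

def steering_wheel_commands(v, omega, beta):
    """Convert body-frame commands (v, omega) into steering wheel commands
    (v_i, omega_i) via v = v_i*cos(beta) and omega_i = omega."""
    if abs(math.cos(beta)) < 1e-6:
        # assumed safeguard: the relation v = v_i*cos(beta) degenerates here
        raise ValueError("steering wheel nearly perpendicular to body axis")
    v_i = v / math.cos(beta)
    omega_i = omega
    return v_i, omega_i
```

At β = 0 the commands pass through unchanged; at larger course angles the steering wheel must spin faster to realize the same body forward speed.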
In another embodiment, the control input of the mobile robot is the steering wheel linear speed. To keep the vehicle body forward speed stable during operation, the steering wheel linear speed would need to be adjusted frequently, making this control mode complex. Therefore, this embodiment provides a simple and feasible linear control model on the basis of the nonlinear control model. It can be understood that the vehicle body forward speed mainly affects the magnitude of the longitudinal error, but in magnetic stripe navigation the longitudinal error is a virtual parameter with no practical meaning. Therefore, when designing the control model, the steering wheel linear speed can be set to a constant value, which simplifies the control mode without affecting control performance.
From the nonlinear control model and the expression for the rotation angular velocity, the steering wheel course angle β has a nonlinear relation to θ_e and y_e, which can be expressed as β = f(y_e, θ_e).
Performing a Taylor expansion of this nonlinear relation at y_e0 = 0, θ_e0 = 0 gives:
β ≈ f(0, 0) + (∂f/∂y_e)|_(0,0) y_e + (∂f/∂θ_e)|_(0,0) θ_e
the linear control model can be designed according to the above formula as follows:
β = −k_3 y_e − k_4 θ_e
where k_3 and k_4 are proportional coefficients whose values can be determined through practical experiments.
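The linear control model can be sketched as a line-following loop at a constant steering wheel linear speed. The wheelbase L, the steering kinematics ω = v_i sin β / L, and the sign conventions for the deviations are all assumptions of this illustration, and the gains here are arbitrary placeholders for the experimentally determined k_3 and k_4:

```python
import math

def follow_line(y0, theta0, v_i=0.3, L=0.5, k3=1.2, k4=1.8, dt=0.02, steps=2000):
    """Follow the reference line y = 0 using the linear control model at a
    constant steering wheel linear speed (illustrative sketch only)."""
    y, theta = y0, theta0
    for _ in range(steps):
        # linear control model: course angle from lateral and heading deviation
        # (signs follow this sketch's conventions, not necessarily the patent's)
        beta = -(k3 * y + k4 * theta)
        v = v_i * math.cos(beta)           # body forward speed
        omega = v_i * math.sin(beta) / L   # assumed steering kinematics
        y += v * math.sin(theta) * dt
        theta += omega * dt
    return y, theta
```

Because only the course angle is adjusted while the wheel speed stays constant, the loop is much simpler than the nonlinear model, yet the lateral and heading deviations still die out.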
In this embodiment, fig. 8 is a flowchart of controlling the movement of the mobile robot based on the linear control model provided in the embodiment of the present application. As shown in fig. 8, the step of controlling the movement of the mobile robot based on the linear control model specifically includes S1303-S1304:
S1303, a preset linear control model is calculated according to the target lateral offset and the target attitude angle, and a steering wheel course angle is obtained; the linear control model is obtained by carrying out Taylor expansion on a nonlinear relation between a steering wheel course angle and a target lateral offset and a target attitude angle.
Illustratively, after calculating the target attitude angle and the target lateral offset according to the distance measured by the sensor module, substituting the target attitude angle and the target lateral offset into the linear control model to calculate the steering wheel course angle.
And S1304, controlling the steering wheel to drive the mobile robot to move according to the steering wheel course angle and the preset steering wheel linear speed.
In this embodiment, the steering wheel linear speed is preset according to actual requirements so that the steering wheel always maintains the same linear speed; the mobile robot is then steered by adjusting the steering wheel course angle. This control mode is simpler and easier to execute.
In summary, according to the robot movement control method provided by the embodiment of the application, the distance between the sensor module and the reference object is determined according to the electric signal generated by the sensor module; determining a target attitude angle and a target lateral offset of the mobile robot relative to the reference object according to the distance; according to the target lateral offset and the target attitude angle, a preset nonlinear control model is calculated to obtain movement control parameters of the mobile robot, and the mobile robot is controlled to move according to the movement control parameters; the nonlinear control model is constructed based on an error dynamic model and a Lyapunov function. By the technical means, the target attitude angle and the target lateral offset of the mobile robot relative to the reference object are estimated based on the distance measured by the sensor module, and the target attitude angle and the target lateral offset can be regarded as errors of the current pose and the expected pose of the mobile robot. When the nonlinear control model is designed through the error dynamic model and the Lyapunov function, the pose error is used as an unknown number, the pose error model is derived to obtain the error dynamic model, and the pose error and the error dynamic equation are substituted into the derivative of the Lyapunov function to construct the nonlinear control model. The nonlinear control model designed based on the Lyapunov function can control the pose deviation of the mobile robot to gradually converge, the control precision is high, the control stability is high, the mobile robot can be controlled to move along a reference object accurately, the navigation accuracy of the mobile robot is improved, and the operation precision of the mobile robot is further improved.
On the basis of the above embodiments, fig. 9 is a schematic structural diagram of a robot movement control device according to an embodiment of the present application. Referring to fig. 9, the robot movement control device provided in this embodiment specifically includes: a distance determination module 21, a target parameter determination module 22 and a movement control module 23.
The distance determining module is configured to determine the distance between the sensor module and the reference object according to the electric signal generated by the sensor module;
a target parameter determination module configured to determine a target attitude angle and a target lateral offset of the mobile robot relative to the reference object according to the distance;
the mobile control module is configured to calculate a preset nonlinear control model according to the target lateral offset and the target attitude angle to obtain mobile control parameters of the mobile robot, and control the mobile robot to move according to the mobile control parameters; the nonlinear control model is constructed based on an error dynamic model and a Lyapunov function.
On the basis of the embodiment, the sensor module comprises a first sensor, a second sensor and a third sensor; accordingly, the distance determination module is configured to determine a first distance, a second distance and a third distance of the first sensor, the second sensor and the third sensor relative to the reference object according to the electric signals generated by the first sensor, the second sensor and the third sensor respectively.
On the basis of the embodiment, the mobile robot comprises a steering wheel and a driven wheel, the steering wheel is arranged at the front part of the mobile robot, the driven wheel is arranged at the rear part of the mobile robot, the first sensor is positioned in front of the steering wheel, the third sensor is positioned at the rear part of the driven wheel, the second sensor is positioned between the steering wheel and the driven wheel, and the central points of the first sensor, the second sensor, the third sensor, the steering wheel and the driven wheel are positioned on the same straight line.
On the basis of the above embodiment, the target parameter determination module includes: a first attitude angle determining unit configured to acquire a fourth distance between the first sensor and the second sensor, to dissociate a preset first similar triangle model according to the first distance, the second distance and the fourth distance, and to determine a first attitude angle according to a first resolving result of the first similar triangle model; a total distance determining unit configured to acquire a fifth distance between the second sensor and the driven wheel, acquire a sixth distance between the driven wheel and the third sensor, and determine a total distance of the fifth distance, the sixth distance, and the fourth distance; a second attitude angle determination unit configured to calculate a preset second similar triangle model from the first distance, the third distance, and the total distance dissociation, and determine a second attitude angle from a second solution result of the second similar triangle model; and a target attitude angle determination unit configured to take the average of the first attitude angle and the second attitude angle as a target attitude angle.
On the basis of the above embodiment, the target parameter determination module includes: a first offset determining unit configured to solve a preset third similar triangle model according to the fifth distance, the first solution result and the second distance to obtain a first lateral offset; a second offset determining unit configured to solve a preset fourth similar triangle model according to the total distance, the second solution result, the sixth distance and the third distance to obtain a second lateral offset; and a target offset determination unit configured to take the average of the first lateral offset and the second lateral offset as the target lateral offset.
On the basis of the above embodiment, the movement control module includes: a desired speed determination unit configured to convert a preset desired position and desired speed from the world coordinate system to the Frenet coordinate system to obtain a desired linear speed and a desired angular speed; and a steering wheel speed control unit configured to acquire the steering wheel course angle of the mobile robot, and solve the nonlinear control model according to the desired linear speed, the desired angular speed, the target lateral offset, the target attitude angle, the steering wheel course angle and a preset longitudinal offset to obtain the steering wheel linear speed and the steering wheel angular speed.
On the basis of the above embodiment, the robot movement control device includes: a steering angle determining module configured to calculate a preset linear control model according to the target lateral offset and the target attitude angle to obtain a steering wheel course angle, the linear control model being obtained by performing a Taylor expansion on the nonlinear relation between the steering wheel course angle and the target lateral offset and target attitude angle; and a steering wheel rotation angle control unit configured to control the steering wheel to drive the mobile robot to move according to the steering wheel course angle and a preset steering wheel linear speed.
In the above, according to the robot movement control device provided in the embodiment of the present application, the distance between the sensor module and the reference object is determined according to the electrical signal generated by the sensor module; determining a target attitude angle and a target lateral offset of the mobile robot relative to the reference object according to the distance; according to the target lateral offset and the target attitude angle, a preset nonlinear control model is calculated to obtain movement control parameters of the mobile robot, and the mobile robot is controlled to move according to the movement control parameters; the nonlinear control model is constructed based on an error dynamic model and a Lyapunov function. By the technical means, the target attitude angle and the target lateral offset of the mobile robot relative to the reference object are estimated based on the distance measured by the sensor module, and the target attitude angle and the target lateral offset can be regarded as errors of the current pose and the expected pose of the mobile robot. When the nonlinear control model is designed through the error dynamic model and the Lyapunov function, the pose error is used as an unknown number, the pose error model is derived to obtain the error dynamic model, and the pose error and the error dynamic equation are substituted into the derivative of the Lyapunov function to construct the nonlinear control model. The nonlinear control model designed based on the Lyapunov function can control the pose deviation of the mobile robot to gradually converge, the control precision is high, the control stability is high, the mobile robot can be controlled to move along a reference object accurately, the navigation accuracy of the mobile robot is improved, and the operation precision of the mobile robot is further improved.
The robot movement control device provided by the embodiment of the application can be used for executing the robot movement control method provided by the embodiment, and has corresponding functions and beneficial effects.
Fig. 10 is a schematic structural diagram of a robot movement control device provided in an embodiment of the present application, and referring to fig. 10, the robot movement control device includes: a processor 31, a memory 32, a communication device 33, an input device 34 and an output device 35. The number of processors 31 in the robotic movement controlling device may be one or more and the number of memories 32 in the robotic movement controlling device may be one or more. The processor 31, the memory 32, the communication means 33, the input means 34 and the output means 35 of the robot movement control device may be connected by bus or other means.
The memory 32 is a computer readable storage medium, and may be used to store a software program, a computer executable program, and modules, such as program instructions/modules (e.g., the distance determining module 21, the target parameter determining module 22, and the movement control module 23 in the robot movement control device) corresponding to the robot movement control method according to any embodiment of the present application. The memory 32 may mainly include a storage program area that may store an operating system, at least one application program required for functions, and a storage data area; the storage data area may store data created according to the use of the device, etc. In addition, memory 32 may include high-speed random access memory, and may also include non-volatile memory, such as at least one magnetic disk storage device, flash memory device, or other non-volatile solid-state storage device. In some examples, the memory may further include memory remotely located with respect to the processor, the remote memory being connectable to the device through a network. Examples of such networks include, but are not limited to, the internet, intranets, local area networks, mobile communication networks, and combinations thereof.
The communication means 33 are for data transmission.
The processor 31 executes various functional applications of the apparatus and data processing, namely, implements the above-described robot movement control method by running software programs, instructions, and modules stored in the memory 32.
The input means 34 may be used to receive entered numeric or character information and to generate key signal inputs related to user settings and function control of the device. The output means 35 may comprise a display device such as a display screen.
The robot movement control device provided by the embodiment can be used for executing the robot movement control method provided by the embodiment, and has corresponding functions and beneficial effects.
The present embodiments also provide a storage medium containing computer executable instructions, which when executed by a computer processor, are for performing a robot movement control method comprising: determining the distance between the sensor module and the reference object according to the electric signal generated by the sensor module; determining a target attitude angle and a target lateral offset of the mobile robot relative to the reference object according to the distance; according to the target lateral offset and the target attitude angle, a preset nonlinear control model is calculated to obtain movement control parameters of the mobile robot, and the mobile robot is controlled to move according to the movement control parameters; the nonlinear control model is constructed based on an error dynamic model and a Lyapunov function.
Storage medium: any of various types of memory devices or storage devices. The term "storage medium" is intended to include: mounting media such as CD-ROM, floppy disk or tape devices; computer system memory or random access memory such as DRAM, DDR RAM, SRAM, EDO RAM, Rambus RAM, etc.; nonvolatile memory such as flash memory, magnetic media (e.g., hard disk) or optical storage; registers or other similar types of memory elements, etc. The storage medium may also include other types of memory or combinations thereof. In addition, the storage medium may be located in a first computer system in which the program is executed, or may be located in a second, different computer system connected to the first computer system through a network such as the internet. The second computer system may provide program instructions to the first computer for execution. The term "storage medium" may include two or more storage media residing in different locations (e.g., in different computer systems connected by a network). The storage medium may store program instructions (e.g., embodied as a computer program) executable by one or more processors.
Of course, the storage medium containing the computer executable instructions provided in the embodiments of the present application is not limited to the above-mentioned robot movement control method, and may also perform the related operations in the robot movement control method provided in any embodiment of the present application.
The robot movement control device, the storage medium, and the robot movement control apparatus provided in the above embodiments may perform the robot movement control method provided in any embodiment of the present application, and technical details not described in detail in the above embodiments may be referred to the robot movement control method provided in any embodiment of the present application.
The foregoing description is only of the preferred embodiments of the present application and the technical principles employed. The present application is not limited to the specific embodiments described herein, but is capable of numerous obvious changes, rearrangements and substitutions as will now become apparent to those skilled in the art without departing from the scope of the present application. Therefore, while the present application has been described in connection with the above embodiments, the present application is not limited to the above embodiments, but may include many other equivalent embodiments without departing from the spirit of the present application, and the scope of the present application is determined by the scope of the claims.

Claims (10)

1. A robot movement control method, comprising:
determining the distance between the sensor module and a reference object according to the electric signal generated by the sensor module;
determining a target attitude angle and a target lateral offset of the mobile robot relative to the reference object according to the distance;
according to the target lateral offset and the target attitude angle, a preset nonlinear control model is calculated to obtain movement control parameters of the mobile robot, and the mobile robot is controlled to move according to the movement control parameters; the nonlinear control model is constructed based on an error dynamic model and a Lyapunov function.
2. The robot movement control method of claim 1, wherein the sensor module comprises a first sensor, a second sensor, and a third sensor; correspondingly, the determining the distance between the sensor module and the reference object according to the electric signal generated by the sensor module comprises:
first, second and third distances of the first, second and third sensors relative to a reference object are determined from electrical signals generated by the first, second and third sensors, respectively.
3. The robot movement control method according to claim 2, wherein the mobile robot includes a steering wheel and a driven wheel, the steering wheel is provided at a front portion of the mobile robot, the driven wheel is provided at a rear portion of the mobile robot, the first sensor is located in front of the steering wheel, the third sensor is located behind the driven wheel, the second sensor is located between the steering wheel and the driven wheel, and center points of the first sensor, the second sensor, the third sensor, the steering wheel, and the driven wheel are located on the same straight line.
4. A robot movement control method according to claim 3, wherein said determining a target attitude angle of a mobile robot with respect to the reference based on the distance comprises:
acquiring a fourth distance between the first sensor and the second sensor, solving a preset first similar triangle model according to the first distance, the second distance and the fourth distance, and determining a first attitude angle according to a first solution result of the first similar triangle model;
obtaining a fifth distance between the second sensor and the driven wheel, obtaining a sixth distance between the driven wheel and the third sensor, and determining a total distance of the fifth distance, the sixth distance and the fourth distance;
solving a preset second similar triangle model according to the first distance, the third distance and the total distance, and determining a second attitude angle according to a second solution result of the second similar triangle model;
and taking the average value of the first attitude angle and the second attitude angle as the target attitude angle.
5. The robot movement control method according to claim 4, wherein the determining a target attitude angle and a target lateral offset of the mobile robot with respect to the reference based on the distance includes:
solving a preset third similar triangle model according to the fifth distance, the first solution result and the second distance to obtain a first lateral offset;
solving a preset fourth similar triangle model according to the total distance, the second solution result, the sixth distance and the third distance to obtain a second lateral offset;
and taking the average value of the first lateral offset and the second lateral offset as the target lateral offset.
6. The method according to claim 1, wherein the calculating a preset nonlinear control model according to the target lateral offset and the target attitude angle to obtain the movement control parameter of the mobile robot includes:
converting a preset desired position and desired speed from the world coordinate system to the Frenet coordinate system to obtain a desired linear speed and a desired angular speed;
and acquiring a steering wheel course angle of the mobile robot, and solving the nonlinear control model according to the desired linear speed, the desired angular speed, the target lateral offset, the target attitude angle, the steering wheel course angle and a preset longitudinal offset to obtain a steering wheel linear speed and a steering wheel angular speed.
7. The robot movement control method according to claim 1, characterized in that the method further comprises:
according to the target lateral offset and the target attitude angle, a preset linear control model is calculated to obtain a steering wheel course angle; the linear control model is obtained by performing a Taylor expansion on a nonlinear relation between the steering wheel course angle and the target lateral offset and the target attitude angle;
and controlling the steering wheel to drive the mobile robot to move according to the steering wheel course angle and the preset steering wheel linear speed.
8. A robot movement control device, comprising:
a distance determining module configured to determine a distance of the sensor module relative to a reference based on an electrical signal generated by the sensor module;
A target parameter determination module configured to determine a target attitude angle and a target lateral offset of the mobile robot relative to the reference object according to the distance;
the mobile control module is configured to calculate a preset nonlinear control model according to the target lateral offset and the target attitude angle to obtain mobile control parameters of the mobile robot, and control the mobile robot to move according to the mobile control parameters; the nonlinear control model is constructed based on an error dynamic model and a Lyapunov function.
9. A robot movement control apparatus, characterized by comprising: one or more processors; storage means storing one or more programs which, when executed by the one or more processors, cause the one or more processors to implement the robot movement control method of any of claims 1-7.
10. A storage medium containing computer executable instructions, which when executed by a computer processor are for performing the robot movement control method according to any of claims 1-7.
CN202310009748.7A 2023-01-04 2023-01-04 Robot movement control method, device, equipment and storage medium Pending CN116300873A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202310009748.7A CN116300873A (en) 2023-01-04 2023-01-04 Robot movement control method, device, equipment and storage medium


Publications (1)

Publication Number Publication Date
CN116300873A true CN116300873A (en) 2023-06-23

Family

ID=86794915



Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination