CN117464675A - Method, device and equipment for generating motion trail of robot and readable storage medium - Google Patents

Method, device and equipment for generating motion trail of robot and readable storage medium

Info

Publication number
CN117464675A
Authority
CN
China
Prior art keywords
motion
robot
target point
pose
track
Prior art date
Legal status (The legal status is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the status listed.)
Pending
Application number
CN202311479352.5A
Other languages
Chinese (zh)
Inventor
屈云飞
周壮
许长华
聂闻飞
Current Assignee (The listed assignees may be inaccurate. Google has not performed a legal analysis and makes no representation or warranty as to the accuracy of the list.)
Shenzhen Inovance Technology Co Ltd
Original Assignee
Shenzhen Inovance Technology Co Ltd
Priority date (The priority date is an assumption and is not a legal conclusion. Google has not performed a legal analysis and makes no representation as to the accuracy of the date listed.)
Filing date
Publication date
Application filed by Shenzhen Inovance Technology Co Ltd filed Critical Shenzhen Inovance Technology Co Ltd
Priority to CN202311479352.5A priority Critical patent/CN117464675A/en
Publication of CN117464675A publication Critical patent/CN117464675A/en
Pending legal-status Critical Current


Classifications

    • B: PERFORMING OPERATIONS; TRANSPORTING
    • B25: HAND TOOLS; PORTABLE POWER-DRIVEN TOOLS; MANIPULATORS
    • B25J: MANIPULATORS; CHAMBERS PROVIDED WITH MANIPULATION DEVICES
    • B25J9/00: Programme-controlled manipulators
    • B25J9/16: Programme controls
    • B25J9/1656: Programme controls characterised by programming, planning systems for manipulators
    • B25J9/1664: Programme controls characterised by programming, planning systems for manipulators characterised by motion, path, trajectory planning

Abstract

The application discloses a method, a device, equipment and a readable storage medium for generating a motion trajectory of a robot. Compared with the traditional teaching scheme, on one hand, obtaining the motion trajectory by curve fitting reduces the number of target points needed for the curved part of the trajectory, which lowers both the manual workload and the storage space the trajectory occupies in an engineering project; on the other hand, motion constraint conditions are imposed in advance during fitting, so the trajectory is constrained beforehand and the accuracy of the resulting curve is guaranteed, avoiding repeated modification. In addition, the target points and the motion trajectory can be displayed through a multi-dimensional interactive interface, without actually running the robot for demonstration. The method therefore reduces manual workload and improves trajectory accuracy, thereby meeting the motion requirements of an actual robot.

Description

Method, device and equipment for generating motion trail of robot and readable storage medium
Technical Field
The present disclosure relates to the field of robot teaching technologies, and in particular, to a method, an apparatus, a device, and a readable storage medium for generating a motion track of a robot.
Background
At present, when a robot (such as a manipulator) executes a complex motion trajectory, a large number of target points usually has to be taught: the complex curved motion is replaced by linear motion, and the curve is spliced together from sufficiently short line segments. This makes the trajectory adjustment process cumbersome. Faced with a complex trajectory, the operator must spend a great deal of time teaching many target points, repeatedly adjusting them, and repeatedly running robot instructions to check whether the actual trajectory reaches the ideal one. The workload is large, and the trajectory accuracy requirement is difficult to meet.
Disclosure of Invention
The main purpose of the present application is to provide a method for generating a robot motion trajectory, aiming to solve the technical problem that the current traditional robot trajectory teaching scheme involves a large workload and struggles to meet trajectory accuracy requirements.
In order to achieve the above object, the present application provides a method for generating a motion trajectory of a robot, where an application device of the method for generating a motion trajectory of a robot is configured with a multidimensional interactive interface, the method for generating a motion trajectory of a robot includes:
Determining each target point position through which the robot needs to pass;
under the constraint of the motion constraint condition of the robot, curve fitting is carried out on each target point position, and a motion track to be executed by the robot is obtained;
and outputting the motion trail of the robot based on the multi-dimensional interactive interface.
Optionally, the motion constraint condition includes a track mode, and the type of the track mode includes an accurate mode and a smooth mode, and the smooth mode is provided with a basic position error;
under the constraint of the motion constraint condition of the robot, performing curve fitting on each target point location to obtain a motion track to be executed by the robot, wherein the step of obtaining the motion track to be executed by the robot comprises the following steps:
if the track mode is the accurate mode, performing first curve fitting on each target point to obtain the motion track, wherein each target point is located on the motion track under the condition of performing the first curve fitting;
and if the track mode is the smooth mode, performing second curve fitting on each target point to obtain the motion track, wherein under the condition of performing the second curve fitting, the shortest position deviation between each target point and the motion track is smaller than or equal to the basic position error.
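The two track modes can be sketched as follows. The neighbour-averaging rule and the clamping step below are illustrative assumptions for a 2-D point list; the patent does not specify the actual fitting algorithm, only the constraints each mode must satisfy.

```python
import math

def fit_track(points, mode, base_error=0.0):
    """Sketch of the two track modes on a 2-D point list (hypothetical fit).
    mode='accurate': every target point lies on the returned track.
    mode='smooth': each returned point may deviate from its target point,
    but by no more than base_error (the basic position error)."""
    if mode == "accurate":
        return list(points)  # track passes exactly through every target
    out = [points[0]]        # end points are kept fixed
    for i in range(1, len(points) - 1):
        px, py = points[i]
        # neighbour average stands in for the smoothing fit
        ax = (points[i-1][0] + px + points[i+1][0]) / 3.0
        ay = (points[i-1][1] + py + points[i+1][1]) / 3.0
        d = math.hypot(ax - px, ay - py)
        if d > base_error:   # clamp back so the deviation <= base_error
            s = base_error / d
            ax, ay = px + (ax - px) * s, py + (ay - py) * s
        out.append((ax, ay))
    out.append(points[-1])
    return out
```

In accurate mode the clamp never fires because the targets themselves are returned; in smooth mode the clamp enforces exactly the constraint stated above: the shortest position deviation between each target point and the track stays within the basic position error.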
Optionally, each target point is configured with a target pose, the motion track is configured with a pose track, and the smoothing mode is further provided with a basic pose error;
the method for generating the motion trail of the robot further comprises the following steps:
if the track mode is the accurate mode, performing first pose fitting on each target pose to obtain a pose track of the robot on the motion track, wherein the pose corresponding to the target point in the pose track is the same as the target pose of the target point for any one target point under the condition of performing the first pose fitting;
and if the track mode is the smooth mode, performing second pose fitting on each target pose to obtain the pose track, wherein the pose difference between the pose corresponding to the target point and the target pose of the target point in the pose track is smaller than or equal to the basic pose error for any one target point under the condition of performing the second pose fitting.
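The pose-fitting constraint can be illustrated on a single degree of freedom (a heading angle in degrees); a full 6-DOF pose would use the same clamping idea with a rotation distance metric. The averaging rule is an assumption, not the patent's actual pose fit.

```python
import math

def fit_pose_track(headings, mode, base_pose_error=0.0):
    """Sketch of pose fitting on a 1-DOF heading angle (degrees).
    'accurate': the fitted pose at each target point equals the taught pose.
    'smooth': the fitted pose may differ, but by at most base_pose_error."""
    if mode == "accurate":
        return list(headings)
    out = [headings[0]]
    for i in range(1, len(headings) - 1):
        avg = (headings[i-1] + headings[i] + headings[i+1]) / 3.0
        diff = avg - headings[i]
        if abs(diff) > base_pose_error:   # clamp to the basic pose error
            avg = headings[i] + math.copysign(base_pose_error, diff)
        out.append(avg)
    out.append(headings[-1])
    return out
```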
Optionally, the motion constraint further comprises a filtering level;
before the step of performing curve fitting on each target point to obtain a motion trail to be executed by the robot, the method comprises the following steps:
determining, based on the filtering level, noise points among the target points whose fluctuation amplitude is larger than the amplitude threshold corresponding to the filtering level, wherein the fluctuation amplitude includes the amplitude of position fluctuation or the amplitude of pose fluctuation;
and adjusting the position or the pose of the noise point so that the fluctuation amplitude of the noise point is smaller than or equal to the amplitude threshold.
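A minimal sketch of this pre-filtering step, assuming a hypothetical level-to-threshold table and using the distance of a point from the midpoint of its two neighbours as the fluctuation-amplitude measure; the patent leaves both the thresholds and the amplitude measure to the implementation.

```python
import math

# Hypothetical mapping from filtering level to amplitude threshold.
LEVEL_THRESHOLDS = {1: 5.0, 2: 2.0, 3: 0.5}

def filter_noise_points(points, level):
    """Treat a point as noise when its distance from the midpoint of its two
    neighbours (a simple fluctuation-amplitude measure) exceeds the threshold
    of the chosen filtering level, and pull it back onto that threshold so
    its fluctuation amplitude no longer exceeds the limit."""
    thr = LEVEL_THRESHOLDS[level]
    out = list(points)
    for i in range(1, len(points) - 1):
        mx = (points[i-1][0] + points[i+1][0]) / 2.0
        my = (points[i-1][1] + points[i+1][1]) / 2.0
        d = math.hypot(points[i][0] - mx, points[i][1] - my)
        if d > thr:                       # noise point: shrink toward midpoint
            s = thr / d
            out[i] = (mx + (points[i][0] - mx) * s,
                      my + (points[i][1] - my) * s)
    return out
```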
Optionally, the motion constraint condition further comprises a corner judgment threshold value, a position stress coefficient and a posture stress coefficient, and the motion track is configured with a posture track and a speed curve;
the method for generating the motion trail of the robot further comprises the following steps:
determining a position corner on the motion trajectory and a pose corner on the pose trajectory based on the corner determination threshold;
determining a speed threshold at the position corner on the motion trajectory according to the position stress coefficient, and determining a speed threshold at the pose corner on the pose trajectory according to the posture stress coefficient;
and fitting based on a speed threshold value on each corner position in the motion track and a preset speed of the motion track, and generating a speed curve of the robot on the motion track, wherein the corner positions comprise positions of the position corners and positions of the pose corners, and a speed value of any position point on the speed curve represents the moving speed of the robot on a position corresponding to the position point in the motion track.
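The corner-detection and speed-capping steps above can be sketched as follows for position corners on a 2-D path. The capping rule `preset_speed * stress_coef` is an illustrative assumption; the patent only states that the stress coefficient determines the speed threshold at a corner, not the formula.

```python
import math

def corner_speed_limits(points, corner_deg, preset_speed, stress_coef):
    """Mark a position corner wherever the turning angle between consecutive
    segments exceeds corner_deg (the corner judgment threshold), and cap the
    speed there using the (assumed) rule preset_speed * stress_coef."""
    limits = {}
    for i in range(1, len(points) - 1):
        v1 = (points[i][0] - points[i-1][0], points[i][1] - points[i-1][1])
        v2 = (points[i+1][0] - points[i][0], points[i+1][1] - points[i][1])
        c = (v1[0]*v2[0] + v1[1]*v2[1]) / (math.hypot(*v1) * math.hypot(*v2))
        turn = math.degrees(math.acos(max(-1.0, min(1.0, c))))
        if turn > corner_deg:
            limits[i] = preset_speed * stress_coef  # slow down at the corner
    return limits

def speed_curve(points, corner_deg, preset_speed, stress_coef):
    """Per-point speed values: the preset speed everywhere, capped at corners."""
    limits = corner_speed_limits(points, corner_deg, preset_speed, stress_coef)
    return [limits.get(i, preset_speed) for i in range(len(points))]
```

A real speed curve would additionally smooth the transitions between the preset speed and each corner cap (e.g. with acceleration-limited ramps), which is the "fitting" the text refers to.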
Optionally, the multidimensional interactive interface includes a display area and a function control, where the display area is used to determine operation content in the user operation in a function mode corresponding to the function control;
the step of determining each target point position through which the robot needs to pass comprises the following steps:
and determining each target point location based on the operation content of the user operation in the corresponding function mode of the function control, and displaying each target point location through the display area.
Optionally, the functional controls include an add control, an import control and a modify control; the operation content of the user operation comprises an adding operation step in the function mode corresponding to the adding control, an importing operation step in the function mode corresponding to the importing control and a modifying operation step in the function mode corresponding to the modifying control;
the step of determining each target point location based on the operation content of the user operation in the function mode corresponding to the function control includes:
based on the adding operation step, taking the selected point location on the display area as the target point location;
and/or receiving a target point location file based on the importing operation step, and reading the target point location from the target point location file;
And/or, based on the modification operation step, correcting the selected point location on the display area to obtain the target point location.
Optionally, the function controls further include a view angle switching control, a zoom control, a target pickup control, and an export control; the operation content of the user operation further includes a view angle switching operation step in the function mode corresponding to the view angle switching control, a zooming operation step in the function mode corresponding to the zoom control, a picking operation step in the function mode corresponding to the target pickup control, and an exporting operation step in the function mode corresponding to the export control;
the method for generating the motion trail of the robot further comprises the following steps:
switching the view angles of the target points or the motion trails displayed on the display area based on the view angle switching operation step;
and/or, based on the scaling operation step, changing the display size of each target point location or the movement track on the display area;
and/or highlighting the picked up target point location, the picked up motion trajectory, or the picked up partial motion trajectory based on the picking up operation step;
and/or, based on the exporting operation step, exporting each target point location or the motion trajectory to an export target.
Optionally, the step of outputting the motion trail of the robot based on the multi-dimensional interactive interface includes the steps of:
and outputting the motion trail and the motion image of the robot through the multi-dimensional interactive interface, wherein the motion image is an image of the robot moving on the motion trail according to the gesture trail and the speed curve of the motion trail.
Optionally, after the step of outputting the motion profile, the method includes:
and if the point set consisting of the target points is updated, returning, based on the target points in the new point set, to the step of performing curve fitting on each target point under the constraint of the motion constraint conditions of the robot, to obtain the motion trajectory to be executed by the robot.
Optionally, the method for generating the motion trail of the robot further comprises the following steps:
under the condition that the robot has a plurality of continuous motion tracks, determining target points of a joint area in any two continuous motion tracks;
and returning, based on the target points of the joint area, to the step of performing curve fitting on each target point under the constraint of the motion constraint conditions of the robot to obtain the motion trajectory to be executed, so that the two consecutive motion trajectories are spliced together.
In order to achieve the above object, the present application further provides a generation device of a motion trajectory of a robot, the generation device of a motion trajectory of a robot including:
the determining module is used for responding to the user operation of the user based on the multidimensional interactive interface and determining each target point position through which the robot needs to pass;
the fitting module is used for performing curve fitting on each target point location under the constraint of the motion constraint condition of the robot to obtain a motion track to be executed by the robot;
and the output module is used for outputting the motion trail of the robot based on the multi-dimensional interactive interface.
In order to achieve the above object, the present application further provides a device for generating a motion trajectory of a robot, the device including: a memory, a processor, and a robot motion trajectory generation program stored in the memory and executable on the processor, where the robot motion trajectory generation program, when executed by the processor, implements the steps of the above method for generating a motion trajectory of a robot.
In order to achieve the above object, the present application further provides a readable storage medium, where the readable storage medium is a computer readable storage medium, and a generation program of a robot motion trail is stored on the readable storage medium, and when the generation program of the robot motion trail is executed by a processor, the steps of the generation method of the robot motion trail are implemented.
The embodiments of the present application provide a method, an apparatus, a device, and a readable storage medium for generating a robot motion trajectory. In the embodiments, the application device of the method is configured with a multi-dimensional interactive interface; in response to user operations on this interface, the device determines each target point through which the robot needs to pass, performs curve fitting on these target points under the constraint of the robot's motion constraint conditions to obtain the motion trajectory to be executed, and outputs the motion trajectory via the multi-dimensional interactive interface. In other words, under the robot's motion constraint conditions, the motion trajectory is generated by curve fitting through all the target points the robot passes. Compared with the traditional teaching scheme, on one hand, obtaining the trajectory by curve fitting reduces the number of target points needed for the curved part of the trajectory, lowering both the manual workload and the storage space the trajectory occupies in an engineering project; on the other hand, the motion constraint conditions are imposed in advance during fitting, constraining the trajectory beforehand and guaranteeing the accuracy of the resulting curve, so repeated modification is avoided and the manual workload is further reduced. In addition, the target points and the motion trajectory can be displayed through the multi-dimensional interactive interface, so points and trajectory can be inspected visually without actually running the robot.
In summary, the method for generating a robot motion trajectory in the present application reduces manual workload and improves trajectory accuracy, thereby meeting the motion requirements of an actual robot.
Drawings
FIG. 1 is a schematic diagram of a device architecture of a hardware operating environment according to an embodiment of the present application;
fig. 2 is a schematic flow chart of a first embodiment of a method for generating a motion trail of a robot according to the present application;
FIG. 3 is a schematic diagram of a motion trail obtained in an accurate mode in the method for generating a motion trail of a robot;
fig. 4 is a schematic diagram of a motion track obtained in a smooth mode of a method for generating a motion track of a robot according to the present application;
fig. 5 is a schematic flow chart of a second embodiment in the method for generating a motion trail of a robot in the present application;
FIG. 6 is a schematic diagram of a corner in the method for generating a motion trajectory of a robot according to the present application;
fig. 7 is a schematic diagram of smoothing processing in the method for generating a motion trail of a robot in the present application;
fig. 8 is a schematic flow chart of a third embodiment in a method for generating a motion trail of a robot in the present application;
FIG. 9 is a schematic diagram of a multidimensional interactive interface in the method for generating a motion trail of a robot;
FIG. 10 is a schematic diagram of an interactive framework of a robot in the method for generating a motion trail of the robot;
fig. 11 is a schematic flow chart of a fourth embodiment in a method for generating a motion trail of a robot in the present application;
fig. 12 is a schematic diagram of a robot motion trajectory generation device in the method for generating a robot motion trajectory according to the present application.
The realization, functional characteristics and advantages of the present application will be further described with reference to the embodiments, referring to the attached drawings.
Detailed Description
It should be understood that the specific embodiments described herein are for purposes of illustration only and are not intended to limit the present application.
As shown in fig. 1, fig. 1 is a schematic device structure diagram of a hardware running environment according to an embodiment of the present application.
The device in the embodiments of the present application may be a robot or a manipulator, or an electronic terminal device such as a PC, a smartphone, a tablet computer, or a portable computer.
As shown in fig. 1, the apparatus may include: a processor 1001, such as a CPU, a network interface 1004, a user interface 1003, a memory 1005, and a communication bus 1002. The communication bus 1002 is used to enable communication between these components. The user interface 1003 may include a display (Display) and an input unit such as a keyboard (Keyboard); optionally, the user interface 1003 may further include a standard wired interface and a wireless interface. The network interface 1004 may optionally include a standard wired interface and a wireless interface (e.g., a WI-FI interface). The memory 1005 may be a high-speed RAM memory or a non-volatile memory, such as a disk memory. The memory 1005 may optionally also be a storage device separate from the processor 1001.
Optionally, the device may also include a camera, RF (Radio Frequency) circuitry, sensors, audio circuitry, a WiFi module, and the like. The sensors include, for example, light sensors and motion sensors. Specifically, the light sensor may include an ambient light sensor, which adjusts the brightness of the display screen according to the brightness of ambient light, and a proximity sensor, which turns off the display screen and/or the backlight when the mobile terminal moves to the ear. As one kind of motion sensor, a gravity acceleration sensor can detect acceleration in all directions (generally three axes) and can detect the magnitude and direction of gravity when the mobile terminal is stationary; it can be used for recognizing the posture of the mobile terminal (such as horizontal/vertical screen switching, related games, and magnetometer posture calibration) and for vibration-recognition-related functions (such as a pedometer and tap detection). Of course, the mobile terminal may also be configured with other sensors such as a gyroscope, a barometer, a hygrometer, a thermometer, and an infrared sensor, which are not described here.
It will be appreciated by those skilled in the art that the device structure shown in fig. 1 is not limiting of the device and may include more or fewer components than shown, or may combine certain components, or a different arrangement of components.
As shown in fig. 1, an operating system, a network communication module, a user interface module, and a robot motion trajectory generation program may be included in a memory 1005 as one type of computer storage medium.
In the device shown in fig. 1, the network interface 1004 is mainly used for connecting to a background server, and performing data communication with the background server; the user interface 1003 is mainly used for connecting a client (user side) and performing data communication with the client; and the processor 1001 may be configured to call a generation program of a robot motion trajectory stored in the memory 1005, an application device of the generation program of the robot motion trajectory is configured with a multi-dimensional interactive interface, and perform the following operations:
responding to user operation of a user based on the multidimensional interactive interface, and determining each target point position through which the robot needs to pass;
under the constraint of the motion constraint condition of the robot, curve fitting is carried out on each target point position, and a motion track to be executed by the robot is obtained;
and outputting the motion trail of the robot based on the multi-dimensional interactive interface.
Further, the processor 1001 may call a generation program of a robot motion trajectory stored in the memory 1005, and further perform the following operations:
The motion constraint condition comprises a track mode, wherein the track mode comprises an accurate mode and a smooth mode, and the smooth mode is provided with a basic position error;
under the constraint of the motion constraint condition of the robot, performing curve fitting on each target point location to obtain a motion track to be executed by the robot, wherein the step of obtaining the motion track to be executed by the robot comprises the following steps:
if the track mode is the accurate mode, performing first curve fitting on each target point to obtain the motion track, wherein each target point is located on the motion track under the condition of performing the first curve fitting;
and if the track mode is the smooth mode, performing second curve fitting on each target point to obtain the motion track, wherein under the condition of performing the second curve fitting, the shortest position deviation between each target point and the motion track is smaller than or equal to the basic position error.
Further, the processor 1001 may call a generation program of a robot motion trajectory stored in the memory 1005, and further perform the following operations:
the target pose is configured on each target point, the motion trail is configured with a pose trail, the smoothing mode is also provided with a basic pose error, and the generation method of the motion trail of the robot further comprises the following steps:
If the track mode is the accurate mode, performing first pose fitting on each target pose to obtain a pose track of the robot on the motion track, wherein the pose corresponding to the target point in the pose track is the same as the target pose of the target point for any one target point under the condition of performing the first pose fitting;
and if the track mode is the smooth mode, performing second pose fitting on each target pose to obtain the pose track, wherein the pose difference between the pose corresponding to the target point and the target pose of the target point in the pose track is smaller than or equal to the basic pose error for any one target point under the condition of performing the second pose fitting.
Further, the processor 1001 may call a generation program of a robot motion trajectory stored in the memory 1005, and further perform the following operations:
the motion constraint condition further comprises a filtering level, and before the step of performing curve fitting on each target point location to obtain a motion track to be executed by the robot, the method comprises the following steps:
determining, based on the filtering level, noise points among the target points whose fluctuation amplitude is larger than the amplitude threshold corresponding to the filtering level, wherein the fluctuation amplitude includes the amplitude of position fluctuation or the amplitude of pose fluctuation;
and adjusting the position or the pose of the noise point so that the fluctuation amplitude of the noise point is smaller than or equal to the amplitude threshold.
Further, the processor 1001 may call a generation program of a robot motion trajectory stored in the memory 1005, and further perform the following operations:
the motion constraint condition further comprises a corner judging threshold value, a position stress coefficient and a posture stress coefficient, and the motion track is configured with a posture track and a speed curve;
the method for generating the motion trail of the robot further comprises the following steps:
determining a position corner on the motion trajectory and a pose corner on the pose trajectory based on the corner determination threshold;
determining a speed threshold at the position corner on the motion trajectory according to the position stress coefficient, and determining a speed threshold at the pose corner on the pose trajectory according to the posture stress coefficient;
and fitting based on a speed threshold value on each corner position in the motion track and a preset speed of the motion track, and generating a speed curve of the robot on the motion track, wherein the corner positions comprise positions of the position corners and positions of the pose corners, and a speed value of any position point on the speed curve represents the moving speed of the robot on a position corresponding to the position point in the motion track.
Further, the processor 1001 may call a generation program of a robot motion trajectory stored in the memory 1005, and further perform the following operations:
the multi-dimensional interaction interface comprises a display area and a functional control, wherein the display area is used for determining operation content in the user operation under the functional mode corresponding to the functional control, and the step of determining each target point position through which the robot needs to pass comprises the following steps:
and determining each target point location based on the operation content of the user operation in the corresponding function mode of the function control, and displaying each target point location through the display area.
Further, the processor 1001 may call a generation program of a robot motion trajectory stored in the memory 1005, and further perform the following operations:
the function controls comprise an adding control, an importing control and a modifying control; the operation content of the user operation comprises an adding operation step in the function mode corresponding to the adding control, an importing operation step in the function mode corresponding to the importing control and a modifying operation step in the function mode corresponding to the modifying control;
the step of determining each target point location based on the operation content of the user operation in the function mode corresponding to the function control includes:
Based on the adding operation step, taking the selected point location on the display area as the target point location; and/or receiving a target point location file based on the importing operation step, and reading the target point location from the target point location file;
and/or, based on the modification operation step, correcting the selected point location on the display area to obtain the target point location.
Further, the processor 1001 may call a generation program of a robot motion trajectory stored in the memory 1005, and further perform the following operations:
the function controls further include a view angle switching control, a zoom control, a target pickup control, and an export control; the operation content of the user operation further includes a view angle switching operation step in the function mode corresponding to the view angle switching control, a zooming operation step in the function mode corresponding to the zoom control, a picking operation step in the function mode corresponding to the target pickup control, and an exporting operation step in the function mode corresponding to the export control;
the method for generating the motion trail of the robot further comprises the following steps:
switching the view angles of the target points or the motion trails displayed on the display area based on the view angle switching operation step;
And/or, based on the scaling operation step, changing the display size of each target point location or the movement track on the display area;
and/or highlighting the picked up target point location, the picked up motion trajectory, or the picked up partial motion trajectory based on the picking up operation step;
and/or, based on the exporting operation step, exporting each target point location or the motion trajectory to an export target.
Further, the processor 1001 may call a generation program of a robot motion trajectory stored in the memory 1005, and further perform the following operations:
the step of outputting the motion trail of the robot based on the multi-dimensional interaction interface comprises the following steps:
and outputting the motion trail and the motion image of the robot through the multi-dimensional interactive interface, wherein the motion image is an image of the robot moving on the motion trail according to the gesture trail and the speed curve of the motion trail.
Further, the processor 1001 may call a generation program of a robot motion trajectory stored in the memory 1005, and further perform the following operations:
after the step of outputting the motion profile, the method includes:
and if the point set consisting of the target points is updated, returning, based on the target points in the new point set, to the step of performing curve fitting on each target point under the constraint of the motion constraint conditions of the robot, to obtain the motion trajectory to be executed by the robot.
Further, the processor 1001 may call a generation program of a robot motion trajectory stored in the memory 1005, and further perform the following operations:
the method for generating the motion trail of the robot further comprises the following steps:
in the case that the robot has a plurality of continuous motion tracks, determining the target points in the joint area of any two continuous motion tracks;
and returning, based on the target points in the joint area, to the step of performing curve fitting on each target point under the constraint of the motion constraint condition of the robot to obtain the motion track to be executed by the robot, so as to splice the two continuous motion tracks.
Referring to fig. 2, in a first embodiment of a method for generating a motion trajectory of a robot, an application device of the method for generating a motion trajectory of a robot is configured with a multi-dimensional interactive interface, the method for generating a motion trajectory of a robot includes:
Step S10, determining each target point that the robot needs to pass through according to the user operation based on the multi-dimensional interactive interface;
In this embodiment, the robot generally refers to a manipulator. In the manufacturing industry, for example, depending on the production line or production link in which it is located, the manipulator needs to perform different actions according to different tasks, that is, to follow different motion tracks of the robot (manipulator). The motion track must be determined by an operator through teaching. Especially when facing a complex motion track curve, the traditional teaching mode replaces the actual curve with tiny line segments, so the operator has to set a large number of target points for teaching; during teaching, the target points must be adjusted repeatedly, and the robot instructions run repeatedly to check whether the actual track reaches the ideal track. This causes a huge workload, makes the debugging process inconvenient, and cannot guarantee the precision of the finally obtained motion track. In addition, the number of points to be stored in the project is greatly increased, enlarging the project files and occupying storage space. In view of this, the present embodiment provides a method for generating a motion track of a robot, in which the motion track is generated by means of curve fitting, so as to simplify the curve teaching part of the motion track, reduce the teaching workload, and improve the precision of the curve in the motion track, thereby meeting the motion requirements of an actual robot.
In this embodiment, the application device of the method for generating a motion track of a robot is configured with a multi-dimensional interactive interface. For example, the application device may be a robot debugging device configured with such an interface, and the user may complete the determination of the target points based on it. The multi-dimensional interactive interface may be a two-dimensional, three-dimensional, or four-dimensional interactive interface. In the case of a two-dimensional interactive interface, the target points or the motion track are displayed from the view angle of a two-dimensional plane; for example, a plane formed by mutually perpendicular x and y axes may present the target points or the motion track. In the case of a three-dimensional interactive interface, the target points or the motion track are presented from a three-dimensional space view angle, for example a space formed by mutually perpendicular x, y and z axes. A four-dimensional interactive interface adds a time axis on the basis of the three-dimensional interactive interface; for example, for a motion track, the length of the time axis may be the time required by the manipulator to complete the motion track, and by dragging the time axis the user may display the position and pose of the manipulator in the three-dimensional space at different moments. It can be understood that, owing to the multi-dimensional interactive interface, the user can intuitively see the motion track of the robot without actually controlling the robot's motion, which simplifies the operator's work.
There may be multiple ways to determine each target point the robot needs to pass through. For example, according to actual motion requirements, the operator may input the coordinates of a target point through the multi-dimensional interactive interface, may directly select the target point in the multi-dimensional interactive interface, or may directly import a point file from which the target points are read. It should be noted that, in the following steps, compared with the scheme of splicing sufficiently short line segments into a curve, obtaining the motion track by fitting greatly reduces the number of target points and simplifies the operator's workload.
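As an illustration of the point-file import path mentioned above, the sketch below parses target points from CSV-like text. The column layout (x, y, z in millimetres, one point per line in passing order), the comment convention and the function name are assumptions of this sketch, not a format prescribed by the application device.

```python
import csv
import io

def read_target_points(file_text):
    """Parse ordered target points from point-file text.

    Hypothetical layout: one "x,y,z" line per point (millimetres),
    listed in the order the robot must pass through them; lines
    beginning with '#' are comments.
    """
    points = []
    for row in csv.reader(io.StringIO(file_text)):
        if not row or row[0].lstrip().startswith("#"):
            continue  # skip blank and comment lines
        points.append(tuple(float(v) for v in row[:3]))
    return points

demo_points = read_target_points("# x,y,z\n0,0,0\n10,5,0\n20,0,0\n")
```

The returned list preserves the passing order, which the fitting step below relies on.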
Step S20, under the constraint of the motion constraint condition of the robot, curve fitting is carried out on each target point position, and a motion track to be executed by the robot is obtained;
The motion constraint condition refers to a constraint on the robot's motion and can be set by the operator according to the actual situation. For example, it may be a motion constraint imposed by the robot's structure, or a constraint on the degree of deviation between the motion track and the target points. Through such constraints, the fitted motion track can meet the actual motion requirements, so the operator does not need to modify the motion track repeatedly. In addition, the target points include the order in which the robot passes through them; for example, for target point A and target point B, the robot needs to pass through target point A before target point B. Under the constraint condition, curve fitting is performed on the target points based on the order in which the robot passes through them, yielding the motion track of the robot. For example, according to the passing order, the discrete target points are interpolated (e.g., the track between two adjacent target points is interpolated according to the passing order), and a smooth curve (i.e., a smooth motion track) is obtained based on the interpolated data, which reduces the number of target points the operator needs to set. The specific curve fitting algorithm may be chosen by a technician according to actual requirements, for example a least-squares solution or a cubic spline fitting algorithm, fitted under the constraint of the motion constraint condition.
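As a sketch of the interpolation idea, the following densifies ordered target points into a smooth pass-through curve using a Catmull-Rom spline; the patent does not prescribe this particular algorithm (least squares and cubic splines are mentioned only as examples), so the choice here is an assumption.

```python
def catmull_rom(points, samples_per_segment=10):
    """Interpolate a smooth curve through ordered 2-D waypoints
    (Catmull-Rom spline); the curve passes through every input
    point, in passing order."""
    if len(points) < 2:
        return list(points)
    pts = [points[0]] + list(points) + [points[-1]]  # pad endpoints
    curve = []
    for i in range(1, len(pts) - 2):
        p0, p1, p2, p3 = pts[i - 1], pts[i], pts[i + 1], pts[i + 2]
        for s in range(samples_per_segment):
            t = s / samples_per_segment
            curve.append(tuple(
                0.5 * (2.0 * p1[d]
                       + (p2[d] - p0[d]) * t
                       + (2.0 * p0[d] - 5.0 * p1[d] + 4.0 * p2[d] - p3[d]) * t * t
                       + (3.0 * p1[d] - p0[d] - 3.0 * p2[d] + p3[d]) * t ** 3)
                for d in (0, 1)))
    curve.append(points[-1])
    return curve

demo_curve = catmull_rom([(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)], 4)
```

Each segment of the curve starts exactly at a target point, so all target points lie on the result, as the precise mode below requires.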
And step S30, outputting the motion trail of the robot.
For example, the motion track of the robot may be output in the form of a curve. The motion track may be output to an execution part of the robot, which then controls the robot to move along the motion track. The motion track may also be output to a target point editing part, so that the operator can judge from the displayed motion track whether it is ideal or deviates, for further modification and so on.
In this embodiment, each target point through which the robot needs to pass is determined; under the constraint of the motion constraint condition of the robot, curve fitting is performed on each target point to obtain the motion track to be executed by the robot; and the motion track of the robot is output. In other words, in the embodiments of the application, under the motion constraint condition of the robot, the motion track is generated from all the target points the robot passes through by means of curve fitting. Compared with the traditional teaching scheme, on one hand, obtaining the motion track by curve fitting reduces the number of target points needed for the curved part of the motion track, and thus reduces the manual workload and the storage space required for the motion track in the project; on the other hand, the motion constraint condition is imposed in advance during fitting, limiting the motion track beforehand and ensuring the accuracy of the resulting curve, so repeated modification is avoided and the manual workload is further reduced. In addition, the target points and the motion track can be displayed through the multi-dimensional interactive interface, so the points and the track can be seen intuitively without actually running the robot. In summary, the method for generating the motion track of a robot can reduce the manual workload and improve the precision of the motion track, thereby meeting the motion requirements of an actual robot.
In a possible implementation manner, the motion constraint condition includes a track mode, the types of the track mode include a precise mode and a smooth mode, the smooth mode is provided with a base position error, and the step of performing curve fitting on each target point under the constraint of the motion constraint condition of the robot to obtain the motion track to be executed by the robot includes:
step S210, if the track mode is the precise mode, performing a first curve fitting on each target point to obtain the motion track, where, when the first curve fitting is performed, each target point is located on the motion track;
step S220, if the track mode is the smooth mode, performing a second curve fitting on each target point to obtain the motion track, where, when the second curve fitting is performed, the shortest position deviation between each target point and the motion track is smaller than or equal to the base position error.
It should be noted that, in this embodiment, the motion constraint condition may include a track mode, that is, the motion track is constrained by the track mode. The track mode includes a precise mode and a smooth mode, and a base position error is further set in the smooth mode. It can be understood that, in some scenarios, because the motion track does not necessarily pass strictly through the target points, a position error may exist between the curve path and the input points. The allowable position deviation between the motion track and a target point is called the base position error, and this parameter is used as a curve-fitting reference.
In practical application, the user can select different modes for different application scenarios; the track mode is generally applicable to most working-condition requirements and can be adjusted according to the curve parameters input by the user. The smooth mode is suitable for working conditions with dense input data, large acquisition noise, and low path-precision requirements: the curve in this mode does not strictly pass through the input points but approaches the target points, with the degree of approximation related to the density of the points, and it has better smoothness. The precise mode is suitable for working conditions with high input-data accuracy, little noise, a smooth original track, and path points that must be reached strictly.
In an exemplary case, when the track mode is the precise mode, the first curve fitting is performed on the target points to obtain the motion track, and when the first curve fitting is performed, every target point lies on the resulting motion track. Fig. 3 is a schematic diagram of a motion track obtained in the precise mode of the present application. According to the robot's passing order, the track passes through target point A, target point B, target point C, target point D and target point E in sequence. In the precise mode, the first curve fitting yields motion track 1, and as can be seen from fig. 3, target points A to E all lie on motion track 1. In the smooth mode, the second curve fitting is performed on the target points to obtain the motion track, and when the second curve fitting is performed, the shortest position deviation between each target point and the motion track is smaller than or equal to the base position error. Fig. 4 is a schematic diagram of a motion track obtained in the smooth mode of the present application: the second curve fitting on the same target points A to E, passed in the same order, yields motion track 2. As can be seen from fig. 4, not all target points lie strictly on motion track 2; some target points deviate from the motion track, but the shortest position deviation of each target point from the motion track is controlled within the base position error, that is, the shortest position deviation is smaller than or equal to the base position error.
The specific base position error can be set by the operator according to the actual situation. In addition to the smooth mode and the precise mode described above, a default mode may be included, in which the base position error takes a default value, such as 1 mm.
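A minimal sketch of the smooth-mode deviation bound, assuming a simple neighbour-midpoint pull as the approximation step: each interior point is drawn toward the midpoint of its neighbours, but the move is clamped so the fitted point never deviates from its target point by more than the base position error (the 1 mm default mentioned above). The actual second curve fitting is not specified in this form; only the clamping of the deviation is the point being illustrated.

```python
import math

def smooth_fit(points, base_position_error=1.0, weight=0.5):
    """Pull each interior waypoint toward the midpoint of its
    neighbours, clamping the move so the fitted point stays within
    the base position error of the original target point."""
    out = [points[0]]
    for i in range(1, len(points) - 1):
        px, py = points[i]
        mx = (points[i - 1][0] + points[i + 1][0]) / 2.0
        my = (points[i - 1][1] + points[i + 1][1]) / 2.0
        dx, dy = (mx - px) * weight, (my - py) * weight
        dist = math.hypot(dx, dy)
        if dist > base_position_error:
            scale = base_position_error / dist  # clamp to allowed deviation
            dx, dy = dx * scale, dy * scale
        out.append((px + dx, py + dy))
    out.append(points[-1])
    return out

demo_smoothed = smooth_fit([(0.0, 0.0), (5.0, 5.0), (10.0, 0.0)])
```

By construction, the deviation between each target point and its fitted counterpart never exceeds the base position error, mirroring the smooth-mode constraint.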
In a possible implementation manner, each target point is configured with a target pose, the motion track is configured with a pose track, the smooth mode is further provided with a base pose error, and the method for generating the motion track of the robot further includes:
step S201, if the track mode is the precise mode, performing first pose fitting on each target pose to obtain the pose track of the robot on the motion track, where, when the first pose fitting is performed, for any target point, the pose corresponding to that target point in the pose track is the same as the target pose of that target point;
step S202, if the track mode is the smooth mode, performing second pose fitting on each target pose to obtain the pose track, where, when the second pose fitting is performed, for any target point, the pose difference between the pose corresponding to that target point in the pose track and the target pose of that target point is smaller than or equal to the base pose error.
In practical application, besides setting each target point, the operator may configure a target pose at the target point. The target pose is the orientation the robot needs to assume after reaching the corresponding target point; for example, if the robot is a manipulator with a pointing direction, then for any target point, the target pose of that point refers to the manipulator's pointing direction after it reaches the point. Correspondingly, the motion track may be configured with a pose track, which represents the robot's pose at different positions along the motion track. For example, the target poses may be fitted according to the passing order of their corresponding target points: for two target poses adjacent in the passing order, interpolation between them yields the transition poses, i.e., the interpolated data. By this fitting of all target poses, a complete pose track of the robot along the motion track is obtained. Specifically, the fitting algorithm used for the pose track is the same as that used for the motion track, and is not repeated here.
For example, similar to motion track fitting, the motion constraint condition may further include a base pose error, that is, an approximation error may exist between the pose corresponding to a target point in the pose track and the target pose of that point. The allowable pose deviation between the pose track and the target pose is called the base pose error; the curve normally defaults to a tolerance of 1°. This parameter serves as the reference value for pose-track fitting. Where higher precision is required, the user can reduce the pose deviation of the pose track by adjusting the base pose error, or use the precise mode. In the precise mode, first pose fitting is performed on all target poses according to the robot's passing order to obtain the pose track of the robot on the motion track; when the first pose fitting is performed, for any target point, the pose corresponding to that point in the pose track is the same as its target pose. In the smooth mode, second pose fitting is performed on the target poses according to the passing order to obtain the pose track; when the second pose fitting is performed, for any target point, the pose difference between the pose corresponding to that point in the pose track and its target pose is smaller than or equal to the base pose error. In addition, in practical application, the fitting of the motion track and the fitting of the pose track can be performed synchronously, improving teaching efficiency.
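For a single orientation angle, precise-mode pose fitting can be sketched as below; representing the pose as one angle in degrees and using a linear transition between adjacent target poses are simplifying assumptions of this sketch.

```python
def fit_pose_track(target_poses, samples_per_segment=5):
    """Interpolate transition poses between adjacent target poses.

    `target_poses` lists orientation angles (degrees) in passing
    order; the returned pose track contains every target pose
    exactly, as required in the precise mode."""
    track = []
    for a, b in zip(target_poses, target_poses[1:]):
        for s in range(samples_per_segment):
            t = s / samples_per_segment
            track.append(a + (b - a) * t)  # linear transition pose
    track.append(target_poses[-1])
    return track

demo_pose_track = fit_pose_track([0.0, 10.0])
```

In the smooth mode one would instead allow each fitted pose to deviate from its target pose by up to the base pose error (1° by default) rather than passing through it exactly.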
In a possible embodiment, the motion constraint further comprises a filtering level;
before the step of performing curve fitting on each target point to obtain a motion trail to be executed by the robot, the method comprises the following steps:
step S01, determining, based on the filtering level, the noise points among the target points whose fluctuation amplitude is greater than the amplitude threshold corresponding to the filtering level, where the fluctuation amplitude includes the amplitude of position fluctuation or the amplitude of pose fluctuation;
and step S02, adjusting the position or the pose of the noise point so that the fluctuation amplitude of the noise point is smaller than or equal to the amplitude threshold.
In practical application, there are various ways for an operator to teach the robot, for example drag teaching: the operator drags the robot along the desired track, and the relevant sensors record the target points in the robot's motion track, thereby completing the acquisition of the target points. However, in application scenarios where the track recognition sensor has low accuracy, the acquired data is imprecise and jitters severely, so the actual motion track may stall and lack smoothness; that is, the original target points fluctuate greatly, which affects the smoothness of the fitted motion track. Therefore, the original target points are trimmed by filtering, so that high-frequency noise burrs are filtered out and the smoothness of the motion track is improved. It should be noted that filtering is generally performed by default, and the operator can set the filtering level according to actual requirements, so as to filter the original target points and remove jitter noise from them.
Illustratively, the motion constraint condition further includes a filtering level, through which the noise points among the target points can be determined. It can be understood that a noise point is a point whose position fluctuation amplitude or pose fluctuation amplitude is greater than the amplitude threshold corresponding to the filtering level. For example, for the position fluctuation amplitude, a linear fit may be performed on local target points, and the deviation of a target point from the linear fitting result taken as its fluctuation amplitude; the pose fluctuation amplitude may be determined in the same manner. Then the position or pose of each noise point is adjusted and corrected so that its fluctuation amplitude is smaller than or equal to the amplitude threshold. By adjusting the noise points, the motion track finally obtained by curve fitting has good smoothness.
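The noise-point test above can be sketched by taking the chord between a point's two neighbours as the local linear fit; the threshold value and the pull-back rule (retaining only the allowed amplitude) are assumptions of this sketch.

```python
import math

def filter_noise_points(points, amplitude_threshold=0.5):
    """Clamp noisy waypoints: for each interior point, the deviation
    from the chord joining its two neighbours (the local linear fit)
    is taken as the fluctuation amplitude; points exceeding the
    amplitude threshold are pulled back onto the threshold."""
    out = [points[0]]
    for i in range(1, len(points) - 1):
        (ax, ay), (px, py), (bx, by) = points[i - 1], points[i], points[i + 1]
        cx, cy = bx - ax, by - ay            # chord direction
        chord = math.hypot(cx, cy) or 1.0
        dev = abs(cx * (py - ay) - cy * (px - ax)) / chord
        if dev > amplitude_threshold:
            t = ((px - ax) * cx + (py - ay) * cy) / (chord * chord)
            fx, fy = ax + t * cx, ay + t * cy  # foot of perpendicular
            k = amplitude_threshold / dev      # keep only allowed amplitude
            px, py = fx + (px - fx) * k, fy + (py - fy) * k
        out.append((px, py))
    out.append(points[-1])
    return out

demo_filtered = filter_noise_points([(0.0, 0.0), (1.0, 2.0), (2.0, 0.0)], 0.5)
```

Points already within the threshold pass through unchanged, so only high-frequency spikes are attenuated before fitting.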
Referring to fig. 5, a second embodiment of the present application is proposed based on the first embodiment; for parts the same as or similar to the above embodiment, reference may be made to the above, and they are not described here again. In this embodiment, the motion constraint condition further comprises a corner determination threshold, a position stress coefficient and a pose stress coefficient, and the motion track is configured with a pose track and a speed curve;
The method for generating the motion trail of the robot further comprises the following steps:
step S100, determining the position corners on the motion track and the pose corners on the pose track based on the corner determination threshold;
step S200, determining the speed threshold of each position corner on the motion track according to the position stress coefficient, and determining the speed threshold of each pose corner on the pose track according to the pose stress coefficient;
and step S300, performing fitting based on the speed threshold at each corner position in the motion track and a preset speed of the motion track to generate a speed curve of the robot on the motion track, where the corner positions include the positions of the position corners and the positions of the pose corners, and the speed value at any position point on the speed curve represents the moving speed of the robot at the corresponding position in the motion track.
After the motion track and the pose track are determined, the motion speed of the robot on the motion track, that is, the speed curve, needs to be determined. In this embodiment, the motion constraint condition further includes a corner determination threshold, a position stress coefficient, and a pose stress coefficient. The corner determination threshold is used to identify regions of large variation in the motion track and the pose track, i.e. corner positions: if obvious turning features appear in the motion track, the flexibility of the track is greatly affected, and local sharpness or distortion of the track easily interferes with the environment and causes impact or obvious deceleration. The corner determination threshold may be a curve angle threshold or a pose angle change threshold. For the motion track, a region whose angle is smaller than the curve angle threshold is taken as a position corner. For example, referring to fig. 6, a corner schematic diagram of the present application, the robot again passes through target point A, target point B, target point C, target point D and target point E in sequence; the local relative angle change at target point C is denoted θ, and θ is compared with the curve angle threshold to determine whether target point C is a corner. Similarly, for the pose track, a region whose pose change is greater than the pose angle change threshold may be taken as a pose corner. It should be noted that, for regions where the points are closely spaced (e.g., where the distance between adjacent points is less than 5 mm), the points may be set so as not to be included in the corner region.
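A sketch of position-corner detection follows. Here θ is taken as the turning angle between the incoming and outgoing segments (a large turning angle marking a corner), an equivalent convention to comparing the interior angle against the curve angle threshold; the threshold and spacing values are assumptions.

```python
import math

def find_position_corners(points, corner_threshold_deg=30.0, min_spacing=5.0):
    """Return the indices of interior waypoints whose local turning
    angle exceeds the corner determination threshold; waypoints with
    neighbour spacing below min_spacing (mm) are skipped, as the
    document suggests for densely spaced regions."""
    corners = []
    for i in range(1, len(points) - 1):
        ax, ay = points[i - 1]
        px, py = points[i]
        bx, by = points[i + 1]
        d_in = math.hypot(px - ax, py - ay)
        d_out = math.hypot(bx - px, by - py)
        if d_in < min_spacing or d_out < min_spacing:
            continue  # too dense: excluded from the corner region
        a_in = math.atan2(py - ay, px - ax)
        a_out = math.atan2(by - py, bx - px)
        theta = abs(math.degrees(a_out - a_in))
        theta = min(theta, 360.0 - theta)  # wrap to [0, 180]
        if theta > corner_threshold_deg:
            corners.append(i)
    return corners
```

The same scheme applies to pose corners by substituting pose-angle changes for the segment directions.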
The position stress coefficient and the pose stress coefficient are the motion-limiting conditions of the robot at the corners. For example, when the robot passes through a corner position in the motion track, if the motion speed near the corner is relatively high, the robot's motion impact or shaking becomes obvious, which is unfavorable for stable motion. In this embodiment, therefore, the position stress coefficient and the pose stress coefficient are set to limit the running speed at corner positions, so as to avoid severe impact vibration or triggering an acceleration-overload alarm. The speed threshold of a position corner on the motion track is determined by the position stress coefficient, and the speed threshold of a pose corner on the pose track by the pose stress coefficient; for example, each stress coefficient may be proportional to its speed threshold, so the larger the stress coefficient, the larger the corresponding speed threshold. For the overlapping area of a position corner and a pose corner, that is, a position with two speed thresholds, the smaller speed threshold is selected as the final speed-limiting condition at that position. Fitting the speed thresholds at the corner positions (the positions of the position corners and of the pose corners) in the motion track together with the preset speed of the motion track yields the speed curve of the robot on the motion track. The speed value at any position point on the speed curve represents the robot's moving speed at the corresponding position in the motion track; for example, if the speed value at a position point on the speed curve is a, then the robot's moving speed at the corresponding position in the motion track is a.
The preset speed is the robot's normal moving speed in the absence of corners, and can be set by the operator according to actual requirements.
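The speed-curve step can be sketched as below, assuming the corner speed threshold is proportional to the preset speed via the stress coefficient and that the speed ramps linearly back to the preset speed around each corner; taking the minimum of overlapping limits mirrors the rule above for overlapping position and pose corners. All parameter names and the ramp shape are assumptions.

```python
def build_speed_curve(n_samples, corner_indices, preset_speed=100.0,
                      stress_coefficient=0.2, ramp=5):
    """Generate a sampled speed curve: preset speed everywhere,
    dipping to stress_coefficient * preset_speed at each corner
    sample, with a linear ramp of `ramp` samples on either side."""
    corner_speed = stress_coefficient * preset_speed
    speeds = [preset_speed] * n_samples
    for c in corner_indices:
        for k in range(-ramp, ramp + 1):
            i = c + k
            if 0 <= i < n_samples:
                blend = abs(k) / ramp              # 0 at the corner, 1 at ramp edge
                limit = corner_speed + (preset_speed - corner_speed) * blend
                speeds[i] = min(speeds[i], limit)  # smaller limit wins at overlaps
    return speeds

demo_speeds = build_speed_curve(20, [10])
```

Each entry of the result is the robot's moving speed at the corresponding sampled position of the motion track, matching the speed-curve definition above.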
In addition, it should be noted that the motion track or the pose track in the above embodiments is fitted under different track modes. Taking the motion track as an example: in the smooth mode, the position deviation between each target point and the motion track is constrained to be smaller than or equal to the base position error. In practical application, to ensure the smoothness of the motion track, the constraint at corner positions may be relaxed; that is, the motion constraint condition may further include a smoothing error magnification. For example, if the smoothing error magnification is set to 10, the position error in corner regions is constrained to 10 times the base position error during fitting, i.e. the position error between a target point in a corner region and the motion track is constrained to be smaller than or equal to 10 times the base position error. Similarly, the smoothing error magnification can be used in the fitting of the pose track, which is not repeated here. Fig. 7 is a schematic diagram of the smoothing processing in the present application, in which the dashed-box region is the smoothing region used to smooth the corner part of the track; the degree of smoothing is proportional to the smoothing error magnification.
In a possible embodiment, after the step of outputting the motion trajectory, the method includes:
And step S40, if the point set consisting of the target points is updated, returning to the step of performing curve fitting on each target point under the constraint of the motion constraint condition of the robot based on each target point in the new point set to obtain the motion track to be executed by the robot.
It can be understood that the obtained motion track may be output externally, for example to an operator, who may determine whether the output motion track reaches the expected target. If it does not, the current target points may be changed: the position or pose of a target point may be modified, new target points added, or target points removed; that is, the point set formed by the target points is updated. In this case, based on the target points in the new point set, the step of performing curve fitting on each target point under the constraint of the robot's motion constraint condition to obtain the motion track to be executed is performed again, regenerating the motion track, the pose track, the speed curve, and so on.
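The update-and-refit behaviour amounts to caching the fitted track against the point set it was fitted from and refitting only when the set changes; the structure and names below are assumptions of this sketch, with `fit` standing in for the constrained curve fitting of step S20.

```python
def track_with_refit(get_points, fit):
    """Return a callable yielding the current motion track; the
    curve fit reruns only when the point set has changed since the
    last fit. `get_points` returns the current ordered target
    points; `fit` maps a point tuple to a motion track."""
    cache = {"points": None, "track": None}

    def current_track():
        pts = tuple(get_points())
        if pts != cache["points"]:   # point set updated: refit
            cache["points"] = pts
            cache["track"] = fit(pts)
        return cache["track"]

    return current_track

# Minimal demonstration: the stand-in fit runs once per distinct point set.
fit_calls = []
def demo_fit(pts):
    fit_calls.append(pts)
    return list(pts)

demo_set = [[(0, 0), (1, 1)]]
current = track_with_refit(lambda: demo_set[0], demo_fit)
current()
current()                    # unchanged point set: cached track reused
demo_set[0] = [(0, 0), (2, 2)]
current()                    # updated point set: curve fit rerun
```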
Referring to fig. 8, a third embodiment of the present application is proposed based on the first and second embodiments; for parts the same as or similar to the above embodiments, reference may be made to the above, and they are not described here again. The multi-dimensional interactive interface comprises a display area and function controls, wherein the display area is used for determining the operation content of the user operation in the function mode corresponding to a function control:
Step S110, determining each target point location based on the operation content of the user operation in the function mode corresponding to the function control, and displaying each target point location through the display area.
It should be noted that, to facilitate the operator's setting of target points, the application device of the method for generating a motion track of a robot in this embodiment is configured with a multi-dimensional interactive interface. The application device may be the robot itself or a debugging device connected to the robot. The multi-dimensional interactive interface can be built with a preset geometry-engine tool and a preset window tool. For example, it can be built on the open-source geometry engine OpenCASCADE (an open geometric modeling kernel), i.e. the preset geometry-engine tool, wrapped in two layers (a C++ dynamic link library and a managed C++ dynamic link library), with a WinForm window, i.e. the preset window tool, as the display window. The multi-dimensional interactive interface may include a display area and function controls. Different function controls correspond to different function modes, and in different function modes the user can perform different operations; the specific operation content is performed by the user on the display area, that is, the display area is used to determine the operation content of the user operation in the function mode corresponding to a function control. For example, when the user selects the add operation, the user can select a desired point on the display area as a target point, and the display area determines the specific operation content of the user, such as the position coordinates of the selected point. In this way, various basic interactions such as view rotation, translation, zooming, view switching, color modification, and topology pickup highlighting can be supported, and the operator can set target points directly on the display area of the multi-dimensional interactive interface.
In addition, the method also supports importing an external target point location file and generating visualized points in the multidimensional environment, on the basis of which an operator can modify the position and pose of each target point; it further supports exporting the adjusted target points to a file, exporting the target points to a robot program, drawing the motion trail after fitting and interpolation, and the like.
In a possible implementation, the functional controls include an add control, an import control, and a modify control; the operation content of the user operation comprises an adding operation step in the function mode corresponding to the adding control, an importing operation step in the function mode corresponding to the importing control and a modifying operation step in the function mode corresponding to the modifying control;
the step of determining each target point location based on the operation content of the user operation in the function mode corresponding to the function control includes:
step S111, based on the adding operation step, taking the selected point location on the display area as the target point location;
step S112, and/or, based on the importing operation step, receiving a target point location file, and reading the target point location from the target point location file;
step S113, and/or, based on the modifying operation step, correcting the selected point location on the display area to obtain the target point location.
For example, there are various types of function controls, such as the adding control, the importing control, and the modifying control. Different function controls can present different function icons on the multidimensional interactive interface, and an operator can select a function control according to his or her needs and then perform the specific operation on the display area. Correspondingly, in the function mode corresponding to the adding control, the operation performed by the operator on the display area is the adding operation step: the operator selects a point location on the display area and confirms it, so that the selected point location can be taken as the target point location, completing the determination of the target point location. In the function mode corresponding to the importing control, the operator can select and confirm an imported target point location file in the display area, i.e., the importing operation step; the application device then receives the target point location file and reads the target point locations (for example, their positions) from it, thereby determining the target point locations. In the function mode corresponding to the modifying control, the operator can select a point location in the display area, move it or modify its parameters, and then confirm it, i.e., the modifying operation step; the modified point location is finally determined as the target point location.
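The add/import/modify flow described above can be sketched as follows; the `TargetPointManager` class, its method names, and the CSV point-file format are illustrative assumptions, not the patent's actual implementation:

```python
import csv
import io

class TargetPointManager:
    """Collects the target point locations the robot must pass through."""

    def __init__(self):
        self.points = []  # each target point location: (x, y, z)

    def add(self, point):
        # Adding mode: the location picked on the display area becomes a target point.
        self.points.append(tuple(point))

    def import_file(self, csv_text):
        # Importing mode: read target point locations from an external file
        # (a CSV of "x,y,z" rows is assumed here).
        for row in csv.reader(io.StringIO(csv_text)):
            if row:
                self.points.append(tuple(float(v) for v in row))

    def modify(self, index, new_point):
        # Modifying mode: correct a previously selected point location in place.
        self.points[index] = tuple(new_point)

mgr = TargetPointManager()
mgr.add((0.0, 0.0, 0.0))                       # pick a point on the display area
mgr.import_file("1.0,0.0,0.5\n2.0,1.0,0.5\n")  # import two points from a file
mgr.modify(0, (0.0, 0.1, 0.0))                 # correct the first point
print(mgr.points)
```

In a real application the `add` and `modify` calls would be driven by picks on the display area rather than hard-coded coordinates.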
In a possible implementation manner, the function controls further comprise a view angle switching control, a zoom control, a target pickup control, and an export control; the operation content of the user operation further comprises a view angle switching operation step in the function mode corresponding to the view angle switching control, a zooming operation step in the function mode corresponding to the zoom control, a picking operation step in the function mode corresponding to the target pickup control, and an exporting operation step in the function mode corresponding to the export control;
the method for generating the motion trail of the robot further comprises the following steps:
step S101, based on the view angle switching operation step, switching the view angle of each target point location or the motion trail displayed on the display area;
step S102, and/or, based on the zooming operation step, changing the display size of each target point location or the motion trail on the display area;
step S103, and/or, based on the picking operation step, highlighting the picked target point location, the picked motion trail, or the picked partial motion trail;
step S104, and/or, based on the exporting operation step, exporting each target point location or the motion trail to an export object.
It should be noted that the display area and the function controls may be used not only for determining the target point locations but also to support basic interactive operations or auxiliary functions.
Illustratively, the function controls also include a view angle switching control, a zoom control, a target pickup control, an export control, and the like. Correspondingly, the operation content of the user operation may further include a view angle switching operation step in the function mode corresponding to the view angle switching control, a zooming operation step in the function mode corresponding to the zoom control, a picking operation step in the function mode corresponding to the target pickup control, and an exporting operation step in the function mode corresponding to the export control. For example, the user's view angle switching operation may include selecting the center of view angle switching and the switching direction; based on this step, the application device can switch the display view angle of the target point locations or the motion trail on the display area. The zooming operation step may include selecting the zoom position and determining the zoom degree; based on these, the application device changes the display size of the target point locations or the motion trail on the display area at the selected position. The picking operation step may include picking a target point location or a motion trail (either a complete motion trail or part of one) and determining the highlighting manner, for example highlighting or rendering in another color; the application device then highlights the picked target point location, the picked motion trail, or the picked partial motion trail in the display area. The exporting operation step may include selecting the export content, such as target point locations or the motion trail, and selecting the export object, to which the application device exports the selected content in the form of a file.
In addition, a technician can add further function controls according to actual functional requirements.
In a possible implementation manner, the step of outputting the motion trail of the robot based on the multi-dimensional interactive interface includes:
step S301, outputting the motion trail and the motion image of the robot through the multidimensional interactive interface, where the motion image is an image of the robot moving on the motion trail according to the pose trail and the speed curve of the motion trail.
It can be understood that the obtained motion trail can be output through the multidimensional interactive interface, and, while the motion trail is being output, an image of the robot moving along the motion trail according to its pose trail and speed curve can be demonstrated synchronously on the interface. This spares the operator from actually driving the robot through the motion repeatedly to observe whether the trail meets expectations, simplifying the operation steps. Referring to fig. 9, which shows a schematic diagram of the multidimensional interactive interface in the present application, an operator may set target point locations on the multidimensional interactive interface, and the interface may also be used for outputting the motion trail.
Referring to fig. 10, which shows a schematic diagram of an interactive framework of the robot in the present application: the point location editing part determines the target point locations through the multidimensional interactive interface and configures the trajectory parameters; the target point locations and trajectory parameters are input to the robot controller, which obtains the corresponding motion trail through the fitting algorithm; and the motion trail can in turn be fed back to the multidimensional interactive interface in the point location editing part for display.
Referring to fig. 11, a fourth embodiment of the present application is proposed based on the first, second and third embodiments of the present application, and in this embodiment, the same or similar parts as those of the above embodiments may be referred to the above, and will not be repeated here. The method for generating the motion trail of the robot further comprises the following steps:
step S51, determining target point locations of a joint area in any two continuous motion tracks under the condition that the robot has a plurality of continuous motion tracks;
and step S52, based on the target point locations of the joint area, returning to execute the curve fitting on each target point location under the constraint of the motion constraint condition of the robot, so as to obtain a motion track to be executed by the robot and splice the two continuous motion tracks.
For example, in some cases, processing all target point locations at once takes a long time and creates excessive computational pressure, which may cause problems such as waiting pauses and transition failures due to untimely data distribution. In this embodiment, the complete set of motion target point locations may therefore be processed in batches, with a motion track obtained for each batch, so that a complete motion is formed by a plurality of continuous motion tracks. In this case, the target point locations of the joint area in any two continuous motion tracks can be determined, where the joint area refers to the area extending a preset distance to both sides from the position where the two continuous motion tracks butt against each other. Based on the target point locations of the joint area, curve fitting is performed again on each target point location under the constraint of the motion constraint condition of the robot, yielding a motion track to be executed by the robot, so that the two continuous motion tracks are spliced together smoothly.
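The joint-area splicing can be sketched as follows; the joint width parameter and the weighted moving average used to re-fit the seam points are illustrative stand-ins for the patent's curve-fitting algorithm:

```python
def splice(track_a, track_b, width):
    """Smooth the target points lying within `width` of the seam where
    track_a ends and track_b begins, so the two consecutive motion tracks
    join without a kink (a weighted moving average stands in for the
    patent's curve fitting)."""
    def dist(p, q):
        return sum((a - b) ** 2 for a, b in zip(p, q)) ** 0.5

    seam = track_a[-1]          # position where the two tracks butt together
    merged = track_a + track_b
    smoothed = list(merged)
    for i in range(1, len(merged) - 1):
        if dist(merged[i], seam) <= width:   # point lies in the joint area
            smoothed[i] = tuple(
                (merged[i - 1][k] + 2 * merged[i][k] + merged[i + 1][k]) / 4
                for k in range(len(merged[i]))
            )
    return smoothed

track_a = [(0.0, 0.0), (1.0, 0.0), (2.0, 0.0)]
track_b = [(3.0, 0.5), (4.0, 1.0)]
spliced = splice(track_a, track_b, width=1.5)
```

Only points inside the joint area are adjusted; the endpoints of both tracks are left untouched, matching the idea that re-fitting is confined to the preset distance around the seam.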
In addition, referring to fig. 12, an embodiment of the present application further proposes a generation apparatus 100 of a motion trajectory of a robot, where an application device of the generation method of a motion trajectory of a robot is configured with a multidimensional interactive interface, where the generation apparatus 100 of a motion trajectory of a robot includes:
the determining module 10 is used for determining each target point location to be passed by the robot in response to a user operation based on the multidimensional interactive interface;
the fitting module 20 is configured to perform curve fitting on each target point location under the constraint of the motion constraint condition of the robot, so as to obtain a motion track to be executed by the robot;
and the output module 30 is used for outputting the motion trail of the robot based on the multi-dimensional interactive interface.
Optionally, the motion constraint condition includes a track mode, the track mode types include an accurate mode and a smooth mode, the smooth mode is provided with a basic position error, and the fitting module 20 is further configured to:
if the track mode is the accurate mode, performing first curve fitting on each target point to obtain the motion track, wherein each target point is located on the motion track under the condition of performing the first curve fitting;
And if the track mode is the smooth mode, performing second curve fitting on each target point to obtain the motion track, wherein under the condition of performing the second curve fitting, the shortest position deviation between each target point and the motion track is smaller than or equal to the basic position error.
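The accurate/smooth distinction can be sketched as follows; passing the points through unchanged for the accurate mode and using a clamped moving average for the smooth mode are illustrative stand-ins for the patent's first and second curve fittings:

```python
def fit_track(points, mode, base_position_error=0.0):
    """Accurate mode: every target point lies on the returned track.
    Smooth mode: each point's shortest deviation from the track is at most
    the basic position error."""
    if mode == "accurate":
        return list(points)   # first curve fitting: pass through every point
    if mode == "smooth":
        track = []
        for i, p in enumerate(points):
            prev = points[max(i - 1, 0)]
            nxt = points[min(i + 1, len(points) - 1)]
            avg = tuple((a + 2 * b + c) / 4 for a, b, c in zip(prev, p, nxt))
            # Clamp: keep the deviation within the basic position error.
            d = sum((a - b) ** 2 for a, b in zip(avg, p)) ** 0.5
            if d > base_position_error:
                s = base_position_error / d
                avg = tuple(b + (a - b) * s for a, b in zip(avg, p))
            track.append(avg)
        return track
    raise ValueError(f"unknown track mode: {mode}")

targets = [(0.0, 0.0), (1.0, 1.0), (2.0, 0.0)]
exact = fit_track(targets, "accurate")
smooth = fit_track(targets, "smooth", base_position_error=0.1)
```

A production implementation would fit an interpolating spline in the accurate mode and a smoothing spline in the smooth mode; the sketch only illustrates the constraint that distinguishes the two modes.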
Optionally, each target point is configured with a target pose, the motion track is configured with a pose track, the smoothing mode is further provided with a basic pose error, and the fitting module 20 is further configured to:
if the track mode is the accurate mode, performing first pose fitting on each target pose to obtain a pose track of the robot on the motion track, wherein the pose corresponding to the target point in the pose track is the same as the target pose of the target point for any one target point under the condition of performing the first pose fitting;
and if the track mode is the smooth mode, performing second pose fitting on each target pose to obtain the pose track, wherein the pose difference between the pose corresponding to the target point and the target pose of the target point in the pose track is smaller than or equal to the basic pose error for any one target point under the condition of performing the second pose fitting.
Optionally, the motion constraint condition further includes a filtering level, and the generating device 100 of the motion trajectory of the robot further includes a filtering module 40, where the filtering module is further configured to:
determining noise points with fluctuation amplitude larger than the amplitude threshold corresponding to the filtering level in the target points based on the filtering level, wherein the fluctuation amplitude comprises the amplitude of position fluctuation or the amplitude of pose fluctuation;
and adjusting the position or the pose of the noise point so that the fluctuation amplitude of the noise point is smaller than or equal to the amplitude threshold.
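The noise-point filtering can be sketched as follows; measuring fluctuation as the deviation from the neighbour midpoint and the level-to-threshold mapping are assumptions made for illustration, not the patent's actual filter:

```python
# Hypothetical mapping from filtering level to amplitude threshold.
FILTER_THRESHOLDS = {"low": 0.5, "medium": 0.2, "high": 0.05}

def filter_points(points, level):
    """Pull each noise point (fluctuation amplitude above the level's
    threshold) toward the midpoint of its neighbours until its amplitude
    no longer exceeds the threshold."""
    threshold = FILTER_THRESHOLDS[level]
    out = list(points)
    for i in range(1, len(points) - 1):
        mid = tuple((a + b) / 2 for a, b in zip(points[i - 1], points[i + 1]))
        amp = sum((a - b) ** 2 for a, b in zip(points[i], mid)) ** 0.5
        if amp > threshold:  # noise point detected
            s = threshold / amp
            out[i] = tuple(m + (p - m) * s for p, m in zip(points[i], mid))
    return out

raw = [(0.0, 0.0), (1.0, 0.6), (2.0, 0.0)]   # middle point fluctuates by 0.6
filtered = filter_points(raw, "medium")       # threshold 0.2
```

The same scheme applies to pose fluctuation by running the filter on the pose components instead of the position components.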
Optionally, the motion constraint condition further includes a corner determination threshold, a position stress coefficient, and an attitude stress coefficient, the motion trajectory is configured with an attitude trajectory and a velocity curve, and the fitting module 20 is further configured to:
determining a position corner on the motion trajectory and a pose corner on the pose trajectory based on the corner determination threshold;
determining a speed threshold of the position corner on the motion track according to the position stress coefficient, and determining a speed threshold of the pose corner on the pose track according to the attitude stress coefficient;
and fitting based on a speed threshold value on each corner position in the motion track and a preset speed of the motion track, and generating a speed curve of the robot on the motion track, wherein the corner positions comprise positions of the position corners and positions of the pose corners, and a speed value of any position point on the speed curve represents the moving speed of the robot on a position corresponding to the position point in the motion track.
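The corner detection and speed-curve generation can be sketched as follows; detecting a position corner by the turn angle between adjacent segments, and setting the corner speed threshold as the preset speed times the position stress coefficient, are assumptions made for illustration (pose corners would be handled analogously with the attitude stress coefficient):

```python
import math

def turn_angle(p0, p1, p2):
    """Angle in radians between segments p0->p1 and p1->p2."""
    ax, ay = p1[0] - p0[0], p1[1] - p0[1]
    bx, by = p2[0] - p1[0], p2[1] - p1[1]
    dot = ax * bx + ay * by
    norm = math.hypot(ax, ay) * math.hypot(bx, by)
    return math.acos(max(-1.0, min(1.0, dot / norm)))

def speed_curve(track, preset_speed, corner_threshold, position_stress):
    """One speed value per track point: the preset speed everywhere,
    lowered to preset_speed * position_stress at detected position corners."""
    speeds = [preset_speed] * len(track)
    for i in range(1, len(track) - 1):
        if turn_angle(track[i - 1], track[i], track[i + 1]) > corner_threshold:
            speeds[i] = preset_speed * position_stress  # corner speed threshold
    return speeds

track = [(0.0, 0.0), (1.0, 0.0), (1.0, 1.0)]   # 90-degree corner at (1, 0)
speeds = speed_curve(track, preset_speed=100.0,
                     corner_threshold=math.radians(45), position_stress=0.25)
```

A real speed curve would also ramp the speed down and back up around each corner within acceleration limits; the sketch only shows where the corner speed thresholds come from.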
Optionally, the multidimensional interactive interface includes a display area and a function control, the display area is used for determining operation content in the user operation in a function mode corresponding to the function control, and the determining module 10 is further used for:
and determining each target point location based on the operation content of the user operation in the corresponding function mode of the function control, and displaying each target point location through the display area.
Optionally, the functional controls include an add control, an import control and a modify control; the operation content of the user operation includes an adding operation step in the adding control corresponding function mode, an importing operation step in the importing control corresponding function mode, and a modifying operation step in the modifying control corresponding function mode, and the determining module 10 is further configured to:
based on the adding operation step, taking the selected point location on the display area as the target point location;
and/or receiving a target point location file based on the importing operation step, and reading the target point location from the target point location file;
and/or, based on the modification operation step, correcting the selected point location on the display area to obtain the target point location.
Optionally, the function controls further include a view angle switching control, a zoom control, a target pickup control, and an export control; the operation content of the user operation further includes a view angle switching operation step in the function mode corresponding to the view angle switching control, a zooming operation step in the function mode corresponding to the zoom control, a picking operation step in the function mode corresponding to the target pickup control, and an exporting operation step in the function mode corresponding to the export control, and the determining module 10 is further configured to:
switching the view angles of the target points or the motion trails displayed on the display area based on the view angle switching operation step;
and/or, based on the scaling operation step, changing the display size of each target point location or the movement track on the display area;
and/or highlighting the picked up target point location, the picked up motion trajectory, or the picked up partial motion trajectory based on the picking up operation step;
and/or, based on the exporting operation step, exporting each target point location or the motion trail to an export object.
Optionally, the output module 30 is further configured to:
And outputting the motion trail and the motion image of the robot through the multi-dimensional interactive interface, wherein the motion image is an image of the robot moving on the motion trail according to the pose trail and the speed curve of the motion trail.
Optionally, the fitting module 20 is further configured to:
and if the point set consisting of the target points is updated, returning to execute the curve fitting of the target points under the constraint of the motion constraint condition of the robot based on the target points in the new point set, and obtaining the motion trail to be executed by the robot.
Optionally, the fitting module 20 is further configured to:
under the condition that the robot has a plurality of continuous motion tracks, determining target points of a joint area in any two continuous motion tracks;
and based on the target point locations of the joint area, returning to execute the curve fitting on each target point location under the constraint of the motion constraint condition of the robot, so as to obtain a motion track to be executed by the robot and splice the two continuous motion tracks.
The device for generating the motion trail of the robot provided by the present application adopts the method for generating the motion trail of the robot in the above embodiments, and aims to solve the technical problems that the conventional robot motion trail teaching scheme involves a heavy workload and can hardly meet trajectory precision requirements. Compared with the prior art, the device for generating the motion trail of the robot provided by this embodiment of the present application has the same beneficial effects as the method for generating the motion trail of the robot provided by the first embodiment, and its other technical features are the same as the features disclosed in the method of the first embodiment, which are not described herein again.
In addition, the embodiment of the application also provides a generation device of the motion trail of the robot, which comprises: the method comprises the steps of a memory, a processor and a robot motion trail generation program stored in the memory and capable of running on the processor, wherein the robot motion trail generation program realizes the robot motion trail generation method when being executed by the processor.
The specific implementation manner of the generation device of the motion trail of the robot is basically the same as the above embodiments of the generation method of the motion trail of the robot, and is not repeated here.
In addition, the embodiment of the application also provides a readable storage medium, which is a computer readable storage medium, wherein the readable storage medium stores a generation program of the robot motion trail, and the generation program of the robot motion trail realizes the steps of the generation method of the robot motion trail when being executed by a processor.
The specific embodiments of the medium in the present application are basically the same as the embodiments of the method for generating the motion trail of the robot, and are not described herein again.
It should be noted that, in this document, the terms "comprises," "comprising," or any other variation thereof are intended to cover a non-exclusive inclusion, such that a process, method, article, or system that comprises a list of elements includes not only those elements but may also include other elements not expressly listed or inherent to such process, method, article, or system. Without further limitation, an element defined by the phrase "comprising a …" does not exclude the presence of other like elements in the process, method, article, or system that comprises the element.
The foregoing embodiment numbers of the present application are merely for describing, and do not represent advantages or disadvantages of the embodiments.
From the above description of the embodiments, it will be clear to those skilled in the art that the above-described embodiment method may be implemented by means of software plus a necessary general hardware platform, but of course may also be implemented by means of hardware, but in many cases the former is a preferred embodiment. Based on such understanding, the technical solution of the present application may be embodied essentially or in a part contributing to the prior art in the form of a software product stored in a storage medium (e.g. ROM/RAM, magnetic disk, optical disk) as described above, including several instructions for causing a terminal device (which may be a vehicle, a computer, a server, or a network device, etc.) to perform the method described in the embodiments of the present application.
The foregoing description is only of the preferred embodiments of the present application, and is not intended to limit the scope of the claims, and all equivalent structures or equivalent processes using the descriptions and drawings of the present application, or direct or indirect application in other related technical fields are included in the scope of the claims of the present application.

Claims (14)

1. A method for generating a motion trail of a robot, characterized in that an application device of the method for generating the motion trail of the robot is provided with a multidimensional interactive interface;
The method for generating the motion trail of the robot comprises the following steps:
responding to user operation of a user based on the multidimensional interactive interface, and determining each target point position through which the robot needs to pass;
under the constraint of the motion constraint condition of the robot, curve fitting is carried out on each target point position, and a motion track to be executed by the robot is obtained;
and outputting the motion trail of the robot based on the multi-dimensional interactive interface.
2. The method for generating a motion trajectory of a robot according to claim 1, wherein the motion constraint condition includes a trajectory pattern including an accurate pattern and a smooth pattern, the smooth pattern being provided with a basic position error;
under the constraint of the motion constraint condition of the robot, performing curve fitting on each target point location to obtain a motion track to be executed by the robot, wherein the step of obtaining the motion track to be executed by the robot comprises the following steps:
if the track mode is the accurate mode, performing first curve fitting on each target point to obtain the motion track, wherein each target point is located on the motion track under the condition of performing the first curve fitting;
and if the track mode is the smooth mode, performing second curve fitting on each target point to obtain the motion track, wherein under the condition of performing the second curve fitting, the shortest position deviation between each target point and the motion track is smaller than or equal to the basic position error.
3. The method for generating a motion trajectory of a robot according to claim 2, wherein a target pose is configured on each target point, a pose trajectory is configured on the motion trajectory, and a base pose error is further set in the smoothing mode;
the method for generating the motion trail of the robot further comprises the following steps:
if the track mode is the accurate mode, performing first pose fitting on each target pose to obtain a pose track of the robot on the motion track, wherein the pose corresponding to the target point in the pose track is the same as the target pose of the target point for any one target point under the condition of performing the first pose fitting;
and if the track mode is the smooth mode, performing second pose fitting on each target pose to obtain the pose track, wherein the pose difference between the pose corresponding to the target point and the target pose of the target point in the pose track is smaller than or equal to the basic pose error for any one target point under the condition of performing the second pose fitting.
4. The method for generating a motion trajectory of a robot of claim 1, wherein the motion constraint condition further comprises a filtering level;
Before the step of performing curve fitting on each target point to obtain a motion trail to be executed by the robot, the method comprises the following steps:
determining noise points with fluctuation amplitude larger than the amplitude threshold corresponding to the filtering level in the target points based on the filtering level, wherein the fluctuation amplitude comprises the amplitude of position fluctuation or the amplitude of pose fluctuation;
and adjusting the position or the pose of the noise point so that the fluctuation amplitude of the noise point is smaller than or equal to the amplitude threshold.
5. The method of generating a motion trajectory of a robot according to claim 1, wherein the motion constraint condition further includes a corner determination threshold, a position stress coefficient, and a posture stress coefficient, the motion trajectory being configured with a posture trajectory and a speed profile;
the method for generating the motion trail of the robot further comprises the following steps:
determining a position corner on the motion trajectory and a pose corner on the pose trajectory based on the corner determination threshold;
determining a speed threshold of the position corner on the motion trail according to the position stress coefficient, and determining a speed threshold of the pose corner on the pose trail according to the posture stress coefficient;
And fitting based on a speed threshold value on each corner position in the motion track and a preset speed of the motion track, and generating a speed curve of the robot on the motion track, wherein the corner positions comprise positions of the position corners and positions of the pose corners, and a speed value of any position point on the speed curve represents the moving speed of the robot on a position corresponding to the position point in the motion track.
6. The method for generating a motion trail of a robot according to claim 1, wherein the multidimensional interactive interface includes a display area and a functional control, the display area is used for determining operation content in the user operation in a functional mode corresponding to the functional control;
the step of determining each target point to be passed by the robot in response to the user operation based on the multidimensional interactive interface by the user comprises the following steps:
and determining each target point location based on the operation content of the user operation in the corresponding function mode of the function control, and displaying each target point location through the display area.
7. The method for generating a motion trail of a robot of claim 6, wherein the functionality controls include an add control, an import control, and a modify control; the operation content of the user operation comprises an adding operation step in the function mode corresponding to the adding control, an importing operation step in the function mode corresponding to the importing control and a modifying operation step in the function mode corresponding to the modifying control;
The step of determining each target point location based on the operation content of the user operation in the function mode corresponding to the function control includes:
based on the adding operation step, taking the selected point location on the display area as the target point location;
and/or receiving a target point location file based on the importing operation step, and reading the target point location from the target point location file;
and/or, based on the modification operation step, correcting the selected point location on the display area to obtain the target point location.
8. The method for generating a motion trail of a robot according to claim 7, wherein the function controls further comprise a view angle switching control, a zoom control, a target pickup control, and an export control; the operation content of the user operation further comprises a view angle switching operation step in the function mode corresponding to the view angle switching control, a zooming operation step in the function mode corresponding to the zoom control, a picking operation step in the function mode corresponding to the target pickup control, and an exporting operation step in the function mode corresponding to the export control;
the method for generating the motion trail of the robot further comprises the following steps:
Switching the view angles of the target points or the motion trails displayed on the display area based on the view angle switching operation step;
and/or, based on the scaling operation step, changing the display size of each target point location or the movement track on the display area;
and/or highlighting the picked up target point location, the picked up motion trajectory, or the picked up partial motion trajectory based on the picking up operation step;
and/or, based on the exporting operation step, exporting each target point location or the motion trail to an export object.
9. The method of generating a motion profile of a robot of claim 6, wherein the outputting the motion profile of the robot based on the multi-dimensional interactive interface comprises:
and outputting the motion trail and the motion image of the robot through the multi-dimensional interactive interface, wherein the motion image is an image of the robot moving on the motion trail according to the pose trail and the speed curve of the motion trail.
10. The method of generating a motion trajectory of a robot according to claim 1, wherein after the step of outputting the motion trajectory, the method comprises:
And if the point set consisting of the target points is updated, returning to execute the curve fitting of the target points under the constraint of the motion constraint condition of the robot based on the target points in the new point set, and obtaining the motion trail to be executed by the robot.
11. The method for generating a motion trajectory of a robot according to any one of claims 1 to 10, further comprising:
in the case that the robot has a plurality of consecutive motion trajectories, determining the target points in the junction region of any two consecutive motion trajectories;
and performing curve fitting on the target points, including those of the junction region, under the constraint of the motion constraint condition of the robot, to obtain a motion trajectory to be executed by the robot, so as to splice the two consecutive motion trajectories.
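The splicing step of claim 11 can likewise be sketched: merge the target points of two consecutive tracks at their junction, then refit the whole sequence under the speed constraint. This is an illustrative reading, not the patented algorithm; the deduplication of the shared junction point and the chord-length timing are my own assumptions.

```python
from typing import List, Tuple

Point = Tuple[float, float]  # hypothetical planar target point

def dedupe(points: List[Point]) -> List[Point]:
    """Drop consecutive duplicates so a shared junction point is kept once."""
    out: List[Point] = []
    for p in points:
        if not out or p != out[-1]:
            out.append(p)
    return out

def splice_trajectories(points_a: List[Point], points_b: List[Point],
                        v_max: float = 1.0) -> Tuple[List[Point], List[float]]:
    """Merge two consecutive tracks at their junction region, then refit the
    merged point set under the speed limit v_max (chord-length timing)."""
    merged = dedupe(points_a + points_b)
    times = [0.0]
    for (x0, y0), (x1, y1) in zip(merged, merged[1:]):
        dist = ((x1 - x0) ** 2 + (y1 - y0) ** 2) ** 0.5
        times.append(times[-1] + dist / v_max)
    return merged, times
```

Because the refit runs over the union of both point sets, the resulting timing is continuous across the junction rather than restarting at zero for the second track.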
12. A device for generating a motion trajectory of a robot, wherein the application equipment of the device is provided with a multi-dimensional interactive interface, the device comprising:
a determining module, configured to determine, in response to a user operation based on the multi-dimensional interactive interface, each target point through which the robot needs to pass;
a fitting module, configured to perform curve fitting on each target point under the constraint of the motion constraint condition of the robot, to obtain a motion trajectory to be executed by the robot;
and an output module, configured to output the motion trajectory of the robot based on the multi-dimensional interactive interface.
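The determine/fit/output split of claim 12 could be organized as follows. This is a minimal Python sketch: the class and method names are my own illustrative choices, and a pass-through stands in for the constrained curve fit.

```python
from typing import Callable, List, Tuple

Point = Tuple[float, float]  # hypothetical planar target point

class TrajectoryGeneratorDevice:
    """Illustrative arrangement of the three modules in claim 12."""

    def __init__(self) -> None:
        self._targets: List[Point] = []
        self._trajectory: List[Point] = []

    def determine(self, picked: Point) -> None:
        # Determining module: record a target point picked by the user
        # on the multi-dimensional interactive interface.
        self._targets.append(picked)

    def fit(self) -> None:
        # Fitting module: a trivial pass-through stands in here for the
        # constrained curve fit described in claim 1.
        self._trajectory = list(self._targets)

    def output(self, render: Callable[[List[Point]], None]) -> None:
        # Output module: hand the fitted trajectory back to the interface
        # layer (here abstracted as a render callback) for display.
        render(self._trajectory)
```

Keeping the three responsibilities in separate methods mirrors the module boundaries of the claim and lets the fitting step be swapped out without touching the interface code.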
13. A device for generating a motion trajectory of a robot, comprising: a memory, a processor, and a robot motion trajectory generation program stored in the memory and executable on the processor, wherein the program, when executed by the processor, implements the steps of the method for generating a motion trajectory of a robot according to any one of claims 1 to 11.
14. A computer-readable storage medium, on which a robot motion trajectory generation program is stored, wherein the program, when executed by a processor, implements the steps of the method for generating a motion trajectory of a robot according to any one of claims 1 to 11.
CN202311479352.5A 2023-11-07 2023-11-07 Method, device and equipment for generating motion trail of robot and readable storage medium Pending CN117464675A (en)

Priority Applications (1)

Application Number Priority Date Filing Date Title
CN202311479352.5A CN117464675A (en) 2023-11-07 2023-11-07 Method, device and equipment for generating motion trail of robot and readable storage medium

Publications (1)

Publication Number Publication Date
CN117464675A true CN117464675A (en) 2024-01-30

Family

ID=89630910


Similar Documents

Publication Publication Date Title
US20200338730A1 (en) Trajectory planning device, trajectory planning method and program
US11213945B2 (en) Robot simulator, robot system and simulation method
CN106873550B (en) Simulation device and simulation method
JP7095262B2 (en) Programming support device, robot system and program generation method
US10194144B2 (en) Projection image adjusting system and projection image adjusting method
US6836700B2 (en) System and method generating a trajectory for an end effector
US10649416B2 (en) Machine learning model construction device, numerical control, machine learning model construction method, and non-transitory computer readable medium encoded with a machine learning model construction program
CN105094049B (en) Learning path control
JPH04344503A (en) Numerical controller for robot
US8838276B1 (en) Methods and systems for providing functionality of an interface to control orientations of a camera on a device
KR20220080080A (en) dynamic planning controller
JP2021000678A (en) Control system and control method
CN111715738B (en) Shaft action configuration method, device, equipment and computer readable storage medium
CN113664835B (en) Automatic hand-eye calibration method and system for robot
TW202019642A (en) Calibration method and device for robotic arm system
JP2018001393A (en) Robot device, robot control method, program and recording medium
US10507585B2 (en) Robot system that displays speed
JP2018114576A (en) Off-line programming device and position parameter correction method
JP7259860B2 (en) ROBOT ROUTE DETERMINATION DEVICE, ROBOT ROUTE DETERMINATION METHOD, AND PROGRAM
JP2012071394A (en) Simulation system and simulation program therefor
US10761523B2 (en) Method for controlling an automation system
CN115703227A (en) Robot control method, robot, and computer-readable storage medium
CN117464675A (en) Method, device and equipment for generating motion trail of robot and readable storage medium
WO2023207164A1 (en) Robot operation control method and apparatus
CN113021329A (en) Robot motion control method and device, readable storage medium and robot

Legal Events

Date Code Title Description
PB01 Publication
SE01 Entry into force of request for substantive examination